HAWASSA UNIVERSITY
Wondo Genet College of Forestry and Natural Resources
Course title: Remote Sensing Digital Image Processing
Course code: GISc 3113
Reading material
Compiled by
Serawit Mengistu
3/17/2023
Contents to be covered:-
‾ Unit 1. Introduction
‾ Unit 2. Remote sensing image
‾ Unit 3. Digital image restoration and registration
‾ Unit 4. Image enhancement
‾ Unit 5: Digital image analysis and transformation
‾ Unit 6. Digital Image Classification
‾ Unit 7. Techniques of Multitemporal Image Analysis
‾ Unit 8. Image segmentation
‾ Unit 9. Remote sensing data Integration
‾ Unit 10. Lidar, Drone and Radar Data Processing
• Brainstorming:
• What is remote sensing?
• How does it work, in terms of principles, energy, sensors and platforms?
• What are the products of RS, and how are they produced?
• How are remote sensing data acquired?
• What kinds of images exist, in terms of sensor, resolution, etc.?
• How are remote sensing technology and images used, and why are they important?
• Applications of RS images in forestry, land cover, climate change, soil, etc.
• If a challenge or problem arises, which software, which method and which
image would you use to answer it?
Understanding-, knowledge- and skill-based answers
Unit 1. Introduction
1.1. Introduction to digital image and
display /visualization
Early History Of Space Image
• Today, high-accuracy satellite images are available to everyone: you
can view them on Google Maps, buy them from websites, and you can
also use them in your own projects.
• The first ever picture from outer space was taken more than 70 years
ago, on October 24th, 1946, with a camera installed on a rocket
launched from White Sands Missile Range, New Mexico. By
today's standards, it is just a grainy black-and-white photo.
• Prior to 1946, people had never seen the Earth from outer space.
The Soviets may have been the first to launch a satellite into orbit, but
American scientists and researchers in New Mexico captured the first
photos from space.
On October 24, 1946, soldiers and scientists launched a V-2 missile carrying
a 35-millimeter motion picture camera, which took the first shots of
Earth from space. These images were taken at an altitude of 65 miles, just
above the commonly accepted beginning of outer space (about 50 miles
up). The film survived the crash landing because it was enclosed in a steel
cassette.
oThe images taken by the camera were black-and-white and showed
nothing more than the Earth's curvature and cloud cover over the
American Southwest, but they paved the way for
remote sensing as we know it today.
History of DIP(cont…)
• 1980s - Today: The use of digital image processing techniques has
exploded, and they are now used for all kinds of tasks in all kinds of areas
– Image enhancement/restoration
– Artistic effects
– Medical visualisation
– Industrial inspection
– Law enforcement
– Human computer interfaces
1.1. Images
• An image is a two-dimensional representation of objects in a
real scene.
• An image is a picture, photograph or any form of a 2D
representation of objects or a scene.
• A digital image is a 2D array of pixels. The value of a pixel (DN)
in the case of 8 bits ranges from 0 to 255.
• “One picture is worth more than ten thousand words”
• Remote sensing images are representations of parts of the Earth's
surface as seen from space.
• The images may be analog or digital.
• Satellite images acquired using electronic sensors are examples
of digital images while
• Aerial photographs are examples of analog images
Analog Image
Analog images are the type of images that we humans normally look at.
Analog images include:
otelevision images,
ophotographs,
opaintings, medical images, etc.,
which are recorded on film or displayed on various display devices.
An analog image can be displayed using a medium such as paper or film.
Black-and-white images require only a single grey-level channel.
Digital Images
• A digital remotely sensed image is typically composed of picture
elements (pixels) located at the intersection of each row i and
column j in each K bands of imagery.
• Each pixel is associated with a number known as the Digital Number
(DN) or Brightness Value (BV), which indicates the average radiance
of a relatively small area within a scene.
• A digital image is a representation of a two-dimensional image
as a finite set of digital values, called picture elements or pixels.
• Real world is continuous – an image is simply a digital approximation of this (real-world).
• Digital image (continued)
oIt is a representation of a 2D image as a finite set of digital values,
called picture elements or pixels.
oEach pixel represents an area on the Earth's surface.
oEach pixel contains an intensity value and a location address in the 2D image.
oThis value is normally the average value for the whole ground area
covered by the pixel. The intensity of a pixel is recorded as a digital
number.
• Pixel values typically represent grey levels, colours, heights,
opacities, etc.
Properties of Digital Image:
1. The area is covered with a grid of cells
2. Each cell has a digital number indicating the amount of energy
received from the cell (in a certain wavelength range)
3. The cell is called a pixel (a picture element)
4. The size of the pixel is the spatial resolution
oE.g. what are the Landsat and Sentinel-2 pixel sizes (ground resolutions)?
30 m and 10 m respectively. (A minimal array sketch follows below.)
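A minimal sketch of these properties, assuming NumPy is available (the DN values below are made up for illustration): a digital image is just a grid of cells, each holding a digital number.

```python
import numpy as np

# A tiny 3 x 4 single-band "image": each cell (pixel) holds an 8-bit
# digital number (DN) between 0 (black) and 255 (white).
image = np.array([[ 12,  56, 133, 201],
                  [ 47,  98, 150, 255],
                  [  0,  64, 180, 222]], dtype=np.uint8)

rows, cols = image.shape                        # the grid of cells: 3 x 4
print("DN at row 1, column 2:", image[1, 2])    # -> 150
print("Minimum / maximum DN:", image.min(), image.max())
```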
• Satellite Image
• Satellite images are images captured by satellites at regular intervals (usually hourly)
and used by different application areas.
Figure: the same area at 30 m resolution and at 10 m resolution.
• Pixel: is the smallest item/unit of information in an image
• In an 8-bit grey-scale image, the value of a pixel lies between 0 and 255,
which corresponds to the intensity of the light striking that point
• Brightness value: a single number that represents the brightness of a pixel
• The higher the number, the brighter the colour assigned to the pixel.
• A smaller number indicates lower average radiance from the area.
• Densitometer: a device for measuring the density of a material
• device that measures the density, or the degree of darkening, of a
photographic film or plate by recording photometrically its transparency
Types of Images
• There are three types of images, as follows:
1. Binary Images
• This is the simplest type of image. It takes only two values, i.e. black and white, or 0
and 1. A binary image is a 1-bit image: only 1 binary digit is needed to
represent each pixel. Binary images are mostly used for general shapes or outlines.
• Binary images are generated using a threshold operation: pixels above the
threshold value are turned white ('1') and pixels below the threshold value
are turned black ('0'), as sketched below.
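A minimal sketch of this threshold operation, assuming NumPy; the small `gray` array is made up for illustration.

```python
import numpy as np

# Hypothetical 8-bit grey-scale image (DNs between 0 and 255).
gray = np.array([[ 10, 200,  90],
                 [130,  40, 250],
                 [ 70, 180,  20]], dtype=np.uint8)

threshold = 128
# Pixels above the threshold become white ('1'), the rest black ('0').
binary = (gray > threshold).astype(np.uint8)
print(binary)
# [[0 1 0]
#  [1 0 1]
#  [0 1 0]]
```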
2. Gray-scale images
• Grayscale images are monochrome images, meaning they have only one channel.
Grayscale images do not contain any information about color; each pixel
takes one of the available grey levels.
• A normal grayscale image contains 8 bits/pixel of data, which gives 256 different
grey levels. In medical imaging and astronomy, 12- or 16-bit/pixel images are
used.
3. Color images
• Colour images are three-band monochrome images in which each band
contains a different color; the actual information is stored as the grey-level
values in each spectral band.
• The images are represented as red, green and blue (RGB) images, and each
color image has 24 bits/pixel, i.e. 8 bits for each of the three color
bands (RGB).
8-bit color format
• 8-bit color is used for storing image information in a computer's memory or
in an image file. In this format, each pixel is stored in one 8-bit byte. It
has a 0-255 range of values, in which 0 is used for black, 255 for white and
127 for mid-grey. The 8-bit color format is also known as a grayscale
image. Initially, it was used by the UNIX operating system.
16-bit color format
• The 16-bit color format is also known as the high color format. It has 65,536
different color shades. It is used in systems developed by Microsoft. The
16-bit color format is divided into the three components Red,
Green, and Blue, also known as the RGB format.
• In this format, there are 5 bits for R, 6 bits for G, and 5 bits for B. The one
additional bit is given to green because the human eye is more
sensitive to green; a bit-packing sketch follows below.
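A small sketch of how such a 5-6-5 layout can be packed and unpacked in code (the function names are choices made here for illustration, not part of any particular system):

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit R, G, B values (0-255) into one 16-bit word:
    5 bits for red, 6 bits for green, 5 bits for blue."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(word):
    """Expand a 16-bit RGB565 word back to approximate 8-bit channels."""
    r = (word >> 11) & 0x1F
    g = (word >> 5) & 0x3F
    b = word & 0x1F
    return (r << 3, g << 2, b << 3)

packed = pack_rgb565(200, 120, 40)
print(hex(packed), unpack_rgb565(packed))   # -> 0xcbc5 (200, 120, 40)
```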
24-bit color format
• The 24-bit color format is also known as the true color format. The
24-bit color format is also distributed into Red, Green, and Blue. As 24 bits
divide evenly into three 8-bit channels, the bits are distributed equally among the
3 colors: 8 bits for R, 8 bits for G and 8 bits for B.
Digital Imaging
• Digital imaging is the art of making digital images – photographs,
printed texts, or artwork – through the use of a digital camera or
imaging machine, or by scanning them as documents.
• Each image is composed of a certain number of pixels, which are then
mapped onto a grid and stored in sequence by a computer.
• Every pixel in an image is given a tonal value to determine its hue or
color.
1.4. Digital image display
oWe live in a world of color.
oThe colors of objects are the result of selective absorption and
reflection of electromagnetic radiation from illumination sources.
oPerception by the human eye is limited to the spectral range of
0.38–0.75 µm, which is a very small part of the solar spectral range
(the visible band range).
oThe world is actually far more colorful than we can see.
1. Monochromatic display
• Any image, either a panchromatic image or a spectral band of a multi
spectral image, can be displayed as a black and white (B/W) image by
a monochromatic display.
• The display is implemented by DNs in a series of energy levels that
generate different grey tones (brightness) from black to white.
• Most image processing systems support an 8 bit graphical display,
which corresponds to 256 grey levels, and displays DNs from 0 (black)
to 255 (white).
(a) An image in grey-scale black and white(B/W) display
2. Tristimulus color theory and RGB color display
oIf you understand the structure and principle of a color TV, you must
know that the tube is composed of three color guns of red, green and
blue.
oThese three colors are known as primary colors. The mixture of light
from these three primary colors can produce any color.
oThis property of the human perception of color can be explained by
the tristimulus color theory.
• The human retina has three types of cones and the response by each
type of cone is a function of the wavelength of the incident light; it
peaks at 440 nm (blue), 545 nm (green) and 680 nm (red).
• In other words, each type of cone is primarily sensitive to
one of the primary colors: blue, green or red.
• A color perceived by a person depends on the proportion of each of
these three types of cones being stimulated.
• Digital image color display is based entirely on the tristimulus color
theory.
• In the red gun, pixels of an image are displayed in reds of different
intensity (i.e. dark red, light red, etc.) depending on their DNs.
• The same is true of the green and blue guns.
• Thus if the red, green and blue bands of a multi-spectral image are
displayed in red, green and blue simultaneously, a colour image is
generated.
• The RGB color model is an additive color model in which the red, green,
and blue primary colors of light are added together in various ways to
reproduce a broad array of colors.
• The name of the model comes from the initials of the three additive
primary colors, red, green, and blue.
• The main purpose of the RGB color model is for:
the sensing,
representation, and
display of images in electronic systems,
It is used in web graphics.
oMost standard RGB display systems can display 8 bits per pixel
per channel, i.e. up to 24 bits per pixel and 256^3 (about 16.7 million) different colors.
oThis capacity is enough to generate a so-called 'true color'
image.
oOtherwise, if the image bands displayed in red, green and blue
do not match the spectra of these three primary colours, a false
color composite (FCC) image is produced.
• Typical RGB input devices are:
color TV
video cameras,
image scanners, and
digital cameras.
• Typical RGB output devices are TV sets of various technologies:
LCD, plasma, computer and mobile phone displays,
video projectors, multicolor LED displays,
large screens, and color printers.
Figure: false-color and true-color composites.
Yellow-Magenta-Cyan (YMC) Model/ display
• This model contains the secondary colors.
• In this model, a surface of a secondary color illuminated by white light
will not reflect the primary color that the secondary color subtracts.
• For example, when cyan is illuminated with white light, no red light
will be reflected from the surface, which means that the cyan
subtracts the red light from the reflected white light (which itself is
composed of red, green and blue light).
• The CMYK color model is a subtractive color model, based on
the CMY color model.
• CMYK refers to the four ink plates used in some color
printing: cyan, magenta, yellow, and key (black).
• Such a model is called subtractive because inks "subtract" the
colors red, green and blue from white light.
• White light minus red leaves cyan, white light minus green leaves
magenta, and white light minus blue leaves yellow.
Figure: the cyan, magenta, yellow and black components of an image
HSI Model (Hue, Saturation, Intensity)
It is a very important and attractive color model because it represents the
colors the same way as the human eye senses colors.
Hue is a color component that describes the pure color (yellow, orange, red, etc.).
Saturation represents a measure of the degree to which the pure color is
diluted with white.
Intensity represents the overall brightness; for a remote sensing image it is the measured
radiance in a given wavelength band reflected from the ground (0 means black, 1 means white).
A conversion sketch from RGB to HSI follows below.
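A minimal sketch of the standard RGB-to-HSI conversion formulas (inputs normalised to [0, 1]; the function name is a choice made here for illustration):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert R, G, B in [0, 1] to hue (degrees), saturation and intensity."""
    intensity = (r + g + b) / 3.0
    saturation = 0.0 if intensity == 0 else 1.0 - min(r, g, b) / intensity
    # Hue from the classic arccos formulation.
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) or 1e-12
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    hue = theta if b <= g else 360.0 - theta
    return hue, saturation, intensity

print(rgb_to_hsi(1.0, 0.0, 0.0))   # pure red -> hue 0 degrees, saturation 1
```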
3. Pseudo colour display
oThe human eye can recognize far more colors than it can
distinguish grey levels.
oSo color can be used very effectively to enhance small
grey-level differences in a Black/White image.
oThe technique to display a monochrome image as a color
image is called pseudo color display.
(a) An image in grey-scale (Black/White) display; (b) the same image in a pseudo color display;
There are two types of color composition on image display:
• False Color and True Color
• The RGB displays are used extensively in digital processing to display
normal color, false color infrared, and arbitrary color composites.
• A false-color image is an image that shows an object in colors produced by, for example, an
infrared, red and green composite. It is best for distinguishing vegetated and non-
vegetated parts of an image.
• Ratio images can also be used to generate false color composites by
combining three monochromatic ratio data sets. Such composites have the
twofold advantage of combining data from more than two bands and
presenting the data in color, which further facilitates the interpretation of
subtle spectral reflectance differences. Choosing which ratios to include in
a color composite and selecting colors in which to portray them can
sometimes be difficult.
• An image is called a "true-color" image when it offers a natural color rendition,
or comes close to it.
• It uses an RGB composite (red, green and blue bands).
• This means that the colors of objects in the image appear to a human observer the
same way as if the observer were to view the objects directly:
• a green tree appears green in the image, a red apple red, a blue sky blue, and
so on.
Figure: a true-color composite.
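A hedged sketch of stacking single-band arrays into true-color and false-color composites; the band variables (red, green, blue, nir) are assumed to be 2-D NumPy arrays that have already been read from an image file (e.g. with a library such as rasterio):

```python
import numpy as np

def stretch(band):
    """Simple 2nd-98th percentile contrast stretch to the 0-1 range for display."""
    lo, hi = np.percentile(band, (2, 98))
    return np.clip((band - lo) / (hi - lo + 1e-9), 0, 1)

def true_color(red, green, blue):
    # Natural-color composite: red -> R, green -> G, blue -> B.
    return np.dstack([stretch(red), stretch(green), stretch(blue)])

def false_color(nir, red, green):
    # Classic false-color infrared composite: NIR -> R, red -> G, green -> B,
    # which makes healthy vegetation appear bright red.
    return np.dstack([stretch(nir), stretch(red), stretch(green)])

# Example with synthetic bands of the same shape:
red, green, blue, nir = (np.random.rand(100, 100) for _ in range(4))
rgb = true_color(red, green, blue)   # shape (100, 100, 3), ready for display
```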
Digital image data format
•Band Interleaved by Pixel Format
•Band Interleaved by Line format
•Band sequential Format(BSQ)
•Run-length Encoding Format
• Band Interleaved By Pixel Format (BIP)
• One of the earliest digital formats used for satellite data
• This format treats pixels as the separate storage unit.
• Brightness values for each pixel are stored one after another.
• Often data in BIP format is organized into four separate panels,
• It shows the logic of how the data is recorded to the computer tape in
sequential values for a four-band image in BIP format.
Band Interleaved By Line Format (BIL)
• Just as the BIP format treats each pixel of data as the separate unit, the
band interleaved by line (BIL) format is stored by lines.
• Each line is represented in all 4 bands before the next line is recorded.
• The figure shows how the data are recorded to computer tape as sequential values for a four-band
image in BIL format.
Band Sequential Format
• This format requires that all data for a single band
covering the entire scene be written as one file
• Thus, if an analyst wanted to extract the area in the
center of scene in four bands, it would be necessary to
read into this location in four separate files to
extract the desired information.
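A minimal sketch of how the same flat byte stream would be interpreted under each of the three interleaving conventions, assuming NumPy and made-up image dimensions:

```python
import numpy as np

rows, cols, bands = 2, 3, 4                            # made-up dimensions
flat = np.arange(rows * cols * bands, dtype=np.uint8)  # the raw byte stream

# BIP: all band values of one pixel are stored together, pixel after pixel.
bip = flat.reshape(rows, cols, bands)    # index as bip[row, col, band]

# BIL: line 1 of band 1, line 1 of band 2, ..., then line 2 of band 1, etc.
bil = flat.reshape(rows, bands, cols)    # index as bil[row, band, col]

# BSQ: the whole image of band 1, then the whole image of band 2, etc.
bsq = flat.reshape(bands, rows, cols)    # index as bsq[band, row, col]

# Addressing row 1, column 2, band 3 under each convention; the values differ
# because the same byte stream means different things in each layout.
print(bip[1, 2, 3], bil[1, 3, 2], bsq[3, 1, 2])
```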
Run-Length Encoding
• Run-length encoding is a band sequential format that keeps track of both the
brightness value and the number of times the brightness value occurs along a
given scan line.
• For example, if a body of water were encountered with brightness values of 10
for 60 pixels along a scan line, this could be stored in the computer in integer
format as 060010, meaning that the following 60 pixels will each have a
brightness value of 10.
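A minimal run-length encoding/decoding sketch for a single scan line (storing (run length, brightness value) pairs instead of the fused integer form such as 060010):

```python
def rle_encode(scan_line):
    """Encode a scan line as (run_length, brightness_value) pairs."""
    runs = []
    for value in scan_line:
        if runs and runs[-1][1] == value:
            runs[-1][0] += 1                # extend the current run
        else:
            runs.append([1, value])         # start a new run
    return [tuple(run) for run in runs]

def rle_decode(runs):
    """Expand (run_length, brightness_value) pairs back into a scan line."""
    line = []
    for count, value in runs:
        line.extend([value] * count)
    return line

line = [10] * 60 + [42, 42, 7]              # 60 water pixels, then other values
encoded = rle_encode(line)                  # [(60, 10), (2, 42), (1, 7)]
assert rle_decode(encoded) == line
```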
Advantages of digital images include:
• The images do not change with environmental factors as hard
copy pictures and photographs do
• The images can be identically duplicated without any change or
loss of information
• The images can be mathematically processed to generate new
images without altering the original images
• The images can be electronically transmitted from or to remote
locations without loss of information
Unit 2. Remote sensing image
• Remotely sensed images are acquired by sensor systems
onboard aircraft or spacecraft, such as Earth observation
satellites.
• Satellite imagery is collected by a host of national and international
government and private agencies.
• Free access is possible through collaboration with NASA and
NASA-funded institutions.
•Remote sensing data are currently acquired by
aerial camera systems and a variety of both
active and passive sensor systems that operate
at wavelengths throughout the electromagnetic
spectrum.
• These remote sensing data sources typically include:
commercial photogrammetric agencies,
local planning agencies
transportation agencies,
national mapping organizations,
environmental and research agencies, and
university collections
Remote sensing data portals (GLOVIS, NASA, USGS Earth
Explorer, Copernicus Sentinel, NOAA)
Types of remote sensing imagery
• There are many types of remotely sensed data.
• Remote Sensing Data Types
1. Digital aerial photos
2. Multispectral satellite Image
3. High- and coarse-resolution satellite imagery
4. Microwave (RADAR) image
5. Laser scanning ( LiDAR) image
6. Hyperspectral Image
7. UAV/Drone image
1. Digital aerial photos
• The invention of the photographic process in the mid 19th century
stimulated the formation of small but dedicated, scientific and
industrial groups whose goals were to develop creative applications
of photography and apparatus for exploiting its potential. In the late
1850s, Aimé Laussedat carried out the first topographical survey of
an area by means of a pair of photographs suitably distanced from
each other.
• Concurrently, Ignazio Porro developed the “photogoniometer” and many other
ingenious apparatus. Laussedat named the method “Metrical Photography”, which
after further development was later named “Photogrammetry by Intersection”.
• Early applications of photogrammetry were primarily for terrestrial purposes,
although placing cameras on balloons had been attempted as early as the 1860s. By
the end of the 19th century, the development of binocular measuring
methods using stereopairs of photographs, led by Carl Pulfrich, resulted in a new
field of “stereoscopic photogrammetry”. It was realized that for extracting metric
information from photographs, instruments were required to overcome the need for
significant manual computations.
• With the invention of the airplane in 1903, and subsequently the
development of aerial cameras, opportunities for applications of
aerial photogrammetry expanded rapidly.
• Aerial photography is defined as the science of taking photographs
from a point in the air for the purpose of making some type of study
of the Earth's surface.
• Aerial photography is the taking of photographs from above with a
camera mounted, or hand held, on an aircraft, helicopter, balloon,
rocket, kite, skydiver or similar vehicle.
• “Photo-interpretation (PI) has been informally defined as the act of identifying
objects imaged on photographs and deducing their significance.”
• The typical applications of aerial photogrammetry prior to 1980 were for
orthophotography and line mapping based originally on manual plotting and later
on digitization of features. Line mapping was partly automated using on-line
computers in the semi-analytical and the analytical stereoplotters, but the process
was still time consuming. However, by the 1980s, spatial information systems,
referred to also as GIS, were being developed in many countries. There was a need
for production of geo-coded digital spatial data that could be input to a local GIS
with an appropriate structure, thus enabling overlaying of this data with other
layers of spatial data for display and spatial analysis.
• The quality of photograph depends on:
• Flight and weather condition
• Camera lens
• Film and filters
• Developing and printing process
• Types of Aerial Photography
1. Black and white: older and lower-cost surveys.
• Multiple generations are ideal for comparison in
change detection of the land surface.
2. Color: more recent or higher-cost aerial photo
surveys are on color media.
• Aerial photography is used in:
• cartography,
• land-use planning,
• archaeology,
• geology
• Military
• environmental studies, and other fields.
2. Multispectral satellite Image
• Multispectral remote sensing involves the acquisition of
visible, near infrared, and short-wave infrared images in
several broad wavelength bands.
• Multispectral imaging refers to spectral imaging methods in
which one obtains images corresponding to at least a few
spectral channels, sometimes more than ten.
Multispectral Scanning (MSS)
• A scanning system used to collect data over a variety of different
wavelength ranges is called a multispectral scanner (MSS), and is the
most commonly used scanning system.
• Multispectral remote sensing involves the acquisition of visible, near
infrared, and short-wave infrared images in several broad wavelength
bands. Different materials reflect and absorb differently at different
wavelengths. As such, it is possible to differentiate among materials by
their spectral reflectance signatures as observed in these remotely
sensed images.
• A multispectral image is therefore composed of several channels
or bands, each one containing, the amount of radiation measured in
very specific wavelength ranges for each pixel (for example, green,
red or near infra-red).
• Multispectral imagery generally refers to 3 to 15 bands.
• Spectral imaging can allow extraction of additional information that
the human eye fails to capture.
• Many electronic remote sensors acquire data using scanning
systems, which employ a sensor with a narrow field of view
(i.e. IFOV) that sweeps over the terrain to build up and
produce a two-dimensional image of the surface.
• Scanning systems can be used on both aircraft and satellite
platforms and have essentially the same operating principles.
• Landsat Multispectral Scanner (MSS): it was the primary sensor
system on Landsats 1-3 and also flew on Landsats 4-5. This sensor had four spectral
bands in the electromagnetic spectrum that record reflected radiation
from the Earth's surface.
• These bands are (Landsat 4-5 band numbering), each at about 80 m resolution:
• Band 1 Green (0.5 - 0.6 µm)
• Band 2 Red (0.6 - 0.7 µm)
• Band 3 Near-Infrared (0.7 - 0.8 µm)
• Band 4 Near-Infrared (0.8 - 1.1 µm)
• There are two main modes or methods of scanning
employed to acquire multispectral image data
- across-track scanning, and
- along-track scanning
1. Across-track scanners
• Across-track scanners scan the Earth in a series of lines. The lines are oriented
perpendicular to the direction of motion of the sensor platform (across the swath).
• Scanner sweeps perpendicular to the path or swath, centered directly under the
platform, i.e. at 'nadir'. The forward movement of the aircraft or satellite allows the
next line of data to be obtained in the order 1, 2, 3, 4 etc. In this way, an image is
built up in a sequential manner.
• Image is acquired pixel by pixel
• This is called a whiskbroom, or mirror scanner, system, e.g. Landsat MSS/TM
(Thematic Mapper).
Scanning
• Each line is scanned from one side of the sensor to the
other, using a rotating mirror (A). As the platform moves
forward over the Earth, successive scans build up a two-
dimensional image of the Earth´s surface. The incoming
reflected or emitted radiation is separated into several
spectral components that are detected independently.
• The UV, visible, near-infrared, and thermal radiation are dispersed
into their constituent wavelengths. A bank of internal detectors
(B), each sensitive to a specific range of wavelengths, detects and
measures the energy for each spectral band and then, as an
electrical signal, they are converted to digital data and recorded for
subsequent computer processing.
• The IFOV (C) of the sensor and altitude of the platform determine
the ground resolution cell viewed (D), and thus the spatial resolution.
The angular field of view (E) is the sweep of the mirror, measured in
degrees, used to record a scan line, and determines the width of the
imaged swath (F).
• The angular field of view is the total view angle of the sensor and defines the swath;
the Instantaneous Field of View (IFOV) is the cone angle seen by a single detector.
Across-track scanners
• What is a cross-track scanner?
• Cross-track scanner uses “back and forth” motion of the fore-optics. It scans each
ground resolution cell one by one. Instantaneous Field Of View of instrument
determines pixel size. Across-track scanning was accomplished by an oscillating
mirror.
• Example
• The first five Landsats carried the MSS sensor which responded to Earth-reflected
sunlight in four spectral bands. Landsat 3 carried MSS sensor with an additional
band, designated band 8, that responded to thermal (heat) infrared radiation.
• Airborne scanners typically sweep large angles (between 90º and
120º), while satellites, because of their higher altitude need only to
sweep fairly small angles (10-20º) to cover a broad region.
2. Along-track scanners
• Along-track scanners also use the forward motion of the platform to
record successive scan lines and build up a two-dimensional image,
perpendicular to the flight direction. However, instead of a scanning
mirror, they use a linear array of detectors (A) located at the focal
plane of the image (B) formed by lens systems (C), which are "pushed"
along in the flight track direction (i.e. along track). These systems are
also referred to as pushbroom scanners, as the motion of the detector
array is analogous to the bristles of a broom being pushed along a floor.
• Each individual detector measures the energy for a single ground
resolution cell (D) and thus the size and IFOV of the detectors determines
the spatial resolution of the system. A separate linear array is required to
measure each spectral band or channel. For each scan line, the energy
detected by each detector of each linear array is sampled electronically
and digitally recorded.
Figure: along-track (pushbroom) scanner, showing the detector array at the focal
plane of the image behind the lens system, and the ground resolution cells.
• E.g. Landsat 8 is a multispectral sensor. It produces 11 bands, with the following wavelengths:
1.Coastal Aerosol : in band 1 (0.43-0.45 um)
2.Blue :in band 2 (0.45-0.51 um)
3.Green :in band 3 (0.53-0.59 um)
4.Red :in band 4 (0.64-0.67 um)
5.Near Infrared (NIR) :in band 5 (0.85-0.88 um)
6.Short-wave Infrared (SWIR 1): in band 6 (1.57-1.65 um)
7.Short-wave Infrared (SWIR 2) :in band 7 (2.11-2.29 um)
8.Panchromatic :in band 8 (0.50-0.68 um)
9.Cirrus :in band 9 (1.36-1.38 um)
10.Thermal Infrared (TIRS 1) :in band 10 (10.60-11.19 um)
11.Thermal Infrared (TIRS 2) :in band 11 (11.50-12.51 um)
• Each band has a spatial resolution of 30 meters
except for band 8, 10, and 11.
• While band 8 has a spatial resolution of 15 meters,
band 10 and 11 have a 100-meter pixel size.
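As a hedged example of using these bands, a standard vegetation index (NDVI) can be computed from band 5 (NIR) and band 4 (red); the arrays below are synthetic stand-ins for reflectance values that would normally be read from the two band files:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero

# Synthetic stand-ins for Landsat 8 band 5 (NIR) and band 4 (red):
nir = np.random.rand(100, 100).astype(np.float32)
red = np.random.rand(100, 100).astype(np.float32)
vi = ndvi(nir, red)   # values roughly in [-1, 1]; dense vegetation is high
```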
3. Hyperspectral Image (HSI)
• Hyperspectral imaging is a technique that analyzes a wide spectrum of
light instead of just assigning primary colors to each pixel.
• Hyperspectral remote sensing is the science of acquiring digital
imagery of earth materials in many narrow contiguous spectral
bands.
• Hyperspectral sensors measure earth materials and produce
complete spectral signatures with no wavelength omissions.
• The light striking each pixel is broken down into many
different spectral bands in order to provide more information
on what is imaged.
• Hyperspectral remote sensing involves dividing the visible and
infra-red spectrum into hundreds of narrow spectral parts, which
allows a very precise match of ground characteristics.
• The most commonly used regions are the visible (VIS), NIR and middle-infrared
(MIR).
• A hyperspectral image could have hundreds or thousands of
bands. In general, they don’t have descriptive channel names.
• Hyperspectral imaging measures continuous spectral bands, as
opposed to multiband imaging which measures spaced spectral
bands.
• Sensors that have hundreds to even thousands of bands are
considered to be hyperspectral.
Hyperspectral vs Multispectral Imaging
• Hyperspectral imaging systems acquire images in over one hundred contiguous
spectral bands. While multispectral imagery is useful to discriminate land surface
features and landscape patterns, hyperspectral imagery allows for identification and
characterization of materials. In addition to mapping distribution of materials,
assessment of individual pixels is often useful for detecting unique objects in the
scene.
• The high spectral resolution of a hyperspectral imager allows for detection,
identification and quantification of surface materials. Having a higher level of
spectral detail in hyperspectral images gives the better capability to see the unseen.
• The main difference between multispectral and hyperspectral is
the number of bands and how narrow the bands are.
Multispectral imagery generally refers to 3 to 10 bands. Each band
has a descriptive title. For example, the channels below include red,
green, blue, near-infrared, and short-wave infrared.
• Hyperspectral imagery consists of much narrower bands
(10-20 nm). A hyperspectral image could have hundreds
or thousands of bands. In general, they don’t have
descriptive channel names.
• The narrower the range of wavelengths for a given band, the finer the
spectral resolution.
• Hyperspectral imagery consists of much narrower bands (10-20nm).
• The goal of hyperspectral imaging is to obtain the spectrum for each
pixel in the image of a scene, with the purpose of:
• finding objects,
• identifying materials, or detecting processes.
Table 1. Some examples of hyperspectral systems
Sensor Wavelength range (nm) Band width (nm) Number of bands
AVIRIS 400-2500 10 224
TRWIS III 367-2328 5.9 335
HYDICE 400-2400 10 210
CASI 400-900 1.8 288
OKSI AVS 400-1000 10 61
MERIS 412-900 10,7.5,15,20 15
Hyperion 325-2500 10 242
• Application of Hyperspectral Image Analysis
1. Mineral targeting and mapping.
2. Detect soil properties like moisture, organic content, and salinity.
3. To identify Vegetation species, vegetation stress, plant canopy chemistry
4. To military target detection objectives.
5. Study of atmospheric parameters such as clouds, aerosol and Water vapour
for monitoring long term, atmospheric variations as a result of
environmental change. Study of cloud characteristics, i.e. structure and its
distribution.
6. Oceanography: Investigations of water quality, monitoring coastal
erosion.
7. Snow and Ice: Spatial distribution of snow cover, surface albedo and
snow water equivalent. estimation of snow properties-snow grain size,
snow depth and liquid water content.
8. Oil Spills: When oil spills into an area affected by wind, waves, and
tides, a rapid assessment of the damage can help to maximize the
cleanup efforts.
4. Microwave Radar Image
• The word RADAR is derived from the phrase RAdio Detection
And Ranging.
• Satellite-based synthetic aperture radar scans the earth's surface
by means of microwave radiation.
• In RADAR, radio waves are transmitted in to the atmosphere,
which scatters some of the power back to the receiver/sensor.
• It is a mapping process based on the use of an active sensor radar,
that transmits and receives wideband signals along a measurement
trajectory.
• It is possible to observe the Earth's surface even on cloudy days and
at night.
• It applies to electronic equipment designed for detecting and
tracking objects (targets) at considerable distances.
• Each pixel in the radar image represents the radar backscatter
for that area on the ground: brighter areas represent high
backscatter, darker areas represent low backscatter.
• Imaging radar is an application of radar which is used to create
two-dimensional images, typically of landscapes.
• Imaging radar provides its own illumination of an area on the
ground and takes a picture at radio wavelengths.
• Another important feature of radar remote sensing is its
penetration depth: the ability to penetrate loose
sediments and to penetrate through vegetation
is an important characteristic for geologic and tectonic
mapping.
• Applications of RADAR image includes:
• surface topography & coastal change;
• land use monitoring,
• agricultural monitoring,
• ice monitoring
• environmental monitoring
• weather radar- storm monitoring,
• wind shear warning
• medical microwave tomography
• through wall radar imaging
• 3-D measurements
5. Light Detection And Ranging (LIDAR)
• LIDAR is a remote sensing technology which uses much shorter
wavelength of electromagnetic spectrum.
• Light Detection And Ranging (LiDAR) is a laser-based remote sensing
technology.
• A technique that can measure the distance to and other properties of a
target by illuminating the target.
• LIDAR is a remote sensing method used to examine the surface of the
Earth.
Lidar (Light Detection and Ranging)
• Distance to the object is determined by recording the time
between the transmitted and backscattered pulses and
using the speed of light to calculate the distance traveled.
Lidars can determine atmospheric profiles of aerosols,
clouds, and other constituents of the atmosphere.
• LiDAR uses near-infrared light to image objects.
• When the laser light strikes an object, the light is reflected. A sensor
detects the reflected laser light and records the time from the laser
pulse to the received reflection. This value is converted into a
distance, and these measured distances are combined point by point
to produce a 3-D image. Each point is assigned a classification such
as ground, water, vegetation, man-made object, and others.
• LIDAR uses:
• Ultraviolet
• Visible and
• Infrared light to image or capture the target
• It can be used with wide range of targets including:
• Non-metallic objects
• rocks
• Chemical compounds
LiDAR tools offer a number of advantages over visual cameras:
• high accuracy
• small FOV for detailed mapping on a local scale
• large FOV for more complete relational maps
• all data are geo-referenced; a larger area is covered per unit time
• less impacted by shadows and steep terrain
• large dataset to manipulate
• no external light source required
• LiDAR, is used for measuring the exact distance of an
object on the earth’s surface.
• LiDAR uses a pulsed laser to calculate an object’s variable
distances from the earth surface.
• This technology is used in (GIS) to produce a digital
elevation model (DEM) or a digital terrain model (DTM)
for 3D mapping.
• LIDAR Operating Principle
• Emission of a laser pulse
• Record of the backscattered signal
• Distance measurement (time of travel × speed of light ÷ 2; see the sketch after this list)
• Retrieving plane position and altitude
• Computation of precise echo position
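A tiny sketch of the distance measurement step above; the division by two accounts for the pulse travelling to the target and back:

```python
SPEED_OF_LIGHT = 299_792_458.0              # metres per second

def lidar_range(two_way_travel_time_s):
    """Distance to the target from the round-trip travel time of the laser pulse."""
    return SPEED_OF_LIGHT * two_way_travel_time_s / 2.0

# A pulse that returns after 1 microsecond corresponds to roughly 150 m range.
print(lidar_range(1e-6))                    # -> about 149.9 m
```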
• LiDAR for drones matches perfectly with:
• Small areas to fly over
• Mapping under vegetation
• Hard-to-access zones
• Data needed in near real-time or frequently
• Accuracy range required between 2.5 and 10 cm
• LiDAR systems integrate 3 main components whether they
are mounted on automotive vehicles, aircraft or UAV:
• These 3 main components are:
• Laser scanner
• Navigation and positioning system
• Computing technology
1. Laser Scanner
• LiDAR systems pulse laser light from various mobile platforms
(automobiles, airplanes, drones, ...) through air and vegetation
(aerial laser) and even water (bathymetric laser).
• A scanner receives the light back (echoes), measuring distances
and angles.
• The choice of optic and scanner influences greatly the resolution
and the range in which you can operate the LiDAR system.
2. Navigation and positioning systems
• Whether a LiDAR sensor is mounted on aircraft, car or UAS
(unmanned aerial systems), it is crucial to determine the absolute
position and orientation of the sensor to make sure data captured are
useable data.
• Global Navigation Satellite Systems provide accurate geographical
information regarding the position of the sensor (latitude, longitude,
height) and the precise orientation of the sensor.
3. Computing technology
• In order to make the most of the data, computation is
required to make the LiDAR system work by defining
precise echo positions.
• It is required for on-flight data visualization or data post-
processing as well to increase precision and accuracy
delivered in the 3D mapping point cloud.
• There are two basic types of lidar: airborne and
terrestrial.
1. Airborne
• With airborne lidar, the system is installed in either a
fixed-wing aircraft or helicopter.
• The infrared laser light is emitted toward the ground and
returned to the moving airborne lidar sensor.
• There are two types of airborne sensors: topographic and
bathymetric.
i. Topographic LiDAR
• Topographic LiDAR can be used to derive surface models
for use in many applications, such as forestry, hydrology,
geomorphology, urban planning, landscape ecology,
coastal engineering, survey assessments, and volumetric
calculations.
ii. Bathymetric Lidar
• Bathymetric lidar is a type of airborne acquisition that is water
penetrating.
• Most bathymetric lidar systems collect elevation and water depth
simultaneously, which provides an airborne lidar survey of the
land-water interface.
• Bathymetric information is also used to locate objects on the
ocean floor.
2. Terrestrial lidar
• Terrestrial lidar collects very dense and highly accurate
points, which allows precise identification of objects.
• These dense point clouds can be used to manage
facilities, conduct highway and rail surveys, and even
create 3D city models for exterior and interior spaces.
• There are two main types of terrestrial lidar: mobile and
static.
• In the case of mobile acquisition, the lidar system is
mounted on a moving vehicle.
• In the case of static acquisition, the lidar system is
typically mounted on a tripod or stationary device.
LiDAR applications
• Power Utilities: power line survey to detect line sagging issues or for planning
activity
• Mining: surface/volume calculation to optimize mine operations
• Civil engineering: mapping to help leveling, planning and infrastructure
optimization (roads, railways, bridges, pipelines, golf courses) or renovating after
natural disasters, beach erosion survey to build emergency plan
• Archaeology: mapping through the forest canopy to speed up discoveries
• Forestry: mapping forests to optimize activities or help tree counting
• Environmental research: measuring growth speed, disease spreading
• Meteorology (wind speed, atmospheric condition, clouds, aerosols)
6. High- and coarse-resolution satellite imagery
• Resolution refers to the detail that can be represented in an
image.
• The ability of a remote sensing sensor to detect details is
referred to as spatial resolution.
• Higher resolution means that pixel sizes are smaller,
providing more detail.
Image Fusion
• Digital aerial cameras and many high-resolution spaceborne cameras record
multispectral and panchromatic data simultaneously. Most of these cameras use,
for the panchromatic channel, a spatial resolution about a factor of four higher
than that of the RGB and NIR channels. Color images are easier to
interpret than grey-scale images, and higher-resolution images are easier to interpret
than lower-resolution ones.
• Image fusion/pan-sharpening is a technique which combines images from the
multispectral channels with the higher-resolution panchromatic image. More generally, it is
about combining RS data from different sources; a simple sketch follows below.
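A hedged sketch of one simple pan-sharpening approach (a Brovey-style ratio fusion, chosen here for illustration; it is not necessarily the algorithm used by any particular software). The multispectral bands are assumed to have already been resampled to the panchromatic pixel grid:

```python
import numpy as np

def brovey_pansharpen(red, green, blue, pan):
    """Brovey-style fusion: scale each band by pan / mean(multispectral bands)."""
    red, green, blue, pan = (b.astype(np.float32) for b in (red, green, blue, pan))
    intensity = (red + green + blue) / 3.0 + 1e-9   # avoid division by zero
    ratio = pan / intensity                         # high-resolution detail factor
    return np.dstack([red * ratio, green * ratio, blue * ratio])

# Synthetic example: 100 x 100 multispectral bands already resampled to the pan grid.
red, green, blue, pan = (np.random.rand(100, 100) for _ in range(4))
sharpened = brovey_pansharpen(red, green, blue, pan)   # shape (100, 100, 3)
```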
Several types of available high resolution satellite images
What is low-resolution satellite imagery?
• These images do not allow you to distinguish tiny details, yet they
cover larger ground areas.
• In an image with lower resolution, many more different objects
are included in one pixel.
• Landsat and Sentinel-2 are among the low- and medium-resolution
imagery sources (roughly 30-60 m per pixel and 10-60 m per pixel, respectively).
• The bigger a pixel, the more objects on the surface of the earth are
captured in it, and the lower the spatial resolution of the raster image.
Low and medium resolution satellite images
Easily accessible and affordable
Various spectral bands for remote sensing
Historical imagery that dates as far back as decades ago
But
low detail of ground coverage/targets
High resolution satellite images
High level of detail
On demand coverage of any area at any time
But
Expensive
Small ground coverage
Hard to get
7. UAV/Drone image
• UAV stands for Unmanned Aerial Vehicle, something that
can fly without a pilot onboard.
• The terms drone and UAV mean the same thing and can
be used interchangeably.
• UAV/drone – is the actual aircraft being piloted/operated
by remote control or onboard computers.
Example of Drones
• A UAV/drone image is imagery acquired by drones, which is
essential for creating geospatial products like orthomosaics,
digital terrain models, or 3D textured models.
• An unmanned aerial vehicle system has two parts, the drone itself
and the control system.
• Unmanned aerial vehicle (UAV) technology bridges the
gap among space borne, airborne, and ground-based
remote sensing data.
• Its characteristics of light weight and low price enable
affordable observations with very high spatial and
temporal resolutions.
• Drones are excellent for taking high-quality aerial photographs and
video, and collecting vast amounts of imaging data.
• Uses of drone/UAV
— Mapping of Landslide Affected Area:
— Infested Crop Damage Assessment:
— 3-Dimensional Terrain Model Construction:
— Archaeological surveys
— Agriculture
— Wildlife monitoring
— Weather forecasting
— Military
Data characteristics
• The quality of remote sensing data consists of its spatial, spectral,
radiometric and temporal resolutions.
1. Spatial resolution: the size of a pixel that is recorded in a raster
image. It refers to the ground area represented by each pixel.
2. Spectral resolution: the wavelength ranges of the different
bands recorded – usually, this is related to the number of
bands. It refers to the wavelengths that the sensor is
sensitive to.
1. Spatial Resolution
• For some remote sensing instruments, the distance between the target
being imaged and the platform, plays a large role in determining the
detail of information obtained and the total area imaged by the sensor.
• It refers to the size of the smallest possible object that can be detected.
• It depends on the Instantaneous Field Of View (IFOV) and the satellite's
viewing height (orbital altitude).
• Sensors onboard platforms far away from their targets, typically view a
larger area, but cannot provide great detail.
• Spatial resolution refers to the size of the smallest possible object that can
be detected.
• Spatial resolution is a measurement of how detailed objects are in an
image based on pixels.
• It tells the pixel size on the ground surface.
• The detail visible in an image is dependent on the spatial resolution of
the sensor.
• High spatial resolution means more detail and a smaller grid cell size.
Whereas, lower spatial resolution means less detail and larger pixel size.
Figure: the ground resolution cell is determined by the IFOV and the distance
from the sensor to the ground.
Pixel Size of the Image
• Most remote sensing images are composed of a matrix of picture
elements, or pixels. Pixels are the smallest units of an image. Image
pixels are normally square and represent a certain area on an image.
• Images where only large features are visible are said to have coarse
or low resolution. In fine or high resolution images, small objects
can be detected. Commercial satellites provide imagery with
resolutions varying from a few meters to several kilometers.
Generally speaking, the finer the resolution, the less total ground
area can be seen.
2. Spectral Resolution
• Spectral resolution describes the ability of a sensor to define fine wavelength
intervals/ranges.
• spectral resolution is the amount of spectral detail in a band based on the number
and width of spectral bands.
• High spectral resolution means its bands are more narrow. Whereas low spectral
resolution has broader bands covering more of the spectrum.
• The finer the spectral resolution, the narrower the wavelength range for a
particular channel or band. Many remote sensing systems record energy over
several separate wavelength ranges at various spectral resolutions. These are
referred to as multi-spectral sensors.
Spectral resolution
• Advanced multi-spectral sensors called hyperspectral sensors,
detect hundreds of very narrow spectral bands throughout the
visible, near-infrared, and mid-infrared portions of the
electromagnetic spectrum. Their very high spectral resolution
facilitates fine discrimination between different targets based on
their spectral response in each of the narrow bands.
• Different classes of features and details in an image can often be
distinguished by comparing their responses over distinct wavelength
ranges.
Hyperspectral resolution
3. Radiometric Resolution
• While the arrangement of pixels describes the spatial structure of an
image, the radiometric characteristics describe the actual information
content in an image.
• It describes the ability of sensor to discriminate very slight differences in
energy
• The number of brightness levels depends upon the number of bits used.
• The finer the radiometric resolution of a sensor, the more sensitive it is to
detecting small differences in reflected or emitted energy.
3. Radiometric resolution: The number of different intensities of radiation the sensor
is able to distinguish.
• Refers to the energy levels that are measured by the sensor.
• Typically, this ranges from 8 to 14 bits, corresponding to 256 levels of gray
scale and up to 16,384 intensities or shades of color, in each band.
• The maximum number of brightness levels available depends on the number of
bits used in representing the energy recorded. Thus, if a sensor used 8 bits to
record the data, there would be 2^8 = 256 digital values available, ranging from 0 to
255.
• However, if only 4 bits were used, then only 2^4 = 16 values ranging from 0 to 15
would be available. Thus, the radiometric resolution would be much less.
• By comparing a 2-bit image with an 8-bit image, we can see that there is a large
difference in the level of detail discernible depending on their radiometric
resolutions.
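A small sketch of what reduced radiometric resolution looks like in practice: requantising an 8-bit band (256 levels) to fewer bits by dropping the least significant bits (an illustration only, not a sensor calibration procedure):

```python
import numpy as np

def requantize(band_8bit, bits):
    """Reduce an 8-bit band to 2**bits grey levels (coarser radiometric resolution)."""
    shift = 8 - bits
    levels = band_8bit >> shift                 # e.g. values 0-15 for bits = 4
    return (levels << shift).astype(np.uint8)   # scale back up for display

band = np.arange(0, 256, dtype=np.uint8).reshape(16, 16)
coarse4 = requantize(band, 4)   # 2**4 = 16 distinct grey levels
coarse2 = requantize(band, 2)   # 2**2 = 4 distinct grey levels
print(len(np.unique(band)), len(np.unique(coarse4)), len(np.unique(coarse2)))
# -> 256 16 4
```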
4. Temporal Resolution
• In addition to spatial, spectral, and radiometric resolution, the
concept of temporal resolution is also important to consider in a
remote sensing system.
• It refers to how often it records imagery of a particular area, which
means the frequency of repetitive coverage.
• The revisit period, refers to the length of time it takes for a satellite
to complete one entire orbit cycle. The revisit period of a satellite
sensor is usually several days.
• The actual temporal resolution of a sensor depends on a variety of
factors, including the:
‾ satellite/sensor capabilities,
‾ the swath overlap, and
‾ latitude.
• Some specific uses of remotely sensed images include:
• Large forest fires can be mapped from space, allowing rangers to see a much
larger area than from the ground.
• Tracking clouds to help predict the weather or watch erupting volcanoes, dust
storms.
• Tracking the growth of a city and changes in farmland etc. over several
decades.
• Discovery and mapping of the rugged topography of the ocean floor (e.g., huge
mountain ranges, deep canyons, and the “magnetic striping” on the ocean
floor).
Unit 3.
Digital Image Restoration and
Registration
Scaling of digital images
• Scaling is a preliminary step in various image processes such as image matching,
feature extraction, image segmentation, image compression, image fusion,
image rectification and image classification.
• Scaling-up is an aggregation process. The commonly used solution is the
pyramid structure. That is to aggregate n × n (e.g. 2 × 2) pixels into a new
one. The brightness of the new image could be the simple average of the n
× n pixels or a function of these n × n pixels. Wavelets have also been
employed to decompose an image into a pyramid structure.
• Scaling is a mature process in digital image processing and there is not
much new development.
• However,:
‾ to maintain the sharpness of the new image,
‾ to make multiresolution image processes (e.g. registration between two
images) more reliable and
‾ to minimize aggregation effects in multi-scale image processing (e.g.
rectification) are still not easy tasks.
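A minimal sketch of the 2 × 2 pyramid aggregation described above, using simple block averaging (other aggregation functions could be substituted):

```python
import numpy as np

def aggregate_2x2(image):
    """Move one pyramid level up by averaging each 2 x 2 block of pixels."""
    rows, cols = image.shape
    rows, cols = rows - rows % 2, cols - cols % 2          # trim odd edges
    blocks = image[:rows, :cols].reshape(rows // 2, 2, cols // 2, 2)
    return blocks.mean(axis=(1, 3))

level0 = np.random.rand(101, 64)      # synthetic base image
level1 = aggregate_2x2(level0)        # shape (50, 32)
level2 = aggregate_2x2(level1)        # shape (25, 16)
```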
3.1. Image restoration and
registration techniques
•Image rectification and restoration procedures
are often termed preprocessing operations because
they normally precede further manipulation and
analysis of the image data to extract specific
information.
• The purpose of image restoration is to "compensate for"
or "undo" defects which degrade an image.
• Degradation comes in many forms such as:
• motion blur,
• noise, and
• camera mis-focus.
Noise
•Image Restoration:
•A process which aims to invert known
degradation operations applied to images.
•Remove effects of sensing environment
•Remove distortion from image, to go back to
the “original” objective process
• ------Image rectification and restoration-------
• It involves the initial processing of raw image data to:
• correct geometric distortions,
• calibrate the data radiometrically, and
• eliminate noise present in the data.
• Thus, the nature of any particular image restoration process
is highly dependent upon the characteristics of the sensor used to
acquire the image data.
Image restoration and registration methods
1. Radiometric correction method,
2. Atmospheric correction method,
3. Geometric correction methods.
1. Radiometric correction method
• Radiometric errors are caused by detector imbalance in the recorded electromagnetic
energy (EME) and by atmospheric deficiencies.
• It includes correcting the data for:
• sensor irregularities
• unwanted sensor noise and atmospheric noise,
• Radiometric corrections are transformations applied to the data in order to
remove errors.
• They are done to improve the visual appearance of the image.
• Eg. of radiometric error
• Striping ( thin line) noise is an anomaly commonly seen in remote-sensing
imagery and other geospatial data sets in raster formats.
• Any image in which individual detectors appear lighter or darker than their
neighboring detectors is said to have striping.
• Random noise
• Random noise appears as fluctuations of the recorded values around the actual
intensity of the image.
• In a land acquisition, random noise can be created by the:
• acquisition truck,
• vehicles,
• wind, electrical power lines, etc
• Scan line drop-out: Dropped lines occur when there are systems errors
which result in missing data along a scan line.
•Radiance measured by a sensor at a given point is influenced by:
•Changes in illumination
•Atmospheric conditions (haze, clouds,…)
•Angle of view
•Objects response characteristics
•Elevation of the sun (seasonal change in sun angle)
•Earth-sun distance variation
• Radiometric correction includes:
– applying sensor calibration
– replacing missing scan lines
– de-striping
– applying atmospheric correction
Random noise correction
Correction for periodic line striping (de-striping)
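A hedged sketch of one simple correction for a dropped scan line: the missing row is replaced by the average of its neighbouring rows (real de-striping procedures are usually more elaborate, e.g. histogram matching per detector):

```python
import numpy as np

def fix_dropped_line(image, bad_row):
    """Replace a dropped scan line with the mean of the adjacent lines."""
    fixed = image.astype(np.float32).copy()
    above = fixed[bad_row - 1] if bad_row > 0 else fixed[bad_row + 1]
    below = fixed[bad_row + 1] if bad_row < image.shape[0] - 1 else fixed[bad_row - 1]
    fixed[bad_row] = (above + below) / 2.0
    return fixed.astype(image.dtype)

img = np.random.randint(0, 256, (5, 8), dtype=np.uint8)
img[2] = 0                                  # simulate a scan line drop-out
repaired = fix_dropped_line(img, bad_row=2)
```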
2. Atmospheric correction method,
• The value recorded at any pixel location on the remotely sensed
image is not a record of the true ground-leaving radiance at that
point.
• The signal is weakened due to absorption and scattering.
• The atmosphere has effect on the measured brightness value of a
pixel.
• Atmospheric path radiance introduces haze in the imagery, thereby
decreasing the contrast of the data.
• Atmospheric correction is the process of removing the effects of the
atmosphere on the reflectance values of images taken by sensors.
• The objective of atmospheric correction is to determine true surface
reflectance values by removing atmospheric effects from images.
• Atmospheric correction removes the scattering and absorption effects
from the atmosphere to obtain the surface reflectance properties.
• Atmospheric correction can be done by:
• dark object subtraction,
• radiative transfer models, and atmospheric modeling.
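A minimal sketch of dark object subtraction, the simplest of the methods listed above: the darkest observed value in each band is assumed to be pure atmospheric path radiance (haze) and is subtracted from the whole band:

```python
import numpy as np

def dark_object_subtraction(band, percentile=0.1):
    """Subtract the 'dark object' value (a very low percentile) from a band."""
    band = band.astype(np.float32)
    dark_value = np.percentile(band, percentile)   # near-minimum, robust to noise
    return np.clip(band - dark_value, 0, None)

# Synthetic hazy band: true DNs plus a constant atmospheric offset of ~20.
hazy_band = np.random.randint(20, 255, (100, 100)).astype(np.uint8)
corrected = dark_object_subtraction(hazy_band)     # haze offset removed
```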
Atmospheric correction
• Haze (fog, and other atmospheric phenomena) is a main
degradation of outdoor images, weakening both colors
and contrasts.
• Haze removal algorithms are used to improve the
visual quality of an image, which is affected by light
scattering through haze particles.
3. Geometric correction methods
• The transformation of a remotely sensed image into a map
with the correct scale and projection properties is called
geometric correction.
• It include correcting for geometric distortions due to
sensor-Earth geometry variations, and conversion of the
data to real world coordinates (e.g. latitude and longitude)
on the Earth's surface.
• Geometric Correction
• Raw digital images usually contain geometric distortions so that
they cannot be used directly as a map base without subsequent
processing.
• The sources of these distortions range from:
o variations in the altitude, attitude and velocity of the sensor
platform, to factors such as earth curvature, atmospheric
refraction, relief displacement, and nonlinearities in the sweep
of a sensor's IFOV
• Geometric errors may be due to a variety of factors, including one
or more of the following, to name only a few:
• the perspective of the sensor optics,
• the motion of the scanning system,
• the motion and (in)stability of the platform,
• the platform altitude, attitude, and velocity,
• the terrain relief, and
• the curvature and rotation of the Earth.
……Geometric Correction……
What is Geometric correction?
• The geometric registration process involves identifying the
image coordinates of several clearly visible points,
called ground control points, in the distorted image and
matching them to their true positions in ground coordinates
(e.g. latitude, longitude).
• Geometric correction is undertaken to remove geometric distortions
from a distorted image, and is achieved by establishing the relationship
between the image coordinate system and the geographic coordinate
system using:
• calibration data of the sensor,
• measured data of position and attitude,
• ground control points (GCPs)
Geometric Restoration Methods
• Georeferencing digital image ,
• Image-to-Map Rectification ,
• Image-to-Image Registration ,
• Spatial Interpolation Using Coordinate Transformations,
• Relief displacement,
• geometric correction with ground control points (GCP),
• Geocoding( Resampling and Interpolation)
1. Georeferencing digital image
• It is a method to define its existence in physical space.
• This process is completed by selecting pixels in the digital image and
assigning them geographic coordinates.
• A method of assigning ground location value to image.
• This involves the calculation of the appropriate transformation from image
to ground coordinates.
• It is used when establishing the relation between raster or vector data and
coordinates and when determining the spatial location of other
geographical features.
2. Image-to-map registration refers to the transformation of one image
coordinate system to a map coordinate system resulting from a
particular map projection.
• It is the method of rectification in which the geometry of imagery is
made planimetric. The image-to-map rectification process normally
involves selecting GCP image pixel coordinates (row and column)
with their map coordinate counterparts (e.g., meters northing and
easting in a Universal Transverse Mercator map projection).
Figure: image-to-map rectification; on the left, satellite imagery, and on the
right, a rectified toposheet.
• The following types of maps are normally used for rectifying the sensed imagery:
1. Hard-copy planimetric maps
2. Digital planimetric maps
3. Digital orthophoto quads, which are already geometrically rectified
4. Global positioning system (GPS) points
3. Image-to-image registration refers to transforming one image coordinate system into another image coordinate system.
• This technique includes the process by which two images of a common area are positioned coincident with respect to one another, so that corresponding elements of the same ground area appear in the same place on the registered images.
4. Spatial Interpolation Using Coordinate Transformations
• A spatial transformation of an image is a geometric
transformation of the image coordinate system.
• A coordinate transformation brings spatial data into an Earth
based map coordinate system so that each data layer aligns
with every other data layer
5. Relief displacement
• Relief displacement is the radial distance between where an object appears in an image and where it actually should be according to a planimetric coordinate system.
• The images of ground positions are shifted or displaced due to terrain relief in the central projection of an aerial photograph.
6. Geometric correction with ground control points (GCP)
• Ground control points are large marked targets on the ground,
spaced strategically throughout your area of interest.
• GCPs are defined as points on the surface of the Earth of known location.
• GCPs help to ensure that the latitude and longitude of any point on
your map corresponds accurately with actual GPS coordinates.
A number of GCPs are defined on each of the images you want to correct. The best GCPs are:
– road intersections,
– airport runways,
– edges of dams or buildings,
– corners of agricultural fields, and
– other permanent features,
which are easily identifiable both in the image and on the ground. A minimal sketch of fitting an image-to-map transformation from GCPs is given below.
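
As an illustration only (not from the original material), the following Python/NumPy sketch fits a first-order (affine) polynomial from image (column, row) coordinates to map (easting, northing) coordinates using a handful of hypothetical GCPs, and reports the RMSE of the fit; dedicated remote sensing software performs an equivalent step internally.

import numpy as np

# Hypothetical GCPs: image coordinates (col, row) and their map coordinates (E, N).
img_xy = np.array([[120, 45], [830, 60], [150, 700], [860, 720], [500, 380]], dtype=float)
map_en = np.array([[452310.0, 998750.0], [470020.0, 998320.0],
                   [452880.0, 982400.0], [470560.0, 981950.0], [461500.0, 990300.0]])

# First-order (affine) polynomial: E = a0 + a1*x + a2*y and N = b0 + b1*x + b2*y.
A = np.column_stack([np.ones(len(img_xy)), img_xy])        # design matrix [1, x, y]
coef_E, *_ = np.linalg.lstsq(A, map_en[:, 0], rcond=None)  # solve for a0, a1, a2
coef_N, *_ = np.linalg.lstsq(A, map_en[:, 1], rcond=None)  # solve for b0, b1, b2

# Root-mean-square error of the fit at the GCPs (a common quality check).
pred = np.column_stack([A @ coef_E, A @ coef_N])
rmse = np.sqrt(np.mean(np.sum((pred - map_en) ** 2, axis=1)))

# Transform an arbitrary pixel (x, y) to map coordinates.
x, y = 400.0, 250.0
easting = coef_E[0] + coef_E[1] * x + coef_E[2] * y
northing = coef_N[0] + coef_N[1] * x + coef_N[2] * y
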
7. Geocoding
• This step involves resampling the image to obtain a new image in which all pixels are correctly positioned within the terrain (map) coordinate system.
• Resampling is used to determine the digital values to place in the new pixel locations of the corrected output image.
• Resampling
• The resampling process calculates the new pixel values from
the original digital pixel values in the uncorrected image.
• There are three common methods for resampling:
1. Nearest Neighbour,
2. Bilinear Interpolation, and
3. Cubic Convolution.
• Nearest Neighbour
• Nearest neighbour resampling uses the digital value from the pixel
in the original image which is nearest to the new pixel location in
the corrected image.
• This is the simplest method and does not alter the original values,
but may result in some pixel values being duplicated while others
are lost.
Bi-linear interpolation
• Bilinear interpolation resampling takes a weighted average of four
pixels in the original image nearest to the new pixel location.
• The averaging process alters the original pixel values and creates
entirely new digital values in the output image.
• Cubic Convolution
• Cubic convolution resampling goes even further and calculates a distance-weighted average of a block of sixteen pixels from the original image which surround the new output pixel location.
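
As a rough illustration (not from the source), the sketch below applies the three options to a synthetic band with scipy.ndimage.zoom: order=0 is nearest neighbour, order=1 is bilinear, and order=3 is a cubic spline, which is closely related to (though not identical with) cubic convolution.

import numpy as np
from scipy import ndimage

img = np.random.randint(0, 256, (100, 100)).astype(np.float32)  # synthetic 8-bit band

nearest = ndimage.zoom(img, 2.0, order=0)   # nearest neighbour: no new DN values are created
bilinear = ndimage.zoom(img, 2.0, order=1)  # bilinear: weighted average of the 4 nearest pixels
cubic = ndimage.zoom(img, 2.0, order=3)     # cubic spline: smoother, uses a larger neighbourhood

# Nearest neighbour only reuses original DNs (some may be duplicated, others lost).
print(np.isin(np.unique(nearest), np.unique(img)).all())
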
Figure: Sample geometric distortions.
3.2. Image processing
Digital Image Processing
• In order to process remote sensing imagery digitally, the first requirement
is that the data must be recorded and made available in a digital form,
suitable for storage on a computer tape or disk.
• The other requirement for digital image processing is a computer system,
sometimes referred to as an image analysis system, with the appropriate
hardware and software to process the data.
• Several commercially available software systems have been developed
specifically for remote sensing image processing and analysis.
Digital image processing refers to processing digital
images by means of a digital computer.
Digital Image Processing is manipulation of digital
data with the help of the computer hardware and
software to produce digital maps in which specific
information has been extracted and highlighted.
• Digital image processing focuses on two major tasks
Improvement of pictorial information for human
interpretation
Processing of image data for storage, transmission
and representation for autonomous machine
perception
• Digital processing and analysis is carried out automatically by identifying targets and extracting information without manual intervention by a human interpreter. Often, it is done to supplement and assist the human analyst.
• Manual interpretation requires little specialized equipment,
while digital analysis requires specialized and expensive
equipment.
• Manual interpretation is often limited to analyzing only a
single image at a time due to the difficulty in performing
visual interpretation with multiple images.
• The computer environment is more amenable to handle
complex images of many channels or from several dates.
• Digital image processing techniques are used
extensively to manipulate satellite imagery for
• Terrain classification and
• Meteorology
Remotely sensed raw data generally contain errors and deficiencies introduced by the imaging sensor.
The correction of these deficiencies and the removal of errors in the data through appropriate methods are termed pre-processing.
Raw remotely sensed image data therefore contain faults, and correction is required prior to further image processing.
Image Processing
• It is enhancing an image or extracting information from an image.
• It is analyzing and manipulating images with a computer for information extraction.
• Digital image processing is the task of processing and analyzing digital data using image processing algorithms.
• In order to function smoothly, a digital image analysis system must
encompass a few essential components in hardware, software, the
operating system, and peripheral devices. Featured prominently
among various hardware components is the computer, which, among
other things, is made up of a central processing unit, a monitor, and a
keyboard. As the heart of the system, the central processing unit
determines the speed of computation.
• Digital image processing is a branch in which both the input and output of a
process are images.
Image processing generally involves three steps:
• Import an image with an optical scanner or directly through digital photography.
• Manipulate or analyze the image in some way. This stage can include image enhancement, or the image may be analyzed to find patterns that are not visible to the human eye.
• Output the result. The result might be the image altered in some way, or it might be a report based on analysis of the image.
Advantages of Digital Image Processing
• Digital image processing has a number of advantages
over the conventional visual interpretation of remote
sensing imagery, such as increased efficiency and
reliability, and marked decrease in costs.
Efficiency
• Owing to the improvement in computing capability, a huge amount of
data can be processed quickly and efficiently. A task that used to take
days or even months for a human interpreter to complete can be
finished by the machine in a matter of seconds. This process is sped up further if the processing is set up as a routine.
• Computer-based processing is even more advantageous than visual
interpretation for multiple bands of satellite data.
Flexibility
• Digital analysis of images offers high flexibility. The same processing can
be carried out repeatedly using different parameters to explore the effect of
alternative settings. If a classification is not satisfactory, it can be repeated
with different algorithms or with updated inputs in a new trial. This process
can continue until the results are satisfactory. Such flexibility makes it
possible to produce results not only from satellite data that are recorded at
one time only, but also from data that are obtained at multiple times or even
from different sensors.
Reliability
• Unlike the human interpreter, the computer’s performance in an
image analysis is not affected by the working conditions and the
duration of analysis. In contrast, the results obtained by a human
interpreter are likely to deteriorate owing to mental fatigue after the
user has been working for a long time, as the interpretation process is
highly demanding mentally. By comparison, the computer can
produce the same results with the same input no matter who is
performing the analysis.
Portability
• As digital data are widely used in the geoinformatics community,
the results obtained from digital analysis of remote sensing data
are seldom an end product in themselves. Instead, they are likely
to become a component in a vast database. Digital analysis means
that all processed results are available in the digital format.
• Digital results can be shared readily with other users who
are working in a different, but related, project.
• These results are fully compatible with other existent data that
have been acquired and stored in the digital format already.
• The results of digital analysis can be easily exported to a GIS for
further analysis, such as spatial modeling, land cover change
detection, and studying the relationship between land cover
change and socioeconomic factors (e.g., population growth).
Disadvantages of Digital Image Processing
• Digital image analysis has four major disadvantages, the critical ones
being the initial high costs in setting up the system and limited
classification accuracy.
• High Setup Costs
• Limited Accuracy
• Complexity
• Limited Choices: all image processing systems are tailored for a certain set of routine applications.
Purpose of Image Processing
The purpose of image processing is divided into 5 groups
1. Visualization - Observe the objects that are not visible
2. Image sharpening and restoration - To create a better image.
3. Image retrieval - Seek for the image of interest.
4. Image Recognition – Distinguish objects in an image.
5. Measurement of pattern – Measures various objects in an image.
Digital image processing Functions
• Most of the common image processing functions available in
image analysis systems can be categorized into the following 4
categories:
1.Preprocessing (Image rectification and restoration)
2.Image Enhancement
3.Image Classification and Analysis
4.Data Merging and GIS Interpretation
1. Image Rectification and Restoration (or Preprocessing)
o These are corrections needed for distortions in the raw data; radiometric and geometric corrections fall under this category.
Pre-processing is an operation which takes place before further manipulation and analysis of the image data to extract specific information.
These operations aim to correct distorted or degraded image data to create a correct representation of the original scene.
This process corrects the data for sensor irregularities by removing unwanted sensor distortion or atmospheric noise.
2. Image Enhancement
o This is used to improve the appearance of imagery and to assist visual interpretation and analysis. It involves techniques for increasing the visual distinction between features by improving the tone of various features in a scene.
3. Image Classification
• The objective of classification is to replace visual analysis of the image data
with quantitative techniques for automating identification of features in a scene
4. Data Merging and GIS Interpretation: These procedures are used to
combine image data for a given geographic area with other geographically
referenced data sets for the same area.
Generally, Image Processing Includes
• Image quality and statistical evaluation
• Radiometric correction
• Geometric correction
• Image enhancement and sharpening
• Image classification
• Pixel based
• Object-oriented based
• Accuracy assessment of classification
• Post-classification and GIS
• Change detection
Why Image Processing?
For Human Perception
To make images more beautiful or understandable
Automatic Perception of Image
We call it Machine Vision, Computer Vision, Machine
Perception, Computer Recognition
For Storage and Transmission
Smaller, faster, more effective
For New Image Generation (New trends)
Fundamental steps in Digital Image Processing
Image acquisition
Image enhancement
Image Restoration
Color Image Processing
Image Compression
Image Segmentation
Representation and description
Recognition
Unit 4
Image Enhancement
4.1. Image Enhancement
• Image enhancement algorithms are commonly applied
to remotely sensed data to improve the appearance of image and a
new enhanced image is produced.
• Image enhancement is the procedure of improving the quality and information content of the original data before further processing.
• Enhancements are used to make the image easier for visual interpretation and understanding of the imagery.
• Image enhancement is the modification of an image to alter its impact on the viewer.
• The enhanced image is generally easier to interpret than the original image.
Examples: Image Enhancement
• One of the most common uses of DIP techniques: improve
quality, remove noise etc
Examples: Medicine
• Take slice from MRI scan of canine heart, and find boundaries
between types of tissue
• Image with gray levels representing tissue density
• Use a suitable filter to highlight edges
Figure: Original MRI image of a dog heart (left) and the edge detection image (right).
• The enhancement process does not increase the inherent
information content in the data. But it does increase the
dynamic range of the chosen features so that they can be
detected easily.
• Image enhancement is used to improve the quality of an
image for visual perception of human beings. It is also used
for low level vision applications.
• Image enhancement refers to sharpening of image
features such as:
• edges,
• boundaries, or
• contrast to make a graphic display more useful for
analysis.
• Image enhancement refers to the process of highlighting certain
information of an image, as well as weakening or removing any
unnecessary information according to specific needs.
• For example:
• eliminating noise,
• sharpening or brightening an image,
• revealing blurred details, and
• adjusting levels to highlight features of an image.
• The aim of image enhancement is:
• to improve the interpretability or perception of information in
images for human viewers, or to provide `better' input for other
automated image processing techniques.
• Image enhancement techniques can be based on:
• Spatial domain techniques, which operate directly on pixels
• Frequency domain techniques, which operate on the Fourier
transform of an image.
•Generally, Enhancement is employed to:
•emphasize,
•sharpen and
•smooth image features for display and analysis
• Image Magnification
• The process of resizing an image by assuming that there is a
known relationship between the original image and the high-
resolution result.
• It is a process which virtually increases image resolution in
order to highlight implicit information present in the original
image.
• Digital image magnification is often referred to as zooming. This technique is most commonly employed for two purposes:
• to improve the scale of the image for enhanced visual interpretation, or
• to match the scale of another image.
• It can be looked upon as a scale transformation.
• Types of Image Enhancement Techniques:
• The aim of image enhancement is to improve the interpretability
or perception of information in images for human viewers, or
• To provide `better' input for other automated image processing
techniques.
• Although corrections for radiometric errors, atmospheric influences, and sensor characteristics may have been applied, the image may still not be optimized for visual interpretation.
• Image enhancement is therefore designed to improve the usefulness of image data for various applications.
• Basic Image enhancement methods are:
1. Contrast enhancement
2. Density slicing
3. Frequency filtering/ spatial enhancement
4. Band rationing/spectral enhancement
– Contrast Enhancement - maximizes the performance
of the image for visual display.
– Spatial Enhancement - increases or decreases the level
of spatial detail in the image
– Spectral Enhancements - makes use of the spectral
characteristics of different physical features to highlight
specific features
1. Contrast Enhancement
• Stretching is performed by linear transformation expanding the original
range of gray level.
• Contrast enhancement involves increasing the contrast between targets
and their backgrounds.
• Generally, the “contrast” term refers to the separation of dark and
bright areas present in an image.
• In raw imagery, the useful data often populate only a small portion of the available range of digital numbers (8 bits, or 256 levels).
• It stretches spectral reflectance values
• Contrast enhancement technique plays a vital role in image
processing to bring out the information that exists within low
dynamic range of that gray level image.
• To improve the quality of an image, it is required to perform operations such as contrast enhancement and the reduction or removal of noise.
• The key to understanding contrast enhancement is to
understand the concept of an image histogram.
•A histogram is a graphical representation of the brightness values that
comprise an image.
•The brightness values (i.e. 0-255) are displayed along the x-axis of
the graph.
•The frequency of occurrence of each of these values in the image is
shown on the y-axis.
• Histogram shows the statistical frequency of data distribution
in a dataset.
• In the case of remote sensing, the data distribution is the
frequency of the pixels in the range of 0 to 255, which is the range of the 8-bit numbers used to store image information on computers.
• This histogram is a graph showing the number of pixels in
an image at each different intensity value found in that image.
Techniques of contrast enhancement
•Linear contrast enhancement
•Histogram-equalized stretch
a. Linear contrast stretch
• It is the simplest type of enhancement technique.
• This involves identifying lower and upper bounds from the
histogram and applying a transformation to stretch this range to fill the full available range.
Figure: Image and histogram before and after the stretch.
------------Linear contrast stretch-------------
• This method enhances the contrast in the image with
light toned areas appearing lighter and dark areas
appearing darker, making visual interpretation much
easier.
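
A minimal sketch of a linear contrast stretch in Python/NumPy, assuming a single 8-bit band and a 2-98 percentile clip (the exact bounds are a user choice):

import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Linearly stretch `band` so that the chosen percentile range fills 0-255."""
    lo, hi = np.percentile(band, (low_pct, high_pct))     # bounds taken from the histogram
    scaled = (band.astype(np.float32) - lo) / (hi - lo)   # map [lo, hi] to [0, 1]
    return np.clip(scaled * 255, 0, 255).astype(np.uint8)

band = np.random.randint(60, 120, (200, 200), dtype=np.uint8)  # dull, low-contrast band
stretched = linear_stretch(band)
print(band.min(), band.max(), "->", stretched.min(), stretched.max())
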
b. Histogram-equalized stretch
• A histogram-equalized stretch redistributes the input range of values so that the output levels are used more uniformly across the full range.
• Histogram equalization is an effective image-enhancement algorithm: it is a technique for adjusting image intensities to enhance contrast, although the contrast is not necessarily increased in every case.
• The histogram of an image represents the relative frequency of occurrence of grey levels within the image.
• To enhance the image's contrast, histogram equalization spreads out the most frequent pixel intensity values, stretching the intensity range of the image across the whole available range.
• This allows areas of lower local contrast to gain a higher contrast.
• The original image and its histogram can be compared with the equalized versions; both images are quantized to the same number of grey levels.
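
A minimal sketch of histogram equalization for an 8-bit band, using the cumulative distribution function (CDF) of the histogram to remap grey levels (Python/NumPy, synthetic data):

import numpy as np

def equalize(band):
    """Histogram-equalize an 8-bit band via its cumulative distribution function."""
    hist, _ = np.histogram(band.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()                                # cumulative frequency of grey levels
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize the CDF to 0-1
    return (cdf[band] * 255).astype(np.uint8)          # remap every pixel through the CDF

band = np.random.normal(100, 15, (300, 300)).clip(0, 255).astype(np.uint8)
equalized = equalize(band)
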
2. Density slicing
• This technique normally applied to a single-band monochrome
image for highlighting areas that appear to be uniform in an image.
• Density slicing converts the continuous gray tone range into a series
of density intervals marked by a separate color or symbol to
represent different features.
o Mapping a range of contiguous grey levels of a single band to a single level and colour.
o Each such range of levels is called a slice.
• Grayscale values (0-255) are converted into a series of
intervals, or slices, and different colors are assigned to
each slice.
• Density slicing is often used to highlight variations in
features.
----Density slicing---
o Range of 0-255 normally converted to several slices
o Effective for highlighting different but homogenous areas
within image.
o Effective if slice boundaries/colors carefully chosen
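
As a sketch (Python/NumPy, with hypothetical slice boundaries and colours), density slicing amounts to binning the grey levels and looking up one colour per slice:

import numpy as np

band = np.random.randint(0, 256, (100, 100), dtype=np.uint8)   # single grey-tone band

boundaries = [0, 50, 100, 150, 200, 256]     # hypothetical slice boundaries over 0-255
sliced = np.digitize(band, boundaries) - 1   # 0..4: the slice each pixel falls into

colours = np.array([[0, 0, 128], [0, 128, 0], [255, 255, 0],
                    [255, 128, 0], [255, 0, 0]], dtype=np.uint8)  # one RGB colour per slice
colour_image = colours[sliced]               # (100, 100, 3) pseudo-colour result
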
• 3. Spatial Filtering
• Filters are used to emphasize or de-emphasize spatial information contained in the image.
• Filtering is a technique for modifying or enhancing an image.
• In spatial-domain filtering, the processed value for the current pixel depends on both the pixel itself and the surrounding pixels.
• Hence filtering is a neighborhood operation, in which the value of any given pixel in the output image is determined by applying some algorithm to the values of the pixels in the neighborhood of the corresponding input pixel.
• A pixel's neighborhood is some set of pixels, defined by their locations relative to that pixel.
• Image sharpening
• The main aim in image sharpening is to highlight fine detail in the
image, or to enhance detail that has been blurred (perhaps due to
noise or other effects, such as motion).
• With image sharpening, we want to enhance the high-frequency
components.
• The basic filters that can be used in frequency domain are low pass
filters, high pass filters.
• A. Low pass filter- Low pass filtering involves the elimination of
high frequency components from the image resulting in the sharp
transitions reduction that are associated with noise.
• Low pass Filters can reduce the amplitude of high-frequency
components and also can eliminate the effects of high-frequency
noise.
• A low-pass filter is designed to emphasize larger, homogeneous areas of
similar tone and reduce the smaller detail in an image.
• This serve to smooth the appearance of an image.
• In signal-processing terms, a low-pass filter (LPF) only passes signals below its cutoff frequency while attenuating all signals above it. It is the complement of a high-pass filter, which only passes signals above its cutoff frequency and attenuates all signals below it.
• Emphasize large area changes and de-emphasize local detail
• Low pass filters are very useful for reducing random noise.
……
B. High pass filter
• These filters are basically used to make the image appear sharper.
• High pass filtering works in exactly the same way as low pass filters but uses the
different convolution kernel and it emphasizes on the fine details of the image.
• High pass filters let the high frequency content of the image pass through the filter
and block the low frequency content.
• High-pass filter is a filter designed to pass all frequencies above its cut-off
frequency. A high-pass filter is used in an audio system to allow high frequencies
to get through while filtering or cutting low frequencies.
• While a high-pass filter can improve the image by sharpening it, overdoing this filter can actually degrade the image quality.
• Emphasize local detail and deemphasize large
area changes.
• Directional, or edge detection filters are
designed to highlight linear features, such as
roads or field boundaries.
Median filter
• This replaces the pixel at the center of the filter with the median
value of the pixels falling beneath the mask. Median filter does not
blur the image but it rounds the corners.
• The median filter is a non-linear digital filtering technique, often
used to remove noise from an image or signal. Such noise
reduction is a typical pre-processing step to improve the results of
later processing (for example, edge detection on an image).
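
The sketch below (Python with scipy.ndimage, synthetic data) illustrates the three ideas just described: a 3x3 mean (low-pass) filter, a high-pass/sharpening step derived from it, and a 3x3 median filter; the kernel sizes and weights are illustrative choices.

import numpy as np
from scipy import ndimage

band = np.random.randint(0, 256, (200, 200)).astype(np.float32)

# Low-pass (mean) filter: a 3x3 averaging window smooths local detail and random noise.
low_pass = ndimage.uniform_filter(band, size=3)

# High-pass component: subtracting the low-pass result leaves the fine (edge) detail,
# which can be added back to the original band to sharpen it.
high_pass = band - low_pass
sharpened = np.clip(band + high_pass, 0, 255)

# Median filter: each pixel is replaced by the median of its 3x3 neighbourhood,
# removing salt-and-pepper noise while preserving edges better than the mean filter.
median = ndimage.median_filter(band, size=3)
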
Bandpass Filter
• Unlike the low pass filter which only pass signals of a low frequency range or the high pass
filter which pass signals of a higher frequency range, a Band Pass Filters passes signals
within a certain “band” or “spread” of frequencies without distorting the input signal or
introducing extra noise.
• Band-pass filter, arrangement of electronic components that allows only those electric
waves lying within a certain range, or band, of frequencies to pass and blocks all
others.
• There are applications where a particular band, or spread, of frequencies needs to be isolated from a wider range of mixed signals. Filter circuits can be designed to accomplish this task by combining the properties of low-pass and high-pass filters into a single filter. The result is called a band-pass filter.
4. Band Rationing (Spectral)
• Often involve taking ratios or other mathematical combinations
of multiple input bands to produce a derived index of some sort,
• e.g.: the Normalized Difference Vegetation Index (NDVI)
•Designed to contrast heavily-vegetated areas with areas containing
little vegetation, by taking advantage of vegetation’s strong
absorption of red and reflection of near infrared:
– NDVI = (NIR-R) / (NIR + R)
-----Band Rationing…..
• Image division or spectral ratioing is one of the most common
transforms applied to image data.
• Image rationing serves to highlight variations in the spectral
responses of various surface covers.
• Healthy vegetation reflects strongly in the near-infrared portion
of the spectrum while absorbing strongly in the visible red.
• Other surface types, such as soil and water, show near equal
reflectance in both the near-infrared and red portions.
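
As a sketch (Python/NumPy, with synthetic reflectance arrays standing in for real red and near-infrared bands), the simple ratio and NDVI are computed per pixel as follows; the small constant only guards against division by zero.

import numpy as np

red = np.random.uniform(0.02, 0.30, (100, 100)).astype(np.float32)   # red reflectance (0-1)
nir = np.random.uniform(0.10, 0.60, (100, 100)).astype(np.float32)   # near-infrared reflectance

simple_ratio = nir / np.maximum(red, 1e-6)          # ratio image (NIR / R)
ndvi = (nir - red) / np.maximum(nir + red, 1e-6)    # NDVI = (NIR - R) / (NIR + R), in [-1, 1]

vegetated = ndvi > 0.3   # an illustrative, scene-dependent threshold for green vegetation
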
5. False Color Composite
• FCC is commonly used in remote sensing instead of true color because of the absence of a pure blue color band, since scattering is dominant in the blue wavelengths.
• The FCC is standardized because it gives maximum identical information of
the objects on Earth. In FCC, vegetation looks red, because vegetation is
very reflective in NIR and the color applied is red.
• Water bodies look dark if they are clear or deep because IR is an absorption
band for water. Water bodies give shades of blue depending on
their turbidity or shallowness.
• Spatial Convolution Filtering
1. Edge enhancement is an image processing filter that
enhances the edge contrast of an image in an attempt to
improve its acutance (apparent sharpness).
• Edge enhancement can be either an analog or a digital
process.
2. Fourier Transform
• The Fourier transform is a representation of an image as a sum of complex
exponentials of varying magnitudes, frequencies, and phases. The Fourier
transform plays a critical role in a broad range of image processing
applications, including enhancement, analysis, restoration, and
compression.
• Fourier Transform is a mathematical model which helps to transform
the signals between two different domains, such as transforming signal
from frequency domain to time domain or vice versa.
• The Fourier transform is a mathematical function that
decomposes a waveform, which is a function of time, into the
frequencies that make it up. The result produced by the Fourier
transform is a complex valued function of frequency.
• Fourier transform has many applications in Engineering and
Physics, such as signal processing, RADAR, and so on.
• The Fourier transform can be used to interpolate functions and to smooth signals.
For example, in the processing of pixelated images, the high spatial frequency edges of
pixels can easily be removed with the aid of a two-dimensional Fourier transform.
• The (2D) Fourier transform is a very classical tool in image processing. It is the
extension of the well known Fourier transform for signals which decomposes a
signal into a sum of sinusoids. So, the Fourier transform gives information about the
frequency content of the image.
• The main advantage of Fourier analysis is that very little information is lost from the
signal during the transformation. The Fourier transform maintains information on
amplitude, harmonics, and phase and uses all parts of the waveform to translate the
signal into the frequency domain.
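
To make the idea concrete, this sketch (Python/NumPy, synthetic image) takes the 2-D Fourier transform, keeps only the low spatial frequencies with an ideal circular mask, and transforms back, which smooths the image; the mask radius is an arbitrary choice.

import numpy as np

img = np.random.rand(256, 256).astype(np.float32)   # any single-band image

spectrum = np.fft.fftshift(np.fft.fft2(img))         # 2-D FFT with low frequencies centred

rows, cols = img.shape
y, x = np.ogrid[:rows, :cols]
radius = 30                                          # ideal low-pass cutoff (illustrative)
mask = (y - rows // 2) ** 2 + (x - cols // 2) ** 2 <= radius ** 2

smoothed = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))  # back to the spatial domain
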
Unit 5:
Digital image analysis and transformation
• Topics to be covered:
Spatial Transformations of image
Principal Component Analysis
Texture transformation
Image stacking and compositing
Image mosaicking and sub-setting
Spectral Vegetation Indices
5.1.Spatial Transformations
of Image
• Spatial transformation of an image is a geometric transformation of
the image coordinate system.
• Spatial transformations refer to changes of the coordinate system that provide a new frame of reference for the image data.
• In a spatial transformation each point (x, y) of image A is mapped to
a point (u, v) in a new coordinate system.
• A digital image array has an implicit grid that is mapped to discrete
points in the new domain.
• It is often necessary to perform a spatial transformation to:
• Align images that were taken at different times or sensors
• Correct images for lens distortion
• Correct effects of camera orientation
• Image morphing/change or other special effects
• Principal Component Analysis
• PCA is a statistical procedure that summarizes the information content of a large data set by means of a smaller set of "summary indices" that can be more easily visualized and analyzed.
• The new variables/dimensions are linear combinations of the original ones, are uncorrelated with one another, and capture as much of the original variance in the data as possible; they are called principal components.
• PCA is the way of identifying patterns in data, and expressing the data to highlight
their similarities and differences.
• PCA is a technique used to emphasize variation and bring out strong patterns in a
dataset: to make data easy to explore and visualize.
• PCA should be used mainly for variables which are strongly correlated.
• If the relationship is weak between variables, PCA does not work well to reduce
data.
• Organizing information in principal components this way, will
allow you to reduce dimensionality without losing much
information, and this by discarding the components with low
information and considering the remaining components as your new
variables.
• principal components represent the directions of the data that
explain a maximal amount of variance, that is to say, the lines that
capture most information of the data.
• Image dimensions are the length and width of a digital image, usually measured in pixels.
• PCA can also be used to compress images by keeping only the leading components.
• PCA is a technique which transforms the original, highly correlated image data to a new set of uncorrelated variables called principal components.
• Each principal component is called an eigenchannel.
Applications of Principal Component Analysis.
• PCA is predominantly used as a dimensionality reduction
technique in domains like:
• facial recognition,
• computer vision and image compression.
• for finding patterns in data of high dimension, etc.
• How do you do PCA?
• Standardize the range of continuous initial variables
• Compute the covariance matrix to identify correlations
• Compute the eigenvectors and eigenvalues of the covariance matrix
to identify the principal components
• Create feature vector to decide which principal components to keep
• Recast the data along the principal components axes
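
A minimal sketch of these steps in Python/NumPy, assuming a hypothetical 6-band image; the eigenvectors of the band-to-band covariance matrix define the principal components (eigenchannels):

import numpy as np

bands = np.random.rand(100, 100, 6).astype(np.float32)   # hypothetical 6-band image
X = bands.reshape(-1, 6)                                  # (n_pixels, n_bands)

X_std = (X - X.mean(axis=0)) / X.std(axis=0)   # 1. standardize each band
cov = np.cov(X_std, rowvar=False)              # 2. covariance matrix between bands
eigvals, eigvecs = np.linalg.eigh(cov)         # 3. eigenvalues and eigenvectors
order = np.argsort(eigvals)[::-1]              #    sort components by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_keep = 3                                     # 4. keep the leading components
pcs = X_std @ eigvecs[:, :n_keep]              # 5. recast (project) the data onto the PC axes
pc_image = pcs.reshape(100, 100, n_keep)       # the first PCs as new image channels

explained = eigvals / eigvals.sum()            # fraction of variance captured per component
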
Identifying objects based on texture
• What are different types of image texture?
• Texture consists of texture elements, sometimes called texels.
• Texture can be described as:
• fine,
• coarse,
• grained,
• smooth, etc.–Such features are found in the tone and
structure of a texture.
• Image stacking and compositing
• Layer stacking is a process of combining multiple separate bands in
order to produce a new multi band image.
• In order to layer-stack, multiple image bands should have same
extent (no. of rows and columns).
• These multi-band images are useful in visualizing and identifying the available Land Use Land Cover classes.
• Compositing is assigning ‘suitable band arrangement’ for better
analysis.
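
As a small illustration (not a specific software workflow), stacking co-registered single-band arrays of identical extent into one multi-band array in Python/NumPy:

import numpy as np

red = np.random.randint(0, 256, (400, 400), dtype=np.uint8)    # co-registered single bands
green = np.random.randint(0, 256, (400, 400), dtype=np.uint8)
nir = np.random.randint(0, 256, (400, 400), dtype=np.uint8)

stack = np.dstack([nir, red, green])   # (400, 400, 3) multi-band image
# The band order chosen here corresponds to a false-colour composite (NIR, R, G shown as R, G, B).
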
Image mosaicking and sub-setting
• A mosaic is a combination or merge of two or more images.
• A mosaic combines two or more raster datasets together.
• Mosaics are used to create a continuous image surface across
large areas.
• In ArcGIS, you can create a single raster dataset from multiple raster
datasets by mosaicking them together.
• Image sub-setting
• A subset is a section of a larger downloaded image.
• Since satellite data downloads usually cover more area than you are interested in and can approach 1 GB in size, you can select a portion of the larger image to work with.
• The subset function extracts a subgroup of variable data from a multidimensional raster object.
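
A spatial or spectral subset of a scene held as an array is simply a slice; the window and band indices below are hypothetical:

import numpy as np

scene = np.random.randint(0, 256, (7000, 8000, 4), dtype=np.uint8)   # large multi-band scene

subset = scene[2000:3500, 4500:6000, :]      # spatial subset: rows/columns over the study area
subset_bands = scene[:, :, [3, 2]]           # spectral subset: keep only selected bands
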
Spectral Vegetation Indices
• Remote sensing may be applied to a variety of vegetated landscapes, including:
Agriculture;
Forests;
Rangeland;
Wetland;
Urban vegetation
The basic assumption behind the use of vegetation indices is that remotely
sensed spectral bands can reveal valuable information such as:
vegetation structure,
state of vegetation cover,
photosynthetic capacity,
leaf density and distribution,
water content in leaves,
mineral deficiencies, and
evidence of parasitic shocks or attacks.
A vegetation index is formed from combinations of several spectral
values that are
added,
divided,
Subtracted, or
multiplied in a manner designed to yield a single value that
indicates the amount or vigor of vegetation within a pixel.
Exploring Some Commonly Used Vegetation Indices
1 Simple Ratio (SR) Index (ratio vegetation index (RVI))
Ratios are effective in revealing underlying information when
there is an inverse relationship between the two spectral responses
to the same biophysical phenomenon .
SR used to indicate status of vegetation.
SR is the earliest and simplest form of VI.
SR/RVI is calculated as: SR = NIR / R, the ratio of near-infrared to red reflectance.
2 Normalized Difference Vegetation Index (NDVI)
• NDVI is one of the earliest and the most widely used in various
applications.
• NDVI responds to changes in:
• amount of green biomass,
• chlorophyll content, and
• canopy water stress.
• It is calculated as: NDVI = (NIR - R) / (NIR + R)
• NDVI conveys the same kind of information as the SR/RVI but is
constrained to vary within limits that preserve desirable statistical
properties (-1<NDVI<1).
• Only positive values correspond to vegetated zones;
• the higher the index, the greater the chlorophyll content of the
target.
• The time of maximum NDVI corresponds to time of maximum
photosynthesis.
Some Application Areas of NDVI
i. Land-Use and Land-Cover Change
oNDVI is helpful in identifying LULC changes such as deforestation,
its rate, and the area affected.
oNDVI differencing image is able to provide insights into the nature
of detected change.
oThis method is much more accurate than other change detection
methods in detecting vegetation changes.
However, one potential problem with NDVI-based
image differencing is the difficulty in setting the
appropriate threshold for significant change, such as
change from full vegetation to no vegetation.
ii. Drought and Drought Early Warning
Generally, meteorological (dry weather patterns) and hydrological
(low water supply) droughts would not be detected by NDVI
before they impact the vegetation cover.
But NDVI is found to be useful for detecting and monitoring
drought effects on the vegetation cover, especially agricultural
droughts, which in turn is useful for early warning.
iii. Soil Erosion
• NDVI has been proved to be a useful indicator of land-
cover condition and a reliable input in providing land-
cover management factor in to soil erosion models to
determine the vulnerability of soils to erosion.
3 Green-Red Vegetation Index (GRVI)
NDVI is not sensitive enough to leaf-color change from green to yellow
or red because green reflectance is not used in the calculation of NDVI.
Green-Red Vegetation Index (GRVI) is an indicator of plant phenology
due to seasonal variations.
It is calculated as: GRVI = (Green - Red) / (Green + Red)
Green vegetation, soils, and water/snow have positive, negative, and
near-zero values of GRVI, respectively.
4. Soil-Adjusted Vegetation Index (SAVI)
SAVI is used to eliminate the effects from background soils observed
in NDVI and other Vis.
It adjusts for the soil effect on reflectance under either dense or sparse canopy.
• It is calculated as: SAVI = ((pn - pr) / (pn + pr + L)) * (1 + L)
• Where pn and pr are reflectance in the near infrared and red bands,
• and L is a coefficient that should vary with vegetation density, ranging from 0 for very high vegetation cover to 1 for very low vegetation cover.
It is obvious that if L = 0, then SAVI is equivalent to NDVI.
5.Normalized Difference Water Index (NDWI)
• The normalized difference water index (NDWI), is proposed to
monitor moisture conditions of vegetation canopies over large
areas from space.
It is defined as: NDWI = (pnir - pswir) / (pnir + pswir)
Where p represents radiance in reflectance units, pnir is the near-infrared band and pswir is the short-wave infrared band.
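
Continuing the NDVI sketch from the band-rationing section, SAVI, GRVI and NDWI can be computed the same way from synthetic reflectance bands (again only an illustration, with L fixed at a mid-range value of 0.5):

import numpy as np

red = np.random.uniform(0.02, 0.30, (100, 100))      # hypothetical reflectance bands (0-1)
green = np.random.uniform(0.03, 0.25, (100, 100))
nir = np.random.uniform(0.10, 0.60, (100, 100))
swir = np.random.uniform(0.05, 0.40, (100, 100))

L = 0.5                                          # soil-adjustment factor (0 = dense, 1 = sparse cover)
savi = (nir - red) / (nir + red + L) * (1 + L)   # Soil-Adjusted Vegetation Index
grvi = (green - red) / (green + red)             # Green-Red Vegetation Index
ndwi = (nir - swir) / (nir + swir)               # NDWI (canopy moisture form)
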
Operations Between Images
Arithmetic operations
• An image is an array with numbers. So mathematical
operations can be performed on these numbers. In this section,
we consider 2D images but the generalization to different
dimensions is obvious.
• Image math includes addition, subtraction, multiplication,
and division of each one of the pixels of one image with the
corresponding pixel of the other image.
Image Addition
• The addition of two images f and g of the same size results in a new image h of the same size whose pixels are the sums of the corresponding pixels in the original images:
h(m,n) = f(m,n) + g(m,n)
• Addition (averaging) can also be used to denoise a series of images of the same scene.
Subtraction
The subtraction of two images is used for example to detect changes.
h(m,n) = f(m,n) - g(m,n)
Division
• The division of two images is used to correct non-homogeneous
illumination. Figure below illustrates the removal of shadow.
Figure: Shadow removal obtained by dividing one image by another.
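
A compact sketch of these arithmetic operations on two same-size images (Python/NumPy, synthetic data); the small constant in the division only avoids dividing by zero:

import numpy as np

f = np.random.randint(0, 256, (100, 100)).astype(np.float32)   # image at date 1
g = np.random.randint(0, 256, (100, 100)).astype(np.float32)   # image at date 2

added = np.clip(f + g, 0, 255)           # addition: h(m,n) = f(m,n) + g(m,n)
difference = f - g                       # subtraction: a simple change-detection image
ratio = f / np.maximum(g, 1e-6)          # division: corrects multiplicative illumination effects

# Averaging a stack of noisy acquisitions of the same scene reduces random noise.
noisy_stack = np.stack([f + np.random.normal(0, 10, f.shape) for _ in range(8)])
denoised = noisy_stack.mean(axis=0)
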
Spectral enhancement using principal component analysis
• Principal Components Analysis is a mathematical technique which
transforms the original image data, typically highly correlated, to a
new set of uncorrelated variables called principal components.
• Principal Components Analysis (PCA) is a mathematical
formulation used in the reduction of data dimensions. Thus, the
PCA technique allows the identification of standards in data and their
expression in such a way that their similarities and differences are
emphasized.
• Principal component analysis (PCA) simplifies the
complexity in high-dimensional data while retaining trends
and patterns. It does this by transforming the data into fewer
dimensions, which act as summaries of features.
• PCA reduces data by geometrically projecting them onto
lower dimensions called principal components (PCs), with
the goal of finding the best summary of the data using a
limited number of PCs.
• Principal component analysis (PCA) is a statistical technique whose
purpose is to condense the information of a large set of correlated
variables into a few variables (“principal components”), while not
throwing overboard the variability present in the data set.
• Each principal component is called an Eigen-channel. The benefits of
the technique from a geological standpoint are principally that
information not visible in false colour composite images can be
highlighted in one of the resulting component images.
Interpret the key results for Principal Components Analysis
• Step 1: Determine the number of principal components.
• Step 2: Interpret each principal component in terms of the
original variables.
• Step 3: Identify outliers.
Orthophoto Creation
• An orthophoto, orthophotograph, orthoimage or orthoimagery is
an aerial photograph or satellite imagery geometrically corrected
("orthorectified") such that the scale is uniform: the photo or image
follows a given map projection.
• Unlike an uncorrected aerial photograph, an orthophoto can be used
to measure true distances, because it is an accurate representation of
the Earth's surface, having been adjusted for topographic relief, lens
distortion, and camera tilt.
• Orthophotographs are commonly used in GIS as a "map accurate"
background image. An orthoimage and a "rubber sheeted" image can
both be said to have been "georeferenced"; however, the overall
accuracy of the rectification varies.
• Software can display the orthophoto and allow an operator to
digitize or place line-work, text annotations or geographic symbols
(such as hospitals, schools, and fire stations). Some software can
process the orthophoto and produce the linework automatically.
• An orthophoto mosaic is a raster image made by merging orthophotos —
aerial or satellite photographs which have been transformed to correct
for perspective so that they appear to have been taken from vertically
above at an infinite distance. Google Earth images are of this type.
• The document representing an orthophotomosaic with additional marginal
information like a title, north arrow, scale bar and cartographical
information is called an orthophotomap or image map. Often these
maps show additional point, line or polygon layers (like a traditional map)
on top of the orthophotomosaic.
• Ortho-rectification is a process that corrects for many artifacts
related to remotely sensed imagery to produce a map-accurate
orthoimage. Ortho-images can then be edge-matched and color
balanced to produce a seamless ortho-mosaic. This ortho-
mosaic is accurate to a specified map scale accuracy and can
be used to make measurements as well as generate and update
GIS feature class layers.
DEM creation from point cloud data
• Point clouds are datasets that represent objects or space. These points
represent the X, Y, and Z geometric coordinates of a single point on an
underlying sampled surface. Point clouds are a means of collating a large
number of single spatial measurements into a dataset that can then represent
a whole.
• A point cloud dataset is the name given to point clouds that resemble an
organized image (or matrix) like structure, where the data is split into rows
and columns. Examples of such point clouds include data coming from
stereo cameras or Time of Flight cameras.
At a given (X, Y) location, there can be multiple points at different heights. Imagine a single tree: you will get return points from both the canopy and the ground underneath it. Alternatively, imagine the points representing the side of a building: you will have multiple points at the same (X, Y) location.
• Compared to this, a DEM raster is much simpler. It has only one elevation value for a given cell. A cell does not exactly represent an (X, Y) point either, since it has a cell width and a cell height; hence a cell is not the exact equivalent of a point.
• So if in some way you create a point cloud from a DEM, it
really won't be an accurate representation of reality;
Additionally it won't be equal to what a real point cloud of
that location will be like.
• The first step in creating a DEM (or DTM) from a LIDAR point cloud is to identify the LIDAR returns that represent the ground, and not vegetation or other objects.
• LIDAR point clouds present some challenges because of
how much data they contain. One fundamental challenge
with LIDAR data is how to effectively extract the points
that represent the Earth’s surface (not vegetation, building,
or other objects) and create a DEM (or DTM).
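
As a very rough sketch only (real workflows use dedicated ground-filtering algorithms and LAS-aware tools), gridding an already-filtered ground point cloud into a DEM can be illustrated with NumPy by keeping one elevation value per cell:

import numpy as np

# Hypothetical ground-return points: columns are X, Y, Z in map units.
pts = np.column_stack([np.random.uniform(0, 1000, 50000),
                       np.random.uniform(0, 1000, 50000),
                       np.random.uniform(95, 105, 50000)])

cell = 10.0                                        # DEM cell size in the same units as X and Y
cols = np.floor(pts[:, 0] / cell).astype(int)
rows = np.floor(pts[:, 1] / cell).astype(int)
dem = np.full((rows.max() + 1, cols.max() + 1), np.nan)

# One elevation value per cell: here the minimum Z, a crude stand-in for the ground surface.
for r, c, z in zip(rows, cols, pts[:, 2]):
    if np.isnan(dem[r, c]) or z < dem[r, c]:
        dem[r, c] = z
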
Image Analysis And Pattern Recognition
• Pattern recognition is a data analysis method that uses machine learning
algorithms to automatically recognize patterns and regularities in data.
This data can be anything from text and images to sounds or other
definable qualities. Pattern recognition systems can recognize familiar
patterns quickly and accurately.
• Major areas of image analysis methodologies are matching, segmentation,
shape analysis and description. On the other hand, pattern recognition
studies the classification processes and has as its main streams of research
the statistical, the syntactic and finally hybrid methods of the previous two
approaches.
Unit 6
Digital Image Classification
6.1. Introduction to digital image
classification
• Topics to be covered
• Digital image classification
• Algorithms used in classification
• Image Classification methods
• Validating classification result
• Digital image classification uses the spectral information
represented by digital numbers in one or more spectral bands,
and attempts to classify each individual pixel based on this spectral
information. This type of classification is termed spectral pattern
recognition.
• Image classification is the process of categorizing and labeling
groups of pixels within an image based on specific rules.
• Image classification is the process of assigning land cover classes to
pixels.
• Classification is usually performed on multi-channel data sets and assigns
each pixel in an image to a particular class based on statistical
characteristics of the pixel brightness values.
• Classification is the process of sorting pixels into a finite number of
individual classes, or categories of data, based on their data file values.
• If a pixel satisfies a certain set of criteria, then the pixel is assigned to the class that corresponds to those criteria.
• Image classification is the process of assigning land cover classes
to pixels.
• For example, classes include
• water,
• urban,
• forest,
• agriculture, and
• grassland.
• Different classes can be assigned.
• Image Classification Process
The process of image classification involves five steps:
i. Selection and preparation of the RS images
ii. Definition of the clusters in feature space (supervised/unsupervised)
iii. Selection of the classification algorithm
iv. Running the actual classification
v. Validation of the result
• Classification algorithms
• The following classification algorithms are commonly used:
• Maximum likelihood
• Minimum distance
• Principal components
• Parallelepiped
• Decision tree
• Spectral angle mapping
1. Maximum likelihood Classification
• This classification is a statistical decision criterion to assist in the
classification of overlapping signatures; pixels are assigned to the
class of highest probability.
• This classifier considers not only the cluster centers but also the shape, size and orientation of the clusters.
• It looks for highest similarity to assign or group pixels to a class.
• Each pixel is assigned to the class that has the highest probability
(that is, the maximum likelihood). This is the default.
• The Maximum Likelihood algorithm assumes that the
histograms of the bands of data have normal distributions. If
this is not the case, you may have better results with the
Parallelepiped or Minimum Distance Decision Rule, or by
performing a first-pass Parallelepiped Classification.
• The Maximum Likelihood Decision rule is based on the
probability that a pixel belongs to a particular class. It
calculates the probability of a pixel being in a certain class and
assigns it to the class with the highest probability.
Pros and Cons of Maximum Likelihood Classifier
• Pros:
• The most accurate of the classifiers in the ERDAS IMAGINE system (if the input samples/clusters have a normal distribution), because it takes the most variables into consideration.
• Takes the variability of classes into account by using the covariance matrix.
• Cons:
• An extensive equation that takes a long time to compute. The computation time increases with the number of input bands.
• Maximum likelihood is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.
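
A minimal sketch of the maximum likelihood decision rule (Python with scipy.stats; the class means, covariances and band values are hypothetical): each pixel is assigned to the class whose multivariate normal distribution gives it the highest (log-)likelihood.

import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical training statistics (mean vector, covariance matrix) for two classes, 3 bands.
classes = {
    "water":  (np.array([20.0, 30.0, 10.0]), np.diag([15.0, 20.0, 10.0])),
    "forest": (np.array([40.0, 60.0, 90.0]), np.diag([30.0, 40.0, 60.0])),
}

pixels = np.random.randint(0, 120, (100, 100, 3)).astype(np.float32)
flat = pixels.reshape(-1, 3)

# Log-likelihood of every pixel under each class's normal distribution.
loglikes = np.column_stack([multivariate_normal(mean, cov).logpdf(flat)
                            for mean, cov in classes.values()])
labels = np.argmax(loglikes, axis=1).reshape(100, 100)   # index of the most likely class
class_names = list(classes)                              # index -> class name
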
2. Minimum distance Classification
• Minimum distance classifies image data on a database file using
class signature segments as specified by signature parameter.
• It uses the mean vectors for each class and calculates the Euclidean
distance from each unknown pixel to the mean vector for each class.
• During classification, the Euclidean distances from candidate feature
to all clusters are calculated.
• The pixels are classified to the nearest class.
• Minimum Distance Decision Rule (also called Spectral
Distance)calculates the spectral distance between the
measurement vector for the candidate pixel and the mean
vector for each signature. The candidate pixel is assigned
to the class with the closest mean.
• Pros: Since every pixel is spectrally closer to either one sample mean or another, there are no
unclassified pixels.
• The fastest decision rule to compute, except for parallelepiped.
• Cons:
• Pixels that should be unclassified become classified. However, this problem is improved
by thresholding out pixels that are farthest from the means of their classes.
• Does not consider class variability. For example, a class, like an urban land cover class is
made up of pixels with a high variance, which may tend to be farther from the mean of the
signature. Using this decision rule, outlying urban pixels may be improperly classified.
• Inversely, a class with less variance, like water, may tend to overclassify because the
pixels that belong to the class are usually spectrally closer to their mean than those of
other classes to their means.
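
For comparison, the minimum distance rule is much simpler; the sketch below (hypothetical class means and pixel values) assigns each pixel to the class mean with the smallest Euclidean distance.

import numpy as np

means = np.array([[20.0, 30.0, 10.0],    # water (hypothetical 3-band class means)
                  [40.0, 60.0, 90.0],    # forest
                  [80.0, 75.0, 50.0]])   # urban

pixels = np.random.randint(0, 120, (100, 100, 3)).astype(np.float32)
flat = pixels.reshape(-1, 3)

# Euclidean distance from every pixel to every class mean; assign the nearest class.
dists = np.linalg.norm(flat[:, None, :] - means[None, :, :], axis=2)   # (n_pixels, n_classes)
labels = np.argmin(dists, axis=1).reshape(100, 100)
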
3. Parallelepiped Classification
oThe parallelepiped classifier uses the class limits stored in each class signature to determine whether a given pixel falls within the class or
not. If the pixel falls inside the parallelepiped, it is assigned to the
class. However, if the pixel falls within more than one class, it is put
in the overlap class.
oIf the pixel does not fall inside any class, it is assigned to the null
class (code 0).
• In the Parallelepiped decision rule, the data file values of the
candidate pixel are compared to upper and lower limits. These limits
can be either of the following:
• The minimum & maximum data file values of each band in the
signature,
• The mean of each band, plus and minus a number of standard deviations
• Any limits that you specify, based on your knowledge of the data and
signatures.
•There are high and low limits for every
signature in every band. When a pixel’s data
file values are between the limits for every
band in a signature, then the pixel is assigned to
that signature’s class.
Pros and Cons of Parallelepiped Decision Rule
‾Does NOT assign every pixel to a class. Only the pixels that
fall within ranges.
‾Good for helping to decide if you need additional classes (if
there are unclassified pixels)
‾Problems arise when class ranges overlap; rules must be developed to deal with overlap areas.
4. The decision tree classifier
• This method creates the classification model by building a
decision tree.
• It is a tree-structured classifier, where internal nodes represent the
features of a dataset, branches represent the decision rules and each
leaf node represents the outcome.
• In a Decision tree, there are two nodes, which are the Decision Node
and Leaf Node.
5. Fuzzy Classification
• Fuzzy classification is the process of grouping elements into a fuzzy set
whose membership function is defined by the truth value function.
• Fuzzy classification is the process of grouping individuals having the same
characteristics into a fuzzy set.
• Each pixel is attached to a group of membership classes indicating the extent to which the pixel belongs to certain classes, depending on the similarity of the pixel's features to the class patterns.
6. Spectral angle mapping
• This is an automated method for directly comparing image spectra to known reference spectra.
• The result of the classification indicates the best match at each pixel.
• It is a supervised classification algorithm which identifies classes by calculating the spectral angle between each pixel spectrum and the reference spectrum.
• Smaller angles represent closer matches to the reference spectrum.
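
A sketch of the spectral angle computation (Python/NumPy, with a hypothetical 5-band reference spectrum): the angle is the arccosine of the normalized dot product between each pixel spectrum and the reference, and a small angle means a close match.

import numpy as np

reference = np.array([0.05, 0.08, 0.04, 0.45, 0.30])   # hypothetical reference spectrum
pixels = np.random.uniform(0.01, 0.6, (100, 100, 5))
flat = pixels.reshape(-1, 5)

cosine = (flat @ reference) / (np.linalg.norm(flat, axis=1) * np.linalg.norm(reference))
angle = np.arccos(np.clip(cosine, -1.0, 1.0)).reshape(100, 100)   # radians; smaller = closer match

matches = angle < 0.10    # illustrative threshold for "spectrally similar to the reference"
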
7. Neural Network Classification Method
• It is a type of deep learning method based on artificial neural networks that apply convolutional operations. A convolutional neural network (CNN) is a deep learning technique used in many computer vision tasks such as image classification, segmentation, and object detection. CNNs use convolutional filters to extract useful features from images during classification; a CNN can be used, for example, to predict the land cover type of a given patch from a Landsat 7 image.
• CNN or the convolutional neural network (CNN) is a class of deep learning neural
networks. In short think of CNN as a machine learning algorithm that can take in an input
image, assign importance (learnable weights and biases) to various aspects/objects in the
image, and be able to differentiate one from the other.
• Using algorithms, they can recognize hidden patterns and correlations in raw data,
cluster and classify it, and – over time – continuously learn and improve.
6.2. Image Classification methods:
• In general, there are three main image classification techniques in remote sensing:
1. Supervised image classification
2. Unsupervised image classification
3. Object-based image analysis
1. Supervised classification:
• It is a human-guided type of classification.
• This method starts with specifying information classes on the image. An algorithm is then used to summarize the multispectral information from specified areas on the image to form class signatures.
• This process is called supervised training.
• Supervised classification means that the operator participates in an
interactive process that assigns pixels to categories.
• The analyst identifies homogeneous representative training areas of
the different surface cover types of interest, in the imagery. The
analyst's familiarity with the geographical area and knowledge of the
actual surface cover types present in the image are the basis for selecting
appropriate training areas in the image.
• The numerical information in all spectral bands for the pixels
comprising these areas are used to "train" the computer to recognize
spectrally similar areas for each class.
• The computer system must be trained to recognize
patterns in the data.
• Training is the process of defining the criteria by which
these patterns are recognized.
• The result of training is a set of signatures, which are
criteria for a set of proposed classes.
• In supervised classification, you select representative samples for
each land cover class. The software then uses these “training sites”
and applies them to the entire image.
• In Supervised Classification, you control the classification process
by creating, managing, evaluating, and editing signatures using the
Signature Editor.
Supervised classification example
•Supervised Classification uses knowledge of the
locations of informational classes to group pixels.
•It requires close attention to the development of training data. Typically, it results in better maps than unsupervised classification, if you have good training data.
• Steps of Supervised Classification
i. The first step is to locate the training samples for each potential class and
define their boundaries in the image.
ii. The second step is to collect signature for each potential class.
iii. The third step is to evaluate the signatures which can help determine whether
signature data are a true representation of pixels to be classified for each class
iv. The fourth step is to perform Supervised Classification process
Uses of Reference Data
• Reference data can be used as a Training data and Validation data.
• Training Data includes :
• field data,
• photo interpreted data
• used for training the classifier in supervised classification
• may be random or not (i.e., purposive)
• Validation Data
• Used for accuracy assessment
• Should be random
2. Unsupervised classification:
• It is a method based on the software analysis of an image without the
user providing sample classes.
• It first groups pixels into “clusters” based on their properties. Then,
you classify each cluster with a land cover class.
• Unsupervised classification is entirely based on the statistics of the
image data distribution, & is often called clustering.
• The process is automatically optimized according to cluster statistics
without the use of any knowledge-based control (i.e. ground truth).
• Unsupervised classification occurs automatically without instructions
from the operator.
• In unsupervised classification, it first groups pixels into “clusters” based
on their properties.
• After picking a clustering algorithm, you identify the number of groups to generate, e.g. 8, 20 or 42 clusters.
• For example, if you want to classify vegetation and non-vegetation,
you’ll have to merge clusters into only 2 clusters.
• it’s an easy way to segment and understand an image.
• Number of clusters in Unsupervised Classification is subjective
and depends on the following factors:
1. Size of area you are trying to classify
2. How diverse (heterogeneous) the landscape is
3. Resolution of the data you will be using
a. Spatial
b. Spectral
4. The number of classes you will be mapping
Unsupervised Classification Steps:
1. Generate clusters
2. Assign classes
• Unsupervised classification diagram
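• A minimal sketch of the two unsupervised steps (generate clusters, then assign classes) is shown below, assuming the image is already loaded as a NumPy array (a random array stands in here) and using k-means from scikit-learn as one possible clustering algorithm; the cluster numbers merged into "vegetation" are purely illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 4-band image; in practice read the array with rasterio or GDAL.
image = np.random.rand(4, 150, 150)
bands, rows, cols = image.shape
pixels = image.reshape(bands, -1).T              # pixels as spectral feature vectors

# Step 1: generate clusters (the number of clusters is chosen by the analyst).
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pixels)
clusters = kmeans.labels_.reshape(rows, cols)

# Step 2: assign classes, e.g. merge clusters into vegetation / non-vegetation.
vegetation_clusters = {2, 5, 7}                  # decided by inspecting the clusters
classes = np.isin(clusters, list(vegetation_clusters)).astype(int)
```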
3. Object-Based (Object-Oriented) Image Analysis Classification
• Unlike supervised and unsupervised classification, object-based image classification groups pixels into representative shapes and sizes. This is done with segmentation algorithms such as multi-resolution segmentation or segment mean shift.
• Multiresolution segmentation produces homogenous image objects by
grouping pixels.
• These objects are more meaningful because they represent features in
the image.
• It is applied when you have high-resolution imagery.
• In Object-Based Image classification, you can use different methods to classify
objects. For example, you can use:
i. SHAPE: If you want to classify buildings, you can use a shape such as
“rectangular fit”. This tests an object’s geometry to the shape of a rectangle.
ii. TEXTURE: texture is the homogeneity of an object. For example, water is mostly homogeneous, whereas forests have shadows and are a mix of green and black.
iii. SPECTRAL: You can use the mean value of spectral properties such as near-
infrared, short-wave infrared, red, green, or blue.
iv. GEOGRAPHIC CONTEXT: Objects have proximity and distance
relationships between neighbors.
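• As a rough sketch of the object-based idea, the snippet below segments a (randomly generated, placeholder) high-resolution RGB array into image objects with the Felzenszwalb algorithm from scikit-image and then applies a simple SPECTRAL rule based on each object's mean band value; the 0.6 "brightness" threshold is only an illustrative assumption:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import felzenszwalb

# Hypothetical high-resolution RGB image scaled to 0..1 (rows, cols, 3).
rgb = np.random.rand(300, 300, 3)

# Group pixels into image objects (segments) rather than treating them singly.
segments = felzenszwalb(rgb, scale=100, sigma=0.5, min_size=50)

# SPECTRAL rule: mean band value per object, e.g. the mean "red" value per segment.
labels = np.unique(segments)
mean_red = ndi.mean(rgb[:, :, 0], labels=segments, index=labels)

# A toy rule: call an object "bright" if its mean red value exceeds 0.6.
bright_objects = labels[mean_red > 0.6]
```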
Object based classification
• (OBIA) segmentation is a process that groups similar
pixels into objects
Object-Based classification Diagram
Growth of Object-Based Classification
• Pixels are the smallest unit represented in an image. Pixel-based image classification uses the reflectance statistics of individual pixels.
• The focus is now shifting to object-based image analysis to deliver quality products.
• Recently, object-based classification has shown much growth due to the availability of high-resolution images.
Spatial resolution of Images
Figure: examples of high, medium and low spatial resolution images.
Unsupervised vs Supervised vs Object-Based Classification
• Overall, object-based classification has been found to outperform both unsupervised and supervised pixel-based classification methods.
• Such comparison studies are good examples of some of the limitations of pixel-based image classification techniques.
Image classification technique accuracy assessment
6.3. Validation/Evaluation of Classification
•After a classification is performed, the following methods are
available for evaluating and testing the accuracy of the
classification:
1. Create Classification Overlay (original image with classified
one)
2. Set a Distance Threshold
3. Recode Classes
4. Accuracy Assessment - Compare Data
Accuracy Assessment
• Compare certain pixels in thematic raster layer to reference pixels for
which the class is known.
• Accuracy Assessments determine the quality of the information derived
from remotely sensed data
• This is an organized way of comparing your classification with:
• ground truth data,
• previously tested maps,
• aerial photos, or other data.
All accuracy assessments include three fundamental steps:
1. Designing the accuracy assessment sample
2. Collecting data for each sample
3. Analyzing the results
•Accuracy Assessment: evaluating a classified image file for correctness by comparing the classification with ground truth data, previously tested maps, aerial photos, or other data.
Accuracy Assessment Reports:
1. Error Matrix report simply compares the reference class values to
the assigned class values in a c x c matrix, where c is the number
of classes (including class 0).
• An Error Matrix is a square array of numbers set out in rows and columns
which express the labels of samples assigned to a particular category in
one classification relative to the labels of samples assigned to a particular
category in another classification. One of the classifications, usually the columns, is assumed to be correct and is termed the reference data.
The rows are usually used to display the map labels or classified data
generated from the remotely sensed image.
2. Confusion matrix (another name for the error matrix) is a summary of prediction results on a classification problem. The numbers of correct and incorrect predictions are summarized with count values for each class.
3. User's accuracy: indicates how well the classified map represents what is actually found on the ground (accuracy from the map user's point of view).
• – Calculated by dividing the number of correctly classified pixels in each
category by the total number of pixels in the corresponding row total.
4. Producer's accuracy: is based on the map producer's (classification) point of view.
• – Calculated by dividing the number of correctly classified pixels in
each category by the total number of pixels in the corresponding
column total.
5. Omission error: refers to those sample points that are omitted in the
interpretation result. This is done by adding together the incorrect
classifications and dividing them by the total number of reference sites
for each class.
6. Commission error: refers to samples incorrectly included in a class.
• Calculated by dividing the number of incorrectly classified pixels in a row by the row total (equivalent to 1 minus the user's accuracy).
7. Overall accuracy: the number of correctly classified pixels (sum of
diagonals) divided by total number of pixels checked.
8. Kappa statistic is a measure of how closely the features classified by the
machine classifier matched the data labeled as ground truth.
• Overall accuracy is simply the sum of the major diagonal (i.e., the correctly
classified sample units) divided by the total number of sample units in the
entire error matrix. However, just presenting the overall accuracy is not
enough. It is important to present the entire matrix so that other accuracy
measures can be computed as needed and confusion between map classes is
clearly presented and understood.
Validating classification
• The error matrix template places the reference (ground truth) classes 1, 2, 3, 4, 5 as columns (plus a column total) and the corresponding classified-image classes as rows (plus a row total).
• 1, 2, 3, ..., 5 indicate the classes of the classified image
• Reference refers to the ground truth data
• Classified refers to the classified image produced in the software
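• The accuracy measures above can be computed directly from an error matrix. The sketch below uses a small, made-up 3-class matrix (rows = classified map, columns = reference) purely for illustration:

```python
import numpy as np

# Hypothetical error matrix: rows = classified (map) labels, columns = reference labels.
error_matrix = np.array([
    [50,  3,  2],
    [ 4, 60,  5],
    [ 1,  2, 40],
])

total = error_matrix.sum()
diag = np.diag(error_matrix)

overall_accuracy = diag.sum() / total                # sum of diagonal / total pixels checked
users_accuracy = diag / error_matrix.sum(axis=1)     # per class: correct / row total
producers_accuracy = diag / error_matrix.sum(axis=0) # per class: correct / column total
commission_error = 1 - users_accuracy                # wrongly included in the map class
omission_error = 1 - producers_accuracy              # reference samples missed

# Kappa: agreement beyond what would be expected by chance.
expected = (error_matrix.sum(axis=1) * error_matrix.sum(axis=0)).sum() / total**2
kappa = (overall_accuracy - expected) / (1 - expected)

print(overall_accuracy, users_accuracy, producers_accuracy, kappa)
```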
Unit 7. Techniques of Multitemporal
Image Analysis
• Temporal refers to time.
• Spatiotemporal, or spatial temporal, is used in data analysis
when data is collected across both space and time.
• Temporal parts are the parts of an object that exist in
time.
• Temporal Pattern: A definite arrangement of features that
changes over time in a specific location.
• Temporal coverage is the span of time over which images are recorded and stored in an image archive.
• Temporal effects are any factors that change the spectral
characteristics of a feature over time. For example, the spectral
characteristics of many species of vegetation are in a nearly
continual state of change throughout a growing season. These
changes often influence when we might collect sensor data for a
particular application.
• The temporal aspects of natural phenomena are important for image
interpretation.
• Eg: For crop identification by obtaining images at several times during
the annual growing cycle
• Observations of local vegetation emergence
• natural vegetation mapping
• Seasonal variations of weather cause significant short-term changes.
• E.g. meteorological satellites (metsats) afford the advantage of global coverage at very high temporal resolution.
• Again, however, temporal and spatial effects might be the
keys to gathering the information sought in an analysis.
• For example, the spectral characteristics of many species
of vegetation are in a nearly continual state of change
throughout a growing season.
• What are multi temporal images?
• Multitemporal imaging is the acquisition of remotely
sensed data from more than one time period.
Multitemporal images and analysis techniques provide
the tools to monitor land use and land cover change
(LUCC) and have been instrumental in providing an
understanding of global environmental change.
• Multitemporal analysis focuses on the use of high temporal frequencies, such as:
• daily
• every 8–16 days,
• Monthly, seasonal, annual
• Our understanding of the impacts of global change would not have been
possible without multi-temporal imaging of diverse phenomena such as:
• snow and ice extent, sea surface temperature,
• deforestation, drought, water resources, wildfire,
• desertification, and more.
• Multitemporal imaging enables us to identify significant information:
• about how our world is changing, and
• to get unique insight into the relationships between the environment and the impact of human activities.
• For remote sensing applications in agriculture, multitemporal
analysis is particularly important for:
• crop-coverage mapping,
• Crop health status mapping
• crop yield estimation,
• crop phenology monitoring.
• Therefore, multitemporal data registration, i.e. the registration of images/data taken at different times, is a prerequisite procedure of significant importance in many applications concerning temporal change detection, such as:
• land cover change detection,
• coastline monitoring,
• deforestation,
• urbanization,
• informal settlements detection, etc.
• During multi-temporal image analyses, it is very difficult
to extract common features that exist in both images
because of:
• harsh contrast changes,
• different sensors and
• scene changes.
• Multi-temporal Analysis Techniques
•Post-classification comparison
•Change detection
•Image differencing
•Multi-date classification
•Principal components analysis
•On-screen multi-temporal display and digitising
Monitoring Change with Remote Sensing
• Satellite and aerial sensors provide consistent, repeatable
measurements. There is a lot of data available at different spatial,
temporal and spectral resolutions that make change detection
possible.
• Sensors take images of the earth at varying spatial, spectral
resolutions at different temporal resolutions and thus, there is an
archive of data available to identify changes on the landscape by
looking for spectral changes between images.
• Change Detection
• Change detection is a procedure to analyze changes in satellite
images taken at two different points in time.
• Timely and accurate change detection of Earth's surface features
is extremely important for understanding relationships and
interactions between human and natural phenomena in order to
promote better decision making.
• Remote sensing data are primary sources extensively used for change
detection in recent decades.
• Change detection involves quantifying temporal effects using
multi temporal data sets.
• When one is interested in knowing the changes over large areas and at frequent intervals, satellite data are commonly used.
• Changes on landscape can be detected using spectral information of
satellite images. For example, a vegetated area shows different
spectral signature compared to non-vegetated area.
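• For example, a simple image-differencing change detection can be sketched using NDVI computed for two dates. The arrays below are random placeholders for co-registered red and near-infrared bands, and the two-standard-deviation threshold is just one common, adjustable choice, not a fixed rule:

```python
import numpy as np

def ndvi(red, nir):
    """Normalised difference vegetation index; small epsilon avoids division by zero."""
    return (nir - red) / (nir + red + 1e-6)

# Hypothetical co-registered red and NIR bands for two dates (reflectance 0..1).
red_t1, nir_t1 = np.random.rand(2, 100, 100)
red_t2, nir_t2 = np.random.rand(2, 100, 100)

# Image differencing: subtract the two NDVI images and flag large changes.
diff = ndvi(red_t2, nir_t2) - ndvi(red_t1, nir_t1)
threshold = diff.mean() + 2 * diff.std()      # a common, adjustable choice
vegetation_gain = diff > threshold
vegetation_loss = diff < -threshold
```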
• Change Detection Analysis encompasses a broad range
of methods used to:
• identify,
• describe, and
• quantify differences between images of the same scene
at different times or under different conditions.
• To perform change detection, you select a working
image and a reference image.
• The working image is typically the more recent image.
The reference image is the older image.
• With each image—working and reference—you can
select whether to compare all layers in the image, or
select a specific layer.
• Post Classification Change Detection
• This expression stands for the analysis of changes using two
previously classified images.
• It can be seen which class has changed and how much - and to
which class the pixels are assigned afterwards.
• The new image can be interpreted easily and is ready for direct use, provided that the original images are well classified.
• Thus, the post-classification change detection depends on the
accuracy of the image classification.
• Understanding the magnitude, direction and agents of land use/land cover change (LU/LCC) is important for planning sustainable management of natural resources.
• To compute the rate of LULCC, the following equations are used for the rate and percentage of LULC change:
• R = (Q2 − Q1) / T
• %R = ((Q2 − Q1) / Q1) × 100
Where,
• R=rate of LULC change
• %R = Percentage of LULCC
• Q2=recent year of LULC in ha
• Q1= Initial year of LULC in ha
• T = Interval year between initial and recent year
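• A short worked example, assuming the formulation above (R = (Q2 − Q1)/T and %R = ((Q2 − Q1)/Q1) × 100) and entirely hypothetical figures:

```python
# Worked example with hypothetical figures: forest cover of 9,500 ha in the
# initial year (Q1) and 7,600 ha in the recent year (Q2), with T = 20 years.
Q1, Q2, T = 9500.0, 7600.0, 20

R = (Q2 - Q1) / T              # rate of change in ha per year  -> -95.0
pct_R = (Q2 - Q1) / Q1 * 100   # percentage change relative to Q1 -> -20.0

print(f"Rate of change: {R} ha/year, percentage change: {pct_R} %")
```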
Rate of change (land covers): example table template
• For each class (Waterbody, Built-up, Forest, Grassland, Agriculture, Wetland), the template records the percentage area coverage in each year (1991, 2001, 2011, 2021) and the rate of change for the periods 1991–2001, 2001–2011, 2011–2021 and 1991–2021.
Rate of change of LULC
Figure: rate of change in area (ha) for Woodland, Riverine forest, Grass land, Water body, Degraded land and Open land over the periods 1988–1998, 1998–2008, 2008–2018 and 1988–2018.
Eg: LULC Trend from 1988 to 2018
Figure: area (ha) of Woodland, Riverine forest, Grass land, Water body, Degraded land and Bare land in 1988, 1998, 2008 and 2018.
• The graph shows how the change trend looks from 1988 to 2018 in a given area.
Change matrix (rows = 1988 classes, columns = 1998 classes), area in ha and % of the 1988 class total:
• Woodland (total 95109.99 ha): Woodland 56131.9 (59.02%), Riverine forest 3522.13 (3.7%), Grass land 27240.06 (28.64%), Water body 184.59 (0.19%), Degraded land 7222.39 (7.6%), Bare land 808.92 (8.5%)
• Riverine forest (total 14106.59 ha): Woodland 5055.91 (35.84%), Riverine forest 7769.32 (55.076%), Grass land 530.07 (3.76%), Water body 14.94 (0.11%), Degraded land 720.78 (5.11%), Bare land 15.57 (0.11%)
• Grass land (total 66903.13 ha): Woodland 18502.87 (27.66%), Riverine forest 184.86 (0.28%), Grass land 45768.97 (68.41%), Water body 4.95 (0.001%), Degraded land 971.28 (1.45%), Bare land 1470.2 (2.17%)
• Water body (total 278.81 ha): Woodland 50.6 (19.81%), Riverine forest 1.89 (0.64%), Grass land 31.14 (10.5%), Water body 194.01 (68.66%), Degraded land 0.99 (0.36%), Bare land 0.18 (0.06%)
• Degraded land (total 2601.4 ha): Woodland 1938.16 (74.504%), Riverine forest 24 (0.00923%), Grass land 335.06 (13.02%), Water body 0.18 (0.01%), Degraded land 296.7 (11.3%), Bare land 7.3 (0.28%)
• Bare land (total 7990.454 ha): Woodland 4207.664 (53.19%), Riverine forest 1.8 (0.92%), Grass land 3641.22 (45.57%), Water body 8.1 (1.01%), Degraded land 26.82 (0.34%), Bare land 104.85 (1.31%)
Unit 8.
Image segmentation
• Image Segmentation is the process by which a digital image is
partitioned into various subgroups (of pixels) called Image
Objects, which can reduce the complexity of the image, and
thus analyzing the image becomes simpler.
• Segmentation techniques are used to isolate the desired
object from the image in order to perform analysis of the
object.
• Image segmentation refers to the process of decomposing
an input image into spatially discrete, contiguous,
nonintersecting, and semantically meaningful segments or
regions. These regions are patches comprising relatively
homogeneous pixels. These pixels share a higher internal
spectral homogeneity among themselves than external
homogeneity with pixels in other regions.
• Segmentation is the process of breaking the digital image down into multiple segments, that is, dividing the image into different sets of pixels.
• The primary goal of image segmentation is to simplify the
image for easier analysis.
• Image segmentation fulfils a number of image processing
functions. It is a vital preparatory step toward image classification
based on a hierarchy above the pixel level. Image segmentation is
also an indispensable step in certain applications that focus on a
particular type of land cover among several present within a study
area (e.g., water bodies in water quality analysis). The land covers
of no interest are stripped off an image via image segmentation.
• Image segmentation may be carried out in the top-down or bottom-
up strategies or a combination of both. In the top-down approach,
the input image is partitioned into many homogeneous regions. In
the bottom-up approach, pixels are linked together to form regions
that are amalgamated later. In either strategy homogeneous patches
are formed by generalizing the subtle spectral variation within an
identified neighborhood.
What is segmentation?
• Segmentation divides an image into groups of pixels
• Pixels are grouped because they share some local property (gray level,
color, texture, motion, etc.)
Figure: segmentation of an example image, with the output displayed in different ways: boundaries, labels, pseudocolors and mean colors.
• In image segmentation, you divide an image into various parts
that have similar attributes. The parts in which you divide the
image are called Image Objects.
• Segmentation is usually done with high-resolution images; in QGIS it is available under the object-based analysis (OBIA) tools.
Segmentation example
• Segments exhibiting certain shapes, spectral, and spatial
characteristics can be further grouped into objects. The objects
can then be grouped into classes that represent real-world
features on the ground.
• Data output from one tool is the input to subsequent tools,
where the goal is to produce a meaningful object-oriented
feature class map.
Segmentation
• Segmentation = partitioning
Cut dense data set into (disjoint) regions
• Divide image based on pixel similarity
• Divide spatiotemporal volume based on image similarity (shot
detection)
• Figure / ground separation (background subtraction)
• Regions can be overlapping (layers)
• The difference between segmentation and classification is clear to some extent. The classification process is easier than segmentation: in classification, all objects in a single image are categorized into classes based on pixel characteristics, while in segmentation each object of a single class in an image is highlighted with different shades to make it recognizable to computer vision.
• Image segmentation is used for many practical applications
including:
• medical image analysis,
• computer vision for autonomous vehicles,
• face recognition and detection,
• video surveillance, and
• satellite image analysis.
• Following are the primary types of image segmentation
techniques:
• Thresholding Segmentation.
• Edge-Based Segmentation.
• Region-Based Segmentation.
• Clustering-Based Segmentation Algorithms.
• The most commonly used segmentation techniques are:
• region segmentation techniques that look for the
regions satisfying given homogeneity criterion, and
• edge-based segmentation techniques that look for
edges between regions with different characteristics
• By using image segmentation techniques:
• you can divide and group specific pixels from an image,
• assign them labels and classify further pixels according to
these labels
• You can draw lines, specify borders, and separate particular
objects (important components) in an image from the rest of
the objects (unimportant components).
1. Thresholding Segmentation
• It is the simplest method for segmentation in image analysis.
• It divides the pixels in an image by comparing the pixel’s intensity
with a specified value (threshold).
• It is useful when the required object has a higher intensity than the
background (unnecessary parts).
• You can keep the threshold value constant or dynamic according to
your requirements.
• Thresholding is the simplest image segmentation method, dividing pixels
based on their intensity relative to a given value or threshold. It divides the
pixels in an image by comparing the pixel's intensity with a specified value
(threshold).
• It is suitable for segmenting objects with higher intensity than other objects or
backgrounds (unnecessary parts).
• The threshold value T can work as a constant in low-noise images. In some
cases, it is possible to use dynamic thresholds. Thresholding divides a
grayscale image into two segments based on their relationship to T, producing
a binary image.
1. Pixel-Based Segmentation
• Also known as thresholding, pixel-based image segmentation aims to
stratify an input image into pixels of two or more values through a
comparison of pixel values with the predefined threshold T individually. In
this method a pixel is examined in isolation to determine whether or not it
belongs to a region of interest based on its value in relation to the mean
value of all pixels inside this region.
• If its value is smaller than the specified threshold, then it is given a value
of 0 in the output image; otherwise it receives a value of 1. This method is
easy to implement and computationally simple.
• The thresholding method converts a grey-scale image into a binary
image by dividing it into two segments (required and not required
sections).
• According to the threshold values used, we can classify thresholding segmentation into the following categories:
• Simple Thresholding
• Otsu's Binarization
• Adaptive Thresholding
i. Simple Thresholding: you replace the image's pixels with either white or black. If the intensity of a pixel at a particular position is less than the threshold value, you replace it with black; if it is higher than the threshold, you replace it with white.
ii. Otsu’s Binarization: you calculate threshold value from the image’s histogram
if the image is bimodal. It is used for recognizing patterns and removing
unnecessary colors from a file. You can’t use it for images that are not bimodal.
iii. Adaptive Thresholding: Having one constant threshold value might not be a
suitable approach to take with every image. Different images have different
backgrounds and conditions. In this technique, you’ll keep different threshold
values for different sections of an image.
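• A minimal sketch of simple, Otsu and adaptive thresholding is shown below, assuming a single-band grey-scale image loaded as a NumPy array (a random array stands in here) and using scikit-image's threshold functions; the fixed value of 0.5 and the block size of 35 are illustrative assumptions:

```python
import numpy as np
from skimage.filters import threshold_otsu, threshold_local

# Hypothetical single-band (grey-scale) image; replace with a real band.
band = np.random.rand(200, 200)

# Simple thresholding: one fixed value T for the whole image.
T = 0.5
binary_fixed = band > T                 # True above the threshold, False below

# Otsu's binarization: T derived automatically from the histogram
# (works best on bimodal images).
T_otsu = threshold_otsu(band)
binary_otsu = band > T_otsu

# Adaptive thresholding: a different T for each neighbourhood of the image.
binary_adaptive = band > threshold_local(band, block_size=35)
```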
• 2. Edge-Based Segmentation
• Edge-based segmentation is one of the most popular methods of segmentation in image processing.
• It focuses on identifying the edges of different objects in an image.
• Edge detection is widely popular because it helps you in removing
unwanted and unnecessary information from the image.
• It reduces the image’s size considerably, making it easier to analyze
the same.
• Edge detection is an image processing technique for finding the
boundaries of objects within images. It works by detecting
discontinuities in brightness. Edge detection is used for image
segmentation and data extraction in areas such as image
processing, computer vision, and machine vision.
• It helps locate features of associated objects in the image using the
information from the edges.
• Algorithms used in edge-based segmentation identify edges in an
image according to the differences in:
• texture,
• contrast,
• grey level,
• colour, saturation, and other properties.
• You can improve the quality of your results by connecting all the
edges into edge chains that match the image borders more accurately.
• Segmentation is the finding of different regions, normally based on pixel characteristics, whereas edge detection refers to finding the contours (outlines) of any shape or object in the image to separate it from the background or other objects.
• In an image, an edge is a curve that follows a path of rapid change in image
intensity. Edges are often associated with the boundaries of objects in a scene.
Edge detection is used to identify the edges in an image.
Edge Detection: Problems:
• neighborhood size
• how to detect change, e.g. in a small neighbourhood of pixel values such as:
81 82 26 24
82 33 25 25
81 82 26 24
• Edge-based segmentation is a popular image processing technique
that identifies the edges of various objects in a given image. It helps
locate features of associated objects in the image using the
information from the edges. Edge detection helps strip images of
redundant information, reducing their size and facilitating analysis.
• Edge-based segmentation algorithms identify edges based on
contrast, texture, color, and saturation variations. They can
accurately represent the borders of objects in an image using edge
chains comprising the individual edges.
• The most powerful edge-detection method that edge provides is
the Canny method. The Canny method differs from the other edge-
detection methods in that it uses two different thresholds (to detect
strong and weak edges), and includes the weak edges in the output
only if they are connected to strong edges.
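• A small sketch of Canny edge detection using scikit-image follows; the synthetic square image and the chosen sigma and threshold values are only illustrative assumptions:

```python
import numpy as np
from skimage import feature

# Hypothetical grey-scale image: a bright square on a dark background.
image = np.zeros((100, 100))
image[30:70, 30:70] = 1.0

# Canny: Gaussian smoothing, gradient computation, non-maximum suppression,
# then two thresholds so weak edges are kept only when connected to strong ones.
edges = feature.canny(image, sigma=1.0, low_threshold=0.1, high_threshold=0.3)
print(edges.sum(), "edge pixels found")
```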
3. Region-Based Segmentation
• Region-based segmentation algorithms divide the image
into sections with similar features.
• Based on these methods, we can classify region-based
segmentation into the following categories:
• Region Growing
• Region Splitting and Merging
• Region-based segmentation involves dividing an image
into regions with similar characteristics. Each region is a
group of pixels, which the algorithm locates via a seed
point. Once the algorithm finds the seed points, it can
grow regions by adding more pixels or shrinking and
merging them with other points.
• Region Growing
• In this method, you start with a small set of pixels and then start iteratively
merging more pixels according to particular similarity conditions.
• A region growing algorithm would pick an arbitrary seed pixel in the
image, compare it with the neighbouring pixels and start increasing the
region by finding matches to the seed point.
• You should use region growing algorithms for images that have a lot of
noise as the noise would make it difficult to find edges or use thresholding
algorithms.
Region growing
• Start with (random) seed pixel as cluster
• Aggregate neighboring pixels that are similar to cluster model
• Update cluster model with newly incorporated pixels
• This is a generalized floodfill
• When cluster stops growing, begin with new seed pixel and continue
• An easy cluster model:
• Store mean and covariance of pixels in cluster
• One danger: Since multiple regions are not grown simultaneously, threshold
must be appropriate, or else early regions will dominate
Results – Region grow
Region Growing (Merge)
• A simple approach to image segmentation is to start from some
pixels (seeds) representing distinct image regions and to grow them,
until they cover the entire image
• For region growing we need a rule describing a growth mechanism
and a rule checking the homogeneity of the regions after each
growth step
• Region growing techniques start with one pixel of a potential
region and try to grow it by adding adjacent pixels till the pixels
being compared are too dissimilar.
•The first pixel selected can be just the first unlabeled pixel in the
image or a set of seed pixels can be chosen from the image.
• Usually a statistical test is used to decide which pixels can be
added to a region.
Region Grow Example
Figure: region growing applied to an example image, showing the input image and the resulting segmentation.
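• The following is a simplified, illustrative region-growing sketch (not a production algorithm): starting from a seed pixel, 4-neighbours are added while they stay within a tolerance of the running region mean; the image, seed location and tolerance are all assumptions made for the example:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.1):
    """Grow a region from `seed`, adding 4-neighbours whose value is within
    `tol` of the current region mean (a very simple homogeneity rule)."""
    rows, cols = img.shape
    grown = np.zeros_like(img, dtype=bool)
    grown[seed] = True
    region_sum, region_n = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grown[nr, nc]:
                if abs(img[nr, nc] - region_sum / region_n) <= tol:
                    grown[nr, nc] = True
                    region_sum += float(img[nr, nc])
                    region_n += 1
                    queue.append((nr, nc))
    return grown

# Hypothetical image: a darker patch inside a brighter background.
img = np.full((60, 60), 0.8)
img[20:40, 20:40] = 0.2
mask = region_grow(img, seed=(30, 30), tol=0.1)   # grows over the dark patch only
```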
• Region Splitting and Merging
• As the name suggests, a region splitting and merging focused method would
perform two actions together – splitting and merging portions of the image.
• It would first split the image into regions that have similar attributes and
merge the adjacent portions which are similar to one another. In region
splitting, the algorithm considers the entire image while in region growth, the
algorithm would focus on a particular point.
• It divides the image into different portions and then matches them according
to its predetermined conditions.
Split-and-Merge
• Split-and-merge algorithm combines these two ideas
• Split image into quadtree, where each region satisfies homogeneity
criterion
• Merge neighboring regions if their union satisfies criterion (like
connected components)
Figure: the original image, the result after splitting, and the result after merging.
Split
• The opposite approach to region growing is region splitting.
• It is a top-down approach and it starts with the assumption that
the entire image is homogeneous
• If this is not true, the image is split into four sub images
• This splitting procedure is repeated recursively until we split the
image into homogeneous regions
Split
• A disadvantage of splitting techniques is that they may create regions that are adjacent and homogeneous but not merged.
• The split-and-merge method is an iterative algorithm that includes both splitting and merging at each iteration.
4. Clustering-Based Segmentation Algorithms
• Clustering algorithms are unsupervised algorithms that help in finding hidden data in the image that might not be visible to normal vision. This hidden data includes information like clusters, structures, shadings, etc.
• As the name suggests, a clustering algorithm divides the image into
clusters (disjoint groups) of pixels that have similar features. It would
separate the data elements into clusters where the elements in a cluster are
more similar in comparison to the elements present in other clusters.
Comparison of segmentation techniques (technique, description, advantages, disadvantages):
• Thresholding Method – Description: focuses on finding peak values based on the histogram of the image to find similar pixels. Advantages: does not require complicated pre-processing; simple. Disadvantages: many details can get omitted; threshold errors are common.
• Edge-Based Method – Description: based on discontinuity detection, unlike similarity detection. Advantages: good for images having better contrast between objects. Disadvantages: not suitable for noisy images.
• Region-Based Method – Description: based on partitioning an image into homogeneous regions. Advantages: works well for images with a considerable amount of noise; can take user markers for faster evaluation. Disadvantages: time and memory consuming.
5. Watershed Segmentation
• Watersheds are transformations in a grayscale image. Watershed
segmentation algorithms treat images like topographic maps, with
pixel brightness determining elevation (height). This technique
detects lines forming ridges and basins, marking the areas between
the watershed lines. It divides images into multiple regions based on
pixel height, grouping pixels with the same gray value.
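• A minimal watershed sketch using SciPy and scikit-image, following the common distance-transform approach, is given below. The binary mask of two overlapping squares is a synthetic placeholder, markers are taken from local maxima of the distance map, and the inverted distance map is flooded; real workflows would start from a thresholded band or mask:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

# Hypothetical binary mask (e.g. from thresholding a band); replace with real data.
mask = np.zeros((80, 80), dtype=bool)
mask[10:40, 10:40] = True
mask[35:70, 35:70] = True

# "Elevation": distance to the background, so basins form at object centres.
distance = ndi.distance_transform_edt(mask)

# Markers: one seed per local maximum of the distance map.
coords = peak_local_max(distance, footprint=np.ones((3, 3)), labels=mask)
marker_mask = np.zeros(distance.shape, dtype=bool)
marker_mask[tuple(coords.T)] = True
markers, _ = ndi.label(marker_mask)

# Flood the inverted distance map; labels only grow inside the mask.
labels = watershed(-distance, markers, mask=mask)
print(labels.max(), "regions found")
```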
• Image segmentation is a commonly used technique in digital image
processing and analysis to partition an image into multiple parts or
regions, often based on the characteristics of the pixels in the image.
Image segmentation could involve separating foreground from
background, or clustering regions of pixels based on similarities in
color or shape.
Unit 9.
Remote sensing data Integration
• Many applications of digital image processing are enhanced through the
merger of multiple data sets covering the same geographical area. These
data sets can be of a virtually unlimited variety of forms.
• Multi-temporal data merging can take on many different forms. One
such operation is simply combining images of the same area taken on
more than one date to create a product useful for visual interpretation.
For example, agricultural crop interpretation is often facilitated through
merger of images taken early and late in the growing season.
• Data integration basically involves combining/merging
of data from multiple sources.
• The merging of data of higher spatial resolution with data
of lower resolution can significantly sharpen the spatial
detail of image and enhance it.
• The data integration helps to extract better and/or more
information.
• In the early days of analog remote sensing when the only data source
was aerial photography, the capability for integration of data from
different sources was limited.
• Today, most data are available in digital format from a wide array of
sensors, therefore, data integration is a common method used for
interpretation and analysis.
• Example: Imagery collected at different times integrated to identify
areas of change.
• For example, elevation data in digital form, called Digital Elevation
or Digital Terrain Models (DEMs/DTMs), may be combined with
remote sensing data for a variety of purposes.
• DEMs/DTMs may be useful in image classification, as effects due
to terrain and slope variability can be corrected, potentially
increasing the accuracy of the classification.
• Any data source which can be referenced spatially can be used in
this type of environment.
• Integration of data from different sources enables to improve the
ability to extract data from remotely sensed data.
• Since most data are available in digital form from a wide array of sensors, data integration is a common method used for interpretation and analysis.
• Remote sensing data integration fundamentally involves the
combining of data from multiple resources in an effort to extract
more information.
• Remote sensing data are:
• Multi-platform
• Multi-stage
• Multi-scaled
• Multi-spectral
• Multi-temporal, Multi-resolution
• Multi-phase, Multi-polarized etc.
• By analyzing diverse data together, it is possible to extract better
and more accurate information.
Multi-sensor Image data
• A unique element of remote sensing platforms is their multi-
sensor capability, which enhances the capacity for
characterizing a variety of perspectives.
• Multi-sensor image fusion is a synthesis technique that can
fuse source images from multiple sensors into a high-quality
image with comprehensive information.
• Multi-spectral images
• Images from different wavelength bands.
• A multispectral image is a collection of a few image layers of the same scene, each
of them acquired at a particular wavelength band. For example, a well-known high-resolution visible sensor operates in a 3-band multispectral mode, detecting radiation in three separate wavelength bands.
• Multi-spectral data are essential for:
• creating color composites,
• image classification, indices/ratioing,
• principal component analysis and image fusion.
• Different wavelengths of incident energy affected differently by each target and
they are absorbed, reflected and transmitted in different proportions.
• Multispectral remote sensing involves the acquisition of
visible, near infrared, and short-wave infrared images in
several broad wavelength bands. Different materials reflect
and absorb differently at different wavelengths.
• Multispectral remote sensing (RS) image data are basically
complex in nature, which have both spectral features with
correlated bands and spatial features correlated within the
same band (also known as spatial correlation).
• Multi-scale image integration
• In remote sensing, image fusion techniques are used to fuse high spatial
resolution panchromatic and lower spatial resolution multispectral images
that are simultaneously recorded by one sensor. This is done to create high
resolution multispectral image datasets (pansharpening).
• Spatial resolution is the detail in pixels of an image. High spatial
resolution means more detail and a smaller grid cell size. Whereas,
lower spatial resolution means less detail and larger pixel size.
Typically, drones capture images with one of the highest spatial
resolutions.
• Multi-temporal/multi-seasonal images
• Information from multiple images taken over a period of
time is referred as multi-temporal information.
• Multi-temporal imagery and data are useful for studying
changes in the environment and to monitor various
processes.
• These kinds of images are important for studying changes, trends, etc. related to time.
• Images taken over days, weeks, seasons, years apart.
• Eg. Vegetation phenology:
• How the vegetation changes throughout growing season
• Mapping of growth of cities, land use categories, etc.
• Satellites are ideal for monitoring changes on earth over time.
• Multistage, multiplatform, multiscale & multiresolution images
• Multi-stage remote sensing is used for landscape characterization
that involves gathering and analyzing information at several
geographic scales.
• It includes complete coverage of target area with low resolution as
well as high resolution imagery.
• Multiresolution data merging is useful for a variety of applications.
SPOT data are well suited to this approach as the 10 meter
panchromatic data can be easily merged with the 20 meter
multispectral data.
• Data from different sensors can also be merged, under the concept of multi-sensor data fusion.
• Multi-sensor data integration generally requires that the data be geometrically registered to a common geographic coordinate system.
• The merging of data of a higher spatial resolution with data of lower
resolution can significantly sharpen the spatial detail in an image and
enhance the detectability of features.
Figure: an aerial photograph of Addis Ababa and a Landsat image of Addis Ababa.
• Multiscale images require a series of images at different
scales taken at the same time.
• Multiscale images include images taken simultaneously by
space-borne (satellite) as well as air borne images.
• For interpreting multiscale images we use the larger-scale
images to interpret smaller-scale images and vice versa.
Multi-Source Data
• For this the data must be geometrically registered to a geographic
co-ordinate system or map base.
• So, all sort of data sources can be integrated together.
• Eg. DEM, and DTM model can be combined together with remote
sensing data for a variety of purposes.
• DEM/DTM are useful for image classification including slopes and
terrains also useful for 3D views and enhancing visualization.
• 9.2. Pan-sharpening
• It is known as Multiresolution image fusion.
• Pan-sharpening is a pixel level data fusion technique used to increase
the spatial resolution of the multi-spectral image using panchromatic
image while simultaneously preserving the spectral information. Also
known as resolution merge, image integration and multi-sensor data
fusion.
• Pan-sharpening increases the spatial resolution of the multispectral
image by merging it with the higher resolution panchromatic image.
• From a signal processing point of view, a general fusion filtering framework (GFF) can be formulated, which is well suited for the fusion of multiresolution and multisensor data such as optical-optical and optical-radar imagery.
Pan-sharpening
Figure (Capricorn Seamount): a panchromatic band at 0.5 m resolution is combined with bands 4, 3, 2 at 2 m resolution to produce a pan-sharpened 4, 3, 2 image at 0.5 m resolution.
• Applications:
• Sharpen multi-spectral data
• Enhance features using complementary information
• Enhance the performance of change detection algorithms
using multi-temporal datasets
• Improve classification accuracy
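• As one simple illustration of pixel-level pan-sharpening, the sketch below applies a Brovey (band-ratio) transform. The bands are random placeholders assumed to be already resampled to the panchromatic grid, and Brovey is only one of several possible fusion methods, not necessarily the one used by any particular software:

```python
import numpy as np

# Hypothetical inputs: low-resolution R, G, B bands already resampled to the
# panchromatic grid, plus the high-resolution panchromatic band (values 0..1).
rows, cols = 400, 400
r, g, b = np.random.rand(3, rows, cols)
pan = np.random.rand(rows, cols)

# Brovey transform: scale each multispectral band by the ratio of the
# panchromatic intensity to the sum of the multispectral bands.
total = r + g + b + 1e-6
r_sharp = r * pan / total
g_sharp = g * pan / total
b_sharp = b * pan / total
sharpened = np.dstack([r_sharp, g_sharp, b_sharp])   # high-resolution colour composite
```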
Unit 10.
Lidar, Drone and Radar Data Processing
• There are several advantages to be gained from the use of active
sensors, which have their own energy source:
• It is possible to acquire data at any time, including during the night
(similar to thermal remote sensing)
• Since the waves are created actively, the signal characteristics are
fully controlled (e.g. wavelength, polarization, incidence angle, etc.)
and can be adjusted according to desired application
• Active sensors are divided into two groups: imaging and non-imaging
sensors.
• Figure: Principles of active microwave remote sensing
• Radar sensors belong to the group of most commonly used active
(imaging) microwave sensors.
• The term radar is an acronym for radio detection and ranging.
• Radio refers to the microwave energy used and range is another term for distance.
• Radar sensors were originally developed and used by the military.
Nowadays, radar sensors are widely used in civil applications as
well, such as environmental monitoring.
RADAR
oIt is about radio detection and ranging.
oA radar uses a transmitter operating at either radio or microwave
frequencies to emit electromagnetic radiation and a directional
antenna or receiver to measure the time of arrival of reflected pulses
of radiation from distant objects. Distance to the object can be
determined since electromagnetic radiation propagates at the speed of
light.
RADAR remote sensing--------------
oRadar consists fundamentally of:
oa transmitter, a receiver, an antenna, and an electronics system to
process and record the data.
oRadar uses EM wave to identify the range, altitude,
direction or speed of both moving and fixed objects such
as: aircraft, ships, motor vehicles, weather formations and
obstacles (mountain, tree, etc).
‾ The first demonstration of the transmission of radio
microwaves and reflection from various objects was achieved
by Hertz in 1886. Shortly after the turn of the century, the first
rudimentary radar was developed for ship detection. In the
1920s and 1930s, experimental ground-based pulsed radars
were developed for detecting objects at a distance.
Imaging radars.
• A major advantage of radar is the capability of the
radiation to penetrate through cloud cover and most
weather conditions. Because radar is an active sensor, it
can also be used to image the surface at any time, day or
night. These are the two primary advantages of radar: all-
weather and day or night imaging.
• By measuring the time delay between the transmission of a pulse and
the reception of the backscattered "echo" from different targets, their
distance from the radar and thus their location can be determined. As
the sensor platform moves forward, recording and processing of the
backscattered signals builds up a two-dimensional image of the
surface.
Figure: the radar transmits pulses of microwaves (A) in a beam (B) and receives the backscattered beam (C).
Airborne versus Spaceborne Radars
Like other remote sensing systems, an imaging radar sensor may be
carried on either an airborne or space-borne platform. Regardless of
the platform used, a significant advantage of using a Synthetic
Aperture Radar (SAR) is that the spatial resolution is independent of
platform altitude. Thus, fine resolution can be achieved from both
airborne and space-borne platforms.
• Image characteristics such as foreshortening, layover, and
shadowing will be subject to wide variations, across a large
incidence angle range. Space-borne radars are able to avoid
some of these imaging geometry problems since they operate at
altitudes up to one hundred times higher than airborne radars.
oAn airborne radar is able to collect data anywhere and at any time
(as long as weather and flying conditions are acceptable).
oA space-borne radar does not have this degree of flexibility, as its
viewing geometry and data acquisition schedule is controlled by the
pattern of its orbit.
oHowever, satellite radars do have the advantage of being able to
collect imagery more quickly over a larger area than an airborne
radar, and provide consistent viewing geometry.
Radar Image Distortions
• As with all remote sensing systems, the viewing geometry of a radar results in
certain geometric distortions on the resultant imagery. Similar to the distortions
encountered when using cameras and scanners, radar images are also subject to
geometric distortions due to relief displacement. However, the displacement is
reversed with targets being displaced towards, instead of away from the sensor.
Radar foreshortening and layover are two consequences which result from relief
displacement.
• When the radar beam reaches the base of a tall feature tilted towards the radar (e.g.
a mountain) before it reaches the top foreshortening will occur. Again, because
the radar measures distance in slant-range, the slope (A to B) will appear
compressed and the length of the slope will be represented incorrectly (A' to B').
Maximum foreshortening occurs when the radar beam is perpendicular to the slope
such that the slope, the base, and the top are imaged simultaneously (C to D). The
length of the slope will be reduced to an effective length of zero in slant range
(C'D'). The foreshortened slopes appear as bright features on the image.
• Layover occurs when the radar beam reaches the top of a tall feature (B) before it
reaches the base (A). The return signal from the top of the feature will be received
before the signal from the bottom. As a result, the top of the feature is displaced
towards the radar from its true position on the ground, and "lays over" the base of
the feature (B‘ to A'). Layover effects on a radar image look very similar to
effects due to foreshortening. As with foreshortening, layover is most severe for
small incidence angles, at the near range of a swath, and in mountainous terrain.
oThe term radar is abbreviation made up of the words
radio, detection, and ranging.
o It refers to electronic equipment that detects the:
‾ presence,
‾ direction,
‾ height and
‾ distance/length of objects by using reflected electromagnetic energy (EME).
• Microwave remote sensing uses electromagnetic waves with
wavelengths between 1cm and 1m. These relatively longer
wavelengths have the advantage that they can penetrate clouds and
are independent of atmospheric conditions, such as haze.
• Although microwave remote sensing is primarily considered an
active technique, also passive sensors are used. They operate
similarly to thermal sensors by detecting naturally emitted
microwave energy.
•The recorder then stores the received signal.
Imaging radar acquires an image in which each
pixel contains a digital number according to the
strength of the backscattered energy that is received
from the ground.
Pr = (Pt · G^2 · λ^2 · σ) / ((4π)^3 · R^4)
where
Pr = received energy,
G = antenna gain,
λ = wavelength,
Pt = transmitted energy,
σ = radar cross section, which is a function of the object characteristics
and the size of the illuminated area, and
R = range from the sensor to the object.
• From this equation you can see that there are three main factors that
influence the strength of backscattered received energy:
i. radar system properties, i.e. wavelength, antenna & transmitted power,
ii. radar imaging geometry, that defines the size of the illuminated area
which is a function of, for example, beam-width, incidence angle and
range, and
iii. characteristics of interaction of the radar signal with objects, i.e. surface
roughness and composition, and terrain topography and orientation.
Principles of imaging radar
• Imaging radar systems include several components: a transmitter, a
receiver, an antenna and a recorder.
• The transmitter is used to generate the microwave signal and transmit
the energy to the antenna from where it is emitted towards the Earth’s
surface.
• The receiver accepts the backscattered signal as received by the
antenna, filters and amplifies it as required for recording.
Application of RADAR Images
o Agriculture: for crop type identification, crop condition monitoring, soil moisture
measurement, and soil tillage and crop residue identification;
o Forestry: for clear-cuts and linear features mapping, biomass estimation, species
identification and fire scar mapping;
o Geology: for geological mapping;
o Hydrology: for monitoring wetlands and snow cover;
o Oceanography: for sea ice identification, coastal wind-field measurement, and wave slope
measurement;
o Shipping: for ship detection and classification;
o Coastal Zone: for shoreline detection, substrate mapping, slick detection and general
vegetation mapping.
• Microwave polarizations
• The polarization of an electromagnetic wave is important in the
field of radar remote sensing. Depending on the orientation of the
transmitted and received radar wave, polarization will result in
different images.
• Using different polarizations and wavelengths, you can collect
information that is useful for particular applications,.
• In radar system, you will come across the following
abbreviations:
oHH: horizontal transmission and horizontal reception,
oVV: vertical transmission and vertical reception,
oHV: horizontal transmission and vertical reception, and
oVH: vertical transmission and horizontal reception.
• Spatial resolution
• In radar remote sensing, the images are created from the
backscattered portion of transmitted signals. Without
further sophisticated processing, the spatial resolutions in
slant range and azimuth direction are defined by pulse
length and antenna beam width, respectively. This setup is
called Real Aperture Radar (RAR).
• Due to the different parameters that determine the spatial
resolution in range and azimuth direction, it is obvious
that the spatial resolution in the two directions is different.
• For radar image processing and interpretation it is useful
to resample the image data to regular pixel spacing in both
directions.
• Distortions in radar images
• Due to the side-looking viewing geometry, radar images suffer from
serious geometric and radiometric distortions. In radar imagery, you
encounter variations in scale (caused by slant range to ground range
conversion), foreshortening, layover and shadows (due to terrain
elevation).
• Interference due to the coherence of the signal causes speckle effect.
• Scale distortions
• Radar measures ranges to objects in slant range rather than true
horizontal distances along the ground. Therefore, the image has
different scales moving from near to far range.
• This means that objects in near range are compressed with respect
to objects at far range. For proper interpretation, the image has to be
corrected and transformed into ground range geometry.
Platforms
Device which hold sensors like Ground based, Airborne and
Satellite.
The vehicles or carriers for remote sensors.
In Remote Sensing, the sensors are mounted on platforms
RS sensors are attached to moving platforms, such as aircraft or
satellites.
The platforms are classified according to their heights and the events to be monitored.
• Platforms are classified into three categories.
• Ground-based, Airborne and Space-borne
1. Ground-Based Platforms
a. Ground based systems: in situ measurements are widely used to
calibrate and atmospherically correct airborne or satellite-derived data
that are acquired at the same time. It can be:
•Carried on vehicle
•Can be mounted on fence, high bridge and tower
•Extendable to a height of 15m above the surface.
•At the top of the platform there are:
Spectral reflectance meters
Photographic systems
IR or Microwave scanners
Limitation:
Vehicles are limited to roads, and the range is confined to a small area along or around the road.
Examples of Ground-Based Platforms
Portable masts; weather surveillance radar
2. Airborne platforms
oBalloon based missions and measurements
oHigh flying balloons are an important tool for studying the atmosphere.
oThe three major advantages of the balloon are the following:
oThey cover an extensive altitude range.
oIt provide opportunity for additional, correlative data for satellite based
measurements, including both validation ("atmospheric truth") and
complementary data (eg, measurement of species not measured from
space based instrument).
oProvide a unique way of covering a broad range of
altitudes for in-situ or remote sensing measurements in the
stratosphere.
o They range from 22–40 km in altitude.
o Balloons are an important and inexpensive venue for testing instruments under development.
•There is also potential for unmanned aerial vehicles (UAVs) as airborne platforms.
Examples of airborne platform
• Radio-sonde: an airborne instrument used for measuring pressure,
temperature and relative humidity in the upper air.
Wind Finding Radar: determines the speed and direction of winds in the air by means of radar echoes.
Rawin-sonde: an electronic device used for measuring wind velocity, pressure, temperature and humidity in the air.
Airborne Remote sensing
• In airborne remote sensing, downward or sideward looking sensors are
mounted on an aircraft to obtain images of the earth's surface.
• An advantage of airborne remote sensing, compared to satellite remote
sensing, is the capability of offering very high spatial resolution images.
• The disadvantages are low area coverage and high cost per unit area of
ground coverage.
• It is not cost-effective to map a large area using airborne remote sensing
system.
• Airborne remote sensing missions are often carried out as one-time
operations, whereas earth observation satellites offer the possibility
of continuous monitoring of the earth.
• Analog aerial photography, videography, and digital photography
are commonly used in airborne remote sensing.
• Digital photography permits real-time transmission of the remotely
sensed data to a ground station for immediate analysis.
Advantages of Aircraft platforms for remote sensing systems
• Aircraft can fly at relatively low altitudes allowing for sub-meter spatial resolution.
• Aircraft can easily change their schedule to avoid weather problems such as
clouds, which may block a passive sensor's view of the ground.
• Last minute timing changes can be made to adjust for illumination from the sun, the
location of the area to be visited and additional revisits to that location.
• Sensor maintenance, repair and configuration changes are easily made to aircraft
platforms.
• Aircraft flight paths have no boundaries except political boundaries.
Drawbacks Aircraft platforms for remote sensing systems
Getting permission to enter foreign airspace can be a long and frustrating process.
The low altitude flown by aircraft narrows the sensor's field of view, requiring many passes to cover a large area on the ground.
The turnaround time to get the data to the user is lengthened by the need to return the aircraft to the airport before transferring the raw image data to the data provider's facility for preprocessing.
3. Space borne platforms
• In space borne remote sensing, sensors are mounted on-board a
spacecraft (space shuttle or satellite) orbiting the earth. e.g. Rockets,
Satellites and space shuttles.
• Space borne platforms range up to about 36,000 km above the earth's surface.
Advantages of space borne remote sensing
• Large area coverage;
• Frequent and repetitive coverage of an area of interest;
• Quantitative measurement of ground features using calibrated sensors;
• Semi automated computerized processing and analysis;
• Relatively lower cost per unit area of coverage
Airborne versus Space borne Radars
• Fine resolution can be achieved from both airborne and space borne platforms.
• Airborne radars image over a wide range of incidence angles, perhaps as much as 60 or 70 degrees, in order to achieve relatively wide swaths.
• In the airborne case, the incidence angle (look angle) has a significant effect on the backscatter from surface features and on their appearance in an image.
• Space borne radars are able to avoid some of these imaging geometry problems
as they operate at altitudes up to 100 times higher than airborne radars.
Cont. …
• The geometrical problem can be reduced by acquiring imagery from
more than one look direction.
• Airborne radar is able to collect data anywhere and at any time (as
long as weather and flying conditions are acceptable!).
• Satellite radars have advantage of being able to collect imagery more
quickly over a larger area and provide consistent viewing geometry.
• A space borne radar may have a revisit period as short as one day.
oAirborne radar will be vulnerable to variations in environmental/
weather conditions.
oIn order to avoid geometric positioning errors due to random
variations in the motion of aircraft, the radar system must use
sophisticated navigation/positioning equipment and advanced image
processing to compensate for these variations.
oSpace borne radars are not affected by motion of this type.
Orbit Types
• Polar orbit: an orbit with an inclination angle between 80° and 100°.
• Sun-synchronous orbit: is a near-polar orbit that covers each area of the
world at a constant local time of day called local sun time.
• Geostationary orbit: refers to orbits in which the satellite is placed above
the equator (inclination angle = 0°).
• The figure shows meteorological observation systems comprised of
geostationary and polar satellites.
Hence, sun-synchronous orbit ensures consistent illumination conditions when acquiring
images in a specific season over successive years, or over a particular area over a series of
days.
This is an important factor for monitoring changes between images or for mosaicking
adjacent images together.
Geostationary satellites, at altitudes of approximately 36,000km, revolve at speeds which
match the rotation of the Earth so they seem stationary, relative to the Earth's surface.
This allows the satellites to observe and collect information continuously over specific
areas. Weather and communications satellites commonly have these types of orbits.
Due to their high altitude, some geostationary weather satellites can monitor weather and
cloud patterns covering an entire hemisphere of the Earth.
• Applications of radar
• There are many useful applications of radar images. Radar data provide
complementary information to visible and infrared remote sensing data. In the case
of forestry, radar images can be used to obtain information about forest canopy,
biomass and different forest types.
• Radar images also allow the differentiation of different land cover types, such as
urban areas, agricultural fields, water bodies, etc.
• In geology and geomorphology the fact that radar provides information about
surface texture and roughness plays an important role in lineament detection and
geological mapping.
Light Detection And Ranging (LIDAR)
• LIDAR is a remote sensing technology which uses much shorter
wavelength of electromagnetic spectrum.
• Light Detection And Ranging (LiDAR) is a laser-based remote sensing
technology.
• A technique that can measure the distance to and other properties of a
target by illuminating the target.
• LIDAR is a remote sensing method used to examine the surface of the
Earth.
• LiDAR, is used for measuring the exact distance of an object
on the earth’s surface.
• LiDAR uses a pulsed laser to calculate an object’s variable
distances from the earth surface.
• This technology is used in (GIS) to produce a digital elevation
model (DEM) or a digital terrain model (DTM) for 3D
mapping.
• LIDAR uses:
• Ultraviolet
• Visible and
• Infrared light to image or capture the target
• It can be used with wide range of targets including:
• Non-metallic objects
• rocks
• Chemical compounds
LIDAR Operating Principle
• Emission of a laser pulse
• Record of the backscattered signal
• Distance measurement: half of the time of travel × the speed of light (see the sketch after this list)
• Retrieving plane position and altitude
• Computation of precise echo position
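A minimal sketch of the range computation referred to in the list above; the echo time used in the example is illustrative.
```python
# Minimal sketch of the LiDAR range equation: the pulse travels to the target
# and back, so the one-way distance is (time of travel x speed of light) / 2.
C = 299_792_458.0          # speed of light (m/s)

def pulse_range_m(time_of_travel_s):
    """One-way distance to the target from the round-trip travel time."""
    return time_of_travel_s * C / 2.0

print(round(pulse_range_m(6.67e-6)))   # a ~6.67 microsecond echo -> ~1000 m range
```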
• LiDAR on drones is well suited to:
• Small areas to fly over
• Mapping under vegetation
• Hard-to-access zones
• Data needed in near real-time or frequently
• Accuracy range required between 2.5 and 10 cm
• LiDAR systems integrate three main components, whether they are
mounted on automotive vehicles, aircraft or UAVs.
• These three main components are:
• Laser scanner
• Navigation and positioning system
• Computing technology
1. Laser Scanner
• LiDAR systems pulse laser light from various mobile platforms
(automobiles, airplanes, drones…) through air and vegetation
(aerial laser) and even water (bathymetric laser).
• A scanner receives the light back (echoes), measuring distances
and angles.
• The choice of optics and scanner greatly influences the resolution
and the range at which the LiDAR system can operate.
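A rough back-of-envelope sketch of how scanner and flight settings translate into swath width and average point density; all numbers are illustrative, not vendor specifications.
```python
# Rough back-of-envelope sketch (illustrative numbers, not vendor specs):
# how scanner settings and flight parameters translate into swath width and
# average point density for an across-track scanning LiDAR.
import math

def swath_width_m(altitude_m, fov_deg):
    """Across-track width illuminated by the scanner."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)

def point_density_pts_m2(pulse_rate_hz, speed_m_s, swath_m):
    """Average points per square metre = pulses per second / area covered per second."""
    return pulse_rate_hz / (speed_m_s * swath_m)

swath = swath_width_m(altitude_m=100.0, fov_deg=70.0)           # ~140 m for a drone at 100 m
density = point_density_pts_m2(pulse_rate_hz=300_000, speed_m_s=5.0, swath_m=swath)
print(round(swath, 1), round(density, 1))                       # ~140.0 m, ~428 pts/m^2
```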
2. Navigation and positioning systems
• Whether a LiDAR sensor is mounted on aircraft, car or UAS
(unmanned aerial systems), it is crucial to determine the absolute
position and orientation of the sensor to make sure the captured data are
usable.
• Global Navigation Satellite Systems (GNSS) provide accurate geographical
information on the position of the sensor (latitude, longitude, height),
while an inertial measurement unit (IMU) records its precise orientation.
3. Computing technology
• To make the most of the data, computation is required: it is what makes
the LiDAR system work by defining the precise position of each echo.
• It is also needed for in-flight data visualization and for post-processing,
to increase the precision and accuracy delivered in the 3D mapping
point cloud.
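A minimal, simplified sketch of how the three components come together to define the precise echo position (direct georeferencing): the GNSS position, the IMU attitude and the scanner's range/angle measurement are combined for each pulse. Boresight and lever-arm calibration are ignored here and all inputs are illustrative.
```python
# Minimal sketch of direct georeferencing (simplified: no boresight/lever-arm
# calibration; roll-pitch-yaw applied as a single rotation). It combines the
# GNSS position, the IMU attitude and the scanner's range/angle measurement
# to compute the echo position in a local map frame. Inputs are illustrative.
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Rotation from the sensor frame to the local level frame (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def echo_position(sensor_xyz, roll, pitch, yaw, range_m, scan_angle):
    """Echo position = sensor position + attitude rotation applied to the ray."""
    # Ray in the sensor frame: pointing down, tilted across-track by the scan angle
    ray = np.array([0.0, range_m * np.sin(scan_angle), -range_m * np.cos(scan_angle)])
    return np.asarray(sensor_xyz) + rotation_matrix(roll, pitch, yaw) @ ray

# Example: level flight at 100 m, 10-degree scan angle, 101.5 m measured range
print(echo_position([500_000.0, 4_300_000.0, 100.0], 0.0, 0.0, 0.0,
                    101.5, np.radians(10.0)))
```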
• There are two basic types of lidar: airborne and terrestrial.
1. Airborne
• With airborne lidar, the system is installed in either a
fixed-wing aircraft or helicopter.
• The infrared laser light is emitted toward the ground and
returned to the moving airborne lidar sensor.
• There are two types of airborne sensors: topographic and
bathymetric.
i. Topographic LiDAR
• Topographic LiDAR can be used to derive surface models for
use in many applications, such as forestry, hydrology,
geomorphology, urban planning, landscape ecology, coastal
engineering, survey assessments, and volumetric calculations.
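For the forestry application mentioned above, a common topographic-LiDAR product is the canopy height model (CHM), obtained by differencing a digital surface model and a digital terrain model. A minimal sketch with small, illustrative, already-aligned grids:
```python
# Minimal sketch: a canopy height model (CHM) is the difference between a
# digital surface model (DSM, top of canopy) and a digital terrain model
# (DTM, bare ground). The grids below are illustrative, aligned, in metres.
import numpy as np

def canopy_height_model(dsm, dtm):
    chm = dsm - dtm
    return np.clip(chm, 0.0, None)   # negative values are noise; clamp to zero

dsm = np.array([[212.0, 215.5], [210.2, 218.0]])   # top-of-canopy elevations
dtm = np.array([[200.0, 201.0], [200.5, 202.0]])   # ground elevations
print(canopy_height_model(dsm, dtm))               # tree heights: 12.0, 14.5, 9.7, 16.0
```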
ii. Bathymetric lidar
• Bathymetric lidar is a type of airborne acquisition that is
water-penetrating.
• Most bathymetric lidar systems collect elevation and water
depth simultaneously, which provides an airborne lidar survey
of the land-water interface.
• Bathymetric information is also used to locate objects on the
ocean floor.
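A minimal sketch of the depth measurement described above: bathymetric systems time both the water-surface echo and the bottom echo, and the depth follows from the extra travel time and the slower speed of light in water. Refraction geometry is ignored (vertical incidence assumed) and the timing value is illustrative.
```python
# Minimal sketch (vertical-incidence case, no refraction-geometry correction):
# water depth from the time difference between the surface echo and the
# bottom echo, using the reduced speed of light in water.
C = 299_792_458.0        # speed of light in vacuum (m/s)
N_WATER = 1.33           # refractive index of water

def water_depth_m(dt_surface_to_bottom_s):
    """Depth = (extra round-trip time x speed of light in water) / 2."""
    c_water = C / N_WATER
    return dt_surface_to_bottom_s * c_water / 2.0

print(round(water_depth_m(89e-9), 1))   # an 89 ns echo separation -> ~10 m depth
```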
2. Terrestrial lidar
• Terrestrial lidar collects very dense and highly accurate
points, which allows precise identification of objects.
• These dense point clouds can be used to manage facilities,
conduct highway and rail surveys, and even create 3D city
models for exterior and interior spaces.
• There are two main types of terrestrial lidar: mobile and
static.
• In the case of mobile acquisition, the lidar system is
mounted on a moving vehicle.
• In the case of static acquisition, the lidar system is
typically mounted on a tripod or stationary device.
LiDAR applications
• Power Utilities: power line survey to detect line sagging issues or for planning activity
• Mining: surface/volume calculation to optimize mine operations (see the volume sketch after this list)
• Civil engineering: mapping to help leveling, planning and infrastructure optimization
(roads, railways, bridges, pipelines, golf courses), renovation after natural
disasters, and beach-erosion surveys to build emergency plans
• Archaeology: mapping through the forest canopy to speed up discoveries
• Forestry: mapping forests to optimize activities or help tree counting
• Environmental research: measuring growth speed, disease spreading
• Meteorology (wind speed, atmospheric condition, clouds, aerosols)
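As flagged in the mining item above, surface/volume calculation usually reduces to differencing two LiDAR-derived DEMs of the same area and summing the per-cell change over the cell area. A minimal sketch with illustrative grids:
```python
# Minimal sketch for the "surface/volume calculation" item above: stockpile or
# excavation volume from two LiDAR-derived DEMs (a survey DEM and a base DEM).
# Grids and cell size are illustrative.
import numpy as np

def volume_change_m3(survey_dem, base_dem, cell_size_m):
    diff = survey_dem - base_dem                 # elevation change per cell (m)
    cell_area = cell_size_m ** 2                 # m^2
    fill = diff[diff > 0].sum() * cell_area      # material added (m^3)
    cut = -diff[diff < 0].sum() * cell_area      # material removed (m^3)
    return fill, cut

survey = np.array([[102.0, 103.5], [101.0, 100.0]])
base = np.array([[100.0, 100.0], [100.0, 100.5]])
print(volume_change_m3(survey, base, cell_size_m=2.0))   # (26.0, 2.0) m^3
```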
• UAV/Drone image
• UAV stands for Unmanned Aerial Vehicle, something that
can fly without a pilot onboard.
• The terms drone and UAV mean the same thing and can be
used interchangeably.
• UAV/drone – is the actual aircraft being piloted/operated by
remote control or onboard computers.
• A drone is an unmanned aircraft.
• Essentially, a drone is a flying robot that can be remotely
controlled or fly autonomously using software-controlled
flight plans in its embedded systems, which work in
conjunction with onboard sensors and a global positioning
system (GPS).
• UAV remote sensing can be used to:
• Track erosion
• Track vegetation growth around infrastructure
• Track damage
• Track land changes
• Track and prevent theft, and more.
• Recent developments in the technology of drones (UAVs
(Unmanned Aerial Vehicles), etc.) have opened up
important new possibilities in the field of remote sensing
so that drones can be regarded as the third generation
of platforms generating remotely sensed data of the
surface of the Earth.
Examples of Drones
• UAV/drone imagery is imagery acquired by drones; it is
essential for creating geospatial products such as
orthomosaics, digital terrain models, or 3D textured
meshes (see the ground-sampling-distance sketch below).
• An unmanned aerial vehicle system has two parts, the
drone itself and the control system.
• Unmanned aerial vehicle (UAV) technology bridges the
gap among space borne, airborne, and ground-based
remote sensing data.
• Its light weight and low price enable affordable
observations with very high spatial and temporal
resolutions.
• Drones are excellent for taking high-quality aerial photographs
and video, and collecting vast amounts of imaging data.
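The level of detail of those orthomosaics and terrain models is governed by the ground sampling distance (GSD) of the drone imagery. The sketch below applies the standard GSD relation; the camera parameters are illustrative, not a specific model's specifications.
```python
# Minimal sketch: ground sampling distance (GSD) of drone imagery, i.e. the
# ground size of one image pixel, which drives the detail of orthomosaics and
# DEMs. The camera parameters below are illustrative.
def gsd_cm_per_pixel(flight_height_m, focal_length_mm, pixel_size_um):
    """GSD = flight height x physical pixel size / focal length."""
    pixel_size_mm = pixel_size_um / 1000.0
    gsd_m = flight_height_m * pixel_size_mm / focal_length_mm
    return gsd_m * 100.0                      # convert metres to centimetres

# A 100 m flight with an 8.8 mm lens and 2.4 micrometre pixels -> ~2.7 cm/pixel
print(round(gsd_cm_per_pixel(100.0, 8.8, 2.4), 1))
```
Flying lower, using a longer focal length, or using a camera with smaller pixels all reduce the GSD and therefore increase the ground detail captured.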
• Uses of drone/UAV
‾ Mapping of landslide-affected areas
‾ Infested-crop damage assessment
‾ 3-dimensional terrain model construction
‾ Archaeological surveys
Applications…..
• Agriculture
• Wildlife monitoring
• Weather forecasting
• Military
End of Course