471112728-DIGITAL-IMAGE-PROCESSING-NOTES-VTU.pptx
1
Module -1
DIGITAL IMAGE
FUNDAMENTALS
2
Why do we need image processing?
It is motivated by these major
applications:
o Improvement of pictorial
information for human
perception
o Autonomous machine
applications
o Efficient storage and
transmission
3
Image Processing Goals
•Image Processing is a subclass of signal
processing concerned specifically with pictures.
•It aims to improve image quality for
• Human perception: subjective
• Computer interpretation: objective
•Compress images for efficient
storage/transmission
4
What is Digital Image Processing?
•Computer manipulation of pictures, or images
that have been converted into numeric form.
Typical operations include
•Contrast Enhancement
•Remove blur from an image
•Smooth out graininess, speckle, or noise
•Magnify, minify, or rotate an image (image
warping)
•Geometric Correction
•Image Compression for efficient
storage/transmission.
5
What is Digital Image???
An image may be defined as a 2D function f(x,y), where
x and y are the spatial coordinates and f is the intensity
value at x and y.
If x, y, and f are all discrete, then the image is called a
digital image.
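This definition maps directly onto a 2-D array of intensity samples. A minimal sketch in Python/numpy (the array values are arbitrary, chosen only for illustration):

```python
import numpy as np

# A 3x3 digital image: x indexes rows, y indexes columns,
# and f[x, y] is the discrete intensity at that location.
f = np.array([[10, 20, 30],
              [40, 50, 60],
              [70, 80, 90]], dtype=np.uint8)

print(f.shape)   # spatial dimensions (M, N) -> (3, 3)
print(f[1, 2])   # intensity value at x=1, y=2 -> 60
```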
Examples of Fields that use Digital Image
processing.
6
7
Gamma-Ray Imaging
Major uses of
gamma-ray imaging
include nuclear
medicine and
astronomical
observations.
•Nuclear medicine:
patient is injected with
radioactive isotope that
emits gamma rays as it
decays. Images are
produced from emissions
collected by detectors.
8
Contd.
•Fig 1a shows an image of a complete bone scan
obtained by using gamma-ray imaging.
•Images of this sort are used to locate sites of
bone pathology such as infections or tumors.
•Fig 1b shows another modality called
Positron Emission Tomography (PET). This image
shows one tumor in the brain and one in the lung,
easily visible as small white masses.
9
X-Ray Imaging
Oldest source of EM radiation
for imaging
•Used for CAT scans
•Used for angiograms where
X-ray contrast medium is
injected through catheter to
enhance contrast at site to be
studied.
•Industrial inspection
10
•CT Image: Computed Tomography (good for hard tissues
such as bones).
•In CT, each slice of the human body is imaged by means of
X-rays, and then a number of such images are piled up to
form a volumetric representation of the body or a specific
part.
X-Ray Imaging(Contd…)
11
Imaging in the Microwave Band:
The dominant application of imaging in the microwave
band is radar.
The imaging radar has the ability to collect data over
any region at any time regardless of weather or
ambient lighting conditions.
Imaging in the Radio Band.
The major application is in the field of medicine, which includes MRI.
MRI (Magnetic Resonance Imaging) is very similar to CT imaging but
provides more detailed images of the soft tissues of the body. It can be
used to study both the structure and function of the body.
The difference between CT and MRI is the imaging radiation: CT uses
ionizing radiation such as X-rays, whereas MRI uses a powerful
magnetic field.
Thermal imaging: a thermographic camera is used to capture images
for night vision.
12
Imaging in the Visible band and Infrared band:
Infrared band applications:
Industrial inspection
-inspect for missing parts
-missing pills
-unacceptable bottle fill
-unacceptable air pockets
-anomalies in cereal color
-incorrectly manufactured replacement lens for eyes
Imaging in visible band
• Face detection &recognition
• Iris Recognition
• Number Plate recognition
13
Fundamental Steps in Image Processing
14
Components of a general-purpose image processing system
15
Structure of Human Eye
•Shape is nearly spherical
•Average diameter = 20mm
•Three membranes:
-Cornea and Sclera
-Choroid
-Retina
16
Structure of Human Eye
•Cornea
-Tough, transparent tissue that covers
the anterior surface of the eye
•Sclera
-Opaque membrane that encloses the
remainder of the optical globe
•Choroid
-Lies below the sclera
-Contains network of blood vessels
that serve as the major source of
nutrition to the eye.
-Choroid coat is heavily pigmented
and hence helps to reduce the
amount of extraneous light entering
the eye and the backscatter within
the optical globe
17
Lens and Retina
•Lens
-Both infrared and ultraviolet light are
absorbed appreciably by proteins within the
lens structure and, in excessive amounts, can
cause damage to the eye
•Retina
-Innermost membrane of the eye, which lines
the inside of the wall’s entire posterior
portion. When the eye is properly focused,
light from an object outside the eye is imaged
on the retina.
18
Receptors
•Two classes of light receptors on retina: cones and rods
•Cones
-6-7 million cones lie in central portion of the retina,
called the fovea.
-Highly sensitive to color and bright light.
-Resolve fine detail since each is connected to its own
nerve end.
-Cone vision is called photopic or bright-light vision.
•Rods
-75-150 million rods distributed over the retina
surface.
-Reduced amount of detail discernible since several
rods are connected to a single nerve end.
-Serve to give a general, overall picture of the field of
view.
-Sensitive to low levels of illumination.
-Rod vision is called scotopic or dim-light vision.
19
Distribution of Cones and Rods
•Blind spot: no receptors in region of emergence of optic nerve.
•Distribution of receptors is radially symmetric about the fovea.
•Cones are most dense in the center of the retina (e.g., fovea)
•Rods increase in density from the center out to 20° and then
decrease
Density of rods and cones for a cross section of right eye
20
Image Formation in the Eye
21
Brightness Adaptation and Discrimination
•The eye’s ability to discriminate between
intensities is important.
•Experimental evidence suggests that
subjective brightness (perceived) is a
logarithmic function of light incident on eye.
Notice the approximately linear response on the log
scale below.
22
Contd…
23
1. Mach Band Effect
Two Phenomena to illustrate that the perceived brightness is
not a simple function of intensity
24
2. Simultaneous Contrast
Two Phenomena to illustrate that the perceived brightness
is not a simple function of intensity(contd….)
25
Optical Illusions
Unit – 2: DIGITAL IMAGE FUNDAMENTALS
26
IMAGE SENSING AND ACQUISITION
27
Image Acquisition Using a Single Sensor
28
Image Acquisition Using Sensor Strips
29
Image Acquisition Using Sensor Array
30
Sampling and Quantization
31
Continuous image projected onto an image array
Result of image sampling and quantization
32
REPRESENTING DIGITAL
IMAGE
33
Contd…
Let A be a digital image.
• The pixel intensity levels (gray-scale levels) lie in
the interval [0, L-1]:
0 ≤ a(i,j) ≤ L-1, where L = 2^k
• If b is the number of bits required to store a digitized image
of size M by N with k bits per pixel, then b is given as
b = M x N x k
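As a quick numeric check of these formulas (the 1024 x 1024, 8-bit sizes below are chosen arbitrarily for illustration):

```python
def image_storage_bits(M, N, k):
    """Bits needed to store an M x N digitized image with k bits per pixel: b = M*N*k."""
    return M * N * k

b = image_storage_bits(1024, 1024, 8)
print(b)          # 8388608 bits
print(b // 8)     # 1048576 bytes (1 MB)
print(2 ** 8)     # L = 2^k = 256 gray levels
```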
34
 The range of values spanned by the gray scale is called the
dynamic range of an image.
 It is also defined as the ratio of the maximum measurable
intensity to the minimum detectable intensity level.
 Upper limit- saturation
 Lower limit- Noise
 Images whose gray levels span a significant portion of the
gray scale are considered as having a high dynamic range.
 Contrast is the difference between the highest and lowest intensity levels.
 When an appreciable number of pixels exhibit this property,
the image will have high contrast.
 Conversely, an image with low dynamic range tends to have a
dull, washed out gray look.
Contd…
35
Spatial and Intensity Resolution
Spatial resolution is the smallest discernible(noticeable) detail in
an image.
 It is often expressed in dots per inch (dpi) or line pairs per unit
distance.
Gray-level resolution similarly refers to the smallest discernible
(noticeable) change in gray level.
 Measuring discernible changes in gray level is a highly subjective
process
The number of gray levels is usually an integer power of 2
Thus, an L-level digital image of size M*N has a spatial resolution
of M*N pixels and a gray-level resolution of L levels
36
False Contouring
37
The effect caused by the use of an insufficient number of gray levels in
smooth areas of a digital image is called false contouring.
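A small sketch of the mechanism (uniform requantization of an 8-bit gradient; with only a few output levels, smooth regions break into the visible steps that appear as false contours — the function name is illustrative):

```python
import numpy as np

def requantize(img, levels):
    """Uniformly requantize an 8-bit image to the given number of gray levels."""
    step = 256 // levels
    return (img // step) * step

ramp = np.arange(256, dtype=np.uint8)   # a smooth 0..255 gradient
coarse = requantize(ramp, 4)            # only 4 distinct levels remain
print(np.unique(coarse))                # [  0  64 128 192]
```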
38
Example of reducing the
spatial resolution
39
IMAGE INTERPOLATION
Interpolation is the process of using known data to
estimate values at unknown locations.
Interpolation is the process of determining the
values of a function at positions lying between its
samples.
It is the basic tool used extensively in tasks such as
zooming, Shrinking, rotating and geometric
corrections.
Zooming and Shrinking are basically image
resampling methods.
The process of interpolation is one of the
fundamental operations in image processing; the
resulting image quality depends heavily on the
interpolation technique used.
40
Optical Zoom vs. Digital Zoom
 An Optical zoom means moving the zoom lens so that it increases
the magnification of light before it even reaches the digital sensor.
 A digital zoom is not really zoom; it simply interpolates the
image after it has been acquired at the sensor (a pixelation
process).
Zooming and Shrinking(Resampling)
41
• Zooming may be viewed as oversampling, while
shrinking may be viewed as undersampling.
• Sampling and quantization are applied to an original
continuous image, whereas zooming and shrinking are
applied to a digital image.
• Zooming requires two steps:
• The creation of new pixel locations.
• The assignment of gray levels to those new
locations.
• Zooming Methodologies
• Nearest neighbor interpolation,
• Pixel Replication,
• Bilinear interpolation
• Bicubic Interpolation
Contd….
42
Nearest neighbor interpolation:
 Suppose that we have an image of size 500 × 500 pixels and we
want to enlarge it 1.5 times to 750 × 750 pixels.
 Conceptually, one of the easiest ways to visualize zooming is
laying an imaginary 750 × 750 grid over the original image.
 Obviously, the spacing in the grid would be less than one pixel
because we are fitting it over a smaller image.
 In order to perform gray-level assignment for any point in the
overlay, we look for the closest pixel in the original image and
assign its gray level to the new pixel in the grid.
 When we are done with all points in the overlay grid, we simply
expand it to the original specified size to obtain the zoomed
image.
 This method of gray-level assignment is called nearest neighbor
interpolation.
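The overlay-grid idea can be sketched directly in numpy (a minimal version that uses floor mapping to pick the closest source pixel; the function name and the tiny 2 x 2 test image are illustrative only):

```python
import numpy as np

def nearest_neighbor_zoom(img, new_h, new_w):
    """Zoom by mapping each output grid point to its closest input pixel."""
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h   # closest source row for each output row
    cols = np.arange(new_w) * w // new_w   # closest source column for each output column
    return img[rows[:, None], cols[None, :]]

img = np.arange(4, dtype=np.uint8).reshape(2, 2)   # tiny stand-in image
zoomed = nearest_neighbor_zoom(img, 3, 3)
print(zoomed)   # [[0 0 1]
                #  [0 0 1]
                #  [2 2 3]]
```

The same function also shrinks an image when `new_h`/`new_w` are smaller than the input, since it is just resampling the source grid.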
43
Pixel replication:
 Pixel replication, the method used to generate Figs. 2.20(b) through (f), is a
special case of nearest neighbor interpolation.
 Pixel replication is applicable when we want to increase the size of an image
an integer number of times.
 For instance, to double the size of an image, we can duplicate each column.
This doubles the image size in the horizontal direction.
 Then, we duplicate each row of the enlarged image to double the size in the
vertical direction. The same procedure is used to enlarge the image by any
integer number of times (triple, quadruple, and so on).
 Duplication is just done the required number of times to achieve the desired
size.
 The gray-level assignment of each pixel is predetermined by the fact that new
locations are exact duplicates of old locations.
Although nearest neighbor interpolation is fast, it has the undesirable feature that
it produces a checkerboard effect that is particularly objectionable at high factors
of magnification.
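For integer factors, the duplicate-rows-then-columns procedure above is one call each in numpy (a sketch; the 2 x 2 image is illustrative):

```python
import numpy as np

def replicate(img, factor):
    """Enlarge by an integer factor: duplicate each row, then each column."""
    out = np.repeat(img, factor, axis=0)   # duplicate rows
    return np.repeat(out, factor, axis=1)  # duplicate columns

img = np.array([[1, 2],
                [3, 4]], dtype=np.uint8)
print(replicate(img, 2))   # [[1 1 2 2]
                           #  [1 1 2 2]
                           #  [3 3 4 4]
                           #  [3 3 4 4]]
```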
44
45
Bilinear interpolation:
 A slightly more sophisticated way of accomplishing gray-level assignments is
bilinear interpolation using the four nearest neighbors of a point.
 Let (x', y') denote the coordinates of a point in the zoomed image and let v(x’,
y') denote the gray level assigned to it.
 For bilinear interpolation, the assigned gray level is given by:
v(x', y') = ax' + by' + cx'y' + d
where the four coefficients are determined from the four equations in four
unknowns that can be written using the four nearest neighbors of point (x', y').
 It is possible to use more neighbors for interpolation.
 Using more neighbors implies fitting the points with a more complex surface,
which generally gives smoother results.
 This is an exceptionally important consideration in image generation for 3-D
graphics and in medical image processing, but the extra computational burden
seldom is justifiable for general-purpose digital image zooming and shrinking,
where bilinear interpolation generally is the method of choice.
46
Bilinear interpolation considers the closest 2x2
neighborhood of known pixel values surrounding the
unknown pixel. It then takes a weighted average of these 4
pixels to arrive at its final interpolated value. This results
in much smoother-looking images than nearest neighbor
interpolation.
Contd….
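A hedged sketch of the weighted-average form described above (it is equivalent to fitting v = ax' + by' + cx'y' + d to the four nearest neighbors; the example image and query point are arbitrary):

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinear interpolation at a fractional location (x, y).

    Takes the weighted average of the 2x2 neighborhood of known pixels.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    f = img.astype(float)
    return ((1 - dx) * (1 - dy) * f[x0,     y0]     +
            (1 - dx) * dy       * f[x0,     y0 + 1] +
            dx       * (1 - dy) * f[x0 + 1, y0]     +
            dx       * dy       * f[x0 + 1, y0 + 1])

img = np.array([[ 0, 10],
                [20, 30]])
print(bilinear(img, 0.5, 0.5))   # 15.0, the average of the four corners
```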
47
48
 Involves sixteen neighbors
to estimate intensity
 V(x, y) = ∑∑ a_ij x^i y^j  (i, j = 0 to 3)
 Need to solve sixteen
equations
 Gives better results than
other methods
 More complex
 Used in Adobe Photoshop,
and Corel Photopaint
BICUBIC INTERPOLATION
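The sixteen coefficients a_ij can be found by solving the sixteen linear equations obtained from a 4 x 4 neighborhood. A numpy sketch on an arbitrary stand-in patch (since the fitted surface interpolates the data, evaluating it at a grid point reproduces the known pixel there):

```python
import numpy as np

def bicubic_coefficients(patch):
    """Solve the 16 equations v(x, y) = sum a_ij x^i y^j over a 4x4 patch."""
    xs, ys = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
    # One row per known pixel: the 16 monomials x^i * y^j.
    A = np.array([[x**i * y**j for i in range(4) for j in range(4)]
                  for x, y in zip(xs.ravel(), ys.ravel())], dtype=float)
    return np.linalg.solve(A, patch.ravel().astype(float))

def bicubic_eval(a, x, y):
    """Evaluate the fitted bicubic surface at (x, y)."""
    return sum(a[4 * i + j] * x**i * y**j for i in range(4) for j in range(4))

patch = np.arange(16).reshape(4, 4)       # stand-in 4x4 neighborhood
a = bicubic_coefficients(patch)
print(round(bicubic_eval(a, 2, 3), 6))    # reproduces patch[2, 3] = 11.0
```

In practice the system is not re-solved per pixel; libraries use fixed cubic convolution kernels, but the sketch shows where the sixteen equations come from.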
49
SOME BASIC RELATIONSHIPS
BETWEEN PIXELS
50
SOME BASIC RELATIONSHIPS BETWEEN PIXELS
 An image is denoted by f(x, y).
 Lowercase letters such as p and q are used to represent
particular pixels in an image.
 The structure of a digital image allows stating some basic
relationships between pixels that can be useful in some
practical cases.
 The pixels are organized in a regular structure and can have a
limited number of values.
51
Digital Image Coordinates

Each cell below shows a pixel's spatial coordinates and, beneath them, its array indices:

(x-1,y-1)   (x-1,y)   (x-1,y+1)   ...
 (0,0)       (0,1)     (0,2)
(x,y-1)     (x,y)     (x,y+1)     ...
 (1,0)       (1,1)     (1,2)
(x+1,y-1)   (x+1,y)   (x+1,y+1)   ...
 (2,0)       (2,1)     (2,2)
  ...         ...       ...
53
Neighbors of a Pixel
N4(p): the 4-neighbors of p.
• Any pixel p(x, y) has two vertical and two horizontal neighbors,
given by (x+1, y), (x-1, y), (x, y+1), (x, y-1).
• This set of pixels is called the 4-neighbors of p, and is denoted by
N4(p).
• Each of them is at a unit distance from p.

         (x-1,y)
(x,y-1)   p(x,y)   (x,y+1)
         (x+1,y)
54
Neighbors of a Pixel
ND(p): the diagonal neighbors of p.
• This set of pixels, called the diagonal neighbors of p, is
denoted by ND(p).
• The four diagonal neighbors of p have coordinates:
(x+1,y+1), (x+1,y-1), (x-1,y+1), (x-1,y-1)
• Each of them is at a Euclidean distance of √2 ≈ 1.414 from
p.

(x-1,y-1)           (x-1,y+1)
          p(x,y)
(x+1,y-1)           (x+1,y+1)
55
Neighbors of a Pixel
N8(p): the 8-neighbors of p.
• N4(p) and ND(p) together are called the 8-neighbors of p, denoted by
N8(p).
• N8(p) = N4(p) ∪ ND(p)
• Some of the points in N8(p) fall outside the image
when p lies on the border of the image.

(x-1,y-1)  (x-1,y)  (x-1,y+1)
(x,y-1)    p(x,y)   (x,y+1)
(x+1,y-1)  (x+1,y)  (x+1,y+1)
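The three neighbor sets, with the border clipping mentioned above, can be sketched as (function and variable names are illustrative):

```python
def neighbors(x, y, shape):
    """Return the N4, ND and N8 sets of pixel (x, y), dropping any
    neighbor that falls outside an image of the given (M, N) shape."""
    M, N = shape

    def inside(points):
        return {(i, j) for i, j in points if 0 <= i < M and 0 <= j < N}

    n4 = inside([(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)])
    nd = inside([(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)])
    return n4, nd, n4 | nd          # N8(p) = N4(p) U ND(p)

n4, nd, n8 = neighbors(0, 0, (3, 3))   # corner pixel: clipped neighborhoods
print(len(n4), len(nd), len(n8))       # 2 1 3
```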
56
Adjacency
• Two pixels are adjacent if:
•They are neighbors in some sense (e.g. N4(p), N8(p), …)
•Their gray levels satisfy a specified criterion of similarity
V (e.g. equality, …)
• For example, in a binary image two pixels are connected if
they are 4-neighbors and have same value (0/1)
• Let V be a set of intensity values used to define adjacency and
connectivity.
• In a binary image V = {1}, if we are referring to adjacency of
pixels with value 1.
• In a gray-scale image the idea is the same, but V typically
contains more elements, for example V = {180, 181,
182, ..., 200}.
• If the possible intensity values are 0 to 255, V could be any
subset of these 256 values.
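A minimal sketch of the adjacency test (pixels as (row, col) tuples; the dictionary image and the set V = {1} follow the binary-image case above, and the function names are illustrative):

```python
def adjacent4(p, q, img, V):
    """4-adjacency: q is in N4(p) and both pixel values are in V."""
    dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
    return dx + dy == 1 and img[p] in V and img[q] in V

def adjacent8(p, q, img, V):
    """8-adjacency: q is in N8(p) and both pixel values are in V."""
    dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
    return max(dx, dy) == 1 and img[p] in V and img[q] in V

img = {(0, 0): 1, (0, 1): 0, (1, 1): 1}
print(adjacent4((0, 0), (0, 1), img, V={1}))   # False: (0,1) has value 0, not in V
print(adjacent8((0, 0), (1, 1), img, V={1}))   # True: diagonal neighbors, both 1
```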
57
Adjacency (Contd..)
Types of adjacency
1. 4-adjacency: two pixels p and q with values from V are 4-adjacent if q
is in the set N4(p).
2. 8-adjacency: two pixels p and q with values from V are 8-adjacent if q
is in the set N8(p).
3. m-adjacency (mixed): two pixels p and q with values from V are m-
adjacent if:
I. q is in N4(p), or
II. q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are
from V (no intersection).
• Mixed adjacency is a modification of 8-adjacency, introduced to eliminate
the ambiguities that often arise when 8-adjacency is used (it eliminates
multiple-path connections).
• Example: pixel arrangement as shown in the figure for V = {1}.
58
Two subsets S1 and S2 are adjacent, if some pixel in S1 is adjacent to
some pixel in S2.
Adjacent means, either 4-, 8- or m-adjacency.
1 0 0
0 1 0
1 1 0

(The same 3x3 arrangement is shown three times in the figure: plain, with an
8-path marked, and with an m-path marked.)
An 8-path from p to q results in some ambiguity (more than one path exists);
the m-path from p to q resolves this ambiguity.
59
Path
•A digital path (or curve) from pixel p with coordinates
(x, y) to pixel q with coordinates (s, t) is a sequence of
distinct pixels with coordinates (x0, y0), (x1, y1), ..., (xn, yn),
where (x0, y0) = (x, y) and (xn, yn) = (s, t).
•(xi, yi) is adjacent to (xi-1, yi-1) for 1 ≤ i ≤ n.
•n is the length of the path.
•If (x0, y0) = (xn, yn), the path is a closed path.
•We can define 4-, 8-, or m-paths depending on the
type of adjacency specified.
60
DISTANCE MEASURES
 For pixels p,q,z with coordinates (x,y), (s,t), (u,v), D
is a distance function or metric if:
 D(p,q) ≥ 0 (D(p,q)=0 iff p=q)
 D(p,q) = D(q,p) and
 D(p,z) ≤ D(p,q) + D(q,z)
DISTANCE MEASURES
 The Euclidean distance between p(x,y) and q(s,t) is defined as:
De(p,q) = [(x - s)^2 + (y - t)^2]^(1/2)
Pixels having a distance less than or equal
to some value r from (x,y) are the points
contained in a disk of radius r centered at (x,y).
DISTANCE MEASURES
 The D4 distance (also called city-block distance)
between p and q is defined as:
D4 (p,q) = | x – s | + | y – t |
Pixels having a D4 distance from
(x,y) less than or equal to some
value r form a diamond
centered at (x,y).
DISTANCE MEASURES
Example:
The pixels with distance D4 ≤ 2 from (x,y) form the
following contours of constant distance:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are
the 4-neighbors of (x,y).
DISTANCE MEASURES
 The D8 distance (also called chessboard distance)
between p and q is defined as:
D8 (p,q) = max(| x – s |,| y – t |)
Pixels having a D8 distance from
(x,y) less than or equal to some
value r form a square
centered at (x,y). In the figure,
D8 = max(D8(a), D8(b)),
where D8(a) and D8(b) are the horizontal and vertical
components of the displacement between p(x,y) and q(s,t).
DISTANCE MEASURES
Example:
The pixels with D8 distance ≤ 2 from (x,y) form the following
contours of constant distance:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2
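The three metrics side by side (a sketch; the sample points are arbitrary, and 3-4-5 makes the Euclidean case easy to check):

```python
def d_euclidean(p, q):
    """De(p,q) = [(x-s)^2 + (y-t)^2]^(1/2)"""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d4(p, q):
    """City-block distance: |x-s| + |y-t|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard distance: max(|x-s|, |y-t|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_euclidean(p, q))   # 5.0
print(d4(p, q))            # 7
print(d8(p, q))            # 4
```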
DISTANCE MEASURES
 Dm distance:
is defined as the shortest m-path between the points.
In this case, the distance between two pixels will
depend on the values of the pixels along the path, as
well as the values of their neighbors.
DISTANCE MEASURES
 Example:
Consider the following arrangement of pixels: p2 is a diagonal
neighbor of p, with p1 a common 4-neighbor of both, and p4 is
a diagonal neighbor of p2, with p3 a common 4-neighbor of
both. Assume that p, p2, and p4 have value 1 and that p1 and
p3 can each have a value of 0 or 1.
Suppose that we consider
the adjacency of pixels
with value 1 (i.e. V = {1}).
DISTANCE MEASURES
 Cont. Example:
Now, to compute the Dm between points p and p4
Here we have 4 cases:
Case 1: If p1 = 0 and p3 = 0:
The length of the shortest m-path
(the Dm distance) is 2 (p, p2, p4).
DISTANCE MEASURES
 Cont. Example:
Case 2: If p1 = 1 and p3 = 0:
now p and p2 will no longer be m-adjacent (see the
m-adjacency definition);
then the length of the shortest
m-path will be 3 (p, p1, p2, p4).
DISTANCE MEASURES
 Cont. Example:
Case 3: If p1 = 0 and p3 = 1:
The same applies here, and the shortest m-path will
be 3 (p, p2, p3, p4).
DISTANCE MEASURES
 Cont. Example:
Case 4: If p1 = 1 and p3 = 1:
The length of the shortest m-path will be 4 (p, p1, p2,
p3, p4).
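The four cases can be verified mechanically with a breadth-first search over m-adjacency. This is a sketch: the coordinate layout below (p at the bottom-left, p4 at the top-right) is one concrete arrangement consistent with the example, not necessarily the exact figure.

```python
from collections import deque

V = {1}   # intensity values that define adjacency

def n4(p):
    x, y = p
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(p):
    x, y = p
    return {(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)}

def m_neighbors(p, img):
    """Pixels m-adjacent to p: 4-neighbors in V, plus diagonal neighbors
    in V whose shared 4-neighborhood with p contains no pixel in V."""
    out = set()
    for q in n4(p) | nd(p):
        if img.get(q) not in V:
            continue
        if q in n4(p) or all(img.get(r) not in V for r in n4(p) & n4(q)):
            out.add(q)
    return out

def dm(start, goal, img):
    """Length of the shortest m-path between start and goal (BFS)."""
    dist, frontier = {start: 0}, deque([start])
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            return dist[cur]
        for nxt in m_neighbors(cur, img):
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                frontier.append(nxt)
    return None

# One layout consistent with the example, in (row, col) coordinates:
#   p = (2,0), p1 = (1,0), p2 = (1,1), p3 = (0,1), p4 = (0,2).
def arrangement(p1, p3):
    grid = {(i, j): 0 for i in range(3) for j in range(3)}
    grid.update({(2, 0): 1, (1, 1): 1, (0, 2): 1, (1, 0): p1, (0, 1): p3})
    return grid

for p1 in (0, 1):
    for p3 in (0, 1):
        print(p1, p3, dm((2, 0), (0, 2), arrangement(p1, p3)))
# reproduces the four cases: 0 0 -> 2, 0 1 -> 3, 1 0 -> 3, 1 1 -> 4
```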
72

471112728-DIGITAL-IMAGE-PROCESSING-NOTES-VTU.pptx

  • 1.
  • 2.
    2 Why do weneed image processing? It is motivated by two major applications- o Improvement of pictorial information for human perception. o For autonomous application o Efficient storage and transmission
  • 3.
    3 Image Processing Goals •ImageProcessing is a subclass of signal processing concerned specifically with Pictures. •It Aims to improve the image quality for • Human Perception: subjective • Computer interpretation:objective •Compress images for efficient Storage/transmission
  • 4.
    4 What is DigitalImage Processing? •Computer manipulation of pictures, or images that have been converted into numeric form. Typical operations include •Contrast Enhancement •Remove Blur from image •Smooth out graininess, speckale or noise •Magnify, minify or rotate an image(image warping) •Geometric Correction •Image Compression for efficient storage/transmission.
  • 5.
    5 What is DigitalImage??? An image may be defined as a 2D function f(x,y), where x and y are the spatial coordinates and f is the intensity value at x and y. If x,y and f are all discrete, then the image is called as Digital Image.
  • 6.
    Examples of Fieldsthat use Digital Image processing. 6
  • 7.
    7 Gamma-Ray Imaging Major usesof Uses of Gamma-Ray Imaging include nuclear medicine, astronomical observations. •Nuclear medicine: patient is injected with radioactive isotope that emits gamma rays as it decays. Images are produced from emissions collected by detectors.
  • 8.
    8 Contd. •Fig 1a showsan image of complete bone scan obtained by using gamma ray imaging. •Images of this sort are used to locate sites of bone pathology such as infections or tumors. •Fig 1b shows an another modality called Positron Emission Tomography. This image shows tumor in the brain and one in the lung. Easily visible as small white masses.
  • 9.
    9 X-Ray Imaging Oldest sourceof EM radiation for imaging •Used for CAT scans •Used for angiograms where X-ray contrast medium is injected through catheter to enhance contrast at site to be studied. •Industrial inspection
  • 10.
    10 •CT Image: ComputedTomography(good for hard tissues such as bones. •In CT each slice of human body is imaged by means of X-ray and then a number of such images are piled up to form a volumetric representation of a body or specific part. X-Ray Imaging(Contd…)
  • 11.
    11 Imaging in theMicrowave Band: The dominant application of imaging in the microwave band is radar. The imaging radar has the ability to collect data over any region at any time regardless of weather or ambient lighting conditions. Imaging in the Radio Band. The major application is in the field of medicine which includes MRI. MRI: Magnetic Resonance Imaging very similar to CT Imaging, provides more detailed images of the soft tissues of the body. It can be used to study both structure and function of a body. Difference between CT and MRI is the imaging radiation. Ct uses ionizing radiation such as X-ray whereas MRI uses a powerful magnetic field. Thermal Image: Thermographic camera used to capture images in night vision.
  • 12.
    12 Imaging in theVisible band and nfrared band: Infraband applications: Industrial inspection -inspect for missing parts -missing pills -unacceptable bottle fill -unacceptable air pockets -anomalies in cereal color -incorrectly manufactured replacement lens for eyes Imaging in visible band • Face detection &recognition • Iris Recognition • Number Plate recognition
  • 13.
    13 Fundamental Steps inImage Processing
  • 14.
    14 Components of ageneral purpose Image Processing
  • 15.
    15 Structure of HumanEye •Shape is nearly spherical •Average diameter = 20mm •Three membranes: -Cornea and Sclera -Choroid -Retina
  • 16.
    16 Structure of HumanEye •Cornea -Tough, transparent tissue that covers the anterior surface of the eye •Sclera -Opaque membrane that encloses the remainder of the optical globe •Choroid -Lies below the sclera -Contains network of blood vessels that serve as the major source of nutrition to the eye. -Choroid coat is heavily pigmented and hence helps to reduce the amount of extraneous light entering the eye and the backscatter within the optical globe
  • 17.
    17 Lens and Retina •Lens -Bothinfrared and ultraviolet light are absorbed appreciably by proteins within the lens structure and, in excessive amounts, can cause damage to the eye •Retina -Innermost membrane of the eye which lines the inside of the wall’s entire posterior portion. When the eye is properly focused, light from an object outside the eye Is imaged on the retina.
  • 18.
    18 Receptors •Two classes oflight receptors on retina: cones and rods •Cones -6-7 million cones lie in central portion of the retina, called the fovea. -Highly sensitive to color and bright light. -Resolve fine detail since each is connected to its own nerve end. -Cone vision is called photopic or bright-light vision. •Rods -75-150 million rods distributed over the retina surface. -Reduced amount of detail discernable since several rods are connected to a single nerve end. -Serves to give a general, overall picture of the field of view. -Sensitive to low levels of illumination. -Rod vision is called scotopic or dim-light vision.
  • 19.
    19 Distribution of Conesand Rods •Blind spot: no receptors in region of emergence of optic nerve. •Distribution of receptors is radially symmetric about the fovea. •Cones are most dense in the center of the retina (e.g., fovea) •Rods increase in density from the center out to 20°and then decrease Density of rods and cones for a cross section of right eye
  • 20.
  • 21.
    21 Brightness Adaptation andDiscrimination •The eye’s ability to discriminate between intensities is important. •Experimental evidence suggests that subjective brightness (perceived) is a logarithmic function of light incident on eye. Notice approximately linear response in log- scale below.
  • 22.
  • 23.
    23 1. Mach BandEffect Two Phenomena to illustrate that the perceived brightness is not a simple function of intensity
  • 24.
    24 2. Simultaneous Contrast TwoPhenomena to illustrate that the perceived brightness is not a simple function of intensity(contd….)
  • 25.
  • 26.
  • 27.
  • 28.
  • 29.
  • 30.
  • 31.
    Unit – 2 DIGITAL IMAGE FUNDAMENT ALS 31 Continuous Image Projected onto a image array Result of image Sampling and Quantization
  • 32.
  • 33.
    • 33 Contd… Let Abe a digital Image • The pixel intensity levels (gray scale levels) are in the interval of [0, L-1]. 0 ai,j L-1 , W h e r e L = • b is the no. of bits required to store digitized image of size M by N, then b is given as b = M x N x k
  • 34.
    34  The rangeof values spanned by the gray scale is called the dynamic range of an image.  It is also defines as the ratio of maximum measurable intensity to minimum detectable intensity level.  Upper limit- saturation  Lower limit- Noise  Images whose gray levels span a significant portion of the gray scale are considered as having a high dynamic range.  Contrast (highest- lowest) intensity level  When an appreciable number of pixels exhibit this property, the image will have high contrast.  Conversely, an image with low dynamic range tends to have a dull, washed out gray look. Contd…
  • 35.
    35 Spatial and IntensityResolution Spatial resolution is the smallest discernible(noticeable) detail in an image.  It is also stated as dots per unit inch(dpi), line pairs per unit distance. Gray-level resolution similarly refers to the smallest discernible (noticeable) change in gray level.  Measuring discernible changes in gray level is a highly subjective process The number of gray levels is usually an integer power of 2 Thus, an L-level digital image of size M*N has a spatial resolution of M*N pixels and a gray-level resolution of L levels
  • 36.
  • 37.
    37 The effect causedby the use of an insufficient number of gray levels in smooth areas of a digital image, is called false contouring
  • 38.
    38 Example for reducingthe spatial Resolution
  • 39.
    39 IMAGE INTERPOLATION Interpolation isthe process of using known data to estimate values at unknown locations. Interpolation is the process of determining the values of a function at positions lying between its samples. It is the basic tool used extensively in tasks such as zooming, Shrinking, rotating and geometric corrections. Zooming and Shrinking are basically image resampling methods. The process of interpolation is one of the fundamental operations in image processing. The image quality highly depend on the used interpolation technique.
  • 40.
    40 Optical Zoom vs.Digital Zoom  An Optical zoom means moving the zoom lens so that it increases the magnification of light before it even reaches the digital sensor.  A digital zoom is not really zoom , it is simply interpolating the image after it has been acquired at the sensor (pixilation process). Zooming and Shrinking(Resampling)
  • 41.
    41 • Zooming maybe viewed as oversampling, while shrinking may be viewed as undersampling. • Sampling and quantizing are applied to an original continuous image, zooming and shrinking are applied to a digital image. • Zooming requires two steps: • The creation of new pixel locations. • The assignment of gray levels to those new locations. • Zooming Methodologies • Nearest neighbor interpolation, • Pixel Replication, • Bilinear interpolation • Bicubic Interpolation Contd….
  • 42.
    42 Nearest neighbor interpolation: Suppose that we have an image of size 500 × 500 pixels and we want to enlarge it 1.5 times to 750 × 750 pixels.  Conceptually, one of the easiest ways to visualize zooming is laying an imaginary 750 × 750 grid over the original image.  Obviously, the spacing in the grid would be less than one pixel because we are fitting it over a smaller image.  In order to perform gray-level assignment for any point in the overlay, we look for the closest pixel in the original image and assign its gray level to the new pixel in the grid.  When we are done with all points in the overlay grid, we simply expand it to the original specified size to obtain the zoomed image.  This method of gray-level assignment is called nearest neighbor interpolation.
  • 43.
    43 Pixel replication:  Pixelreplication, the method used to generate Figs. 2.20(b) through (f), is a special case of nearest neighbor interpolation.  Pixel replication is applicable when we want to increase the size of an image an integer number of times.  For instance, to double the size of an image, we can duplicate each column. This doubles the image size in the horizontal direction.  Then, we duplicate each row of the enlarged image to double the size in the vertical direction. The same procedure is used to enlarge the image by any integer number of times (triple, quadruple, and so on).  Duplication is just done the required number of times to achieve the desired size.  The gray-level assignment of each pixel is predetermined by the fact that new locations are exact duplicates of old locations. Although nearest neighbor interpolation is fast, it has the undesirable feature that it produces a checkerboard effect that is particularly objectionable at high factors of magnification,
  • 44.
  • 45.
    45 Bilinear interpolation:  Aslightly more sophisticated way of accomplishing gray-level assignments is bilinear interpolation using the four nearest neighbors of a point.  Let (x', y') denote the coordinates of a point in the zoomed image and let v(x’, y') denote the gray level assigned to it.  For bilinear interpolation, the assigned gray level is given by: v(x', y') = ax' + by' + cx'y' + d where; the four co-efficients are determined from the four equations in four unknowns that can be written using the four nearest neighbors of point (x', y')  It is possible to use more neighbors for interpolation.  Using more neighbors implies fitting the points with a more complex surface, which generally gives smoother results.  This is an exceptionally important consideration in image generation for 3-D graphics and in medical image processing, but the extra computational burden seldom is justifiable for general-purpose digital image zooming and shrinking, where bilinear interpolation generally is the method of choice.
  • 46.
    46 Bilinear interpolation considersthe closest 2x2 neighborhood of known pixel values surrounding the unknown pixel. It then takes a weighted average of these 4 pixels to arrive at its final interpolated value. This results in much smoother looking images than nearest neighbors. Contd….
  • 47.
  • 48.
    48  Involves sixteenneighbors to estimate intensity  V(x, y) = ∑∑aij xi yj ( i, j = 0 to 3)  Need to solve sixteen equations  Gives better results than other methods  More complex  Used in Adobe Photoshop, and Corel Photopaint BICUBIC INTERPOLATION
    50 SOME BASIC RELATIONSHIPS BETWEEN PIXELS • An image is denoted by f(x, y). • Lowercase letters such as p and q are used to represent particular pixels in an image. • The structure of a digital image allows us to state some basic relationships between pixels that are useful in practice. • The pixels are organized in a regular structure and can take a limited number of values.
    51 Digital Image Coordinates • The 3x3 neighborhood of a pixel at (x, y); the pair beneath each spatial coordinate is the corresponding array index when the neighborhood is stored as an array:

    (x-1,y-1)  (x-1,y)  (x-1,y+1)  ...
      (0,0)     (0,1)     (0,2)
    (x,y-1)    (x,y)    (x,y+1)    ...
      (1,0)     (1,1)     (1,2)
    (x+1,y-1)  (x+1,y)  (x+1,y+1)  ...
      (2,0)     (2,1)     (2,2)
    53 Neighbors of a Pixel: 4-neighbors of p. • Any pixel p(x, y) has two vertical and two horizontal neighbors, given by (x+1,y), (x-1,y), (x,y+1), (x,y-1). • This set of pixels is called the 4-neighbors of p, and is denoted by N4(p). • Each of them is at a unit distance from p.

             (x-1,y)
    (x,y-1)  p(x,y)  (x,y+1)
             (x+1,y)
    54 Neighbors of a Pixel: Diagonal neighbors of p. • This set of pixels, called the diagonal neighbors of p and denoted by ND(p), has coordinates (x+1,y+1), (x+1,y-1), (x-1,y+1), (x-1,y-1). • Each of them is at a Euclidean distance of sqrt(2) ≈ 1.414 from p.

    (x-1,y-1)          (x-1,y+1)
             p(x,y)
    (x+1,y-1)          (x+1,y+1)
    55 Neighbors of a Pixel: 8-neighbors of p. • N4(p) and ND(p) together are called the 8-neighbors of p, denoted by N8(p). • N8(p) = N4(p) ∪ ND(p). • Some of the points in N8(p) may fall outside the image when p lies on the border of the image.

    (x-1,y-1)  (x-1,y)  (x-1,y+1)
    (x,y-1)    p(x,y)   (x,y+1)
    (x+1,y-1)  (x+1,y)  (x+1,y+1)
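The three neighborhoods above can be written down directly as coordinate sets; this is a sketch (clipping to the image border is left to the caller):

```python
def n4(p):
    """4-neighbors of p: the two horizontal and two vertical neighbors."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    """Diagonal neighbors of p, each at Euclidean distance sqrt(2)."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    """8-neighbors: the union N4(p) | ND(p)."""
    return n4(p) | nd(p)

print(sorted(n8((1, 1))))  # the full 3x3 ring around (1, 1)
```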
    56 Adjacency • Two pixels are adjacent if: • They are neighbors in some sense (e.g. N4(p), N8(p), ...) • Their gray levels satisfy a specified criterion of similarity V (e.g. equality, ...) • For example, in a binary image two pixels are connected if they are 4-neighbors and have the same value (0/1). • Let V be the set of intensity values used to define adjacency and connectivity. • In a binary image, V = {1} if we are referring to adjacency of pixels with value 1. • In a gray-scale image the idea is the same, but V typically contains more elements, for example V = {180, 181, 182, ..., 200}. • If the possible intensity values are 0 to 255, V can be any subset of these 256 values.
    57 Adjacency (contd..) Types of adjacency: 1. 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p). 2. 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p). 3. m-adjacency (mixed): two pixels p and q with values from V are m-adjacent if: I. q is in N4(p), or II. q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V (no intersection). • Mixed adjacency is a modification of 8-adjacency introduced to eliminate the ambiguities that often arise when 8-adjacency is used (it eliminates multiple-path connections). • Example: pixel arrangement as shown in the figure, for V = {1}.
    58 Adjacency (contd..) • Two subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2. Adjacent means either 4-, 8- or m-adjacency. • Example, with V = {1} (the figure shows the same arrangement three times, with the paths overlaid):

    1 0 0
    0 1 0
    1 1 0

• With p the 1 in the top row and q a 1 in the bottom row, the 8-path from p to q is not unique, which results in some ambiguity; the m-path from p to q solves this ambiguity.
    59 Path • A digital path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates (x0, y0), (x1, y1), ..., (xn, yn), where (x0, y0) = (x, y) and (xn, yn) = (s, t). • (xi, yi) is adjacent to (xi-1, yi-1) for 1 ≤ i ≤ n. • n is the length of the path. • If (x0, y0) = (xn, yn), the path is a closed path. • We can define 4-, 8- or m-paths depending on the type of adjacency specified.
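A path check is just the adjacency test applied to consecutive pixels in the sequence. A minimal sketch (names are mine; the distinctness and gray-value conditions of the full definition are omitted for brevity):

```python
def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def n8(p):
    x, y = p
    return {(x + i, y + j) for i in (-1, 0, 1) for j in (-1, 0, 1)} - {p}

def is_path(pixels, neighbors):
    """True if each pixel is adjacent (under the given neighborhood) to
    the one before it; the path length is len(pixels) - 1."""
    return all(q in neighbors(p) for p, q in zip(pixels, pixels[1:]))

diag = [(0, 0), (1, 1), (2, 2)]
print(is_path(diag, n4))  # False: diagonal steps are not 4-adjacent
print(is_path(diag, n8))  # True: a valid 8-path of length 2
```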
    60 DISTANCE MEASURES • For pixels p, q and z, with coordinates (x, y), (s, t) and (u, v) respectively, D is a distance function (or metric) if: • D(p, q) ≥ 0 (with D(p, q) = 0 iff p = q), • D(p, q) = D(q, p), and • D(p, z) ≤ D(p, q) + D(q, z).
    61 DISTANCE MEASURES • The Euclidean distance between p and q is defined as: De(p, q) = [(x - s)² + (y - t)²]^(1/2) • Pixels having a distance less than or equal to some value r from (x, y) are the points contained in a disk of radius r centered at (x, y).
    62 DISTANCE MEASURES • The D4 distance (also called city-block distance) between p and q is defined as: D4(p, q) = |x - s| + |y - t| • Pixels having a D4 distance from (x, y) less than or equal to some value r form a diamond centered at (x, y).
    63 DISTANCE MEASURES Example: the pixels with distance D4 ≤ 2 from (x, y) form the following contours of constant distance:

            2
        2   1   2
    2   1   0   1   2
        2   1   2
            2

The pixels with D4 = 1 are the 4-neighbors of (x, y).
    64 DISTANCE MEASURES • The D8 distance (also called chessboard distance) between p and q is defined as: D8(p, q) = max(|x - s|, |y - t|) • Pixels having a D8 distance from (x, y) less than or equal to some value r form a square centered at (x, y). • As the figure shows, D8 is the larger of the horizontal and vertical offsets: D8 = max(D8(a), D8(b)).
    65 DISTANCE MEASURES Example: the pixels with D8 distance ≤ 2 from (x, y) form the following contours of constant distance:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2
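The three metrics defined in the preceding slides are one-liners (function names are illustrative); note that D4 = 1 gives exactly the 4-neighbors and D8 = 1 exactly the 8-neighbors:

```python
import math

def de(p, q):
    """Euclidean distance De."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    """City-block distance D4."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard distance D8."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(de(p, q), d4(p, q), d8(p, q))  # 5.0 7 4
```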
    DISTANCE MEASURES  Dmdistance: is defined as the shortest m-path between the points. In this case, the distance between two pixels will depend on the values of the pixels along the path, as well as the values of their neighbors.
    DISTANCE MEASURES  Example: Considerthe following arrangement of pixels and assume that p, p2, and p4 have value 1 and that p1 and p3 can have can have a value of 0 or 1 Suppose that we consider the adjacency of pixels values 1 (i.e. V = {1})
    DISTANCE MEASURES  Cont.Example: Now, to compute the Dm between points p and p4 Here we have 4 cases: Case1: If p1 =0 and p3 = 0 The length of the shortest m-path (the Dm distance) is 2 (p, p2, p4)
    DISTANCE MEASURES  Cont.Example: Case2: If p1 =1 and p3 = 0 now, p1 and p will no longer be adjacent (see m- adjacency definition) then, the length of the shortest path will be 3 (p, p1, p2, p4)
    DISTANCE MEASURES  Cont.Example: Case3: If p1 =0 and p3 = 1 The same applies here, and the shortest –m-path will be 3 (p, p2, p3, p4)
    DISTANCE MEASURES  Cont.Example: Case4: If p1 =1 and p3 = 1 The length of the shortest m-path will be 4 (p, p1 , p2, p3, p4)