Professional Elective – II
Computer Vision
Course Code: CET 4011B Credits: 3 TH+1 Lab=4
Syllabus
Unit 1: Image Processing Fundamentals
Introduction: Types of Computer Images, Satellite Images, Medical Images, Image File Formats,
Components of an Image Processing System, Fundamental Steps in Image Processing, Dimensions of
an Image, Image Operations.
Image Formation and Low-Level Processing: Human Vision System, Computer Vision System,
Stereo Vision, Geometric Cameras and projection models, noise models, Human color perception.
Unit 2: Image Enhancement and Segmentation Techniques
Image Enhancement: Point Processing, Gray Level Slicing, Thresholding Transformations,
Histogram Processing, Filtering with Morphological Operators, Intensity Transformations,
Contrast Stretching, Histogram Equalization, Correlation and Convolution, Smoothing Filters,
Sharpening Filters.
Image Segmentation: Classification of image segmentation techniques,
thresholding-based image segmentation, edge-based segmentation, edge
detection, edge linking, Hough transform, watershed transform, clustering
techniques, region approach.
Unit 3: Feature Detection and Extraction
Edge Detection: Mathematical concepts, Operators based on first order derivative
(Roberts, Prewitt and Sobel), Laplacian of Gaussian (LoG).
Corner Detection: Harris and Orientation Histogram, Scale-Invariant Feature
Transform (SIFT), Speeded Up Robust Features (SURF), Histogram of Oriented
Gradients (HOG).
Feature Extraction: Spatial features, amplitude features, transform-based features, histogram-based
statistical features based on statistical moments (e.g., mean, variance, kurtosis), shape/geometry-based
and moment-based features (radii, perimeter, area, compactness, maximum boundary rectangle,
orientation, etc.), texture features (GLCM-based texture features, Gabor features), color features.
Unit 4: Fundamentals of Video Processing
Introduction: Analog Video, Digital Video, 3D Video, Video Quality, Video Standards,
Motion Estimation and Tracking, Motion Models.
Motion Estimation: Differential Methods: Lucas–Kanade Method, Horn–Schunck Motion
Estimation; Motion Tracking: Kanade–Lucas–Tomasi Tracking, Mean-Shift Tracking,
Particle-Filter Tracking, Active-Contour Tracking.
Video Segmentation:
Change Detection: Shot-Boundary Detection, Background Subtraction; Motion
Segmentation: Dominant-Motion Segmentation, Multiple-Motion Segmentation;
Region-Based Motion Segmentation: Fusion of Color and Motion.
Unit 5: Applications of Computer Vision:
Object Recognition: simple object recognition methods, Shape correspondence and
shape matching, contour based representation, Region based representation, Patterns
and pattern classification.
Introduction to Satellite Image Processing: Concepts and Foundations of Remote
Sensing, Multispectral, Thermal, and Hyperspectral Sensing, Earth Resource Satellites
Operating in the Optical Spectrum.
Introduction to Medical Image Processing: Basic steps of medical image processing,
Medical Image Enhancement, Segmentation, Medical Image Analysis (X-ray, CT scan,
and MRI images).
Learning Resources:
Text Books:
1. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, 1992.
2. Richard Szeliski, Computer Vision: Algorithms and Applications, Springer-Verlag London Limited, 2011.
3. D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Pearson Education, 2003.
4. E. R. Davies, Computer and Machine Vision, Fourth Edition, Academic Press, 2012.
Unit 2: Image Enhancement and Segmentation Techniques
Introduction: Image Enhancement
Point Processing
Point processing is a technique in image processing that modifies the
intensity of pixels to enhance an image.
● Point processing operates on individual pixels, changing their value based on a
predefined function
● The new pixel value is calculated without considering the values of surrounding
pixels
● Point processing can be used to change the brightness, contrast, or emphasis
on certain gray levels
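As a minimal, hedged sketch of a point operation (the image path "input.jpg" and the gain/offset values below are placeholders, not from the slides), the transform s = a·r + b is applied to every pixel independently of its neighbours:

import cv2
import numpy as np

# Read the input image as grayscale (placeholder path)
image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Point operation s = a*r + b applied to each pixel on its own:
# a > 1 stretches contrast, b > 0 raises brightness.
a, b = 1.2, 30
enhanced = np.clip(a * image.astype(np.float32) + b, 0, 255).astype(np.uint8)

cv2.imshow('original', image)
cv2.imshow('point processed', enhanced)
cv2.waitKey(0)
cv2.destroyAllWindows()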
Examples of Point Processing
● Contrast stretching: A basic gray level transformation that improves contrast
● Thresholding: A point processing technique
● Logarithmic transformations: A point processing technique that modifies pixel
intensity
● Power-law transformations: A point processing technique that modifies pixel
intensity
● Piecewise linear transformations: A point processing technique that modifies
pixel intensity
● Intensity level slicing: A basic gray level transformation
● Bit plane slicing: A basic gray level transformation
Image Negative
• A negative of an image is one in which the lightest areas appear darkest and the
darkest areas appear lightest.
• For a grayscale image with intensity levels in [0, L−1], the negative is obtained by
mapping each pixel value r to s = (L − 1) − r, so the highest intensity values become
the lowest and vice versa.
Image Negatives
import cv2
import numpy as np

# Read original image (path as used in the lecture demo)
image = cv2.imread("C:\\Users\\Dell\\OneDrive\\Desktop\\CV_Image\\Img3.jpg")

# Maximum intensity present in the image (255 for a full-range 8-bit image)
L = image.max()

# Subtract each intensity from the maximum to obtain the negative
negative = L - image

cv2.imshow('original', image)
cv2.imshow('negative', negative)
cv2.waitKey(0)
cv2.destroyAllWindows()
Contrast stretching expands the range of intensity levels in an image.
Extreme contrast stretching yields thresholding: the thresholded image has maximum
contrast, containing only the intensities 0 and 255.
Brightness enhancement shifts intensities to higher values.
Basic Gray Level Transformations
Linear
Logarithmic
Power Law
Logarithmic transformation
Logarithmic transformation is divided into two types:
1. Log transformation
2. Inverse log transformation
The formula for the log transformation is
s = c log(r + 1)
Here, r is the input pixel value, s is the output pixel value, and c is a constant. In the
formula, 1 is added to each pixel value because if a pixel intensity is zero, log(0) is
undefined; adding 1 guarantees a minimum argument of 1, so the minimum output is 0.
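A minimal sketch of the log transformation in NumPy/OpenCV (the input path is a placeholder; c is chosen so that the brightest input value maps to 255):

import cv2
import numpy as np

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# s = c * log(1 + r); choose c so that the brightest input pixel maps to 255
c = 255 / np.log(1 + image.max())
log_transformed = (c * np.log1p(image.astype(np.float32))).astype(np.uint8)

cv2.imshow('original', image)
cv2.imshow('log transform', log_transformed)
cv2.waitKey(0)
cv2.destroyAllWindows()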
Power Law Transformation
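The power-law slides above are figure-based; as a hedged sketch of the standard transformation s = c·r^γ (intensities normalised to [0, 1]; the path and the two γ values are placeholder choices), gamma < 1 brightens dark regions while gamma > 1 darkens them:

import cv2
import numpy as np

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

for gamma in (0.4, 2.2):
    # Normalise to [0, 1], apply s = r**gamma, scale back to [0, 255]
    s = np.power(image / 255.0, gamma)
    cv2.imshow('gamma = ' + str(gamma), np.uint8(s * 255))

cv2.imshow('original', image)
cv2.waitKey(0)
cv2.destroyAllWindows()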
Contrast Stretching
It expands the range of intensity values in an image.
Contrast is the difference between the intensity levels of the darker and brighter pixels.
It can be done in three ways:
1. Multiplying each input pixel intensity value by a constant scalar.
2. Using histogram equalization.
3. Applying a piecewise transform that makes dark pixels darker by assigning a
slope < 1 and bright regions brighter by assigning a slope > 1.
Piecewise transformation
Piecewise transformation is a spatial domain method used for enhancing the group of
pixels falling in a defined range. The pixels are grouped based on specified ranges, and
each group has its own linear transformation and slope. The commonly used piecewise
transformations are thresholding, contrast stretching, gray-level slicing and bit-plane
slicing.
Contrast Stretching
import cv2
import numpy as np

# Piecewise-linear contrast stretching: find line equations by calculating slopes
def Contrast_stretch(p, r1, s1, r2, s2):
    if 0 <= p <= r1:
        equation = (s1 / r1) * p
    elif r1 < p <= r2:
        equation = ((s2 - s1) / (r2 - r1)) * (p - r1) + s1
    else:
        equation = ((255 - s2) / (255 - r2)) * (p - r2) + s2
    return equation

# Read original image
image = cv2.imread("C:\\Users\\Dell\\OneDrive\\Desktop\\CV_Image\\img1.jpg")

# Initialize range: (r1, s1) and (r2, s2) define the piecewise-linear mapping
r1 = 55
s1 = 40
r2 = 140
s2 = 200

# Contrast stretching: apply the transform to every pixel
pixelVal_vec = np.vectorize(Contrast_stretch)
contrast = pixelVal_vec(image, r1, s1, r2, s2).astype(np.uint8)

cv2.imwrite('contrast.jpg', contrast)
cv2.imshow('original', image)
cv2.imshow('contrast.jpg', contrast)
cv2.waitKey(0)
cv2.destroyAllWindows()

Links:
https://medium.com/@koushikc2000/basic-operations-on-images-using-opencv-python-cb0d60d11911
https://www.iosrjournals.org/iosr-jece/papers/vol1-issue2/L0122023.pdf
https://ninjakx.github.io/Image_Enhancement/
Intensity Level Slicing
Grey Level Slicing (Intensity Level Slicing)
This technique is used to highlight a specific range of gray levels in a given image.
Similar to thresholding.
Other levels can be suppressed or maintained.
Useful for highlighting features in an image.
It can be implemented in several ways, but the two basic themes are:
One approach is to display a high value for all gray levels in the range of interest and a
low value for all other gray levels.
The second approach brightens the desired range of gray levels but preserves all other
gray levels unchanged.
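A hedged sketch of both slicing approaches described above (the range [100, 180] and the image path are placeholder choices):

import cv2
import numpy as np

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
low, high = 100, 180                      # gray-level range of interest (placeholder)
mask = (image >= low) & (image <= high)

# Approach 1: high value inside the range, low value everywhere else
sliced_without_bg = np.where(mask, 255, 0).astype(np.uint8)

# Approach 2: brighten the range, keep the remaining gray levels unchanged
sliced_with_bg = np.where(mask, 255, image).astype(np.uint8)

cv2.imshow('without background', sliced_without_bg)
cv2.imshow('with background', sliced_with_bg)
cv2.waitKey(0)
cv2.destroyAllWindows()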
Bit Plane Slicing
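The bit-plane slides are figure-only; as a hedged sketch, the eight bit planes of an 8-bit grayscale image can be extracted by masking one bit at a time (the path is a placeholder):

import cv2
import numpy as np

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Bit plane k is obtained by shifting and masking bit k; scale to 0/255 for display
for k in range(8):
    plane = ((image >> k) & 1) * 255
    cv2.imshow('bit plane ' + str(k), plane.astype(np.uint8))

cv2.waitKey(0)
cv2.destroyAllWindows()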
Thresholding
Single Threshold
Types of Thresholding
Two Objects
Procedure for obtaining Global Thresholding value T
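A hedged sketch of the standard iterative procedure for estimating a global threshold T (initialise T, split the pixels into two groups, set T to the mean of the two group means, and repeat until T stops changing); the path and stopping tolerance are placeholders:

import cv2
import numpy as np

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

T = image.mean()                    # initial estimate of the threshold
while True:
    g1 = image[image > T]           # brighter group
    g2 = image[image <= T]          # darker group
    if g1.size == 0 or g2.size == 0:
        break
    T_new = 0.5 * (g1.mean() + g2.mean())
    if abs(T_new - T) < 0.5:        # stop when T has converged
        break
    T = T_new

print('Estimated global threshold T =', T)
_, binary = cv2.threshold(image.astype(np.uint8), T, 255, cv2.THRESH_BINARY)

cv2.imshow('segmented', binary)
cv2.waitKey(0)
cv2.destroyAllWindows()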
Histogram Transformation
Histogram Equalization
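A hedged sketch of histogram equalization, shown both via the cumulative-distribution mapping s_k = (L−1)·CDF(r_k) and via OpenCV's built-in cv2.equalizeHist (the path is a placeholder; the two results may differ slightly in implementation details):

import cv2
import numpy as np

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Manual equalization: map each gray level through the normalised cumulative histogram
hist, _ = np.histogram(image.flatten(), bins=256, range=(0, 256))
cdf = hist.cumsum() / hist.sum()
mapping = np.round(255 * cdf).astype(np.uint8)
equalized_manual = mapping[image]            # apply as a lookup table

# Built-in equivalent
equalized_cv = cv2.equalizeHist(image)

cv2.imshow('original', image)
cv2.imshow('equalized (manual)', equalized_manual)
cv2.imshow('equalized (OpenCV)', equalized_cv)
cv2.waitKey(0)
cv2.destroyAllWindows()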
Questions based on Slide No. 1 to Slide No. 38
1. How does point processing work for image enhancement? Explain the image negative
method in brief.
2. What do you mean by image enhancement? Define point processing.
3. Illustrate the image negative transformation with a suitable example.
4. Explain the mechanism of spatial domain filtering with suitable functions.
5. List the basics of intensity (gray level) transformations in image enhancement.
6. Why is the log transform used in image enhancement?
7. With necessary graphs, explain the log transformation and power law
transformation for spatial domain image enhancement.
8. Illustrate contrast stretching, draw the necessary graph, and calculate the slope
for a given range.
9. Explain the concept of the histogram for various images with relevant diagrams.
10. Explain the histogram equalization operation in image enhancement with the
necessary expressions.
Filtering with Morphology Operators
Erosion and Dilation
Dilation
Erosion
Open and Close Operation
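The morphology slides above are figure-based; as a hedged sketch, the four operations can be applied to a thresholded binary image as follows (the 5×5 structuring element, the threshold of 127 and the path are placeholder choices):

import cv2

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)

# 5x5 rectangular structuring element
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

eroded  = cv2.erode(binary, kernel, iterations=1)            # shrinks foreground, removes small specks
dilated = cv2.dilate(binary, kernel, iterations=1)           # grows foreground, fills small holes
opened  = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # erosion followed by dilation
closed  = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # dilation followed by erosion

for name, result in [('binary', binary), ('eroded', eroded), ('dilated', dilated),
                     ('opened', opened), ('closed', closed)]:
    cv2.imshow(name, result)
cv2.waitKey(0)
cv2.destroyAllWindows()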
Spatial Filtering Concepts
Smoothing Filters
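The smoothing-filter slides are figure-based; a hedged sketch of linear smoothing with a box (averaging) filter and a Gaussian filter follows (kernel sizes and the path are placeholders):

import cv2
import numpy as np

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Box (averaging) filter: each output pixel is the mean of its 5x5 neighbourhood
box = cv2.blur(image, (5, 5))

# The same filter written explicitly as a convolution kernel
kernel = np.ones((5, 5), np.float32) / 25.0
box_explicit = cv2.filter2D(image, -1, kernel)

# Gaussian filter: weights fall off with distance from the centre
gauss = cv2.GaussianBlur(image, (5, 5), 0)

cv2.imshow('original', image)
cv2.imshow('box filter', box)
cv2.imshow('box filter (filter2D)', box_explicit)
cv2.imshow('gaussian filter', gauss)
cv2.waitKey(0)
cv2.destroyAllWindows()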
Handling pixels close to boundaries
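The boundary-handling slides are figure-based; as a hedged sketch, OpenCV exposes the usual padding strategies through cv2.copyMakeBorder (the same borderType constants are accepted by its filtering functions); the 10-pixel border is a placeholder:

import cv2

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
pad = 10   # number of border pixels added on each side (placeholder)

# Common choices for pixels whose neighbourhood falls outside the image
constant  = cv2.copyMakeBorder(image, pad, pad, pad, pad, cv2.BORDER_CONSTANT, value=0)  # zero padding
replicate = cv2.copyMakeBorder(image, pad, pad, pad, pad, cv2.BORDER_REPLICATE)          # repeat edge pixels
reflect   = cv2.copyMakeBorder(image, pad, pad, pad, pad, cv2.BORDER_REFLECT)            # mirror at the edge

cv2.imshow('constant (zero) padding', constant)
cv2.imshow('replicate padding', replicate)
cv2.imshow('reflect padding', reflect)
cv2.waitKey(0)
cv2.destroyAllWindows()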
Median Filter
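A hedged sketch of median filtering, which is particularly effective against salt-and-pepper noise (the 5×5 window and the path are placeholders):

import cv2

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Each output pixel is the median of its 5x5 neighbourhood; unlike averaging,
# this suppresses impulse noise while blurring edges much less.
median = cv2.medianBlur(image, 5)

cv2.imshow('original', image)
cv2.imshow('median filtered', median)
cv2.waitKey(0)
cv2.destroyAllWindows()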
Sharpening Filter
Laplacian Filter
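The sharpening slides are figure-based; a hedged sketch of Laplacian-based sharpening follows (the composite kernel is the usual textbook form; the path is a placeholder). With a Laplacian kernel whose centre coefficient is negative, the sharpened image is g = f − ∇²f:

import cv2
import numpy as np

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Second-derivative (Laplacian) response highlights rapid intensity changes
laplacian = cv2.Laplacian(image, cv2.CV_64F)

# Sharpen by subtracting the Laplacian from the original image
sharpened = np.clip(image.astype(np.float64) - laplacian, 0, 255).astype(np.uint8)

# Equivalent one-step version using a composite sharpening kernel
kernel = np.array([[0, -1,  0],
                   [-1, 5, -1],
                   [0, -1,  0]], dtype=np.float32)
sharpened_one_step = cv2.filter2D(image, -1, kernel)

cv2.imshow('original', image)
cv2.imshow('laplacian', cv2.convertScaleAbs(laplacian))
cv2.imshow('sharpened', sharpened)
cv2.imshow('sharpened (filter2D)', sharpened_one_step)
cv2.waitKey(0)
cv2.destroyAllWindows()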
Questions
1. What do you understand by dilation and erosion in morphological
image processing? Explain with an example. Also give one suitable
application of each.
2. What is a structuring element? How is it used in the dilation operation?
Types of Image Segmentation
Image segmentation can be categorized into several types, including:
○ Thresholding Segmentation: This technique involves setting a threshold value to
separate pixels based on their intensity or color.
○ Edge-based Segmentation: It identifies object boundaries by detecting edges or
gradients within the image.
○ Region-based Segmentation: This approach groups pixels based on their visual
features, color, or texture similarity.
○ Semantic Segmentation: The process of assigning a class label to each pixel in the image
so that pixels with the same label belong to the same object or category.
○ Clustering-based Segmentation: It utilizes clustering algorithms to group pixels with
similar attributes.
○ Instance Segmentation: The process of assigning both a class label and an instance label
to each pixel, such that pixels with the same class label and instance label belong to the
same object or instance.
What Is Image Segmentation Used For?
Image segmentation finds applications in various domains, including:
○ Medical Imaging: Segmentation aids in identifying tumors, organs, or anatomical
structures, assisting in diagnosis, treatment planning, and surgical guidance.
○ Object Recognition and Tracking: Segmentation facilitates object detection, tracking,
and recognition tasks in computer vision applications.
○ Image Editing and Forensics: Segmenting images allows for selective editing,
background removal, and image manipulation.
○ Autonomous Driving: Accurate segmentation enables object detection, lane detection,
and scene understanding, contributing to safe and efficient autonomous vehicles.
○ Fingerprint recognition: Image segmentation can help extract fingerprints from images
and enable biometric authentication, criminal identification, forensic analysis, etc.
These are just some of the examples of image segmentation applications. Image
segmentation can be useful and beneficial in many more domains and scenarios.
Image Segmentation
Image Segmentation using Threshold Method
Procedure for obtaining Global Thresholding value T
Refer Link: https://www.youtube.com/watch?v=eyTFqyM03Lk&t=447s
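As a hedged sketch of threshold-based segmentation, the following compares a fixed global threshold with Otsu's automatically selected threshold (the value 127 and the path are placeholders):

import cv2

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Fixed global threshold chosen by hand
_, fixed = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)

# Otsu's method picks the threshold that best separates the two histogram modes
otsu_t, otsu = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print('Otsu threshold =', otsu_t)

cv2.imshow('fixed threshold', fixed)
cv2.imshow('otsu threshold', otsu)
cv2.waitKey(0)
cv2.destroyAllWindows()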
Introduction to Image Segmentation
Region Approach
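The region-approach slides are figure-based; as a hedged sketch, seeded region growing can be implemented with a simple breadth-first search (the seed point, tolerance and path are placeholder choices):

import cv2
import numpy as np
from collections import deque

def region_grow(gray, seed, tol=15):
    # Grow a region from `seed`, adding 4-connected neighbours whose intensity
    # differs from the seed intensity by at most `tol`.
    h, w = gray.shape
    region = np.zeros((h, w), np.uint8)
    seed_val = int(gray[seed])
    region[seed] = 255
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and region[ny, nx] == 0 \
                    and abs(int(gray[ny, nx]) - seed_val) <= tol:
                region[ny, nx] = 255
                queue.append((ny, nx))
    return region

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
mask = region_grow(image, seed=(50, 50), tol=15)   # seed = (row, col), placeholder

cv2.imshow('grown region', mask)
cv2.waitKey(0)
cv2.destroyAllWindows()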
https://www.youtube.com/watch?v=0kUGpgIrZIw
Edge Based Segmentation
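A hedged sketch of edge-based segmentation using Sobel gradients and the Canny detector (the smoothing kernel, threshold values and path are placeholders):

import cv2
import numpy as np

image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(image, (5, 5), 0)   # smooth first to suppress noise

# Gradient magnitude from first-order Sobel derivatives
gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)
magnitude = cv2.convertScaleAbs(np.sqrt(gx ** 2 + gy ** 2))

# Canny: gradient + non-maximum suppression + hysteresis-based edge linking
edges = cv2.Canny(blurred, 50, 150)

cv2.imshow('gradient magnitude', magnitude)
cv2.imshow('canny edges', edges)
cv2.waitKey(0)
cv2.destroyAllWindows()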
Hough Transform
Links: Hough Transform
https://universe.bits-pilani.ac.in/uploads/JNKDUBAI/hough_transform.pdf
https://www.youtube.com/watch?v=XRBc_xkZREg
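A hedged sketch of line detection with the probabilistic Hough transform applied to a Canny edge map (the accumulator threshold, line-length/gap parameters and the path are placeholders):

import cv2
import numpy as np

image = cv2.imread("input.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Probabilistic Hough transform: rho resolution 1 px, theta resolution 1 degree
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=10)

# Draw the detected line segments on a copy of the original image
output = image.copy()
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(output, (x1, y1), (x2, y2), (0, 0, 255), 2)

cv2.imshow('edges', edges)
cv2.imshow('hough lines', output)
cv2.waitKey(0)
cv2.destroyAllWindows()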
Clustering Technique
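A hedged sketch of clustering-based segmentation with k-means on pixel colours (the number of clusters K and the path are placeholders):

import cv2
import numpy as np

image = cv2.imread("input.jpg")

# Treat every pixel's BGR colour as a 3-D sample for clustering
pixels = image.reshape((-1, 3)).astype(np.float32)

K = 4  # number of clusters (placeholder)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# Replace each pixel by the colour of its cluster centre
centers = np.uint8(centers)
segmented = centers[labels.flatten()].reshape(image.shape)

cv2.imshow('original', image)
cv2.imshow('k-means segmentation', segmented)
cv2.waitKey(0)
cv2.destroyAllWindows()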
Important Links
https://www.youtube.com/watch?v=j3_Ck5oP5oI
https://www.youtube.com/watch?v=fiDDn_F9U74
https://www.youtube.com/watch?v=FQy3bTe2pVc