Introduction to Image Processing with MATLAB
Follow me @ http://sriramemarose.blogspot.in/ & linkedin/sriramemarose
Every technology comes from Nature:
• Eye – sensor to acquire photons
• Brain – processor to process the photoelectric signals from the eye
Step 1. Light (white light) falling on objects
Step 2. Eye lens focuses the light on the retina
Step 3. Image formation on the retina
Step 4. Electric potentials develop on the retina (photoelectric effect)
Step 5. Optic nerves transmit the developed potentials to the brain (processor)
Optic nerves – transmission medium
Temporal lobe: "Hey, I got potentials of value X" – sent to the frontal lobe
Frontal lobe: "Yes, I know what it means"
• Different species absorb different spectral wavelengths
• Which implies different sensors (eyes) have different reception abilities
• The color of an image depends on the type of photoreceptor
• Primary color (RGB) images
  • Photoreceptor – cones
• Grayscale images (commonly known as black and white)
  • Photoreceptor – rods
• Man-made technology that mimics the operation of an eye
• Array of photoreceptors and film (acting as the retina – cones and rods)
• Lens to focus light from the surroundings/objects onto the photoreceptors (mimics the iris and eye lens)
f(x,y) = [ f(0,0)      f(0,1)      ...   f(0,N-1)
           f(1,0)      f(1,1)      ...   f(1,N-1)
           ...
           f(M-1,0)    f(M-1,1)    ...   f(M-1,N-1) ]
Gray line – continuous analog signal from the sensor
Dotted lines – sampling instants
Red line – quantized signal
The digital representation of the image is obtained from the quantized signal.
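The idea can be reproduced in a few lines of MATLAB. This is a minimal sketch, assuming a 1-D sine wave as the "analog" sensor signal and 256 quantization levels (both illustrative choices, not taken from the slides):
Snippet (sketch):
t = 0:0.001:1;                          % "continuous" analog signal (finely sampled)
s = 0.5 + 0.5*sin(2*pi*5*t);
ts = 0:0.05:1;                          % coarser sampling instants
ss = 0.5 + 0.5*sin(2*pi*5*ts);
q = round(ss*255)/255;                  % quantize each sample to one of 256 levels
plot(t, s, 'Color', [0.5 0.5 0.5]);     % gray line: analog signal
hold on;
stairs(ts, q, 'r');                     % red line: sampled and quantized signal
legend('analog signal', 'quantized signal');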
Different types of images are often used:
• Color – RGB -> remember the cones in the eye?
  • R -> 0-255
  • G -> 0-255
  • B -> 0-255
• Grayscale -> remember the rods in the eye?
  • 0 – pure black
  • 1-254 – shades of gray
  • 255 – pure white
• Binary (logical)
  • 0 – pure black
  • 1 – pure white
A single pixel carries its respective RGB values; the combination of the RGB values of every pixel contributes to form the RGB image.
Grayscale values: pure black -> 0, shades of gray -> 1-254, white -> 255
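A minimal sketch of these image types in MATLAB, using 'peppers.png' (a demo image shipped with MATLAB) as an assumed input:
Snippet (sketch):
rgb  = imread('peppers.png');       % RGB image: m-by-n-by-3, values 0-255
gray = rgb2gray(rgb);               % grayscale image: m-by-n, values 0-255
bw   = im2bw(gray, 0.5);            % binary (logical) image: values 0 (black) or 1 (white)
figure, imshow(rgb);
figure, imshow(gray);
figure, imshow(bw);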
Things to keep in mind:
• Image -> a 2-dimensional matrix of size m×n
• Image processing -> manipulating the values of each element of the matrix

f(x,y) = [ f(0,0)      f(0,1)      ...   f(0,N-1)
           f(1,0)      f(1,1)      ...   f(1,N-1)
           ...
           f(M-1,0)    f(M-1,1)    ...   f(M-1,N-1) ]

• From the above representation,
  • f is an image
  • f(0,0) -> a single pixel of the image (similarly for all values of f(x,y))
  • f(0,0) = 0-255 for grayscale, 0/1 for binary, 0-255 for each of R, G and B
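A small sketch of this in MATLAB; the demo image 'peppers.png' is an assumed input, and note that MATLAB indices start at 1, so f(0,0) corresponds to img(1,1):
Snippet (sketch):
img = imread('peppers.png');        % read the image as a matrix
[m, n, channels] = size(img);       % size of the matrix
pixelRGB = img(1, 1, :)             % one pixel: its R, G and B values (0-255 each)
gray = rgb2gray(img);
pixelGray = gray(1, 1)              % one grayscale pixel, 0-255
gray(1, 1) = 255;                   % image processing = manipulating matrix elements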
From the image given below, how can a specific color (say, blue) be extracted?
Algorithm:
• Load an RGB image
• Get the size (m×n) of the image
• Create a new matrix of zeros of size m×n
• Read the R, G, B values of each pixel while traversing through every pixel of the image
• In the new matrix, set the pixels with the required color to 1 and the rest to 0
• Display the new matrix; the resultant image is the filtered image of the specific color
Input image:
Output image (extracted blue objects):
Snippet:
c = imread('F:\matlab sample images\1.png');    % load the RGB image
[m, n, t] = size(c);
tmp = zeros(m, n);                              % new matrix of zeros
for i = 1:m
    for j = 1:n
        if (c(i,j,1) == 0 && c(i,j,2) == 0 && c(i,j,3) == 255)   % pure-blue pixel
            tmp(i,j) = 1;
        end
    end
end
imshow(tmp);
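The same mask can be built without explicit loops. A minimal vectorized sketch, assuming the same input file:
Snippet (sketch):
c = imread('F:\matlab sample images\1.png');              % same input image as above
tmp = c(:,:,1) == 0 & c(:,:,2) == 0 & c(:,:,3) == 255;    % logical mask of pure-blue pixels
imshow(tmp);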
From the image, count the number of red objects.
Algorithm:
• Load the image
• Get the size of the image
• Find an appropriate threshold level for the red color
• Traverse through every pixel
  • Set the pixels that satisfy the red threshold to 1 and the remaining pixels to 0
• Find the objects with enclosed boundaries in the new image
• Count the boundaries to know the number of objects
Input image:
Output image (extracted red objects):
Snippet:
c = imread('F:\matlab sample images\1.png');
[m, n, t] = size(c);
tmp = zeros(m, n);
for i = 1:m
    for j = 1:n
        if (c(i,j,1) == 255 && c(i,j,2) == 0 && c(i,j,3) == 0)   % pure-red pixel
            tmp(i,j) = 1;
        end
    end
end
imshow(tmp);
ss = bwboundaries(tmp);    % boundaries of the connected objects
num = length(ss);          % number of red objects
Output: num = 3
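In a real photograph the red objects are rarely exactly (255, 0, 0). A variant of the same idea with a tolerance; the threshold values 200 and 80 are arbitrary choices that would need tuning:
Snippet (sketch):
c = imread('F:\matlab sample images\1.png');
tmp = c(:,:,1) > 200 & c(:,:,2) < 80 & c(:,:,3) < 80;     % "reddish" pixels
ss = bwboundaries(tmp);
num = length(ss)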
• Thresholding is used to segment an image by setting all pixels whose intensity values are above a threshold to a foreground value and all the remaining pixels to a background value.
• The pixels are partitioned depending on their intensity value.
• Global thresholding:
  g(x,y) = 0, if f(x,y) <= T
  g(x,y) = 1, if f(x,y) > T
• Multiple thresholding:
  g(x,y) = a, if f(x,y) > T2
  g(x,y) = b, if T1 < f(x,y) <= T2
  g(x,y) = c, if f(x,y) <= T1
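A minimal sketch of both definitions in MATLAB; 'coins.png' is a grayscale demo image shipped with MATLAB, and the thresholds and output levels are arbitrary examples:
Snippet (sketch):
f = imread('coins.png');            % grayscale demo image
T = 100;                            % example global threshold
g = f > T;                          % g(x,y) = 1 where f(x,y) > T, 0 otherwise
figure, imshow(g);
% multiple thresholding with two levels T1 < T2 (output values a, b, c chosen arbitrarily)
T1 = 80; T2 = 160;
g2 = zeros(size(f), 'uint8');
g2(f <= T1) = 0;                    % c
g2(f > T1 & f <= T2) = 128;         % b
g2(f > T2) = 255;                   % a
figure, imshow(g2);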
From the given image, find the total number of objects present.
Algorithm:
• Load the image
• Convert the image to grayscale (in case of an RGB image)
• Fix a certain threshold level to be applied to the image
• Convert the image to binary by applying the threshold level
• Count the boundaries to count the number of objects
At 0.25 threshold | At 0.5 threshold | At 0.6 threshold | At 0.75 threshold
Snippet:
img = imread('F:\matlab sample images\color.png');
img1 = rgb2gray(img);                     % convert to grayscale
Thresholdvalue = 0.75;
img2 = im2bw(img1, Thresholdvalue);       % apply the threshold level
figure, imshow(img2);
% to detect the number of objects
B = bwboundaries(img2);
num = length(B);
% to draw the detected boundaries over the objects
figure, imshow(img2);
hold on;
for k = 1:length(B)
    boundary = B{k};
    plot(boundary(:,2), boundary(:,1), 'r', 'LineWidth', 2);
end
• Given an image of English alphabets, segment each and every letter
• Perform basic morphological operations on the letters
  • Detect edges
  • Filter the noise, if any
  • Replace each pixel with the maximum value found in the defined pixel neighborhood (dilation)
  • Fill the holes in the image
• Label every blob in the image
• Draw a bounding box over each detected blob
Snippet:
a = imread('F:\matlab sample images\MYWORDS.png');
im = rgb2gray(a);
c = edge(im);                      % detect edges
se = strel('square', 8);
I = imdilate(c, se);               % dilate the edges
img = imfill(I, 'holes');          % fill the holes
figure, imshow(img);
[Ilabel, num] = bwlabel(img);      % label every blob
disp(num);
Iprops = regionprops(Ilabel);      % properties of each blob, including BoundingBox
Ibox = [Iprops.BoundingBox];
Ibox = reshape(Ibox, [4 num]);
imshow(I)
hold on;
for cnt = 1:num
    rectangle('position', Ibox(:,cnt), 'edgecolor', 'r');   % draw the bounding box
end
1. Write a program that solves the given calibration and measurement equations.
Hint: for manual calculation, use imtool in MATLAB to get the values of x1, x2, y1 and y2.
Algorithm:
• Load the two images to be matched
• Detect the edges of both images
• Traverse through each pixel and count the number of black and white points in one image (total value)
• Compare the value of each pixel of both images (matched value)
• Find the match percentage:
  • Match percentage = (matched value / total value) * 100
• If the match percentage exceeds a certain threshold (say 90%), display 'image matches'
Input image:
Output image after edge detection:
Note: this method works for identical images and can be used for fingerprint and iris matching; a rough sketch follows below.
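A rough MATLAB sketch of this matching procedure; the file names 'image1.png' and 'image2.png' are placeholders, and the two images are assumed to be RGB and of equal size:
Snippet (sketch):
a = edge(rgb2gray(imread('image1.png')));       % edge map of the first image
b = edge(rgb2gray(imread('image2.png')));       % edge map of the second image
total = numel(a);                               % total number of pixels compared
matched = sum(a(:) == b(:));                    % pixels identical in both edge maps
matchPercentage = (matched / total) * 100;
if matchPercentage > 90
    disp('image matches');
end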
From the given image, find the nuts and washers based on their features.
Algorithm:
• Analyze the image
• Look for detectable features of the nuts/washers
• Preprocess the image to enhance the detectable feature
  • Hint – use morphological operations
• Create a detector to detect the feature
• Mark the detected results (one possible sketch follows below)
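One possible sketch, not necessarily the approach intended by the slides: round washers can be separated from hexagonal nuts with a circularity measure. The file name, the binarization level 0.5 and the 0.9 circularity cut-off are illustrative assumptions:
Snippet (sketch):
img = imread('nuts_washers.png');               % placeholder file name
bw  = im2bw(rgb2gray(img), 0.5);                % binarize (level chosen arbitrarily)
bw  = imfill(bw, 'holes');                      % fill the central holes before measuring shape
stats = regionprops(bw, 'Area', 'Perimeter', 'Centroid');
imshow(img); hold on;
for k = 1:numel(stats)
    circularity = 4*pi*stats(k).Area / stats(k).Perimeter^2;   % 1 for a perfect circle
    if circularity > 0.9
        txt = 'washer';                         % round outline
    else
        txt = 'nut';                            % hexagonal outline
    end
    text(stats(k).Centroid(1), stats(k).Centroid(2), txt, 'Color', 'r');
end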
Convolution is a mathematical operation on two functions f and g, producing a third function that is a modified version of one of the original functions.
Example:
• Feature detection
Creating a convolution kernel for detecting edges:
• Analyze the logic needed to detect edges
• Choose a kernel with appropriate values to detect the lines
• Create a sliding window for the convolution kernel
• Slide the window through every pixel of the image (see the sketch below)
Input image / output image after convolution
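A minimal sketch of such a convolution in MATLAB; the Laplacian-style kernel and the demo image 'peppers.png' are illustrative choices:
Snippet (sketch):
I = double(rgb2gray(imread('peppers.png')));    % conv2 needs double input
k = [-1 -1 -1; -1 8 -1; -1 -1 -1];              % kernel with a strong response at intensity edges
E = conv2(I, k, 'same');                        % slide the kernel over every pixel
imshow(E, []);                                  % [] rescales the result for display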
Algorithm:
• Load an image
• Create a kernel to detect horizontal edges
  • E.g. the Sobel kernel used in the snippet below: [1 2 1; 0 0 0; -1 -2 -1]
• Find the transpose of the kernel to obtain the vertical edges
• Apply the kernels to the image to filter the horizontal and vertical components
Resultant image after applying horizontal filter kernel
Resultant image after applying vertical filter kernel
Snippet:
Using convolution:
rgb = imread('F:\matlab sample images\2.png');
I = rgb2gray(rgb);
imshow(I)
hy = fspecial('sobel');              % horizontal Sobel kernel
hx = hy';                            % its transpose detects vertical edges
hrFilt = conv2(double(I), hy);       % conv2 expects double input
vrFilt = conv2(double(I), hx);
Using filters:
rgb = imread('F:\matlab sample images\2.png');
I = rgb2gray(rgb);
hy = fspecial('sobel');
hx = hy';
Iy = imfilter(double(I), hy, 'replicate');
Ix = imfilter(double(I), hx, 'replicate');
Preprocessing often includes:
• Image color conversion
• Histogram equalization
• Edge detection
• Morphological operations
  • Erode
  • Dilate
  • Open
  • Close
To detect the required feature in an image:
• First subtract the unwanted features
• Enhance the required feature
• Create a detector to detect the feature
Grayscale conversion -> histogram equalization -> edge detection -> morphological close -> image dilation
Detect the feature in the preprocessed image (see the sketch below).
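A minimal sketch of this preprocessing chain in MATLAB; the file name and the structuring-element sizes are illustrative assumptions:
Snippet (sketch):
rgb  = imread('F:\matlab sample images\parts.png');   % placeholder file name
gray = rgb2gray(rgb);                    % image color conversion
eq   = histeq(gray);                     % histogram equalization
bw   = edge(eq, 'canny');                % edge detection
bw   = imclose(bw, strel('disk', 3));    % morphological close
bw   = imdilate(bw, strel('disk', 2));   % dilation
figure, imshow(bw);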
• Fusion: putting together information coming from different sources/data
• Registration: computing the geometrical transformation between two data sets
Applications:
• Medical imaging
• Remote sensing
• Augmented reality, etc.
Courtesy: G. Malandain, PhD, Senior Scientist, INRIA
PET scan of the brain + MRI scan of the brain = output of multimodal registration (different scanners)
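A minimal sketch of intensity-based multimodal registration with the Image Processing Toolbox; the PET/MRI file names are placeholders and the images are assumed to be grayscale slices of the same subject:
Snippet (sketch):
fixed  = imread('mri_slice.png');        % placeholder: the MRI slice
moving = imread('pet_slice.png');        % placeholder: the PET slice
[optimizer, metric] = imregconfig('multimodal');            % configuration suited to different scanners
registered = imregister(moving, fixed, 'affine', optimizer, metric);
figure, imshowpair(fixed, registered, 'blend');              % overlay the registered result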