
Digital Image Processing - COMP4173

Lecture 11: Edge Detection

Prof. Hongjian Shi (时红建)


Department of Computer Science and Technology
BNU-HKBU United International University
Email: shihj@uic.edu.cn
Office: T3-601-R3;
Office Hours: Tues., Wed., Thur. 9:00-11:50; Wed. 16:00-16:50

Introduction
• The image processing we studied previously follows the format
"inputs and outputs are both images"
• Now the inputs are still images, but the outputs may be
images, information, or attributes of the images; that is, we
want to extract useful information through image processing
• Image segmentation divides an image into meaningful
parts or objects of interest. For example,
• Separating the pathological site and measuring the tumor size in
medical images for diagnosis
• Inspecting whether there are fractures or breaks inside an object
• Detecting troop movements over a large area in military applications
• Monitoring whether a stranger enters a community
Outline
• Image segmentation - decompose an image into "meaningful"
regions or objects based on two intensity properties:
• Discontinuity: abrupt changes in gray levels - point, line, and edge
detection; this gives edge-based segmentation
• Similarity: predefined criteria of similarity - thresholding, region
growing, region splitting and merging; this gives region-based
segmentation
• Approaches:
• Thresholding
• Edge-based vs. region-based
• Global vs. local
• Feature-based - texture, motion, color
[Figure: examples of edge-based segmentation and region-based segmentation]
Detection of Discontinuities – Point, Line, Edge

• Our focus is to detect abrupt changes of
intensity. The tools we have for this purpose are the
first derivative and the second derivative

• For the two-dimensional case, the Laplacian mask is

      0  1  0
      1 -4  1
      0  1  0
• First-order derivatives generally produce
thicker edges in an image.
• Second-order derivatives have
a stronger response to fine detail, such
as thin lines, isolated points, and noise.
• Second-order derivatives produce a
double-edge response at ramp
and step transitions in intensity.
• The sign of the second derivative can be
used to determine whether a transition
into an edge is from light to dark or dark
to light.
Point Detection
• Use a mask whose coefficients sum to zero,

      ∑ᵢ wᵢ = 0   (summing over the nine mask coefficients, i = 1, …, 9),

so that the response is zero in areas of constant intensity
• The Laplacian mask satisfies this condition:

      0  1  0
      1 -4  1
      0  1  0

• A point is detected at (x, y) if the absolute value of the mask
response there exceeds a chosen threshold
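A minimal Python sketch of this point-detection rule follows; the use of SciPy for the convolution and the 90% relative threshold are illustrative assumptions, not taken from the lecture.

# A minimal sketch of Laplacian-based point detection, assuming a grayscale
# image stored as a 2-D NumPy array.
import numpy as np
from scipy.ndimage import convolve

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)   # coefficients sum to zero

def detect_points(image, thresh_ratio=0.9):
    """Return a binary map of isolated points in `image`."""
    response = np.abs(convolve(image.astype(float), laplacian, mode='nearest'))
    T = thresh_ratio * response.max()              # threshold relative to the max response
    return response >= T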
Line Detection
The masks for line detection are designed so that each responds most strongly to
lines running in a particular direction (horizontal, +45°, vertical, or -45°)
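As a sketch, the standard textbook line-detection masks (assumed to match those in the slide's figure) can be applied by convolution and thresholding; the function name and the 90% threshold are illustrative.

# A minimal sketch of line detection. The masks below are the standard textbook
# line-detection masks; which diagonal is labeled +45° or -45° depends on the
# image coordinate convention in use.
import numpy as np
from scipy.ndimage import convolve

line_masks = {
    'horizontal': np.array([[-1, -1, -1],
                            [ 2,  2,  2],
                            [-1, -1, -1]], dtype=float),
    'vertical':   np.array([[-1,  2, -1],
                            [-1,  2, -1],
                            [-1,  2, -1]], dtype=float),
    'diag_1':     np.array([[ 2, -1, -1],
                            [-1,  2, -1],
                            [-1, -1,  2]], dtype=float),
    'diag_2':     np.array([[-1, -1,  2],
                            [-1,  2, -1],
                            [ 2, -1, -1]], dtype=float),
}

def detect_lines(image, direction, thresh_ratio=0.9):
    """Binary map of pixels lying on lines of the given direction."""
    r = np.abs(convolve(image.astype(float), line_masks[direction], mode='nearest'))
    return r >= thresh_ratio * r.max()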
Derivatives and Edge Detection
As we observed:
• 1st order derivative produces “thick” edges
• 2nd order derivative has stronger response to fine edges
• 2nd order derivative produces double edge response at ramp and step transitions
in intensity
• 2nd order derivative’s sign can be used to find out if going from dark to light or vice
versa

We can use these properties to define edge point, edge, and edge segment:
• Edge point: a point where the 2nd order derivative response exceeds a threshold
• Edge: a set of connected edge points (a notion of connectivity must be predefined)
• Edge segment: a short edge
Edge Models

[Figure: three edge models - the step edge, the ramp edge, and the roof edge]
• To locate an edge of one-pixel width, find the zero crossing of the second derivative.
This holds for all three kinds of edges
• When noise is present, smooth the image first to reduce the noise
Edge Detection
• Gradient operator: the gradient of the image points in the direction of the
maximum rate of intensity change, and its magnitude is used to locate edges
• Common gradient masks: Roberts (1965), Prewitt (1970), and Sobel (1970);
the Sobel masks give better smoothing of noise
[Figure: original image, vertical gradient, horizontal gradient, and the
gradient image (magnitude)]
[Figure: thresholding the gradient magnitude without prior average filtering
vs. with prior average filtering]
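A minimal sketch of gradient-based edge detection with the Sobel masks; the use of SciPy, the 5×5 averaging filter, and the 30% relative threshold are illustrative assumptions.

# A minimal sketch of gradient (Sobel) edge detection on a grayscale NumPy image.
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def gradient_edges(image, average_first=True, thresh_ratio=0.3):
    f = image.astype(float)
    if average_first:
        f = uniform_filter(f, size=5)      # prior average filtering suppresses fine detail/noise
    gx = sobel(f, axis=1)                  # gradient in the x (horizontal) direction
    gy = sobel(f, axis=0)                  # gradient in the y (vertical) direction
    magnitude = np.hypot(gx, gy)           # gradient magnitude image
    return magnitude >= thresh_ratio * magnitude.max()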
Marr-Hildreth Edge Detection (LoG method)
• The Laplacian operator detects small edge details and yields
one-pixel-wide edges. When noise is present, the Laplacian
alone may not give good results, so averaging and
thresholding are needed
• The detector should also be tunable to different scales of
edges
• The Laplacian of Gaussian (LoG) gives better results
[Figure: the Laplacian of a Gaussian (LoG), proposed by Marr and Hildreth
(1980); its shape is the "Mexican hat", and it is isotropic (invariant to
rotation)]
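For reference, the standard LoG expression (not reproduced in the slide text, but assumed to be what the figure plots) is the Laplacian applied to a Gaussian of standard deviation σ:

% Gaussian kernel and its Laplacian (LoG); sigma sets the scale of detected edges
\[
G(x, y) = e^{-\frac{x^{2} + y^{2}}{2\sigma^{2}}},
\qquad
\nabla^{2} G(x, y)
  = \left( \frac{x^{2} + y^{2} - 2\sigma^{2}}{\sigma^{4}} \right)
    e^{-\frac{x^{2} + y^{2}}{2\sigma^{2}}}
\]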
LoG Filter and DoG Filter (similar to LoG)

• The LoG filter in effect smooths the image first and then
detects the edges, reducing intensity structures and noise at
scales much smaller than σ
• The LoG filter responds to changes of intensity in any
direction, so no separate directional filtering passes are
needed; it also corresponds to characteristics of human vision
• Since 99.7% of the volume under a 2D Gaussian surface lies
within ±3σ of the mean, an n×n LoG filter should use the
smallest odd n ≥ 6σ
• The detected edges are one pixel wide. Thresholding the LoG
before finding zero crossings is needed to avoid "closed-loop"
edges (the "spaghetti effect"); normally the threshold is set to
4% of the maximum value of the LoG image
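A minimal sketch of the Marr-Hildreth procedure under these rules, assuming a grayscale NumPy image; the value of sigma and the simplified zero-crossing test are illustrative choices.

# A minimal sketch of Marr-Hildreth (LoG) edge detection; the zero-crossing test
# below compares each pixel only with its left and upper neighbors.
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_edges(image, sigma=2.0, thresh_ratio=0.04):
    log = gaussian_laplace(image.astype(float), sigma=sigma)   # smooth, then Laplacian
    T = thresh_ratio * np.abs(log).max()                       # 4% of the max LoG value
    edges = np.zeros(log.shape, dtype=bool)
    for axis in (0, 1):
        neighbor = np.roll(log, 1, axis=axis)                  # wraps at the border (sketch only)
        crossing = (np.sign(neighbor) != np.sign(log)) & (np.abs(neighbor - log) > T)
        edges |= crossing
    return edges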
Canny Edge Detection
• Goal: Satisfy the following 3 criteria:
• Detection: should not miss important edges
• Localization: distance between the actual and located position of the
edge should be minimal
• One response: only one response (edge) to a single actual edge
• Algorithm
1. Smooth input image with a Gaussian filter (remove noise)
2. Compute the gradient magnitude and angle images
3. Apply nonmaxima suppression to the gradient magnitude image
4. Use double thresholding and connectivity analysis to detect and link
edges
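As a minimal sketch, the whole four-step pipeline can be run with OpenCV; the input file name, the kernel size and sigma, and the 3:1 threshold pair are illustrative assumptions.

# A minimal sketch of the Canny pipeline using OpenCV.
import cv2

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)        # hypothetical input image
smoothed = cv2.GaussianBlur(img, (5, 5), sigmaX=1.4)        # Step 1: Gaussian smoothing
# Steps 2-4 (gradient magnitude/angle, nonmaxima suppression, double
# thresholding with connectivity analysis) are carried out inside cv2.Canny.
edges = cv2.Canny(smoothed, threshold1=50, threshold2=150)  # T_L : T_H = 1 : 3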
Canny Algorithm Implementation
• Step 1: Smooth the input image with a Gaussian filter
(to remove noise)
[Figure: the smoothed image]

• Step 2: Compute the gradient magnitude and angle
images; the resulting magnitude image contains wide ridges
around local maxima
Canny Algorithm Implementation – cont.
• Step 3: Apply nonmaxima suppression to the gradient
magnitude image to thin the wide ridges

Here d₁, d₂, d₃, d₄ denote the horizontal, -45°, vertical, and 45° edge
directions, and g_N(x, y) denotes the nonmaxima-suppressed image
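A minimal sketch of nonmaxima suppression; the gradient magnitude and angle arrays are assumed to come from Step 2 (e.g. the earlier Sobel sketch), and the angle-quantization approach and border handling are illustrative implementation choices rather than the lecture's exact formulas.

# A minimal sketch of nonmaxima suppression. `mag` and `ang` are the gradient
# magnitude and angle (radians), with ang = arctan2(gy, gx) and gy taken along
# the row axis; border pixels are simply left at zero.
import numpy as np

def nonmaxima_suppression(mag, ang):
    gN = np.zeros_like(mag)
    # Quantize each angle into one of four sectors (d1..d4) and pick the pair
    # of neighbors lying along that gradient direction.
    offsets = {0: (0, 1), 1: (1, 1), 2: (1, 0), 3: (1, -1)}
    sector = (np.round(ang / (np.pi / 4)).astype(int)) % 4
    rows, cols = mag.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            dr, dc = offsets[sector[r, c]]
            if mag[r, c] >= mag[r + dr, c + dc] and mag[r, c] >= mag[r - dr, c - dc]:
                gN[r, c] = mag[r, c]       # keep only local maxima along the gradient
    return gN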


Canny Algorithm Implementation – cont.
• Step 4: Use double thresholding and connectivity analysis
to detect and link edges
• Use two thresholds T_H and T_L with a ratio of 2:1 or 3:1 to
generate two images whose pixel values are initially set to 0:
• Strong edge image g_NH(x, y): pixels of g_N(x, y) that are ≥ T_H
• Weak edge image g_NL(x, y): pixels of g_N(x, y) that are ≥ T_L; it
contains more nonzeros, so the strong pixels are removed from it
• Mark the weak pixels that are connected to strong pixels as
valid and append these nonzero pixels from g_NL(x, y) to
g_NH(x, y). This process removes irrelevant features
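A minimal sketch of this double-thresholding and linking step, assuming the nonmaxima-suppressed image g_N from Step 3; using SciPy's connected-component labeling for the connectivity analysis is an implementation choice, not part of the lecture.

# A minimal sketch of double thresholding plus connectivity analysis (hysteresis).
import numpy as np
from scipy.ndimage import label

def hysteresis_threshold(gN, t_high, t_low=None):
    if t_low is None:
        t_low = t_high / 3.0                 # 3:1 ratio as suggested in the lecture
    strong = gN >= t_high                    # g_NH: strong edge pixels
    weak = (gN >= t_low) & ~strong           # g_NL with the strong pixels removed
    # Label 8-connected components of all candidate pixels; keep every component
    # that contains at least one strong pixel (this links weak pixels to strong ones).
    labels, _ = label(strong | weak, structure=np.ones((3, 3)))
    keep = np.unique(labels[strong])
    return np.isin(labels, keep[keep > 0])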


Intensity Thresholding
• Although Canny edge detection is superior, there are still
gaps, discontinuities, and spurious edges due to noise and
low image quality
• Here we explore partitioning the image into regions based
on intensity values or on properties of these values
• Simple global thresholding: extract an object from the
background with a single threshold
• Multiple global thresholding: separate several intensity
classes with more than one threshold
[Figure: histograms that are easily segmented vs. hard to segment]
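As a sketch, a simple global threshold can be estimated with the standard iterative midpoint-of-means procedure (the iteration itself is not spelled out on the slide); the stopping tolerance is illustrative.

# A minimal sketch of simple global thresholding; assumes the image is not constant.
import numpy as np

def global_threshold(image, delta=0.5):
    f = image.astype(float)
    T = f.mean()                             # initial estimate of the threshold
    while True:
        m1 = f[f > T].mean()                 # mean of the (assumed) object pixels
        m2 = f[f <= T].mean()                # mean of the (assumed) background pixels
        T_new = 0.5 * (m1 + m2)              # midpoint of the two class means
        if abs(T_new - T) < delta:
            return T_new
        T = T_new

# Usage: segmented = image > global_threshold(image)   # extract object from background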
• Illumination, reflectance, and noise are the three main causes
of image degradation that complicate thresholding

Three approaches to deal with nonuniform shading:
1. Correct the shading pattern directly by multiplying the image
by its inverse
2. Correct the global shading
3. Work around the shading by operating on the intensity values