UNIT III SEGMENTATION AND RESTORATION TECHNIQUES 9+3
ROI definition – Detection of discontinuities – Edge linking and boundary detection – Region-based segmentation –
Morphological processing – Active contour models. Image Restoration – Noise models – Restoration in the presence of
noise: spatial filtering – Periodic noise reduction by frequency-domain filtering – Linear position-invariant degradation –
Estimation of the degradation function – Inverse filter – Wiener filtering. Analyze the segmentation techniques to extract the
region of interest and restore degraded images using MATLAB.
ROI definition:
In the context of image processing, ROI stands for "Region of Interest." ROI refers to a specific area
or subset within an image that is selected for further analysis, processing, or manipulation. Here's a
breakdown of what ROI entails in image processing:
1. Selection: Initially, an ROI is selected by either manually outlining the area of interest or
through automated methods such as thresholding, edge detection, or segmentation algorithms.
2. Focus: Once the ROI is defined, subsequent image processing operations are applied only to this
selected region, rather than the entire image. This allows for more focused analysis and efficient
processing, particularly in cases where only a specific part of the image contains relevant
information.
3. Applications: ROI selection is widely used in various applications of image processing such as
medical imaging (where specific organs or abnormalities are of interest), object detection and
recognition (where objects of interest need to be isolated), surveillance (where specific regions
within a frame require monitoring), and many others.
4. Enhancement: Processing operations applied within the ROI can include enhancement
techniques like noise reduction, contrast enhancement, or sharpening, tailored to improve the quality
or highlight features within the selected region.
5. Efficiency: By limiting processing to the ROI, computational resources can be conserved,
making real-time processing feasible in applications where speed is crucial.
6. Integration: ROI analysis can be integrated with machine learning algorithms for tasks such as
object detection, where the algorithm focuses its learning on features within the selected regions.
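The ideas above can be sketched in a few lines of NumPy (used here for illustration, though the syllabus works in MATLAB); the image contents, its dimensions, and the threshold of 128 are all hypothetical:

```python
import numpy as np

# Hypothetical 8-bit grayscale image (values 0-255) with one bright object.
image = np.zeros((100, 100), dtype=np.uint8)
image[30:60, 40:80] = 200

# Manual ROI: crop a rectangular region by row/column slicing.
roi = image[25:65, 35:85]

# Automated ROI: threshold, then take the bounding box of the foreground.
mask = image > 128
rows, cols = np.where(mask)
r0, r1 = rows.min(), rows.max() + 1
c0, c1 = cols.min(), cols.max() + 1
auto_roi = image[r0:r1, c0:c1]

print(roi.shape)       # (40, 50)
print(auto_roi.shape)  # (30, 40)
```

Subsequent operations (filtering, enhancement, feature extraction) would then be applied to `roi` or `auto_roi` instead of the full image.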
DETECTION OF DISCONTINUITIES:
Discontinuities such as isolated points, thin lines and edges in the image can be detected by
using similar masks as in the low- and high-pass filtering. The absolute value of the weighted sum
given by equation (2.7) indicates how strongly that particular pixel corresponds to the property
described by the mask; the greater the absolute value, the stronger the response.
1. Point detection
The mask for detecting isolated points is given in the figure below. A point is considered
isolated if the absolute value of the mask response R at that pixel exceeds a predefined threshold T, i.e. |R| ≥ T.
-1 -1 -1
-1 8 -1
-1 -1 -1
Figure : Mask for detecting isolated points.
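As a minimal sketch of the thresholded mask response described above (Python/NumPy rather than MATLAB; the test image and the threshold T = 500 are made up for illustration):

```python
import numpy as np

# Point-detection mask from the figure above.
MASK = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]], dtype=float)

def point_detect(img, T):
    """Flag pixels where the absolute mask response |R| meets the threshold T."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            r = np.sum(MASK * img[y-1:y+2, x-1:x+2])
            out[y, x] = abs(r) >= T
    return out

img = np.full((7, 7), 10.0)
img[3, 3] = 200.0           # isolated bright point on a flat background
hits = point_detect(img, T=500)
print(np.argwhere(hits))    # [[3 3]]
```

Only the isolated point responds strongly (8·200 − 8·10 = 1520); its neighbours give small responses that fall below the threshold.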
2. Line detection
The principle of line detection is identical to that of point detection: lines are detected by
masking the image and thresholding the response. Instead of one mask, four different masks
must be used to cover the four primary directions, namely horizontal, vertical and the two diagonals.
-1 -1 -1    -1 -1  2    -1  2 -1     2 -1 -1
 2  2  2    -1  2 -1    -1  2 -1    -1  2 -1
-1 -1 -1     2 -1 -1    -1  2 -1    -1 -1  2
Figure : Masks for line detection (horizontal, +45°, vertical, −45°).
The horizontal mask gives its maximum response when a line passes through the middle row
of the mask against a constant background; the same idea applies to the other masks.
The preferred direction of each mask is weighted with a larger coefficient (i.e., 2) than
the other possible directions.
Steps for line detection:
1. Apply all four masks to the image.
2. Let R1, R2, R3, R4 denote the responses of the horizontal, +45°, vertical and
−45° masks, respectively.
3. If, at a certain point in the image, |Ri| > |Rj| for all j ≠ i, that point is said to be more
likely associated with a line in the direction of mask i.
Alternatively, to detect all lines in an image in the direction defined by a given mask,
just run the mask through the image and threshold the absolute value of the result.
The points that are left are the strongest responses which, for lines one pixel thick,
correspond closest to the direction defined by the mask.
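The four-mask comparison above can be sketched as follows (Python/NumPy for illustration; the 5×5 test image is hypothetical):

```python
import numpy as np

# The four 3x3 line masks; the preferred direction is weighted with 2.
MASKS = {
    "horizontal": np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),
    "+45":        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),
    "vertical":   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),
    "-45":        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),
}

def line_responses(img, y, x):
    """Return |R_i| for each direction at pixel (y, x); the largest wins."""
    win = img[y-1:y+2, x-1:x+2].astype(float)
    return {name: abs(np.sum(m * win)) for name, m in MASKS.items()}

img = np.zeros((5, 5))
img[2, :] = 100            # a horizontal line through the middle row
r = line_responses(img, 2, 2)
print(max(r, key=r.get))   # horizontal
```

On this image the horizontal mask responds with |R| = 600 while the other three respond with 0, so the pixel is associated with a horizontal line.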
3. Edge detection:
The most important operation for detecting discontinuities is edge detection. It is based on
either first-order derivatives (gradient operators) or second-order derivatives (Laplacian operators).
Edge detection is used for detecting discontinuities in gray level; first- and second-order
digital derivatives are implemented to detect the edges in an image.
An edge is defined as the boundary between two regions with relatively distinct gray-level
properties, i.e., a set of connected pixels that lie on the boundary between two regions.
Approaches for implementing edge detection
THICK EDGE:
In practice, edges are blurred into ramps. The slope of the ramp is inversely proportional
to the degree of blurring of the edge, so we no longer have a thin (one pixel thick) path.
Instead, an edge point is any point contained in the ramp, and an edge is then a set
of such points that are connected.
The thickness of the edge is determined by the length of the ramp, and the length is
determined by the slope, which is in turn determined by the degree of blurring:
blurred edges tend to be thick and sharp edges tend to be thin.
The first derivative of the gray-level profile is positive at the leading edge of the
transition, negative at the trailing edge, and zero in areas of constant gray level.
The magnitude of the first derivative is therefore used to detect the presence of an edge in the
image.
The second derivative is positive for the part of the transition associated with the
dark side of the edge, zero for pixels lying exactly on the edge, and negative for the
part associated with the light side of the edge.
The sign of the second derivative can thus be used to determine whether an edge pixel
lies on the light side or the dark side, and its zero crossing marks the midpoint of
the transition in gray level.
The first and second derivatives at any point in an image are obtained from the
gradient and Laplacian operators at that point.
1. FIRST DERIVATIVE. This is done by calculating the gradient of the pixel relative to its
neighborhood. A good approximation of the first derivative is given by the two Sobel operators
shown in the figure below, which have the advantage of a smoothing effect. Because derivatives
enhance noise, this smoothing effect is a particularly attractive feature of the Sobel operators.
First derivatives are implemented using the magnitude of the gradient.
i. Gradient operator:
For a function f(x, y), the gradient of f at coordinates (x, y) is defined as the two-dimensional
column vector

∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

For a 3×3 region with pixels labeled x1 … x9 (row by row), the partial derivatives can be
approximated by

Gx = (x7 + x8 + x9) − (x1 + x2 + x3)
Gy = (x3 + x6 + x9) − (x1 + x4 + x7)

This approximation is known as the Prewitt operator. The above equations can be implemented
using the following masks.
a) Sobel operator.
The Sobel operator weights the center pixels with 2; the gradients are calculated separately
for the horizontal and vertical directions:

Gx = (x7 + 2x8 + x9) − (x1 + 2x2 + x3)
Gy = (x3 + 2x6 + x9) − (x1 + 2x4 + x7)

Horizontal edge        Vertical edge
-1 -2 -1               -1  0  1
 0  0  0               -2  0  2
 1  2  1               -1  0  1
Figure : Sobel masks for edge detection.
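The Sobel masks above can be applied directly; a minimal NumPy sketch (illustrative only, with a hypothetical step-edge image):

```python
import numpy as np

# Sobel masks from the figure: KX responds to horizontal edges,
# KY to vertical edges (orientation conventions vary between texts).
KX = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
KY = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def sobel_magnitude(img):
    """Gradient magnitude |∇f| = sqrt(Gx² + Gy²) at each interior pixel."""
    img = img.astype(float)
    h, w = img.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y-1:y+2, x-1:x+2]
            gx = np.sum(KX * win)
            gy = np.sum(KY * win)
            mag[y, x] = np.hypot(gx, gy)
    return mag

img = np.zeros((5, 6))
img[:, 3:] = 100               # vertical step edge between columns 2 and 3
mag = sobel_magnitude(img)
print(mag[2, 2], mag[2, 4])    # 400.0 0.0 (strong on the edge, zero away)
```

Thresholding `mag` then yields a binary edge map, which is the usual next step before edge linking.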
These masks operate on a 3×3 region of the image and compute the gradient at its center
point x5.
2. SECOND DERIVATIVE can be approximated by the Laplacian mask given in the figure below.
The drawbacks of the Laplacian are its sensitivity to noise and its inability to detect the
direction of an edge. On the other hand, because it is a second derivative it produces a double
peak (a positive and a negative impulse), so it can detect whether a pixel lies on the dark
or bright side of an edge. This property can be used in image segmentation.
0 -1 0
-1 4 -1
0 -1 0
Figure : Mask for the Laplacian (second derivative).
a) Laplacian Operator:
The Laplacian is a linear operator and the simplest isotropic derivative operator. It is
defined as

∇²f = ∂²f/∂x² + ∂²f/∂y² ------------------(1)

where the digital approximations of the second partial derivatives are

∂²f/∂x² = f(x+1, y) + f(x−1, y) − 2f(x, y) ------------------(2)
∂²f/∂y² = f(x, y+1) + f(x, y−1) − 2f(x, y) ------------------(3)

Substituting (2) and (3) in (1), we get

∇²f = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y) ---------------(4)

(The mask in the figure implements the negative of this expression; either sign convention
detects the same zero crossings.)
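Equation (4) translates directly into code; a minimal NumPy sketch (the step-edge image is a made-up example):

```python
import numpy as np

def laplacian(img):
    """Discrete Laplacian, equation (4):
    ∇²f = f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4·f(x,y)."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = (img[y+1, x] + img[y-1, x] +
                         img[y, x+1] + img[y, x-1] - 4 * img[y, x])
    return out

img = np.zeros((5, 5))
img[:, 2:] = 100              # step edge between columns 1 and 2
lap = laplacian(img)
print(lap[2, 1], lap[2, 2])   # 100.0 -100.0 (the double peak across the edge)
```

The opposite signs on the two sides of the edge illustrate the double-peak property; the zero crossing between them locates the edge.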
Edge linking and boundary detection:
The sets of pixels produced by edge-detection algorithms seldom define a boundary completely,
because of noise, breaks in the boundary, etc.
• Edge detection is therefore always followed by edge linking
• This section discusses algorithms that link edge pixels into meaningful boundaries
Basic approaches
1. Local Processing
2. Global Processing via the Hough Transform
3. Global Processing via Graph-Theoretic Techniques
1. Local processing
Analyze the characteristics of edge pixels in a small neighborhood (say, 3×3 or 5×5) about
every edge point (x, y) in the image:
• Gradient magnitude
• Gradient direction
All points that are similar according to a set of predefined criteria are linked, forming an
edge of pixels that share those criteria.
Criteria
1. Strength of the response of the gradient operator used to produce the edge pixel: an edge
pixel with coordinates (x0, y0) in a predefined neighborhood of (x, y) is similar in magnitude to
the pixel at (x, y) if
|∇f(x, y) − ∇f(x0, y0)| ≤ E
2. Direction of the gradient vector: an edge pixel with coordinates (x0, y0) in a predefined
neighborhood of (x, y) is similar in angle to the pixel at (x, y) if
|α(x, y) − α(x0, y0)| < A
1. A point in the predefined neighborhood of (x, y) is linked to the pixel at (x, y) if both the
magnitude and direction criteria are satisfied.
2. The process is repeated at every location in the image.
3. A record must be kept of the linked points, e.g. simply by assigning a different gray level
to each set of linked edge pixels.
Example (license-plate location): find rectangles whose sizes make them suitable candidates
for license plates. Use horizontal and vertical Sobel operators, eliminate isolated short
segments, and link the remaining pixels with the conditions:
gradient value > 25
gradient direction differs by < 15°
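The two link conditions can be sketched as a single predicate (Python for illustration; the magnitude/angle arrays and the thresholds E = 25 and A = 15° are hypothetical example values):

```python
import numpy as np

def similar(M, alpha, p, q, E=25, A=15):
    """Link test from the criteria above: magnitudes within E AND angles within A degrees."""
    return abs(M[p] - M[q]) <= E and abs(alpha[p] - alpha[q]) < A

M = np.array([[120.0, 118.0], [30.0, 119.0]])    # gradient magnitudes (made up)
alpha = np.array([[10.0, 12.0], [80.0, 11.0]])   # gradient directions in degrees

print(similar(M, alpha, (0, 0), (0, 1)))  # similar magnitude and angle -> linked
print(similar(M, alpha, (0, 0), (1, 0)))  # both differ too much -> not linked
```

In a full implementation this predicate would be evaluated for every pixel pair within each neighborhood Sxy, with linked sets tracked under distinct labels.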
2. Global Processing via the Hough Transform
Motivation – Given a point (xi, yi), infinitely many lines pass through it, since yi = a·xi + b
holds for infinitely many pairs (a, b). – Finding all lines determined by every pair of points
and then all subsets of points close to particular lines would be computationally prohibitive;
the Hough transform achieves the same goal efficiently.
Hough Transform:
The Hough transform converts the xy-plane (image space) into the ab-plane (parameter space).
Considering b = −a·xi + yi in the ab-plane:
– A point (xi, yi) in the image space is mapped to the set of points {(a, b)} in the parameter
space that lie on the line b = −a·xi + yi.
– Similarly, (xj, yj) is mapped to the points {(a, b)} on the line b = −a·xj + yj.
• The two image points are collinear on a line with slope a′ and intercept b′ if these two
parameter-space lines intersect at (a′, b′).
The procedure of the Hough transform:
Step 1: Subdivide the ab-plane into accumulator cells.
Let A(i, j) be the cell at (ai, bj), where amin ≤ ai ≤ amax and bmin ≤ bj ≤ bmax, and
initialize A(i, j) = 0.
Step 2: For every image point (xk, yk), compute b = −xk·ap + yk for each allowed subdivision
value ap.
Step 3: Round b off to the nearest allowed value bq and increment the cell: A(p, q) = A(p, q) + 1.
Performance and limitations
• Performance – with n image points and K accumulator subdivisions per parameter, nK
computations are involved.
• Limitation – the slope a approaches infinity as the line approaches the vertical.
• Solution – rewrite the line in its normal representation:
x cosθ + y sinθ = ρ
This implementation is analogous to the slope-intercept method, but instead of straight lines,
the loci in the ρθ-plane are sinusoidal curves.
Edge linking using the Hough transform
1. Compute the gradient of an image and threshold it to obtain a binary image.
2. Specify subdivisions in the ρθ-plane.
3. Examine the counts of the accumulator cells for high pixel concentrations.
4. Examine the relationship (e.g. for continuity) between pixels in a chosen cell.
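Steps 1–3 of the accumulation, in the ρθ form, can be sketched as follows (NumPy for illustration; the accumulator size, 1° subdivision, and the ten collinear test points are arbitrary choices):

```python
import numpy as np

def hough_accumulate(points, n_theta=180, max_rho=200):
    """Vote in the ρθ-plane using x·cosθ + y·sinθ = ρ."""
    thetas_deg = np.arange(0, 180, 180 // n_theta)      # θ subdivisions in degrees
    thetas = np.deg2rad(thetas_deg)
    acc = np.zeros((2 * max_rho, len(thetas)), dtype=int)
    for x, y in points:
        # Round ρ to the nearest cell and cast one vote per θ column.
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + max_rho, np.arange(len(thetas))] += 1
    return acc, thetas_deg

# Ten collinear points on the horizontal line y = 2.
pts = [(x, 2) for x in range(0, 100, 10)]
acc, thetas_deg = hough_accumulate(pts)
r, t = np.unravel_index(acc.argmax(), acc.shape)
print(thetas_deg[t], r - 200)   # 90 2  (θ = 90°, ρ = 2)
```

All ten points vote into the same cell, so the accumulator peak recovers the line parameters (θ = 90°, ρ = 2); real images would then threshold the accumulator to pick out strong lines.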
Local processing (formal notation)
• Analyze pixels in small neighbourhood Sxy of each edge point
• Pixels that are similar are linked
• Principal properties used for establishing similarity:
(1) M (x, y) = |∇ f (x, y )|: Magnitude of gradient vector
(2) α (x, y ): Direction of gradient vector
• Edge pixel with coordinates (s, t ) in Sxy is similar in magnitude to pixel at (x, y ) if
|M (s, t ) − M (x, y )| < E
• Edge pixel with coordinates (s, t ) in Sxy has an angle similar to pixel at (x, y ) if
|α (s, t ) − α (x, y )| < A
• Edge pixel (s, t ) in Sxy is linked with (x, y) if both criteria are satisfied.
The above strategy is expensive: a record has to be kept of all linked points by, for example,
assigning a different label to every set of linked points. A simplification suitable for
real-time applications:
(1) Compute M (x, y ) and α (x, y ) of input image f (x, y )
(2) Form a binary image: g (x, y) = 1 if M (x, y ) > TM AND α (x, y ) ∈ [A − TA, A + TA]; g (x, y) = 0 otherwise
(3) Scan rows of g and fill (set to 1) all gaps (sets of 0s) in each row that do not exceed a specified
length K
(4) Rotate g by θ, apply step (3), then rotate the result back by −θ.
Image rotation is expensive ⇒ when linking in numerous directions is required, steps (3) and (4) are
combined into a single, radial scanning procedure.
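Step (3) of the simplified procedure, row-wise gap filling, can be sketched as follows (NumPy for illustration; the gap length K and the sample row are hypothetical):

```python
import numpy as np

def fill_row_gaps(g, K):
    """Fill runs of 0s of length <= K that lie between 1s in each row (step 3)."""
    g = g.copy()
    for row in g:
        ones = np.flatnonzero(row)
        for a, b in zip(ones[:-1], ones[1:]):
            if 1 < b - a <= K + 1:       # gap of length (b - a - 1) pixels
                row[a:b] = 1
    return g

g = np.array([[1, 0, 0, 1, 0, 0, 0, 0, 1]])
print(fill_row_gaps(g, K=2)[0])   # the 2-pixel gap is filled, the 4-pixel gap is not
```

Running this on g, on g rotated by θ, and so on, implements the multi-direction linking described above.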
Regional processing (Polygonal approximations)
A conceptual understanding of this idea is sufficient
Requirements:
(1) Two starting points must be specified;
(2) All the points must be ordered
A large distance between successive points, relative to the distances between other points ⇒ boundary
segment (open curve) ⇒ end points used as starting points
Uniform separation between points ⇒ boundary (closed curve) ⇒ extreme points used as starting
points