IMAGE ENHANCEMENT TECHNIQUES IN
DIFFERENT DOMAINS
UNIT 2
Resmi K R
Mission | Vision | Core Values
Vision: Excellence and Service
Mission: Christ University is a nurturing ground for an individual's holistic development to make effective contribution to the society in a dynamic environment
Core Values: Faith in God | Moral Uprightness | Love of Fellow Beings | Social Responsibility | Pursuit of Excellence
Syllabus
Gray Level Transformations, Histogram Processing,
Histogram equalization, Basics of Spatial Filters
Binary Image
• A binary image is a digital image that has only two possible values for each pixel.
• Typically the two colors used for a binary image are black and white, though any two colors can be used.
Grayscale Image
In a grayscale image, each pixel holds a single intensity value representing its brightness. In an 8-bit grayscale image these values range from 0 to 255, where 0 represents black, 255 represents white, and all values in between are shades of grey.
Color Image
● In computer vision, a color image is an image that contains information about
the colors of objects, unlike grayscale images which only represent intensity
variations.
● Color images are typically represented using the RGB (Red, Green, Blue)
color model, where each pixel has values for the intensity of each of these
three primary colors.
Gray Level Transformations
Image Enhancement
• The principal objective of image enhancement is to process a given image so that the result is more suitable than the original image for a specific application.
• Image enhancement techniques can be divided into two broad categories:
1. Spatial domain methods, which operate directly on pixels
2. Frequency domain methods, which operate on the Fourier transform of an image.
Spatial Domain Process
Most spatial domain enhancement operations can be reduced to the form

g(x, y) = T[f(x, y)]

where
f(x, y) : input image
g(x, y) : output image
T : an operator on f defined over a neighborhood of point (x, y)
Point Processing Techniques
These processing methods are based only on the intensity of single pixels. Point processing operations take the form s = T(r), where s is the processed image pixel value and r is the original image pixel value.
The point processing operations:
1. Image Negative
2. Logarithmic Transformation
3. Power-law (Gamma) Transformation
4. Contrast Stretching
5. Thresholding
6. Gray Level Slicing
7. Bit-plane Slicing
8. Intensity Level Slicing
Transformations Functions
Digital Negative
(Image Negatives)
Inverts the intensity of pixels:

s = (L − 1) − r

where r is the input pixel value, s is the output pixel value, and L is the number of gray levels.
Application:
Digital negatives are useful in the display of medical images and in producing negative
prints of images.
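As a minimal sketch, the negative transformation maps every pixel through s = (L − 1) − r; in NumPy this is a single expression (the `negative` helper name is illustrative, not from the slides):

```python
import numpy as np

def negative(img, L=256):
    """Digital negative: s = (L - 1) - r for every pixel value r."""
    return (L - 1) - img

img = np.array([[0, 100], [200, 255]], dtype=np.uint8)
print(negative(img))  # dark pixels become bright and vice versa
```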
LOG TRANSFORM
The general form of the log transformation is
s = c * log (1 + r)
The log transformation maps a narrow range of low input grey level values into a wider range of output values.
The inverse log transformation performs the opposite transformation.
Log functions are particularly useful when the input grey level values may have an extremely large range of values.
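A sketch of the log transform in NumPy; the slide does not fix the constant c, so here c is chosen (a common convention, but an assumption) so that the maximum input maps to L − 1:

```python
import numpy as np

def log_transform(img, L=256):
    """s = c * log(1 + r); c is chosen so the maximum input maps to L-1
    (a common convention, not specified on the slide)."""
    r = img.astype(np.float64)
    c = (L - 1) / np.log(1 + r.max())
    return (c * np.log(1 + r)).astype(np.uint8)

img = np.array([0, 10, 255], dtype=np.uint8)
print(log_transform(img))  # low values are spread over a wider output range
```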
Power-law (Gamma) Transformation
● Power-law transformations have the form s = c·r^γ.
● They map a narrow range of dark input values into a wider range of output values (γ < 1), or vice versa (γ > 1).
● Varying γ gives a whole family of curves.
● c is generally set to 1.
● Grey levels must be in the range [0.0, 1.0].
● Most display devices apply gamma correction by default.
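A sketch of the power-law transform, normalizing grey levels to [0, 1] as the slide requires before applying s = c·r^γ (helper name is illustrative):

```python
import numpy as np

def gamma_transform(img, gamma, c=1.0, L=256):
    """Power-law: s = c * r**gamma, with grey levels normalized to [0, 1]."""
    r = img.astype(np.float64) / (L - 1)
    s = c * np.power(r, gamma)
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

img = np.array([[64]], dtype=np.uint8)
print(gamma_transform(img, 0.5))  # gamma < 1 brightens dark regions
print(gamma_transform(img, 2.0))  # gamma > 1 darkens them
```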
Gamma Correction
Piecewise-Linear Transformation Functions
Contrast Stretching
Improves the contrast by expanding the range of intensity values. The transformation is piecewise linear:

s = α·r               0 ≤ r < a
s = β·(r − a) + v_a   a ≤ r < b
s = γ·(r − b) + v_b   b ≤ r < L

Example parameters: a = 50, b = 150, α = 0.2, β = 2, γ = 1, v_a = 30, v_b = 200.
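The piecewise-linear stretch can be sketched in NumPy using the example parameters above (function name is illustrative):

```python
import numpy as np

# Piecewise-linear contrast stretching with the slide's example
# parameters: a=50, b=150, alpha=0.2, beta=2, gamma=1, va=30, vb=200.
def contrast_stretch(img, a=50, b=150, alpha=0.2, beta=2.0,
                     gamma=1.0, va=30, vb=200, L=256):
    r = img.astype(np.float64)
    s = np.where(r < a, alpha * r,
        np.where(r < b, beta * (r - a) + va,
                 gamma * (r - b) + vb))
    return np.clip(s, 0, L - 1).astype(np.uint8)

img = np.array([40, 100, 200], dtype=np.uint8)
print(contrast_stretch(img))  # the mid-range [a, b) is expanded (slope beta = 2)
```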
Clipping
s = 0             0 ≤ r < a
s = β·(r − a)     a ≤ r < b
s = L − 1         b ≤ r < L

Example parameters: a = 50, b = 150, β = 2.
Thresholding
– Thresholding is a special case of clipping where a = b = T and the output becomes binary.
– Thresholding is used to convert a grayscale image to a binary image:
s = 0 if r < T, s = 1 if r ≥ T
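Thresholding as defined above is a one-line comparison in NumPy (helper name is illustrative):

```python
import numpy as np

def threshold(img, T):
    """s = 0 if r < T, s = 1 if r >= T (binary output)."""
    return (img >= T).astype(np.uint8)

img = np.array([[10, 130], [200, 90]], dtype=np.uint8)
print(threshold(img, 128))  # a 0/1 binary image
```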
Intensity-level slicing
Intensity-level (gray-level) slicing highlights a specific range of intensities. Pixels within the range of interest are displayed at a high value; pixels outside it are either set to a low value or left unchanged.
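Both variants (suppress the background or preserve it) can be sketched with a boolean mask; the function and parameter names are my own:

```python
import numpy as np

def intensity_slice(img, low, high, value=255, preserve_rest=True):
    """Highlight pixels in [low, high]; the rest are kept as-is or zeroed."""
    mask = (img >= low) & (img <= high)
    out = img.copy() if preserve_rest else np.zeros_like(img)
    out[mask] = value
    return out

img = np.array([10, 120, 250], dtype=np.uint8)
print(intensity_slice(img, 100, 200))                       # background kept
print(intensity_slice(img, 100, 200, preserve_rest=False))  # background zeroed
```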
Bit-plane slicing
Analyzes the contribution of individual bits in a pixel’s binary representation.
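For an 8-bit image, each of the eight bit planes can be extracted with a shift and a mask (helper name is illustrative):

```python
import numpy as np

def bit_plane(img, plane):
    """Extract bit plane `plane` (0 = least significant, 7 = most) as a 0/1 image."""
    return (img >> plane) & 1

img = np.array([[129, 64]], dtype=np.uint8)  # 129 = 10000001b, 64 = 01000000b
print(bit_plane(img, 7))
print(bit_plane(img, 0))
```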
Histogram of an Image
• The histogram of a digital image with intensity levels in
the range [0,L-1] is a discrete function h(rk) = nk
where rk is the kth intensity value and nk is the number
of pixels in the image with intensity rk
• The histogram of an image is a plot of the number of pixels at each gray level against the gray level values (a plot of h(rk) = nk versus rk).
Histogram of an Image
• It is a common practice to normalize a histogram by dividing each of its components by the total number of pixels in the image.
• Thus a normalized histogram is given by
p(rk) = nk / MN ; k = 0, 1, 2, …, L−1
where M and N are the row and column dimensions of the image.
Let's assume that an Image matrix is given as:
In this method, the x-axis has grey levels/ Intensity values and the y-axis has
the number of pixels in each grey level.
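The histogram and its normalized form p(rk) = nk / MN can be sketched in NumPy (the 2×2 example image is hypothetical):

```python
import numpy as np

def histogram(img, L):
    """h(r_k) = n_k for k = 0..L-1, plus the normalized histogram
    p(r_k) = n_k / (M*N)."""
    h = np.bincount(img.ravel(), minlength=L)
    p = h / img.size
    return h, p

img = np.array([[0, 1], [1, 2]], dtype=np.uint8)  # hypothetical 2x2 image
h, p = histogram(img, L=4)
print(h)  # counts per grey level
print(p)  # normalized histogram: sums to 1
```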
Histogram Processing
Histogram Equalized Images
Histogram Equalization
● Histogram equalization is a mathematical technique that widens the dynamic range of a histogram. When the histogram spans only a short range of intensities, equalization spreads it over a wider range. In digital image processing, this technique is used to enhance the contrast of an image.
Steps
1. Find the range of intensity values.
2. Find the frequency of each intensity value.
3. Calculate the probability density function for each frequency.
4. Calculate the cumulative density function for each frequency.
5. Multiply CDF with the highest intensity value possible.
6. Round off the values obtained in step-5.
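The six steps above can be sketched in NumPy (the function name and the 3-bit example values are illustrative):

```python
import numpy as np

def equalize(img, L):
    """Histogram equalization following the steps above."""
    h = np.bincount(img.ravel(), minlength=L)      # step 2: frequencies
    pdf = h / img.size                             # step 3: PDF
    cdf = np.cumsum(pdf)                           # step 4: CDF
    s = np.round(cdf * (L - 1)).astype(img.dtype)  # steps 5-6: scale, round
    return s[img]                                  # map each pixel

img = np.array([0, 0, 0, 7], dtype=np.uint8)  # hypothetical 3-bit image
print(equalize(img, L=8))
```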
● A 3-bit Image of size 4x5 is shown below. Compute the histogram equalized image.
Basics of Spatial Filters
Correlation and Convolution
● In image processing, both correlation and convolution are used
to analyze images by comparing them with a kernel (a small
matrix).
● While both operations slide a kernel across an image and
compute weighted sums, convolution flips the kernel
horizontally and vertically before sliding, whereas correlation
does not. This difference in kernel manipulation leads to
different applications.
● Convolution is primarily used for linear filtering operations,
such as blurring, sharpening, edge detection, and noise
reduction.
● Correlation is used for pattern detection and matching
● Correlation Filtering
The basic idea in correlation filtering:
1. Slide the center of the correlation kernel on the image
2. Multiply each weight in the correlation kernel by the pixel in the image
3. Sum these products
● Convolution Filtering
● Convolution filtering is also a linear filtering, and it is more common than correlation filtering. There is a small difference between correlation and convolution:
● Flip the filter in both dimensions (bottom to top, right to left); the remaining slide-multiply-sum steps are the same.
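A minimal sketch of both operations with zero padding, showing that convolution is correlation with a flipped kernel (function names and the shift kernel are my own):

```python
import numpy as np

def correlate2d(img, k):
    """Slide the kernel over the image (zero padding), multiply weights
    by pixels and sum -- no flipping."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * k)
    return out

def convolve2d(img, k):
    """Convolution = correlation with the kernel flipped in both dimensions."""
    return correlate2d(img, k[::-1, ::-1])

img = np.arange(9, dtype=float).reshape(3, 3)
shift = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
print(correlate2d(img, shift))  # pulls in the right-hand neighbour
print(convolve2d(img, shift))   # flipped kernel: pulls in the left-hand neighbour
```

For symmetric kernels (e.g. an averaging filter) the two operations give identical results, which is why the distinction is often glossed over in practice.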
Video Links
● https://setosa.io/ev/image-kernels/
● Correlation and Convolution in 1D:
https://www.youtube.com/watch?v=WGpZ26xosXg
● Correlation and Convolution in 2D:
https://www.youtube.com/watch?v=gFELyrIx010