Image Sampling and Quantization
Quantization
Audio signals are continuous in both time and amplitude.
An audio signal must therefore be digitized in both time and amplitude to be represented in binary form:
made discrete in time by sampling (at or above the Nyquist rate), and
made discrete in amplitude by quantization.
Once samples have been captured, they must be made discrete in amplitude; this is done by an analog-to-digital converter (ADC).
Quantization converts actual sample values (usually voltage measurements) into an integer approximation.
It is the process of rounding off a continuous value so that it can be represented by a fixed number of binary digits.
There is a tradeoff between the number of bits required and the resulting error.
Human perception limitations and the requirements of the specific application determine the allowable error.
There are two approaches to quantization (see the sketch below):
Round the sample to the closest integer (e.g., round 3.14 to 3).
Create a quantizer table that generates a staircase pattern of values based on a step size.
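A minimal sketch of both approaches in Python/NumPy; the function names and the 0.5 step size are illustrative choices, not from the text:

```python
import numpy as np

def quantize_round(samples):
    """Approach 1: round each sample to the closest integer."""
    return np.rint(samples).astype(int)        # e.g. 3.14 -> 3

def quantize_staircase(samples, step):
    """Approach 2: staircase quantizer -- snap each sample to the
    nearest multiple of a fixed step size (a quantizer table)."""
    return step * np.round(samples / step)

samples = np.array([3.14, 2.71, -1.5, 0.49])
print(quantize_round(samples))                 # [ 3  3 -2  0]
print(quantize_staircase(samples, step=0.5))   # [ 3.   2.5 -1.5  0.5]
```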
There are numerous ways to acquire images, but the objective is the same in all of them: to generate digital images from sensed data.
The output of most sensors is a continuous voltage waveform whose amplitude and spatial behavior are related to the physical phenomenon being sensed.
To create a digital image, we need to convert the continuous sensed data into digital form. This involves two processes: sampling and quantization.
Basic Concepts in Sampling and Quantization
The basic idea behind sampling and quantization is
illustrated in Fig. 2.16.
Figure 2.16(a) shows a continuous image f that we want to
convert to digital form.
An image may be continuous with respect to the x- and y-
coordinates, and also in amplitude.
To convert it to digital form, we have to sample the
function in both coordinates and in amplitude.
Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization.
The one-dimensional function in Fig. 2.16(b) is a plot of
amplitude (intensity level) values of the continuous
image along the line segment AB in Fig. 2.16(a).
The random variations are due to image noise.
To sample this function, we take equally spaced samples
along line AB, as shown in Fig. 2.16(c).
The spatial location of each sample is indicated by a
vertical tick mark in the bottom part of the figure.
The samples are shown as small white squares superimposed on the function.
The set of these discrete locations gives the sampled function.
However, the values of the samples still span (vertically) a
continuous range of intensity values.
In order to form a digital function, the intensity values also must be converted (quantized) into discrete quantities.
The right side of Fig. 2.16(c) shows the intensity scale divided
into eight discrete intervals, ranging from black to white.
The vertical tick marks indicate the specific value assigned to
each of the eight intensity intervals.
The continuous intensity levels are quantized by assigning
one of the eight values to each sample.
The assignment is made depending on the vertical proximity of a sample to a vertical tick mark.
The digital samples resulting from both sampling and quantization are shown in Fig. 2.16(d).
Starting at the top of the image and carrying out this procedure line by line produces a two-dimensional digital image.
It is implied in Fig. 2.16 that, in addition to the number of discrete levels used, the accuracy achieved in quantization is highly dependent on the noise content of the sampled signal.
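The procedure of Fig. 2.16 can be sketched in code for a single scan line; the sinusoidal intensity profile and noise level below are hypothetical stand-ins for the continuous image along line AB:

```python
import numpy as np

rng = np.random.default_rng(0)
profile = lambda t: 0.5 + 0.4 * np.sin(2 * np.pi * t)  # stand-in for line AB

# Sampling: take equally spaced samples along the line.
t = np.linspace(0.0, 1.0, 50)
samples = profile(t) + rng.normal(0.0, 0.02, t.size)   # add image noise

# Quantization: divide the intensity scale into L = 8 discrete intervals
# and assign each sample the nearest of the 8 representative levels.
L = 8
levels = np.clip(np.round(samples * (L - 1)), 0, L - 1).astype(int)
quantized = levels / (L - 1)    # digital samples, as in Fig. 2.16(d)
```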
The method of sampling is determined by the sensor arrangement used to generate the image.
When an image is generated by a single sensing element combined with mechanical motion, as in Fig. 2.13, the output of the sensor is quantized as described above; spatial sampling, however, is accomplished by selecting the number of individual mechanical increments at which we activate the sensor to collect data.
Mechanical motion can be made very exact so, in principle, there is almost no limit on how finely we can sample an image using this approach. In practice, limits on sampling accuracy are determined by other factors, such as the quality of the optical components of the system.
When a sensing strip is used for image acquisition, the number of sensors in the strip establishes the sampling limitations in one image direction. Mechanical motion in the other direction can be controlled more accurately, but it makes little sense to try to achieve a sampling density in one direction that exceeds the sampling limits established by the number of sensors in the other. Quantization of the sensor outputs completes the process of generating a digital image.
When a sensing array is used for image acquisition, there is no motion, and the number of sensors in the array establishes the limits of sampling in both directions. Quantization of the sensor outputs is as before.
Figure 2.17 illustrates this concept. Figure 2.17(a) shows a continuous image projected onto the plane of an array sensor; Figure 2.17(b) shows the image after sampling and quantization.
Clearly, the quality of a digital image is determined to a large degree by the number of samples and discrete intensity levels used in sampling and quantization. Image content is also an important consideration in choosing these parameters.
Representing Digital Images
Let f(s,t) represent a continuous image function of two continuous variables, s and t. We convert this function into a digital image by sampling and quantization.
Suppose that we sample the continuous image into a 2-D array, f(x,y), containing M rows and N columns, where (x,y) are discrete coordinates. For notational clarity and convenience, we use integer values for these discrete coordinates.
For example, the value of the digital image at the origin is f(0,0), and the next coordinate value along the first row is f(0,1). Here, the notation (0,1) is used to signify the second sample along the first row. It does not mean that these are the values of the physical coordinates when the image was sampled. In general, the value of the image at any coordinates (x,y) is denoted by f(x,y), where x and y are integers. The section of the real plane spanned by the coordinates of an image is called the spatial domain, with x and y being referred to as spatial variables or spatial coordinates.
The representation of an M×N numerical array:
$$f(x,y) = \begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,N-1) \\ f(1,0) & f(1,1) & \cdots & f(1,N-1) \\ \vdots & \vdots & & \vdots \\ f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1) \end{bmatrix}$$
The same M×N array in conventional matrix notation:
$$A = \begin{bmatrix} a_{0,0} & a_{0,1} & \cdots & a_{0,N-1} \\ a_{1,0} & a_{1,1} & \cdots & a_{1,N-1} \\ \vdots & \vdots & & \vdots \\ a_{M-1,0} & a_{M-1,1} & \cdots & a_{M-1,N-1} \end{bmatrix}$$
The representation of an M×N numerical array in MATLAB, whose indexing starts at 1:
$$f(x,y) = \begin{bmatrix} f(1,1) & f(1,2) & \cdots & f(1,N) \\ f(2,1) & f(2,2) & \cdots & f(2,N) \\ \vdots & \vdots & & \vdots \\ f(M,1) & f(M,2) & \cdots & f(M,N) \end{bmatrix}$$
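The off-by-one between the two conventions is easy to check in code. A NumPy array follows the 0-based textbook notation, while MATLAB indices run from 1; the 3×4 array here is only for illustration:

```python
import numpy as np

f = np.arange(12).reshape(3, 4)   # a small 3x4 "image"
print(f[0, 0])                    # origin f(0,0) in the 0-based textbook notation
# The same top-left element is addressed as f(1,1) in MATLAB,
# whose indices run 1..M and 1..N.
```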
The discrete intensity interval is [0, L−1], where $L = 2^k$.
The number b of bits required to store an M×N digitized image is $b = M \times N \times k$.
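For example, a 1024×1024 image quantized to k = 8 bits requires b = 1024 × 1024 × 8 = 8,388,608 bits:

```python
M, N, k = 1024, 1024, 8     # a 1024x1024 image with 8 bits per pixel
b = M * N * k               # b = 8,388,608 bits
print(b // 8, "bytes")      # 1,048,576 bytes (1 MB)
print(2 ** k, "levels")     # L = 2^k = 256 intensity levels
```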
As Fig. 2.18 shows, there are three basic ways to represent f(x,y).
Figure 2.18(a) is a plot of the function, with two axes determining spatial location and the third axis being the values of f (intensities) as a function of the two spatial variables x and y. Although we can infer the structure of the image in this example by looking at the plot, complex images generally are too detailed and difficult to interpret from such plots. This representation is useful when working with gray-scale sets whose elements are expressed as triplets of the form (x,y,z), where x and y are spatial coordinates and z is the value of f at coordinates (x,y).
The representation in Fig. 2.18(b) is much more common. It shows f(x,y) as it would appear on a monitor or photograph. Here, the intensity of each point is proportional to the value of f at that point. In this figure, there are only three equally spaced intensity values. If the intensity is normalized to the interval [0, 1], then each point in the image has the value 0, 0.5, or 1. A monitor or printer simply converts these three values to black, gray, or white, respectively, as Fig. 2.18(b) shows.
The third representation is simply to display the numerical values of f(x,y) as an array (matrix). In this example, f is of size 600×600 elements, or 360,000 numbers. Clearly, printing the complete array would be cumbersome and convey little information. When developing algorithms, however, this representation is quite useful when only parts of the image are printed and analyzed as numerical values. Figure 2.18(c) conveys this concept graphically.
Numerical arrays are used for processing and algorithm development; in equation form, an M×N numerical array is written as shown above.
The origin of a digital image is at the top left, with the positive x-axis extending downward and the positive y-axis extending to the right. This is a conventional representation based on the fact that many image displays (e.g., TV monitors) sweep an image starting at the top left and moving to the right one row at a time. More important is the fact that the first element of a matrix is by convention at the top left of the array, so choosing the origin of f(x,y) at that point makes sense mathematically. Keep in mind that this representation is the standard right-handed Cartesian coordinate system with which you are familiar; we simply show the axes pointing downward and to the right, instead of to the right and up.
We define the dynamic range of an imaging system to be the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. As a rule, the upper limit is determined by saturation and the lower limit by noise (see Fig. 2.19). Basically, dynamic range establishes the lowest and highest intensity levels that a system can represent and, consequently, that an image can have.
Closely associated with this concept is image contrast, which we define as the difference in intensity between the highest and lowest intensity levels in an image. When an appreciable number of pixels in an image have a high dynamic range, we can expect the image to have high contrast. Conversely, an image with low dynamic range typically has a dull, washed-out gray look.
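Under these definitions, both quantities reduce to simple array operations. A minimal sketch follows; the sample values are arbitrary, and the epsilon guard against a zero minimum is our own addition:

```python
import numpy as np

def dynamic_range(img, eps=1e-12):
    """Ratio of the maximum measurable to the minimum detectable intensity."""
    return img.max() / max(img.min(), eps)   # eps guards against a zero minimum

def contrast(img):
    """Difference between the highest and lowest intensity levels."""
    return img.max() - img.min()

img = np.array([[12.0, 240.0], [30.0, 200.0]])
print(dynamic_range(img))   # 20.0
print(contrast(img))        # 228.0
```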
When an image can have $2^k$ intensity levels, it is common practice to refer to it as a "k-bit image." An image with 256 possible intensity values, for example, is an 8-bit image.
Spatial and Intensity Resolution
Spatial resolution can be stated in a number of ways, with line pairs per unit distance and dots (pixels) per unit distance being among the most common measures. Suppose that we construct a chart with alternating black and white vertical lines, each of width W units (W can be less than 1). The width of a line pair is thus 2W, and there are 1/(2W) line pairs per unit distance; for example, if W = 0.1 mm, there are 5 line pairs per mm.
A widely used definition of image resolution is the largest number of discernible line pairs per unit distance (e.g., 100 line pairs per mm). Dots per unit distance is a measure of image resolution used commonly in the printing and publishing industry; in the U.S., this measure usually is expressed as dots per inch (dpi).
Similarly, intensity resolution is defined as the number of bits used to quantize intensity.
Image Interpolation
Interpolation is a basic tool used extensively in tasks such as zooming, shrinking, rotating, and geometric corrections. Our focus here is on image resizing (shrinking and zooming), which are basically image resampling methods.
Fundamentally, interpolation is the process of using known data to estimate values at unknown locations.
The simplest method is nearest neighbor interpolation, so called because it assigns to each new location the intensity of its nearest neighbor in the original image; a sketch follows.
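A minimal sketch of nearest neighbor resizing in Python/NumPy; the floor-based index mapping is one common convention, and the function name is ours:

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Resample img to new_h x new_w: each output pixel takes the
    intensity of its nearest neighbor in the original image."""
    h, w = img.shape
    # Map each output coordinate back to a source coordinate (floor convention).
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[np.ix_(rows, cols)]

img = np.arange(16).reshape(4, 4)
print(resize_nearest(img, 8, 8))   # 2x zoom: each original pixel appears 2x2 times
```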