Vector Quantization Techniques

Vector quantization is a lossy data compression technique that quantizes blocks of data instead of single samples. It works by mapping input vectors to codevectors in a codebook. The encoder finds the closest codevector match and transmits the index. At the decoder, the index is used to reconstruct the codevector. VQ performance depends on codebook size and vector size, with larger codebooks and vectors providing better performance at the cost of complexity.
VECTOR QUANTIZATION

Vector Quantization
• It is a block-coding technique that quantises blocks of
data instead of single samples.

• VQ is a lossy data-compression scheme based on the
principle of block coding.

• It is divided into two parts:

 Encoding

 Decoding
Vector quantisation
• At the encoder, the input image is partitioned into a set of non-
overlapping image blocks
• The closest code word in the code book is then found for each image
block.
• Here, the closest code word for a given block is the one in the code book
that has the minimum squared Euclidean distance from the input block.
• Next, the corresponding index for each searched closest code word is
transmitted to the decoder.
• Compression is achieved because only the indices of the closest code words
in the code book are sent to the decoder, instead of the image blocks
themselves.
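The closest-code-word search can be sketched in a few lines of Python; the 2-D code words below are illustrative values, not taken from the text.

```python
def squared_distance(u, v):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def nearest_index(block, codebook):
    """Index of the code word with minimum squared distance to the block."""
    return min(range(len(codebook)),
               key=lambda i: squared_distance(block, codebook[i]))

# Illustrative 2-D code words (assumed values for demonstration only)
codebook = [(2, 14), (6, 14), (10, 14), (14, 14)]
print(nearest_index((6, 8), codebook))  # -> 1, i.e. code word (6, 14)
```

Only this index (here, 1) would be transmitted, not the block itself.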
Vector quantisation
• The goal of VQ code-book generation is to find an optimal code book
that yields the lowest possible distortion when compared with all
other code books of the same size.
• VQ performance is directly proportional to the code book size and the
vector size.
• The encoder searches the code book and attempts to minimise the
distortion between the original image block and the chosen vector from
the code book.
• The search complexity increases with the number of vectors in the
code book
• To minimise the search complexity, the tree search VQ scheme was
introduced.
Vector quantisation
• Vector quantisation is a lossy data-compression scheme based on
the principles of block coding.

• Each vector is called a code vector or a code word. The set of all
code words is called a code book.

• Each input vector can be associated with the index of a code word,
and this index is transferred instead of the original vector. The index
can be decoded to recover the code word.
• The number of code vectors (N) depends upon two parameters:
rate (R) and dimension (L). The number of code vectors is
calculated through the following formula:

Number of code vectors (N) = 2^(R × L)

where
R → rate (bits/pixel)
L → dimension (grouping size)

When the rate increases, the number of code vectors increases. As the
number of code vectors increases, the size of the code book also
increases.
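The formula can be checked with a one-line sketch (the function name is ours, chosen for illustration):

```python
def num_code_vectors(rate, dimension):
    """N = 2^(R * L): code-book size from rate R (bits/pixel) and dimension L."""
    return 2 ** (rate * dimension)

print(num_code_vectors(2, 2))  # -> 16
print(num_code_vectors(3, 2))  # -> 64: raising the rate grows the code book
```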
STEPS INVOLVED IN VECTOR
QUANTISATION
1. Computation of Dynamic range.
2. Fixing the rate and dimension.
3. Determining the number of code vectors.
4. Determining the code vectors through the centroid
method.
5. Mapping input vectors to code vectors.
6. Adjusting input vector to fall into the code vector.
7. Transmission of indices.
8. Reconstruction of image from the transmitted
indices.
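The steps above can be sketched end to end in Python. The uniform-grid code book and pixel values below are illustrative assumptions, and the mapping is plain nearest-neighbour search rather than the manual "adjustment" described in the worked example:

```python
def group(pixels, dim=2):
    """Step 2: group the flattened pixels into vectors of length dim."""
    return [tuple(pixels[i:i + dim]) for i in range(0, len(pixels), dim)]

def encode(vectors, codebook):
    """Steps 5-7: map each input vector to its nearest code vector's index."""
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return [min(range(len(codebook)), key=lambda i: d2(v, codebook[i]))
            for v in vectors]

def decode(indices, codebook):
    """Step 8: reconstruct from indices using the shared code book."""
    return [codebook[i] for i in indices]

# Illustrative 16-entry grid code book, C00 = (2, 14) ... C15 = (14, 2)
codebook = [(x, y) for y in (14, 10, 6, 2) for x in (2, 6, 10, 14)]
pixels = [2, 4, 6, 8, 10, 11, 16, 15]      # illustrative input pixels
indices = encode(group(pixels), codebook)  # only these are transmitted
print(decode(indices, codebook))           # reconstructed vectors
```

The same `codebook` list must exist at both encoder and decoder; only the small integer indices cross the channel.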
Consider an input image of size 4 × 4.

The dynamic range decides the number of
bits required to represent each pixel.

Here, maximum value = 16
Minimum value = 1
Dynamic range = maximum − minimum
= 16 − 1 = 15
So a minimum of four bits is required to
represent each pixel value.

Choose the dimension as two.
In total, there are 16 elements in the input
matrix. After grouping, we have only
8 vectors instead of sixteen elements.

Four bits are actually necessary to
represent each pixel value. Then, to
achieve compression, the rate is fixed to
be two (R = 2).
The number of code vectors in the code
book is N = 2^(R × L).

Here, R = 2 and L = 2.

Hence the number of code vectors is
N = 2^(R × L) = 2^(2 × 2) = 16.

In total, there are sixteen code vectors in
this example. The code vectors are from
C00 to C15.

Interval = (maximum value − minimum value) / (R × L)
= 15 / 4 ≈ 4
C00, C01, ..., C15 are the code vectors
of the generated code book, and the
corresponding positions are (2, 14),
(6, 14), ..., (14, 2) respectively.
• Mapping of the input vector into the code vector is done through
projection. In our case, there are totally eight input vectors. The projection
of these eight input vectors into the code vector is illustrated by a star
symbol.
• The input vector is adjusted so as to
fall into one of the nearest code
vectors.

The first input image vector (2, 4) is
adjusted to fall into the code vector C08.
The next image vector (6, 8) is projected
into the code vector C09. The third
image vector (10, 11) is adjusted to fall
onto the nearest code vector C06. The
fourth image vector (16, 15) is adjusted
to fall on the nearest code vector C03.
The fifth image vector (9, 3) is adjusted
to fall on the nearest code vector C14.
The sixth image vector (1, 7) is adjusted
to fall on the nearest code vector C08. If
we consider the seventh image vector
(12, 14), there are two possible choices:
it can be adjusted to fall either onto C02
or C03. In this case we have chosen the
code vector C03. Finally, the eighth
image vector (13, 5) is adjusted to fall
on the nearest code vector.
• Code vectors after adjustment
• The indices corresponding to the code vectors are transmitted.
• In the receiver end, the receiver receives only the indices, not the code
vectors. Upon the reception of the indices, the corresponding code vectors
are reconstructed, because the same code book is maintained at the
transmitter and the receiver end.
• The image is reconstructed at the receiver.

Input image and reconstructed image at R = 2


TYPES OF VECTOR QUANTIZATION
• Tree-Search Vector Quantisation (TSVQ)
• Multi-stage Vector Quantisation (MSVQ)
• Separating Mean Vector Quantization or Mean Removed Vector
Quantisation
• Gain-Shape Vector Quantisation
• Classified Vector Quantisation
• Hierarchical Vector Quantisation
• Interpolative Vector Quantisation
• Lapped Vector Quantisation
• Lattice Vector Quantisation
 TREE SEARCH VECTOR QUANTIZATION
• It is a modification of the full-search vector-quantisation technique.
• TSVQ requires fewer searches than full-search VQ.
• A full-search vector quantiser uses an unstructured set of reproduction
vectors and determines the appropriate index for transmission by
computing the distortion between the input vector and each possible
reproduction vector.
• The number of distortion calculations required by the full-search vector
quantiser is O(M), where M is the size of the code book.
• The number of distortion calculations required by tree-structured vector
quantisation is O(log₂ M).
• The fixed-rate tree-structured vector quantisation generally has a higher
distortion than a fixed-rate full-search vector quantisation due to the
constraints on the search.
• The encoder compares the input vector to two possible candidate
reproductions, chooses the one with the minimum distortion, and
advances to the selected node. If the node is not a terminal node of the
tree, the encoder continues and chooses the best available node of the
new pair presented.
• The encoder produces binary symbols to represent its sequence of
left/right decisions. The stored index is then a path map through the tree
to the terminal node, which is associated with the final code word.
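This descent can be sketched for a depth-2 binary tree; the node vectors and tree layout below are illustrative assumptions, not a designed code book.

```python
def d2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def tsvq_encode(x, node):
    """node = (vector, left, right); leaves have left == right == None.
    Returns the path map (left/right bits) and the final code word."""
    bits = []
    while node[1] is not None:          # not yet at a terminal node
        left, right = node[1], node[2]
        if d2(x, left[0]) <= d2(x, right[0]):
            bits.append(0)              # descend left
            node = left
        else:
            bits.append(1)              # descend right
            node = right
    return bits, node[0]

leaf = lambda v: (v, None, None)
tree = ((8, 8),
        ((4, 4),   leaf((2, 2)),   leaf((6, 6))),
        ((12, 12), leaf((10, 10)), leaf((14, 14))))

print(tsvq_encode((5, 5), tree))  # -> ([0, 1], (6, 6))
```

With 4 leaves only 2 levels of comparisons are needed, versus 4 full-search distance computations; the bit list is exactly the path map that is transmitted.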

Advantages:
The main advantage of TSVQ is that the search complexity is
reduced to O(log₂ M) from O(M) for an unstructured code book of size M.

Drawbacks:
The main drawback of TSVQ is the storage requirement. The
performance of TSVQ is, in general, inferior to that of unstructured VQ of the
same rate, since a source vector may not be mapped onto the optimum code
vector because of the imposed structure.
 MULTISTAGE VECTOR QUANTIZATION

• Multistage vector quantisation is also known as residual vector quantisation
or cascaded vector quantisation.
• Multistage vector quantisation divides the encoding task into several stages.
• The first stage performs a relatively crude encoding of the input vector.
Then the second stage operates on the error vector between the original
vector and the quantized first-stage output.
• In the decoder, the reconstruction vectors are obtained using a Look-up
table (LUT).
• The quantized error vector provides a refinement to the first approximation.


• Unlike full-search VQ, whose encoding complexity and memory requirements
increase exponentially with the dimension-rate product, in MSVQ, the increase
is only linear.
• This has particular utility in sub-band coding, since either the rate or the
dimension can be made large, which allows it to respond to the occasional
need for locally high bit-rate sub-band coding.
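A minimal two-stage sketch of this idea follows; both code books are illustrative assumptions.

```python
def d2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def nearest(x, codebook):
    return min(range(len(codebook)), key=lambda i: d2(x, codebook[i]))

stage1 = [(0, 0), (8, 8), (16, 16)]   # crude first-stage code book
stage2 = [(-2, -2), (0, 0), (2, 2)]   # fine residual code book

x = (9, 7)
i1 = nearest(x, stage1)                                   # coarse index
residual = tuple(a - b for a, b in zip(x, stage1[i1]))    # error vector
i2 = nearest(residual, stage2)                            # refinement index
# Decoder: look up both code words (its LUT) and sum them
recon = tuple(a + b for a, b in zip(stage1[i1], stage2[i2]))
print(recon)  # -> (8, 8)
```

Each stage's code book stays small (3 entries here), which is where the linear, rather than exponential, growth in complexity comes from.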
 MEAN REMOVED VECTOR QUANTIZATION

• The mean value is subtracted from the image vectors to yield a residual
vector with zero mean.
• The residual vectors are quantized with VQ and the corresponding indices
are transmitted.
• Reconstruction is performed by adding the mean to the decoded image
vector.
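The mean-removal step can be sketched as follows; the block values are illustrative.

```python
def remove_mean(block):
    """Split a block into its mean and a zero-mean residual vector."""
    m = sum(block) / len(block)
    return m, tuple(v - m for v in block)

block = (10, 12, 14, 16)
mean, residual = remove_mean(block)
print(mean, residual)   # -> 13.0 (-3.0, -1.0, 1.0, 3.0)

# The residual is VQ-coded; the decoder adds the mean back
# (here assuming an exact residual match for simplicity):
recon = tuple(v + mean for v in residual)
print(recon)            # -> (10.0, 12.0, 14.0, 16.0)
```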
 GAIN-SHAPE VECTOR QUANTIZATION

• In gain-shape vector quantisation, the input vectors are decomposed into
a scalar gain term and a gain-normalised vector term, which is generally
termed the shape.
The gain value is given by

g = ||x|| = sqrt( Σ_{i=1}^{k} x_i² )

• The gain factor represents the energy of the input vector.
The shape vector is given by

S = x / g

• The gain term is quantized with a scalar quantizer, while the shape vectors
are represented by a shape code book designed specifically for the
gain-normalised shape vectors.
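The decomposition itself is a one-liner; the input vector below is illustrative.

```python
import math

def gain_shape(x):
    """g = ||x|| (Euclidean norm); shape S = x / g is a unit-norm vector."""
    g = math.sqrt(sum(v * v for v in x))
    return g, tuple(v / g for v in x)

g, s = gain_shape((3, 4))
print(g, s)  # -> 5.0 (0.6, 0.8)
```

The scalar g and the index of the matched shape code word are then coded separately.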
Advantages of Vector Quantisation Over
Scalar Quantisation

• Vector quantisation exploits the linear and non-linear dependence among
vectors.

• It also provides flexibility in choosing multidimensional quantiser cell
shapes and in choosing a desired code-book size.

• The advantage of VQ over SQ is the fractional value of resolution that is
achievable. This is very important for low-bit-rate applications where low
resolution is sufficient.
THANK YOU!
