SOURCE CODING THEOREM
The theorems described thus far establish fundamental limits on error-free communication over both reliable and unreliable channels.
In this section we turn to the case in which the
channel is error free but the communication
process itself is lossy.
Under these circumstances, the principal
function of the communication system is
“information compression”.
The average error introduced by the
compression is constrained to some maximum
allowable level D.
We want to determine the smallest rate at which information about the source can be conveyed to the user.
This problem is specifically addressed by a
branch of information theory known as rate
distortion theory.
[Figure: Communication system block diagram: Information Source → Encoder → Channel → Decoder → Information User]
Let the information source and decoder output be defined by the finite ensembles (A, z) and (B, v), respectively.
The assumption now is that the channel in the figure is error free.
So a channel matrix Q, which relates z to v in accordance with v = Qz, can be thought of as modeling the encoding-decoding process alone.
Because the encoding-decoding process is deterministic, Q defines an artificial zero-memory channel that models the effect of the compression and decompression.
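As a concrete sketch (not from the original slides; the alphabet sizes and probabilities are made up for illustration), the Python snippet below builds a deterministic Q whose columns each contain a single 1 and computes the decoder-output distribution v = Qz:

    import numpy as np

    # Hypothetical 4-symbol source distribution z (one probability per a_j).
    z = np.array([0.4, 0.3, 0.2, 0.1])

    # Deterministic "compression" Q: element q_kj = 1 if source symbol a_j
    # is reproduced as decoder output b_k, 0 otherwise (K = 2 outputs, J = 4 inputs).
    Q = np.array([[1, 1, 0, 0],    # a_1, a_2 -> b_1
                  [0, 0, 1, 1]])   # a_3, a_4 -> b_2

    # Output distribution of the artificial zero-memory channel: v = Qz.
    v = Q @ z
    print(v)                       # [0.7 0.3]

Because each column of Q has exactly one nonzero entry, every source symbol is mapped to exactly one output symbol, which is what "deterministic" means here.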
Each time the source produces source symbol a_j, it is represented by a code symbol that is then decoded to yield output symbol b_k with probability q_kj.
We address the problem of encoding the source so that the average distortion is less than D.
A non-negative cost function ρ(a_j, b_k), called a distortion measure, can be used to define the penalty associated with reproducing source output a_j with decoder output b_k.
The output of the source is random, so the distortion is also a random variable. Its average value, denoted d(Q), is

d(Q) = Σ_j Σ_k ρ(a_j, b_k) P(a_j) q_kj

The notation d(Q) emphasizes that the average distortion is a function of the encoding-decoding procedure.
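A minimal numerical sketch of this average (not in the original slides; the source probabilities, Q, and the 0/1 cost table are all assumed for illustration):

    import numpy as np

    z = np.array([0.4, 0.3, 0.2, 0.1])         # P(a_j), hypothetical source
    Q = np.array([[1, 1, 0, 0],                # q_kj, deterministic as above
                  [0, 0, 1, 1]], dtype=float)

    # Assumed distortion measure rho(a_j, b_k), indexed rho[k, j]:
    # zero cost only for a_1 -> b_1 and a_3 -> b_2, unit cost otherwise.
    rho = np.array([[0.0, 1.0, 1.0, 1.0],
                    [1.0, 1.0, 0.0, 1.0]])

    # d(Q) = sum_j sum_k rho(a_j, b_k) * P(a_j) * q_kj
    d_Q = np.sum(rho * Q * z)                  # z broadcasts over columns j
    print(d_Q)                                 # 0.4 with these made-up numbers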
The set of encoding-decoding procedures whose average distortion does not exceed D is

Q_D = { q_kj | d(Q) ≤ D }

The rate distortion function is then

R(D) = min_(Q ∈ Q_D) I(z, v)

where I(z, v) is the average mutual information between the source and the decoder output.
If D=0, then R(D) ≤ H(z).
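As a standard worked example (not taken from the slides), a binary source with P(a_1) = p under a Hamming (0/1) distortion measure has the closed form R(D) = Hb(p) - Hb(D) for 0 ≤ D ≤ min(p, 1 - p), and R(D) = 0 beyond that; at zero distortion this gives R(0) = H(z). A quick check in Python:

    import numpy as np

    def Hb(p):
        # Binary entropy in bits; clipping avoids log(0) at the endpoints.
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    def R_binary_hamming(p, D):
        # Closed-form rate distortion function of a Bernoulli(p) source
        # under Hamming distortion: Hb(p) - Hb(D) up to D = min(p, 1 - p).
        return max(Hb(p) - Hb(D), 0.0) if D < min(p, 1 - p) else 0.0

    print(R_binary_hamming(0.5, 0.0))    # ~1.0 bit, i.e. R(0) = H(z)
    print(R_binary_hamming(0.5, 0.11))   # ~0.5 bit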
We simply minimize I(z, v) by appropriate choice of Q (that is, of the q_kj) subject to the constraints

q_kj ≥ 0,
Σ_{k=1}^{K} q_kj = 1 for every j, and
d(Q) = D.

The first two constraints are fundamental properties of any channel matrix Q; the third fixes the average distortion at the allowed level D.
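The slides do not say how this constrained minimization is carried out numerically; one common choice (an assumption here, not part of the original material) is the Blahut-Arimoto algorithm, which sweeps a Lagrange-multiplier parameter beta to trace points (D, R(D)) on the curve. A minimal sketch:

    import numpy as np

    def blahut_arimoto(p_a, rho, beta, iters=500):
        # One point on the R(D) curve for source distribution p_a (length J),
        # distortion matrix rho[j, k] (source index first here), and
        # trade-off parameter beta >= 0. Returns (D, R) with R in bits.
        J, K = rho.shape
        q_b = np.full(K, 1.0 / K)              # output marginal, initial guess
        for _ in range(iters):
            # Conditional q(b_k | a_j) minimizing I(z, v) + beta * d(Q) for fixed q_b.
            w = q_b * np.exp(-beta * rho)
            Q = w / w.sum(axis=1, keepdims=True)
            q_b = p_a @ Q                      # updated output marginal
        D = np.sum(p_a[:, None] * Q * rho)                 # average distortion d(Q)
        R = np.sum(p_a[:, None] * Q * np.log2(Q / q_b))    # mutual information I(z, v)
        return D, R

    # Hypothetical check: fair binary source with Hamming distortion, where the
    # closed form R(D) = 1 - Hb(D) is known. Larger beta -> smaller distortion.
    p_a = np.array([0.5, 0.5])
    rho = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
    for beta in (1.0, 2.0, 5.0):
        print(blahut_arimoto(p_a, rho, beta))

Each beta yields one (D, R) pair; sweeping beta from 0 upward traces the whole curve, and for this example beta → ∞ drives D toward 0 and R toward H(z).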
THANK YOU
