DSP Chapter 2
This chapter discusses the representation and characteristics of signals and systems.
"Signal" is a general term for something that conveys information. Mathematically, a signal can be represented as a function of one or more variables.
Although many physical quantities can be regarded as signals, in electrical engineering those quantities are usually transformed into voltages or currents by various transducers, e.g., a microphone for a speech signal, a camera for an image signal, and sensors in IoT devices. In DSP, the original physical units (such as degrees of temperature, meters, kilograms, and so on) are disregarded.
Figure 2.2 (a) Segment of a continuous-time speech signal x_a(t). (b) Sequence of samples x[n] = x_a(nT) obtained from the signal in part (a), with T = 125 µs.
Basic sequences and operations
The unit sample sequence:
δ[n] = 0 for n ≠ 0, and δ[n] = 1 for n = 0 — the same form as the Kronecker delta δ_ij = 0 if i ≠ j, 1 if i = j.
It is therefore known as the Kronecker delta function. Its role in DSP is completely analogous to that of the Dirac delta function δ(t) in continuous-time signals and systems. It is therefore referred to as the discrete-time impulse, or simply the impulse. But, unlike δ(t), which is a generalized function, δ[n] is an ordinary function.
Fig. 2.4 Example of a sequence to be represented as a sum of scaled, delayed impulses
The unit step sequence:
u[n] = 1 for n ≥ 0, and 0 for n < 0.
It can also be represented as u[n] = Σ_{k=−∞}^{n} δ[k], a running sum of impulses — an interesting representation.
Conversely, we have
δ[n] = u[n] − u[n − 1]
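These impulse/step relations are easy to check numerically; a minimal sketch in NumPy (the index range −5…5 is chosen for illustration):

```python
import numpy as np

n = np.arange(-5, 6)
delta = (n == 0).astype(int)          # unit impulse delta[n]
u = (n >= 0).astype(int)              # unit step u[n]

# u[n] = sum_{k=-inf}^{n} delta[k]  (running sum of the impulse)
u_from_delta = np.cumsum(delta)

# delta[n] = u[n] - u[n-1]  (first difference of the step; u = 0 off the grid)
delta_from_u = np.diff(np.concatenate(([0], u)))

print(np.array_equal(u, u_from_delta))      # True
print(np.array_equal(delta, delta_from_u))  # True
```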
◼ The exponential sequence
x[n] = Aαⁿ
For 0 < α < 1, x[n] decays exponentially to the right.
For α > 1, it grows exponentially to the right.
For α < 0, it alternates between positive and negative values.
For complex A = |A|e^{jφ} and α = |α|e^{jω₀},
x[n] = Aαⁿ = |A|e^{jφ} |α|ⁿ e^{jω₀n}
     = |A||α|ⁿ e^{j(ω₀n + φ)}
     = |A||α|ⁿ cos(ω₀n + φ) + j|A||α|ⁿ sin(ω₀n + φ)
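A short numerical sketch of the two forms above (the values of |A|, φ, |α|, and ω₀ are illustrative):

```python
import numpy as np

n = np.arange(0, 20)
A = 2.0 * np.exp(1j * 0.5)        # |A| = 2,   phi = 0.5
alpha = 0.9 * np.exp(1j * 0.3)    # |alpha| = 0.9 (decaying), w0 = 0.3

x = A * alpha**n                  # x[n] = A * alpha^n

# Polar form: x[n] = |A| |alpha|^n e^{j(w0 n + phi)}
x_polar = 2.0 * 0.9**n * np.exp(1j * (0.3 * n + 0.5))
print(np.allclose(x, x_polar))    # True
```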
(1) For the continuous-time sinusoid x(t) = cos(ω₀t + φ), the period is T = 2π/ω₀.
(2) x(t) is always a periodic signal in t. This property is generally not true for x[n]. For x[n] to be periodic with period N, the following condition must be satisfied for some integer N and integer k: ω₀N = 2πk.
(3) For x(t), the period decreases as ω₀ increases. This is not true for x[n], because x[n] is a periodic function of ω₀ with period 2π.
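The condition ω₀N = 2πk can be checked numerically; a small sketch (the helper name, sample frequencies, and tolerance-based test are assumptions for illustration):

```python
import numpy as np

def is_periodic(w0, N):
    """True if cos(w0*n) repeats with period N, i.e. w0*N = 2*pi*k, k integer."""
    k = w0 * N / (2 * np.pi)
    return bool(np.isclose(k, np.round(k)))

# w0 = 2*pi/8: periodic with N = 8 (k = 1).
print(is_periodic(2 * np.pi / 8, 8))                      # True

# w0 = 1 rad/sample: N/(2*pi) is never an integer, so never periodic.
print(any(is_periodic(1.0, N) for N in range(1, 100)))    # False
```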
2.2 Discrete-time Systems
y[n] = T{x[n]}
For memoryless systems, y[n] depends only on x[n] for every n, e.g., y[n] = (x[n])² and y[n] = 3x[n].
◼ A linear system satisfies
T{x₁[n] + x₂[n]} = T{x₁[n]} + T{x₂[n]}
and
T{ax[n]} = aT{x[n]}.
◼ The first equation is referred to as the additivity property, while the second is the homogeneity or scaling property.
◼ They can be combined into one (the superposition principle):
T{ax₁[n] + bx₂[n]} = aT{x₁[n]} + bT{x₂[n]}
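The combined superposition property can be tested numerically; a sketch comparing a linear system with a nonlinear one (the example systems and random inputs are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(8), rng.standard_normal(8)
a, b = 2.0, -1.5

def linear(x):     # y[n] = 3 x[n]  (linear)
    return 3 * x

def square(x):     # y[n] = (x[n])^2  (memoryless but nonlinear)
    return x**2

# Superposition holds for the linear system ...
print(np.allclose(linear(a * x1 + b * x2),
                  a * linear(x1) + b * linear(x2)))   # True
# ... and fails for the squarer.
print(np.allclose(square(a * x1 + b * x2),
                  a * square(x1) + b * square(x2)))   # False
```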
2.2.5 Stability
y[n] = Σ_{k=−∞}^{∞} x[k]h[n − k]
Here, the second equality is based on the linearity property, and the final one on the time-invariance property.
Figure 2.8 Representation of the output of an LTI system as the superposition of responses to
individual samples of the input.
y[n] = Σ_{k=−∞}^{∞} x[k]h[n − k]   (2.49)
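The convolution sum (2.49) can be evaluated directly for finite-length sequences; a sketch using a naive double loop, checked against NumPy's np.convolve (sequences chosen for illustration):

```python
import numpy as np

def conv_sum(x, h):
    """Direct evaluation of y[n] = sum_k x[k] h[n-k] for finite sequences."""
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):      # h[n-k] is zero outside its support
                y[n] += x[k] * h[n - k]
    return y

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, -1.0])
print(conv_sum(x, h))                                   # [ 1.  1.  1. -3.]
print(np.allclose(conv_sum(x, h), np.convolve(x, h)))   # True
```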
(3) Parallel connection of h₁[n] and h₂[n]: h[n] = h₁[n] + h₂[n]
The expression above was modified using h[n − k] = 0.
◼ The concept of causality can also be applied to signals by defining a causal signal: x[n] is a causal sequence iff x[n] = 0 for n < 0.
y[n] = (1/(M₁ + M₂ + 1)) Σ_{k=−M₁}^{M₂} x[n − k]
The following systems are LTI, and their impulse responses are given below:
◼ Ideal delay: h[n] = δ[n − n_d], where n_d is a fixed positive integer.
◼ Moving average:
h[n] = (1/(M₁ + M₂ + 1)) Σ_{k=−M₁}^{M₂} δ[n − k] = 1/(M₁ + M₂ + 1) for −M₁ ≤ n ≤ M₂, and 0 otherwise.
◼ Accumulator: h[n] = Σ_{k=0}^{∞} δ[n − k] = u[n]. It is an infinite impulse response system.
If h[n] is non-zero only over a finite duration, the system is called an FIR (finite impulse response) system; otherwise, it is an IIR (infinite impulse response) system.
Σ_{k=0}^{N} a_k y[n − k] = Σ_{m=0}^{M} b_m x[n − m]
◼ The accumulator can be expressed in the following LCCDE form:
y[n] = Σ_{k=−∞}^{n} x[k] = y[n − 1] + x[n]
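The two forms of the accumulator — running sum and first-order recursion — can be compared directly; a sketch with an illustrative input (zero initial condition, i.e., x[n] = 0 for n < 0):

```python
import numpy as np

x = np.array([1.0, 2.0, -1.0, 4.0])

# Running-sum form: y[n] = sum_{k <= n} x[k]
y_sum = np.cumsum(x)

# Recursive (LCCDE) form: y[n] = y[n-1] + x[n], with y[-1] = 0
y_rec = np.zeros_like(x)
prev = 0.0
for n in range(len(x)):
    y_rec[n] = prev + x[n]
    prev = y_rec[n]

print(y_sum)                         # [1. 3. 2. 6.]
print(np.array_equal(y_sum, y_rec))  # True
```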
y[n] = (1/(M₁ + M₂ + 1)) Σ_{k=−M₁}^{M₂} x[n − k]
The homogeneous equation is Σ_{k=0}^{N} a_k y[n − k] = 0. Solve Σ_{k=0}^{N} a_k z^{−k} = 0 to obtain N roots z_m, m = 1, …, N, and then form
y_h[n] = Σ_{m=1}^{N} A_m z_mⁿ
Since y[n] has the same form as x[n], e^{jωn} is an eigenfunction with eigenvalue H(e^{jω}).
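The eigenfunction property can be illustrated with a short FIR example (the 3-point averager and the frequency ω = 0.4 are chosen for illustration; the first samples are skipped because the finite input starts at n = 0):

```python
import numpy as np

w = 0.4
h = np.array([1 / 3, 1 / 3, 1 / 3])   # 3-point moving average
n = np.arange(0, 30)
x = np.exp(1j * w * n)                # complex exponential input

y = np.convolve(x, h)[:len(n)]        # LTI output (full conv, keep first part)

# Frequency response at w: H(e^{jw}) = sum_k h[k] e^{-jwk}
H = np.sum(h * np.exp(-1j * w * np.arange(3)))

# Past the start-up transient, y[n] = H(e^{jw}) * e^{jwn}.
print(np.allclose(y[2:], H * x[2:]))  # True
```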
Figure 2.17 Ideal lowpass filter showing (a) periodicity of the frequency response and (b) one
period of the periodic frequency response.
Figure 2.18 Ideal frequency-selective filters. (a) Highpass filter. (b) Bandstop filter. (c)
Bandpass filter. In each case, the frequency response is periodic with period 2 π. Only one period
is shown.
Example 2.16: Moving-average system
H(e^{jω}) = (1/(M₁ + M₂ + 1)) Σ_{n=−M₁}^{M₂} e^{−jωn}   (2.121)
For M₁ = 0,
H(e^{jω}) = (1/(M₂ + 1)) Σ_{n=0}^{M₂} e^{−jωn}   (2.122)
H(e^{jω}) = (1/(M₂ + 1)) · (1 − e^{−jω(M₂+1)}) / (1 − e^{−jω})
         = (1/(M₂ + 1)) · (e^{jω(M₂+1)/2} − e^{−jω(M₂+1)/2}) e^{−jω(M₂+1)/2} / ((e^{jω/2} − e^{−jω/2}) e^{−jω/2})   (2.123)
         = (1/(M₂ + 1)) · [sin(ω(M₂ + 1)/2) / sin(ω/2)] e^{−jωM₂/2}
From Eq. (2.123), we find that it is a linear-phase lowpass filter.
◼ The example for M₂ = 4 is shown below:
Figure 2.19 (a) Magnitude and (b) phase of the frequency response of the moving-average system for the case M₁ = 0 and M₂ = 4.
◼ For M₁ = M₂, it becomes
H(e^{jω}) = (1/(2M₂ + 1)) · sin(ω(2M₂ + 1)/2) / sin(ω/2)   (2.124)
It is real-valued.
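The closed form (2.123) can be verified against the direct DTFT sum (2.122); a sketch for the M₁ = 0, M₂ = 4 case (the frequency grid avoids ω = 0, where the closed form is 0/0):

```python
import numpy as np

M2 = 4
w = np.linspace(0.1, np.pi, 50)   # avoid w = 0 (removable singularity)

# Direct sum, Eq. (2.122)
H_sum = sum(np.exp(-1j * w * n) for n in range(M2 + 1)) / (M2 + 1)

# Closed form, Eq. (2.123): sin-ratio magnitude with linear phase -w*M2/2
H_closed = (np.sin(w * (M2 + 1) / 2) / np.sin(w / 2)
            * np.exp(-1j * w * M2 / 2) / (M2 + 1))

print(np.allclose(H_sum, H_closed))   # True
```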
2.6.2 Suddenly Applied Complex Exponential Inputs
In a practical realization of a filter, the input signal cannot be applied starting from −∞ because, in that case, we cannot compute the response. So we always consider applying the input starting at n = 0, e.g., x[n] = e^{jωn}u[n]; the corresponding output response is
y[n] = 0 for n < 0, and y[n] = (Σ_{k=0}^{n} h[k]e^{−jωk}) e^{jωn} for n ≥ 0.
For n ≥ 0,
y[n] = (Σ_{k=0}^{∞} h[k]e^{−jωk}) e^{jωn} − (Σ_{k=n+1}^{∞} h[k]e^{−jωk}) e^{jωn}   (2.126)
     = H(e^{jω}) e^{jωn} − (Σ_{k=n+1}^{∞} h[k]e^{−jωk}) e^{jωn}   (2.127)
     = y_ss[n] + y_t[n]
where y_ss[n] is referred to as the steady-state response and y_t[n] is the transient response.
◼ Under some conditions, y_t[n] → 0 as n → ∞. This is determined by
|y_t[n]| = |e^{jωn} Σ_{k=n+1}^{∞} h[k]e^{−jωk}| ≤ Σ_{k=n+1}^{∞} |h[k]|   (2.128)
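Bound (2.128) shows the transient dies out whenever h[n] is absolutely summable; a sketch with an assumed example h[n] = 0.5ⁿ u[n], whose tail sum is a closed-form geometric tail:

```python
def transient_bound(n):
    """Bound (2.128) for h[k] = 0.5**k: sum_{k=n+1}^{inf} 0.5**k = 0.5**n."""
    return 0.5 ** (n + 1) / (1 - 0.5)

# The bound starts at 1 and decays geometrically, so y_t[n] -> 0.
print(transient_bound(0))           # 1.0
print(transient_bound(20) < 1e-5)   # True
```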
◼ The following figure shows that the “missing” samples (i.e., x[n] for n < 0) have less and less effect as n increases.
|H(e^{jω})| = |Σ_{k=−∞}^{∞} h[k]e^{−jωk}| ≤ Σ_{k=−∞}^{∞} |h[k]|
x[n] = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω   (2.130)
X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}   (2.131)
X(e^{jω}) = X_R(e^{jω}) + jX_I(e^{jω})   (2.132a), in rectangular form
         = |X(e^{jω})| e^{j∠X(e^{jω})}   (2.132b), in polar form
The sufficient condition for the Fourier transform of a signal to exist is that the signal is “absolutely summable”, i.e.,
|X(e^{jω})| = |Σ_{n=−∞}^{∞} x[n]e^{−jωn}| ≤ Σ_{n=−∞}^{∞} |x[n]| < ∞
A looser condition is “square summability”, i.e., Σ_{n=−∞}^{∞} |x[n]|² < ∞. This leads to
lim_{M→∞} ∫_{−π}^{π} |X(e^{jω}) − X_M(e^{jω})|² dω = 0   (2.138)
where X_M(e^{jω}) = Σ_{n=−M}^{M} x[n]e^{−jωn}. In other words, X(e^{jω}) exists in the mean-square sense.
◼ Gibbs phenomenon: as we enlarge the window size M, the ripple never disappears, even if we let the window size approach infinity. Moreover, the amplitude of the largest ripple near the discontinuity does not decrease.
2.8 Symmetry Properties of the Fourier Transform
Any sequence can be decomposed as x[n] = x_e[n] + x_o[n]   (2.149a)
where
x_e[n] = ½(x[n] + x*[−n]) = x_e*[−n]   (2.149b)
x_o[n] = ½(x[n] − x*[−n]) = −x_o*[−n]   (2.149c)
◼ For a real sequence, x_e[n] = x_e[−n] is referred to as an even sequence, while x_o[n] = −x_o[−n] is an odd sequence.
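A sketch of the decomposition for a finite complex sequence stored on indices n = −3…3 (the sample values are illustrative; reversing the array implements n → −n):

```python
import numpy as np

# x[n] on n = -3..3 (array index 3 holds n = 0)
x = np.array([1 - 2j, 0.5j, 2.0, 3 + 1j, -1.0, 0.0, 4j])

xr = np.conj(x[::-1])      # x*[-n]
xe = 0.5 * (x + xr)        # conjugate-symmetric part, Eq. (2.149b)
xo = 0.5 * (x - xr)        # conjugate-antisymmetric part, Eq. (2.149c)

print(np.allclose(xe + xo, x))              # True: x = xe + xo
print(np.allclose(xe, np.conj(xe[::-1])))   # True: xe[n] =  xe*[-n]
print(np.allclose(xo, -np.conj(xo[::-1])))  # True: xo[n] = -xo*[-n]
```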
Similarly, X(e^{jω}) = X_e(e^{jω}) + X_o(e^{jω})   (2.150a)
X_e(e^{jω}) = ½[X(e^{jω}) + X*(e^{−jω})]   (2.150b)
X_o(e^{jω}) = ½[X(e^{jω}) − X*(e^{−jω})]   (2.150c)
Note that Eq. (2.150) is not the Fourier transform version of Eq. (2.149):
x_e[n] ↔ X_e(e^{jω})  (✗)
x_o[n] ↔ X_o(e^{jω})  (✗)
Symmetry properties of the Fourier transform, with X(e^{jω}) = Σ_{n=−∞}^{∞} x[n]e^{−jωn}:
x[n] ↔ X(e^{jω})
(1) x*[n] ↔ X*(e^{−jω})
(2) x*[−n] ↔ X*(e^{jω})
(3) Re{x[n]} ↔ X_e(e^{jω})
(4) j Im{x[n]} ↔ X_o(e^{jω})
(5) x_e[n] ↔ X_R(e^{jω})
(1) x*[n] ↔ X*(e^{−jω})
x[n] = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω
x*[n] = (1/2π) ∫_{−π}^{π} X*(e^{jω}) e^{−jωn} dω
Changing variable by ω′ = −ω, we obtain
x*[n] = −(1/2π) ∫_{π}^{−π} X*(e^{−jω′}) e^{jω′n} dω′
      = (1/2π) ∫_{−π}^{π} X*(e^{−jω′}) e^{jω′n} dω′
Hence x*[n] ↔ X*(e^{−jω}).
(2) x*[−n] ↔ X*(e^{jω})
x[n] = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω
x*[−n] = (1/2π) ∫_{−π}^{π} X*(e^{jω}) e^{−jω(−n)} dω = (1/2π) ∫_{−π}^{π} X*(e^{jω}) e^{jωn} dω
Hence x*[−n] ↔ X*(e^{jω}).
(3) Re{x[n]} ↔ X_e(e^{jω})
Re{x[n]} = ½(x[n] + x*[n]) ↔ ½(X(e^{jω}) + X*(e^{−jω})) = X_e(e^{jω})
(4) j Im{x[n]} ↔ X_o(e^{jω})
j Im{x[n]} = ½(x[n] − x*[n]) ↔ ½(X(e^{jω}) − X*(e^{−jω})) = X_o(e^{jω})
(5) x_e[n] ↔ X_R(e^{jω})
X_R(e^{jω}) = ½(X(e^{jω}) + X*(e^{jω})) ↔ ½(x[n] + x*[−n]) = x_e[n]
(6) x_o[n] ↔ jX_I(e^{jω})
Example 2.21
x[n] = aⁿu[n] ↔ X(e^{jω}) = 1/(1 − ae^{−jω}), if |a| < 1
Fig.2.22: Frequency response for a system with impulse response h[n] = a n u[n] . (a) Real part. a > 0;
a = 0.75 (solid curve) and a = 0.5 (dashed curve). (b) Imaginary part. (c) Magnitude. a > 0; a =
0.75 (solid curve) and a = 0.5 (dashed curve). (d) Phase.
2.9 Fourier Transform Theorems
Linearity: ax₁[n] + bx₂[n] ↔ aX₁(e^{jω}) + bX₂(e^{jω})
Time shifting: x[n − n_d] ↔ e^{−jωn_d} X(e^{jω})
Frequency shifting: e^{jω₀n} x[n] ↔ X(e^{j(ω−ω₀)})
Proof of the time-shifting property:
x[n − n_d] = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jω(n−n_d)} dω = (1/2π) ∫_{−π}^{π} (X(e^{jω}) e^{−jωn_d}) e^{jωn} dω
Differentiation in the frequency domain:
nx[n] ↔ j dX(e^{jω})/dω
From X(e^{jω}) = Σ_{n=−∞}^{∞} x[n]e^{−jωn},
dX(e^{jω})/dω = Σ_{n=−∞}^{∞} x[n] d(e^{−jωn})/dω = Σ_{n=−∞}^{∞} x[n](−jn)e^{−jωn}
so multiplying both sides by j gives the transform of nx[n].
Parseval’s theorem:
E = Σ_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{−π}^{π} |X(e^{jω})|² dω
|X(e^{jω})|² is called the energy density spectrum, and it is defined only for finite-energy signals.
Proof:
Σ_{n=−∞}^{∞} |x[n]|² = Σ_{n=−∞}^{∞} x[n]x*[n]
= Σ_{n=−∞}^{∞} x[n] (1/2π) ∫_{−π}^{π} X*(e^{jω}) e^{−jωn} dω
= (1/2π) ∫_{−π}^{π} X*(e^{jω}) Σ_{n=−∞}^{∞} x[n] e^{−jωn} dω
= (1/2π) ∫_{−π}^{π} X*(e^{jω}) X(e^{jω}) dω
= (1/2π) ∫_{−π}^{π} |X(e^{jω})|² dω
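Parseval’s relation can be spot-checked with the DFT, which samples X(e^{jω}) for a finite-length sequence; for length N the relation becomes Σ|x[n]|² = (1/N)Σ|X[k]|² (random data chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(64)

E_time = np.sum(np.abs(x) ** 2)                      # energy in time domain
E_freq = np.sum(np.abs(np.fft.fft(x)) ** 2) / len(x) # energy from DFT samples

print(np.isclose(E_time, E_freq))   # True
```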
Convolution theorem:
y[n] = Σ_{k=−∞}^{∞} x[k]h[n − k] = x[n] * h[n]  ↔  Y(e^{jω}) = X(e^{jω})H(e^{jω})   (2.163)
Note that, by duality, a product of sequences in the time domain corresponds to a periodic convolution integral of their transforms.
y[n] = Σ_{k=−∞}^{∞} x[k]h[n − k] = x[n] * h[n]  ↔  Y(e^{jω}) = X(e^{jω})H(e^{jω})
Y(e^{jω}) = Σ_{n=−∞}^{∞} Σ_{k=−∞}^{∞} x[k]h[n − k] e^{−jωn}
= Σ_{k=−∞}^{∞} x[k] Σ_{n=−∞}^{∞} h[n − k] e^{−jωn}   (change variable: n′ = n − k)
= Σ_{k=−∞}^{∞} x[k] Σ_{n′=−∞}^{∞} h[n′] e^{−jω(n′+k)}
= (Σ_{k=−∞}^{∞} x[k]e^{−jωk}) (Σ_{n′=−∞}^{∞} h[n′]e^{−jωn′}) = X(e^{jω})H(e^{jω})
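The convolution theorem in action: multiplying the transforms (here sampled via a zero-padded FFT) and inverting matches the direct convolution sum (the short sequences are illustrative):

```python
import numpy as np

x = np.array([1.0, 2.0, 0.5])
h = np.array([1.0, -1.0, 0.25])

N = len(x) + len(h) - 1                  # length of the linear convolution
Y = np.fft.fft(x, N) * np.fft.fft(h, N)  # Y = X * H at N frequency samples
y_fft = np.real(np.fft.ifft(Y))          # back to the time domain

print(np.allclose(y_fft, np.convolve(x, h)))   # True
```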
y[n] = Σ_{k=−∞}^{∞} x[k]h[n − k] = x[n] * h[n]  ↔  Y(e^{jω}) = X(e^{jω})H(e^{jω})
Conversely, starting from x[n] = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω,
(1/2π) ∫_{−π}^{π} X(e^{jω})H(e^{jω}) e^{jωn} dω = (1/2π) ∫_{−π}^{π} (Σ_{k=−∞}^{∞} x[k]e^{−jωk}) H(e^{jω}) e^{jωn} dω
= Σ_{k=−∞}^{∞} x[k] (1/2π) ∫_{−π}^{π} H(e^{jω}) e^{jω(n−k)} dω
= Σ_{k=−∞}^{∞} x[k]h[n − k] = y[n]
For X(e^{jω}) = 2πδ(ω) over one period, the inverse transform gives
(1/2π) ∫_{−π}^{π} 2πδ(ω) e^{jωn} dω = ∫_{−π}^{π} δ(ω) e^{j0·n} dω = 1
2.10 Discrete-time Random Signals
In the real world, many physical signals are so complicated that we cannot describe their variations precisely. In many cases, the statistics of a signal are good enough to represent its characteristics. Mathematically, we treat such signals as random processes or stochastic processes. A random process is an ensemble of time functions (for the case where time is the independent variable). Each time function is a realization of the random process, and a probability is assigned to represent how likely each time function is to appear.
For a fixed time, the values of all time functions of a random process form a random variable, and a probability distribution is assigned to that random variable. For multiple times, the values of all the time functions form many random variables, so a joint probability distribution is needed to describe them.
To completely specify (or describe) a random process, we need to know the joint probability density function (pdf) over all times, but this is practically impossible. In practice, we instead work with certain average (or expectation) values of the assumed joint pdf. The autocorrelation or autocovariance sequence of a random process is a commonly used average function.
where
c_hh[l] = h[l] * h[−l] = Σ_{k=−∞}^{∞} h[k]h[l + k]   (2.188)
Φ_yy(e^{jω}) = C_hh(e^{jω}) Φ_xx(e^{jω})   (2.189)
where
C_hh(e^{jω}) = H(e^{jω})H*(e^{jω}) = |H(e^{jω})|²
E{y²[n]} = φ_yy[0] = (1/2π) ∫_{−π}^{π} |H(e^{jω})|² Φ_xx(e^{jω}) dω   (2.192)
= total average power in the output
The cross power spectrum satisfies Φ_yx(e^{jω}) = H(e^{jω}) Φ_xx(e^{jω}); if x[n] is white noise with variance σ_x², this becomes Φ_yx(e^{jω}) = σ_x² H(e^{jω}).
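Eq. (2.192) can be checked by simulation: white noise with variance σ² through an FIR filter h gives output power σ²Σh[k]² (since (1/2π)∫|H|²dω = Σh[k]² by Parseval). The filter taps, noise variance, and sample count below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 1.0
x = rng.normal(0.0, np.sqrt(sigma2), 200_000)   # white noise, variance sigma2
h = np.array([0.5, 0.3, -0.2])                  # illustrative FIR filter

y = np.convolve(x, h, mode="valid")             # filter output
power_sim = np.mean(y**2)                       # simulated E{y^2[n]}
power_theory = sigma2 * np.sum(h**2)            # 0.25 + 0.09 + 0.04 = 0.38

print(abs(power_sim - power_theory) < 0.01)     # True
```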
Homework
Understand the two interpretations of the convolution-sum operation.