
Chapter 2: Discrete-time Signals and Systems

This chapter discusses the representation and characteristics of signals and systems.

◆ A signal is, in general terms, something that conveys information. Mathematically, it can be represented as a function of one or more variables.

◆ Although many physical quantities can be regarded as signals, in electrical engineering those quantities are usually transformed into voltage or current by various transducers, e.g. a microphone for speech signals, a camera for image signals, and sensors in IoT devices. In DSP, the original units (such as degrees of temperature, meters, kilograms, and so on) are disregarded.

◆ If the independent variable of a signal is the continuum of time, it is a continuous-time signal. It is also referred to as an analog signal.

◆ A discrete-time signal is defined on the independent variable of discrete times. Hence, it is a sequence of numbers.
◆ The amplitude of a signal can be either continuous or discrete. Digital signals have discrete amplitude and are defined at discrete times.
◆ For convenience, we always treat digital signals as discrete-time signals (with continuous amplitude) in all the following discussions, except in real implementations. This treatment makes the mathematical analysis feasible and introduces only a small error, called the finite-register-length effect.
◆ Sometimes we can regard a discrete-time signal representation as a special case of the analog signal representation, by treating the discrete-time signal as a sequence of weighted impulses. This unified representation describes both discrete-time signals and analog signals well, and hence can be applied to process or analyze mixed analog-digital signals or systems.
2.1 Discrete-time signals

◆ A discrete-time signal is mathematically represented by x[n], −∞ < n < ∞. If the signal is obtained by sampling a continuous-time signal x_a(t), then

x[n] = x_a(nT), −∞ < n < ∞,

where T is the sampling period and f_s = 1/T is the sampling frequency.
Figure 2.1 Graphic representation of a discrete-time signal.

Figure 2.2 (a) Segment of a continuous-time speech signal x_a(t). (b) Sequence of samples x[n] = x_a(nT) obtained from the signal in part (a) with T = 125 µs.

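The sampling relation x[n] = x_a(nT) can be sketched numerically. Python is used for all examples in this chapter; the 1 kHz cosine standing in for x_a(t) and the 16-sample window are illustrative choices, not taken from the text (which uses a speech segment):

```python
import numpy as np

# Sampling a continuous-time signal: x[n] = x_a(nT).
T = 125e-6                                    # sampling period, 125 microseconds (as in Fig. 2.2)
fs = 1 / T                                    # sampling frequency f_s = 1/T = 8000 Hz
xa = lambda t: np.cos(2 * np.pi * 1000 * t)   # a 1 kHz cosine standing in for x_a(t)

n = np.arange(16)                             # sample indices
x = xa(n * T)                                 # x[n] = x_a(nT)
```

With these numbers the cosine completes one cycle every 8 samples, so x[0] = 1 and x[4] = −1.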
◆ Basic sequences and operations

◼ Product and sum are two basic operations. They operate on two signals on a sample-by-sample basis.

◼ Multiplication by a constant α multiplies each sample of a signal by α.

◼ Delay or shift: y[n] = x[n − n₀]
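A minimal sketch of these operations on finite index ranges (the particular three-point signals are arbitrary illustrations):

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([4.0, 5.0, 6.0])

s = x1 + x2                 # sample-by-sample sum
p = x1 * x2                 # sample-by-sample product
g = 2.5 * x1                # multiplication by a constant

# Delay by n0: y[n] = x[n - n0], with zeros shifted in for samples before n = 0
n0 = 1
y = np.concatenate([np.zeros(n0), x1[:-n0]])
```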
Figure 2.3 Some basic sequences. The sequences shown play important roles in the analysis and
representation of discrete-time signals and systems.

◆ The unit sample sequence:

δ[n] = 1 for n = 0, and δ[n] = 0 for n ≠ 0 (the same as δ_ij = 1 if i = j, 0 if i ≠ j)

It is known as the Kronecker delta function. Its role in DSP is completely the same as that of the Dirac delta function δ(t) in continuous-time signals and systems. It is therefore referred to as the discrete-time impulse, or simply the impulse. But, unlike δ(t), which is a generalized function, δ[n] is an ordinary function.

◼ δ[n] is often used to represent a signal as a weighted impulse train:

x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k]

This is a useful representation. Fig. 2.4 is an example.
Fig. 2.4 Example of a sequence to be represented as a sum of scaled, delayed impulses

◆ The unit step sequence:

u[n] = 1 for n ≥ 0, and u[n] = 0 for n < 0

It can also be represented as u[n] = Σ_{k=−∞}^{n} δ[k]. This is an interesting representation. A more common representation is

u[n] = δ[n] + δ[n − 1] + δ[n − 2] + … = Σ_{k=0}^{∞} δ[n − k]

Conversely, we have

δ[n] = u[n] − u[n − 1]
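The relations δ[n] = u[n] − u[n − 1] and u[n] = Σ_{k≤n} δ[k] can be checked numerically over a finite index range (a sketch; the range ±10 is an arbitrary choice):

```python
import numpy as np

N = 10
n = np.arange(-N, N + 1)
delta = (n == 0).astype(int)        # unit sample (impulse)
u = (n >= 0).astype(int)            # unit step

# First difference of the step recovers the impulse: delta[n] = u[n] - u[n-1]
u_shifted = (n - 1 >= 0).astype(int)   # u[n-1]
first_diff = u - u_shifted

# Running sum of the impulse recovers the step: u[n] = sum_{k <= n} delta[k]
running_sum = np.cumsum(delta)
```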
◼ The exponential sequence:

x[n] = A α^n

For 0 < α < 1, x[n] decays exponentially to the right.
For α > 1, it grows exponentially to the right.
For α < 0, it alternates between positive and negative values.

◼ The sinusoidal sequence:

x[n] = A cos(ω₀n + φ)

A: amplitude, positive real
φ: phase, real
Euler's formula: e^{jx} = cos x + j sin x

◼ If the A and α in x[n] = A α^n are complex, with A = |A| e^{jφ} and α = |α| e^{jω₀}, then

x[n] = A α^n = |A| e^{jφ} |α|^n e^{jω₀n}
     = |A| |α|^n e^{j(ω₀n + φ)}
     = |A| |α|^n cos(ω₀n + φ) + j |A| |α|^n sin(ω₀n + φ)

It is referred to as an exponentially weighted sinusoid.

If |α| = 1, it is a complex exponential sequence:

x[n] = |A| cos(ω₀n + φ) + j |A| sin(ω₀n + φ),

where ω₀ is the angular frequency, with unit of radians/sample. If n is dimensionless, then the unit of ω₀ is radians.
◼ Since the independent variable n is an integer, discrete-time and continuous-time exponential (sinusoidal) signals differ as follows:

x[n] = A e^{j(ω₀n + φ)};   x(t) = A e^{j(Ω₀t + φ)}

(1) x[n] is a periodic function of ω₀, with period 2π. This is shown by

x₁[n] = A e^{j[(ω₀ + 2πr)n + φ]} = A e^{j(ω₀n + φ)} e^{j2πrn} = A e^{j(ω₀n + φ)} = x[n]

for any integer r. The property is generally not true for x(t).
(2) x(t) is always a periodic function of t, with period T = 2π/Ω₀. This property is generally not true for x[n]. For x[n] to be periodic, the following condition must be satisfied for an integer N:

x[N + n] = x[n] for all n,

i.e.,

ω₀N = 2πk for some integer k.

For some ω₀, such as ω₀ = 1, we cannot find any N that satisfies the above condition. In those cases, x[n] is not a periodic signal. For periodic x[n], the smallest such N is called the fundamental period.

(3) For x(t), the period decreases as Ω₀ increases. This is not true for x[n], because x[n] is a periodic function of ω₀ with period 2π.
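The condition ω₀N = 2πk can be tested by brute force. `smallest_period` below is a hypothetical helper, not from the text; it searches for the fundamental period of e^{jω₀n}:

```python
import numpy as np

def smallest_period(w0, max_N=1000):
    """Smallest N >= 1 with w0*N = 2*pi*k for some integer k, or None.
    Brute-force search, adequate for the rational w0/(2*pi) used in examples."""
    for N in range(1, max_N + 1):
        k = w0 * N / (2 * np.pi)
        if abs(k - round(k)) < 1e-9:
            return N
    return None

N1 = smallest_period(3 * np.pi / 4)   # w0*8 = 6*pi, so the fundamental period is 8
N2 = smallest_period(1.0)             # w0 = 1: no integer N satisfies w0*N = 2*pi*k
```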
2.2 Discrete-time Systems

◆ A discrete-time system can be denoted in the following general form:

y[n] = T{x[n]}

It is represented as a transformation, or an operator, that converts an input signal x[n] into an output signal y[n]. In general, y[n] may depend on x[n] for all n.

◆ Some typical systems:

◼ The ideal delay system: y[n] = x[n − n_d]
◼ The moving-average system:

y[n] = (1 / (M1 + M2 + 1)) Σ_{k=−M1}^{M2} x[n − k]

◆ Systems are classified according to the constraints added on T{·}.
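The moving-average system just defined can be sketched directly (a naive double loop, treating x[n] as zero outside the given samples and indexing x from n = 0; these simplifications are mine, not the text's):

```python
import numpy as np

def moving_average(x, M1, M2):
    """y[n] = 1/(M1+M2+1) * sum_{k=-M1}^{M2} x[n-k], with x[n] = 0 outside x."""
    L = M1 + M2 + 1
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = 0.0
        for k in range(-M1, M2 + 1):
            if 0 <= n - k < len(x):
                acc += x[n - k]
        y[n] = acc / L
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])
y = moving_average(x, M1=0, M2=1)   # causal two-point average
```

With M1 = 0, M2 = 1, each output is the average of the current and previous sample.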
2.2.1 Memoryless Systems

◆ For memoryless systems, y[n] depends only on x[n] for every n, e.g., y[n] = (x[n])² and y[n] = 3x[n].

2.2.2 Linear Systems

◆ Linear systems are defined based on the principle of superposition:

T{x₁[n] + x₂[n]} = T{x₁[n]} + T{x₂[n]}

and

T{a x[n]} = a T{x[n]}.

◼ The first equation is referred to as the additivity property, while the second one is the homogeneity or scaling property.
◼ They can be combined into one:

T{a x₁[n] + b x₂[n]} = a T{x₁[n]} + b T{x₂[n]}


◼ For a linear system, y[n] = Σ_k a_k y_k[n] when x[n] = Σ_k a_k x_k[n], where y_k[n] = T{x_k[n]}.

◼ The accumulator system, y[n] = Σ_{k=−∞}^{n} x[k], is a linear system.

2.2.3 Time-invariant (Shift-invariant) Systems

◆ Definition: if x₁[n] = x[n − n₀], then y₁[n] = y[n − n₀]. In other words, the characteristics of the system do not change with time.

◼ The accumulator is a time-invariant system, while the compressor system, y[n] = x[Mn], is not.
2.2.4 Causality

◆ Definition: For every n₀, y[n₀] depends only on x[n] for n ≤ n₀, i.e., the system is non-anticipative.

◼ The forward and backward differences, y[n] = x[n + 1] − x[n] and y[n] = x[n] − x[n − 1], are noncausal and causal, respectively.

2.2.5 Stability

◆ Definition: A system is stable in the bounded-input, bounded-output (BIBO) sense iff every bounded input x[n] generates a bounded output y[n], i.e.,

if |x[n]| ≤ B_x < ∞ for all n, then |y[n]| ≤ B_y < ∞ for all n.
2.3 Linear Time-invariant (LTI) Systems

◆ An LTI system can be completely specified, or characterized, by its impulse response. This is shown as follows:

y[n] = T{ Σ_{k=−∞}^{∞} x[k] δ[n − k] }
     = Σ_{k=−∞}^{∞} x[k] T{δ[n − k]}
     = Σ_{k=−∞}^{∞} x[k] h_k[n]
     = Σ_{k=−∞}^{∞} x[k] h[n − k]          (2.49)

Here, the second equality is based on the linearity property and the fourth equality is based on the time-invariance property. h_k[n] is the response to δ[n − k], and h[n] is the impulse response.
◆ Eq. (2.49) is called the convolution sum and is denoted by

y[n] = x[n] * h[n]          (2.50)

◆ Eq. (2.49) can be interpreted as follows. We first decompose x[n] into a weighted sum of an infinite number of delayed impulses, and then obtain the output y[n] by summing the responses to all of those impulses. Fig. 2.8 shows an example of this interpretation. Note that this is exactly what is shown by Eq. (2.49).
Figure 2.8 Representation of the output of an LTI system as the superposition of responses to
individual samples of the input.

y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k]          (2.49)

◆ An alternative interpretation of Eq. (2.49) is given as follows. We treat n as a fixed number and calculate y[n] as the sum of all products of x[k] and h[n − k]. Here, k is taken as the independent variable of both x[k] and h[n − k]. h[n − k] can be obtained by first time-reversing h[k] and then right-shifting by n, i.e., h[n − k] = h[−(k − n)]. Fig. 2.9 shows an example of the way to form h[n − k].

◆ Eq. (2.49) can be used directly to calculate y[n] if h[n] is a finite-duration sequence.
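The two interpretations of the convolution sum can be computed side by side for short sequences (the particular x and h are arbitrary illustrations):

```python
import numpy as np

x = np.array([1.0, 2.0, 0.5])      # input, defined for n = 0, 1, 2
h = np.array([1.0, -1.0])          # impulse response, defined for n = 0, 1

# Interpretation 1: superposition of scaled, delayed impulse responses x[k] h[n-k]
y1 = np.zeros(len(x) + len(h) - 1)
for k, xk in enumerate(x):
    y1[k:k + len(h)] += xk * h

# Interpretation 2: for each fixed n, sum over k of x[k] h[n-k] (flip and shift)
y2 = np.array([sum(x[k] * h[n - k]
                   for k in range(len(x)) if 0 <= n - k < len(h))
               for n in range(len(x) + len(h) - 1)])
```

Both agree with numpy's built-in convolution.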
Figure 2.9 Forming the sequence h[n − k]. (a) The sequence h[k] as a function of k. (b) The sequence h[−k] as a function of k. (c) The sequence h[n − k] = h[−(k − n)] as a function of k for n = 4.
Convolution is a linear operator.

2.4 Properties of LTI Systems

◆ Some properties of LTI systems are determined by their impulse responses.
◆ Some properties of the convolution-sum operator apply equally to LTI systems. They include:

(1) x[n] * h[n] = h[n] * x[n]: the same output is obtained by exchanging the input and the system impulse response.

(2) Cascading h₁[n] and h₂[n] is equivalent to cascading h₂[n] and h₁[n], and the overall impulse response is h[n] = h₁[n] * h₂[n].

Figure 2.12 (a) Cascade combination of two LTI systems. (b) Equivalent cascade. (c) Single equivalent system.
(3) Connecting h₁[n] and h₂[n] in parallel is equivalent to h[n] = h₁[n] + h₂[n].

Figure 2.11 (a) Parallel combination of LTI systems. (b) An equivalent system.

◆ An LTI system is stable iff B_h = Σ_{k=−∞}^{∞} |h[k]| < ∞.          (2.67)

◆ An LTI system is causal iff h[n] = 0 for n < 0.          (2.72)

This can be observed clearly from the following equation:

y[n] = Σ_{k=−∞}^{n} x[k] h[n − k]

which holds only if h[n − k] = 0 for k > n.

◼ The concept of causality can also be applied to signals, by defining a causal signal as follows: x[n] is a causal sequence iff x[n] = 0 for n < 0.
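The stability condition B_h = Σ|h[k]| < ∞ can be probed numerically by watching the partial sums; `bounded_Bh` is a hypothetical helper (truncation at a tail tolerance is my stand-in for the infinite sum, not part of the text):

```python
def bounded_Bh(h_fn, tol=1e-12, max_terms=10_000):
    """Partial sums of B_h = sum_k |h[k]|; report the sum as finite when the
    current term drops below tol, and as (numerically) unbounded otherwise."""
    total = 0.0
    for k in range(max_terms):
        term = abs(h_fn(k))
        total += term
        if term < tol and k > 0:
            return total, True
    return total, False

# h[n] = (0.5)^n u[n]: geometric series, B_h = 2, so the system is stable
Bh_stable, stable = bounded_Bh(lambda n: 0.5 ** n)

# h[n] = u[n] (the accumulator): partial sums grow without bound, so unstable
Bh_acc, acc_stable = bounded_Bh(lambda n: 1.0)
```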
y[n] = (1 / (M1 + M2 + 1)) Σ_{k=−M1}^{M2} x[n − k]

◆ The following systems are LTI, and their impulse responses are given below:

◼ Ideal delay: h[n] = δ[n − n_d], where n_d is a positive fixed integer.

◼ Moving average:

h[n] = (1 / (M1 + M2 + 1)) Σ_{k=−M1}^{M2} δ[n − k]
     = 1 / (M1 + M2 + 1) for −M1 ≤ n ≤ M2, and 0 otherwise

◼ Accumulator: h[n] = Σ_{k=0}^{∞} δ[n − k] = u[n]. It is an infinite impulse response (IIR) system and is unstable.

◼ Forward difference: h[n] = δ[n + 1] − δ[n]. It is noncausal.

◼ Backward difference: h[n] = δ[n] − δ[n − 1].
◆ If h[n] is non-zero only over a finite duration, the system is called an FIR (finite impulse response) system; otherwise, it is an IIR (infinite impulse response) system.

◆ An inverse system of an LTI system is a system that perfectly reconstructs x[n] from y[n]. Its impulse response is denoted h_i[n] and satisfies the following condition: h_i[n] * h[n] = h[n] * h_i[n] = δ[n]. It is easier to discuss the properties of inverse systems in the frequency domain.
2.5 Linear Constant-Coefficient Difference Equations (LCCDE)

◆ Σ_{k=0}^{N} a_k y[n − k] = Σ_{m=0}^{M} b_m x[n − m]

is an important subclass of LTI systems. Its importance lies in the fact that all LTI systems in this class can be analyzed and implemented in a systematic way.
◼ The accumulator can be expressed in the following LCCDE form:

y[n] = Σ_{k=−∞}^{n} x[k] = y[n − 1] + x[n]

Figure 2.15 Block diagram of a recursive difference equation representing an accumulator.
y[n] = (1 / (M1 + M2 + 1)) Σ_{k=−M1}^{M2} x[n − k]

◼ A moving-average system with M1 = 0 can be expressed by

y[n] = y[n − 1] + (1 / (M2 + 1)) (x[n] − x[n − M2 − 1])

Figure 2.16 Block diagram of the recursive form of a moving-average system.
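The direct and recursive forms compute the same output, which can be checked numerically (x[n] = 0 for n < 0 and rest before n = 0 are assumed; the test signal is arbitrary):

```python
import numpy as np

def ma_direct(x, M2):
    """Causal moving average: y[n] = average of x[n-M2..n], with x[n] = 0 for n < 0."""
    return np.array([sum(x[n - k] for k in range(M2 + 1) if n - k >= 0) / (M2 + 1)
                     for n in range(len(x))])

def ma_recursive(x, M2):
    """Recursive form: y[n] = y[n-1] + (x[n] - x[n-M2-1]) / (M2 + 1), initial rest."""
    y = np.zeros(len(x))
    prev = 0.0
    for n in range(len(x)):
        x_old = x[n - M2 - 1] if n - M2 - 1 >= 0 else 0.0
        prev = prev + (x[n] - x_old) / (M2 + 1)
        y[n] = prev
    return y

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])
```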
◆ Without other constraints, an LCCDE system cannot generate a unique output for a given input, because its general solution contains the following two terms:

y[n] = y_h[n] + y_p[n]

Here, y_h[n] is the homogeneous solution of the equation

Σ_{k=0}^{N} a_k y[n − k] = 0.

It can be obtained by first solving the characteristic equation

Σ_{k=0}^{N} a_k z^{−k} = 0

to obtain the N roots z_m, m = 1, …, N, and then forming

y_h[n] = Σ_{m=1}^{N} A_m z_m^n

where A_m, m = 1, …, N, are unknown constants to be determined by the initial conditions y[−1], y[−2], …, y[−N].
◆ An alternative way of using the initial conditions is to rewrite the LCCDE in the following recursive form:

y[n] = −Σ_{k=1}^{N} (a_k / a₀) y[n − k] + Σ_{k=0}^{M} (b_k / a₀) x[n − k]          (2.99)

By iteratively feeding in y[−1], y[−2], …, y[−N] and x[n], n ≥ 0, we can calculate y[n], n ≥ 0.

◆ From the above discussion, an LCCDE with given initial conditions uniquely determines the characteristics of the system. But it is then no longer an LTI system.

◆ If the initial conditions are null, i.e., all are 0, then y_p[n] is the solution, and the system is LTI. But we still need to determine whether y_p[n] is a right- or left-sided sequence. By adding the constraint of causality, the system is uniquely specified.
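A minimal sketch of iterating Eq. (2.99) under initial rest (`lccde` is a hypothetical helper; the accumulator coefficients used to exercise it follow from y[n] − y[n − 1] = x[n]):

```python
import numpy as np

def lccde(a, b, x):
    """Iterate y[n] = -sum_{k>=1}(a[k]/a[0]) y[n-k] + sum_k (b[k]/a[0]) x[n-k]
    under initial rest: y[n] = x[n] = 0 for n < 0."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] / a[0] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] / a[0] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc
    return y

# Accumulator: y[n] - y[n-1] = x[n]  ->  a = [1, -1], b = [1]
x = np.array([1.0, 2.0, 3.0])
y = lccde([1.0, -1.0], [1.0], x)      # running sum of x
```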
◆ In DSP, we always assume the initial-rest condition: y[n] = 0 for n < n₀ whenever x[n] = 0 for n < n₀. Eq. (2.99) is then used to calculate y[n] for n ≥ n₀.

◆ If N = 0, the recursion no longer exists, and we have

y[n] = Σ_{k=0}^{M} (b_k / a₀) x[n − k]          (2.101)

This is expressed in a convolution form. The impulse response is

h[n] = b_n / a₀ for 0 ≤ n ≤ M, and 0 otherwise          (2.102)

It is an FIR system.
2.6 Frequency-domain Representation of Discrete-time Signals and Systems

2.6.1 Eigenfunctions for LTI systems

◆ This representation is developed based on the fact that complex exponential sequences are eigenfunctions of LTI systems. The derivation is given in the following.

Let x[n] = e^{jωn}, for −∞ < n < ∞. Then the output of the LTI system with impulse response h[n] can be calculated by

y[n] = Σ_{k=−∞}^{∞} h[k] x[n − k] = Σ_{k=−∞}^{∞} h[k] e^{jω(n−k)}
     = e^{jωn} ( Σ_{k=−∞}^{∞} h[k] e^{−jωk} )
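The eigenfunction property can be verified numerically for a short FIR impulse response (the particular h and test frequency ω are arbitrary choices):

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])          # a short FIR impulse response
w = 0.4 * np.pi                        # test frequency

# Frequency response H(e^{jw}) = sum_k h[k] e^{-jwk}
H = sum(hk * np.exp(-1j * w * k) for k, hk in enumerate(h))

# Apply the system to x[n] = e^{jwn}; the output is the same sequence scaled by H
n = np.arange(-10, 11)
x = np.exp(1j * w * n)
y = np.array([sum(h[k] * np.exp(1j * w * (m - k)) for k in range(len(h)))
              for m in n])
```

Every output sample equals H(e^{jω}) times the corresponding input sample, i.e. y[n] = H(e^{jω}) e^{jωn}.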

Define H(e^{jω}) = Σ_{k=−∞}^{∞} h[k] e^{−jωk}.          (2.104)

Then we have y[n] = H(e^{jω}) e^{jωn}.          (2.103)

Since y[n] has the same form as x[n], e^{jωn} is an eigenfunction with eigenvalue H(e^{jω}).

◆ H(e^{jω}) is called the frequency response of the system. It can be rewritten and expressed as

H(e^{jω}) = H_R(e^{jω}) + j H_I(e^{jω})
          = |H(e^{jω})| e^{j∠H(e^{jω})}

where |H(e^{jω})| is called the magnitude response and ∠H(e^{jω}) is called the phase response.


◼ The ideal delay system:

y[n] = x[n − n_d], n_d a fixed integer  ⇒  H(e^{jω}) = e^{−jωn_d}

◆ If x[n] can be expressed as a sum of complex exponential sequences, i.e.,

x[n] = Σ_k α_k e^{jω_k n}

then y[n] = Σ_k α_k H(e^{jω_k}) e^{jω_k n}. (Linearity property)

◆ H(e^{jω}) is a periodic function of ω with period 2π. We therefore only need to consider its variation within one period, say (−π, π] or [0, 2π). Note that ω = π corresponds to the highest frequency, while ω = 0 is the lowest frequency.
◆ Whether a filter is an ideal LP (lowpass), BP (bandpass), HP (highpass), or BS (bandstop) filter is determined by the shape of the passband of H(e^{jω}). These filters are shown in Figs. 2.17 and 2.18.
Figure 2.17 Ideal lowpass filter showing (a) periodicity of the frequency response and (b) one
period of the periodic frequency response.

Figure 2.18 Ideal frequency-selective filters. (a) Highpass filter. (b) Bandstop filter. (c) Bandpass filter. In each case, the frequency response is periodic with period 2π. Only one period is shown.
◆ Example 2.16: Moving-average system

H(e^{jω}) = (1 / (M1 + M2 + 1)) Σ_{n=−M1}^{M2} e^{−jωn}          (2.121)

For the causal moving-average system, M1 = 0 and Eq. (2.121) becomes

H(e^{jω}) = (1 / (M2 + 1)) Σ_{n=0}^{M2} e^{−jωn}          (2.122)

H(e^{jω}) = (1 / (M2 + 1)) (1 − e^{−jω(M2+1)}) / (1 − e^{−jω})
          = (1 / (M2 + 1)) (e^{jω(M2+1)/2} − e^{−jω(M2+1)/2}) e^{−jω(M2+1)/2} / ((e^{jω/2} − e^{−jω/2}) e^{−jω/2})
          = (1 / (M2 + 1)) (sin[ω(M2 + 1)/2] / sin[ω/2]) e^{−jωM2/2}          (2.123)

From Eq. (2.123), we find that it is a linear-phase LP filter.

◼ The example for M2 = 4 is shown below:

Figure 2.19 (a) Magnitude and (b) phase of the frequency response of the moving-average system for the case M1 = 0 and M2 = 4.

◼ For M1 = M2, it becomes

H(e^{jω}) = (1 / (2·M2 + 1)) sin[ω(2·M2 + 1)/2] / sin[ω/2]          (2.124)

It is real-valued.
2.6.2 Suddenly Applied Complex Exponential Inputs

◆ In the practical realization of a filter, the input signal cannot be applied starting from −∞ because, in that case, we cannot calculate the response. So we always consider applying the input starting at n = 0, e.g., x[n] = e^{jωn} u[n]. The corresponding output response is

y[n] = 0 for n < 0, and y[n] = ( Σ_{k=0}^{n} h[k] e^{−jωk} ) e^{jωn} for n ≥ 0

For n ≥ 0,

y[n] = ( Σ_{k=0}^{∞} h[k] e^{−jωk} ) e^{jωn} − ( Σ_{k=n+1}^{∞} h[k] e^{−jωk} ) e^{jωn}          (2.126)
     = H(e^{jω}) e^{jωn} − ( Σ_{k=n+1}^{∞} h[k] e^{−jωk} ) e^{jωn}          (2.127)
     = y_ss[n] + y_t[n]

where y_ss[n] is referred to as the steady-state response and y_t[n] is the transient response.
◼ Under some conditions, y_t[n] → 0 as n → ∞. This is determined by

|y_t[n]| = | e^{jωn} Σ_{k=n+1}^{∞} h[k] e^{−jωk} | ≤ Σ_{k=n+1}^{∞} |h[k]|          (2.128)

For the case that h[n] has finite duration on 0 ≤ n ≤ M, y_t[n] = 0 for n + 1 > M, i.e., n > M − 1. In this case,

y[n] = y_ss[n] = H(e^{jω}) e^{jωn} for n > M − 1

For the case of IIR, if h[n] → 0 as n → ∞, then y_t[n] → 0. Eq. (2.128) can be rewritten as

|y_t[n]| = | Σ_{k=n+1}^{∞} h[k] e^{−jωk} e^{jωn} | ≤ Σ_{k=n+1}^{∞} |h[k]| ≤ Σ_{k=0}^{∞} |h[k]|          (2.129)
◼ The following figure shows that the "missing" samples (i.e., x[n] for n < 0) have less and less effect as n increases.

Figure 2.20 Illustration of the real part of a suddenly applied complex exponential input with (a) FIR and (b) IIR.
◆ The stability condition of an LTI system is the same as the sufficient condition that its frequency response H(e^{jω}) exists, i.e.,

|H(e^{jω})| = | Σ_{k=−∞}^{∞} h[k] e^{−jωk} | ≤ Σ_{k=−∞}^{∞} |h[k]| < ∞

2.7 Representation of Sequences by Fourier Transforms

x[n] = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω          (2.130)

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}          (2.131)

are the inverse discrete-time Fourier transform (IDTFT) and the discrete-time Fourier transform (DTFT) pair.
◼ Eq. (2.130) is a synthesis formula, i.e., it represents x[n] as a superposition of infinitesimally small complex sinusoids of the form

(1/2π) X(e^{jω}) e^{jωn} dω

◼ Eq. (2.131), the discrete-time Fourier transform, is an expression for analyzing the frequency components of x[n].

◼ X(e^{jω}) is called the Fourier transform of x[n]. It can also be expressed as

X(e^{jω}) = X_R(e^{jω}) + j X_I(e^{jω})          (2.132a) in rectangular form
          = |X(e^{jω})| e^{j∠X(e^{jω})}          (2.132b) in polar form

with |X(e^{jω})| representing the magnitude, and ∠X(e^{jω}) the phase.
◼ We note that ∠X(e^{jω}) is not uniquely specified, because adding 2πr (r an integer) to it has no effect on the value of X(e^{jω}). We hence have some freedom in choosing a proper value of ∠X(e^{jω}). Two popular ways of adding a constraint to specify it uniquely are given below:

(1) Principal value: let ∠X(e^{jω}) ∈ (−π, π]. It is denoted by ARG[X(e^{jω})].
(2) Let ∠X(e^{jω}) be a continuous function of ω for −π < ω < π. It is denoted by arg[X(e^{jω})].
◆ A sufficient condition for the Fourier transform of a signal to exist is that the signal is absolutely summable, i.e.,

|X(e^{jω})| = | Σ_{n=−∞}^{∞} x[n] e^{−jωn} | ≤ Σ_{n=−∞}^{∞} |x[n]| < ∞

Since this sufficient condition is the same as the stability condition, we can conclude that the Fourier transforms of all stable sequences exist.

◆ A looser condition is square summability, i.e., Σ_{n=−∞}^{∞} |x[n]|² < ∞. This leads to

lim_{M→∞} ∫_{−π}^{π} |X(e^{jω}) − X_M(e^{jω})|² dω = 0          (2.138)

where X_M(e^{jω}) = Σ_{n=−M}^{M} x[n] e^{−jωn}. In other words, X(e^{jω}) exists in the sense of mean-square convergence.


◆ Example 2.18: Square-summability for the ideal LP filter

For an ideal LP filter

H_lp(e^{jω}) = 1 for |ω| < ω_c, and 0 for ω_c < |ω| ≤ π          (2.139)

with cutoff frequency ω_c, the impulse response is

h_lp[n] = sin(ω_c n) / (πn), −∞ < n < ∞.

◼ h_lp[n] is noncausal and not absolutely summable. So

Σ_{n=−∞}^{∞} (sin(ω_c n) / (πn)) e^{−jωn}

does not converge uniformly for all values of ω. Performing a windowing operation, we obtain

H_M(e^{jω}) = Σ_{n=−M}^{M} (sin(ω_c n) / (πn)) e^{−jωn}          (2.141)

H_M(e^{jω}) evaluated for several values of M is displayed in Fig. 2.21.
Figure 2.21 Convergence of the Fourier transform. The oscillatory behavior at ω = ω_c is often called the Gibbs phenomenon.
◼ Gibbs phenomenon: as we enlarge the window size M, the ripples of H_M(e^{jω}) around ω_c become narrower, but they never disappear, even if we let the window size approach infinity. Moreover, the amplitude of the ripple stays the same. Actually, H_M(e^{jω}) → H(e^{jω}) as M → ∞ for all ω except ω = ω_c.
◆ By using δ(ω), we can find the Fourier transforms of some signals that do not satisfy the above existence conditions. This will be discussed later.
◆ Some useful Fourier transform pairs:

x[n] = 1  ↔  X(e^{jω}) = Σ_{r=−∞}^{∞} 2π δ(ω + 2πr)

x[n] = e^{jω₀n}  ↔  X(e^{jω}) = Σ_{r=−∞}^{∞} 2π δ(ω − ω₀ + 2πr)

x[n] = Σ_k a_k e^{jω_k n}  ↔  X(e^{jω}) = Σ_k Σ_{r=−∞}^{∞} 2π a_k δ(ω − ω_k + 2πr)

x[n] = u[n]  ↔  X(e^{jω}) = U(e^{jω}) = 1 / (1 − e^{−jω}) + Σ_{r=−∞}^{∞} π δ(ω + 2πr)
2.8 Symmetry Properties of the Fourier Transform

◆ Definition: x_e[n] is a conjugate-symmetric sequence if x_e[n] = x_e*[−n]; x_o[n] is a conjugate-antisymmetric sequence if x_o[n] = −x_o*[−n].

◆ Any sequence can be decomposed as x[n] = x_e[n] + x_o[n] (2.149a), where

x_e[n] = (1/2)(x[n] + x*[−n]) = x_e*[−n]          (2.149b)
x_o[n] = (1/2)(x[n] − x*[−n]) = −x_o*[−n]          (2.149c)

◼ For a real sequence, x_e[n] = x_e[−n] is referred to as an even sequence, while x_o[n] = −x_o[−n] is an odd sequence.
52
◆ Similarly, X(e^{jω}) = X_e(e^{jω}) + X_o(e^{jω}) (2.150a), where

X_e(e^{jω}) = (1/2)[X(e^{jω}) + X*(e^{−jω})]          (2.150b)
X_o(e^{jω}) = (1/2)[X(e^{jω}) − X*(e^{−jω})]          (2.150c)

with X_e(e^{jω}) = X_e*(e^{−jω})          (2.151a)
and X_o(e^{jω}) = −X_o*(e^{−jω})          (2.151b)

Note that Eq. (2.150) is not the Fourier-transform version of Eq. (2.149):

x_e[n] ↔ X_e(e^{jω})   (false)
x_o[n] ↔ X_o(e^{jω})   (false)
◆ Symmetry properties of the Fourier transform, for x[n] ↔ X(e^{jω}):

(1) x*[n] ↔ X*(e^{−jω})
(2) x*[−n] ↔ X*(e^{jω})
(3) Re{x[n]} ↔ X_e(e^{jω})
(4) j Im{x[n]} ↔ X_o(e^{jω})
(5) x_e[n] ↔ X_R(e^{jω})

(Recall the pair: x[n] = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω and X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}.)
(1) x[n]  X  (e− j )

1 
 d
j j n
x[n] = X ( e ) e
2 −
1 
 d
j − j n
x *[n] = X *(e ) e
2 −
Change variable by  ' = − , we obtain
1 −
 d '
− j ' j ' n
=− X *( e ) e
2 
1 
 ) e d '
− j ' j ' n
= X *(e
2 −
− j
x *[n]  X *(e )
55
(2) x[−n]  X  (e j )
1 
x[n] =
2 

X (e j ) e jn d 

1 
 d
j − j ( − n )
x *[−n] = X *(e ) e
2 −
x *[−n]  X *(e j )
(3) Re{x[n]}  X e (e j )
1
Re{x[n]} = ( x[n] + x *[n])
2
1
 ( X (e j ) + X *(e − j )) = X e (e j )
2 56
(4) j Im{x[n]} ↔ X_o(e^{jω})

j Im{x[n]} = (1/2)(x[n] − x*[n]) ↔ (1/2)(X(e^{jω}) − X*(e^{−jω})) = X_o(e^{jω})

(5) x_e[n] ↔ X_R(e^{jω})

X_R(e^{jω}) = (1/2)(X(e^{jω}) + X*(e^{jω})) ↔ (1/2)(x[n] + x*[−n]) = x_e[n]
(6) x_o[n] ↔ j X_I(e^{jω})

The following properties apply only when x[n] is real:

(7-11) For any real x[n]: X(e^{jω}) = X*(e^{−jω}), X_R(e^{jω}) = X_R(e^{−jω}), X_I(e^{jω}) = −X_I(e^{−jω}), |X(e^{jω})| = |X(e^{−jω})|, and ∠X(e^{jω}) = −∠X(e^{−jω}).

Note that X(e^{j0}) is a real value.

(12) The even part of x[n] ↔ X_R(e^{jω})
(13) The odd part of x[n] ↔ j X_I(e^{jω})

◆ Example 2.21:

x[n] = a^n u[n]  ↔  X(e^{jω}) = 1 / (1 − a e^{−jω}) if |a| < 1
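Example 2.21 and symmetry property (7-11) can be checked numerically by truncating the DTFT sum (a = 0.75 and the test frequency are arbitrary; the tail of the truncated sum decays like a^N):

```python
import numpy as np

a = 0.75
w = 1.3                                  # an arbitrary test frequency

# Closed form of Example 2.21: X(e^{jw}) = 1/(1 - a e^{-jw}) for |a| < 1
X_closed = 1.0 / (1.0 - a * np.exp(-1j * w))

# Truncated DTFT sum: sum_{n=0}^{N} a^n e^{-jwn}
N = 200
n = np.arange(N + 1)
X_sum = np.sum(a ** n * np.exp(-1j * w * n))

# Symmetry for real x[n]: X(e^{-jw}) = X*(e^{jw})
X_neg = 1.0 / (1.0 - a * np.exp(1j * w))
```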
Fig. 2.22 Frequency response for a system with impulse response h[n] = a^n u[n]. (a) Real part, a > 0; a = 0.75 (solid curve) and a = 0.5 (dashed curve). (b) Imaginary part. (c) Magnitude, a > 0; a = 0.75 (solid curve) and a = 0.5 (dashed curve). (d) Phase.
2.9 Fourier Transform Theorems
◆ Linearity

a x₁[n] + b x₂[n] ↔ a X₁(e^{jω}) + b X₂(e^{jω})          (2.156)

◆ Time shifting and frequency shifting

x[n − n_d] ↔ e^{−jωn_d} X(e^{jω})
e^{jω₀n} x[n] ↔ X(e^{j(ω−ω₀)})

The time-shifting property follows from

x[n − n_d] = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jω(n−n_d)} dω
           = (1/2π) ∫_{−π}^{π} (X(e^{jω}) e^{−jωn_d}) e^{jωn} dω
⇒ x[n − n_d] ↔ X(e^{jω}) e^{−jωn_d}

◆ Time reversal

x[−n] ↔ X(e^{−jω})
x[−n] ↔ X*(e^{jω}) if x[n] is real
◆ Differentiation in the frequency domain

n x[n] ↔ j dX(e^{jω})/dω

From X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn},

dX(e^{jω})/dω = Σ_{n=−∞}^{∞} x[n] d(e^{−jωn})/dω = Σ_{n=−∞}^{∞} x[n] (−jn) e^{−jωn}

◆ Parseval's theorem

E = Σ_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{−π}^{π} |X(e^{jω})|² dω

|X(e^{jω})|² is called the energy density spectrum; it is defined only for finite-energy signals.
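Parseval's theorem can be checked on a short finite-energy sequence by approximating the frequency-domain integral with a Riemann sum on a dense grid (the sequence and grid size are arbitrary choices):

```python
import numpy as np

x = np.array([1.0, -2.0, 0.5, 3.0])          # a short finite-energy sequence

# Time-domain energy: sum |x[n]|^2
E_time = np.sum(np.abs(x) ** 2)              # 1 + 4 + 0.25 + 9 = 14.25

# (1/2pi) * integral over one period of |X(e^{jw})|^2, via a Riemann sum.
# With step 2*pi/K, the factor (1/2pi) * (2*pi/K) reduces the sum to a mean.
K = 4096
w = np.linspace(-np.pi, np.pi, K, endpoint=False)
n = np.arange(len(x))
X = np.array([np.sum(x * np.exp(-1j * wi * n)) for wi in w])
E_freq = np.mean(np.abs(X) ** 2)
```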
 
E = Σ_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{−π}^{π} |X(e^{jω})|² dω

Σ_{n=−∞}^{∞} |x[n]|² = Σ_{n=−∞}^{∞} x[n] x*[n]
= Σ_{n=−∞}^{∞} x[n] (1/2π) ∫_{−π}^{π} X*(e^{jω}) e^{−jωn} dω
= (1/2π) ∫_{−π}^{π} X*(e^{jω}) Σ_{n=−∞}^{∞} x[n] e^{−jωn} dω
= (1/2π) ∫_{−π}^{π} X*(e^{jω}) X(e^{jω}) dω
= (1/2π) ∫_{−π}^{π} |X(e^{jω})|² dω
◆ Convolution theorem

y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k] = x[n] * h[n]  ↔  Y(e^{jω}) = X(e^{jω}) H(e^{jω})          (2.163)

◆ Modulation or windowing theorem

y[n] = x[n] w[n]  ↔  Y(e^{jω}) = (1/2π) ∫_{−π}^{π} X(e^{jθ}) W(e^{j(ω−θ)}) dθ

Note that the term on the right-hand side is a periodic convolution integral.

y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k] = x[n] * h[n]  ↔  Y(e^{jω}) = X(e^{jω}) H(e^{jω})

Y(e^{jω}) = Σ_{n=−∞}^{∞} ( Σ_{k=−∞}^{∞} x[k] h[n − k] ) e^{−jωn}
= Σ_{k=−∞}^{∞} x[k] Σ_{n=−∞}^{∞} h[n − k] e^{−jωn}   (change variables: n′ = n − k)
= Σ_{k=−∞}^{∞} x[k] Σ_{n′=−∞}^{∞} h[n′] e^{−jω(n′+k)}
= Σ_{k=−∞}^{∞} x[k] e^{−jωk} Σ_{n′=−∞}^{∞} h[n′] e^{−jωn′} = X(e^{jω}) H(e^{jω})

y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k] = x[n] * h[n]  ↔  Y(e^{jω}) = X(e^{jω}) H(e^{jω})

Alternatively, using x[n] = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω:

(1/2π) ∫_{−π}^{π} X(e^{jω}) H(e^{jω}) e^{jωn} dω
= (1/2π) ∫_{−π}^{π} ( Σ_{k=−∞}^{∞} x[k] e^{−jωk} ) H(e^{jω}) e^{jωn} dω
= Σ_{k=−∞}^{∞} x[k] ( (1/2π) ∫_{−π}^{π} H(e^{jω}) e^{jω(n−k)} dω )
= Σ_{k=−∞}^{∞} x[k] h[n − k] = y[n]
Check of the pair x[n] = 1 ↔ 2π δ(ω) (over one period):

(1/2π) ∫_{−π}^{π} 2π δ(ω) e^{jωn} dω = ∫_{−π}^{π} δ(ω) e^{j·0·n} dω = 1
2.10 Discrete-time Random Signals

In the real world, many physical signals are so complicated that we cannot describe their variations precisely. In many cases, the statistics of a signal are good enough to represent its characteristics. Mathematically, we treat those signals as random processes, or stochastic processes. A random process is an ensemble of time functions (for the case that time is the independent variable). Each time function is a realization of the random process. A probability is assigned to represent the probability that a given time function appears.

◆ For a fixed time, the values of all time functions of a random process form a random variable, to which a probability distribution is assigned. For multiple times, the values of all time functions form many random variables, and a joint probability distribution is needed to describe them.
◆ To completely specify (or describe) a random process, we need to know the joint probability density function (pdf) over all times. But this is an impossible task. In practice, we can calculate some average (or expectation) values of the assumed joint pdf. The autocorrelation or autocovariance sequence of a random process is a commonly used averaged function.

◆ Usually, we assume that the signal is a wide-sense stationary random process. In this case, we only need the first- and second-order statistics, m_x = E{x[n]} and φ_xx[m] = E{x[n] x[n + m]}, to characterize the random process.
◆ The Fourier transform of the autocorrelation φ_xx[m] of a random process is known as the power spectral density (psd), or power density spectrum, Φ_xx(e^{jω}). Since φ_xx[m] is an even real function, Φ_xx(e^{jω}) is a positive, even, real function. Moreover,

(1/2π) ∫_{−π}^{π} Φ_xx(e^{jω}) dω = φ_xx[0] = E{(x[n])²}.
◆ For an LTI system, the input x[n], the output y[n], and the impulse response h[n] have the following relationship:

φ_yy[m] = φ_xx[m] * c_hh[m] = Σ_{l=−∞}^{∞} φ_xx[m − l] c_hh[l]          (2.187)

where

c_hh[l] = h[l] * h[−l] = Σ_{k=−∞}^{∞} h[k] h[l + k]          (2.188)

◼ In the frequency domain, their relation becomes

Φ_yy(e^{jω}) = C_hh(e^{jω}) Φ_xx(e^{jω})          (2.189)

where

C_hh(e^{jω}) = H(e^{jω}) H*(e^{jω}) = |H(e^{jω})|²


E{y²[n]} = φ_yy[0] = (1/2π) ∫_{−π}^{π} |H(e^{jω})|² Φ_xx(e^{jω}) dω          (2.192)
         = total average power in the output

◆ If x[n] is white noise, then

φ_xx[m] = σ_x² δ[m]  ⇒  Φ_xx(e^{jω}) = σ_x².

We can represent a non-white signal by using an LTI system with a white-noise input:

Φ_yy(e^{jω}) = |H(e^{jω})|² σ_x²

◆ Cross-correlation between input and output:

Φ_yx(e^{jω}) = H(e^{jω}) Φ_xx(e^{jω})
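The relation Φ_yy(e^{jω}) = |H(e^{jω})|² σ_x² implies that the output power of a filtered white-noise signal is σ_x² Σ_k h[k]², which can be checked with a simulation (the two-tap filter and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 1.0
x = rng.normal(0.0, np.sqrt(sigma2), 200_000)   # white noise with variance sigma_x^2

h = np.array([1.0, 0.5])                        # a simple FIR shaping filter
y = np.convolve(x, h, mode="full")[:len(x)]     # y[n] = x[n] + 0.5 x[n-1]

# Theory: phi_yy[0] = sigma_x^2 * sum_k h[k]^2 (integrating |H|^2 sigma_x^2 over a period)
power_theory = sigma2 * np.sum(h ** 2)          # = 1.25
power_est = np.mean(y ** 2)                     # sample estimate of E{y^2[n]}
```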
Homework

◆ Understand the two interpretations of the convolution-sum operation.
◆ Find the impulse response of the moving average from Eq. (2.21).
◆ Prove Properties 3 and 4 of Table 2.1.
◆ Prove the convolution theorem, the modulation theorem, and Parseval's theorem.
