
Lecture Notes on Digital Control

Giacomo Baggio, Augusto Ferrante,


Francesco Ticozzi and Sandro Zampieri
Contents

1 Introduction 7
1.1 Digital Control . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 Advantages and Disadvantages of Digital Control . . . . . . . 8

2 Discrete-time Signals and Systems and Z-Transform 11


2.1 Input-output analysis of discrete-time LTI systems . . . . . . . 11
2.2 Discrete-time signals . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Discrete-time linear SISO systems . . . . . . . . . . . . . . . . 14
2.4 Z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5 Properties of the Z-Transform . . . . . . . . . . . . . . . . . . 19
2.6 Inverse Z-Transform . . . . . . . . . . . . . . . . . . . . . . . 32
2.6.1 Existence of the inverse Z-Transform . . . . . . . . . . 32
2.6.2 Computation of the inverse Z-Transform . . . . . . . . 33

3 Analysis of Discrete-Time Systems 41

4 Properties of Discrete-Time Systems 47


4.1 Stability of Discrete-Time Systems . . . . . . . . . . . . . . . 47
4.2 Criteria for Stability . . . . . . . . . . . . . . . . . . . . . . . 48
4.2.1 Jury Test . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.2.2 Bilinear (or Möbius) Transform . . . . . . . . . . . . . 50
4.3 Interconnection of discrete time systems . . . . . . . . . . . . 54
4.3.1 Stability of a feedback loop . . . . . . . . . . . . . . . 55
4.3.2 Internal stability of an interconnection . . . . . . . . . 59
4.4 Frequency response . . . . . . . . . . . . . . . . . . . . . . . . 61
4.5 Nyquist plot and Nyquist criterion . . . . . . . . . . . . . . . 64

5 Interconnections of continuous-time and discrete-time systems 65
5.1 The sampler and the interpolator . . . . . . . . . . . . . . . . 65
5.2 Shannon sampling theory . . . . . . . . . . . . . . . . . . . . . 70
5.3 Anti-Aliasing Filters for control . . . . . . . . . . . . . . . . . 76
5.4 Comments on quantization and its effects . . . . . . . . . . . . 82
5.5 Sampling signals with rational Laplace transform . . . . . . . 83
5.6 The zero holder interpolator . . . . . . . . . . . . . . . . . . . 85
5.7 Conversion between continuous and discrete systems . . . . . . 87

6 Control problem and controller design 97


6.1 Specifications on the asymptotic regime: Tracking . . . . . . . 98
6.1.1 Tracking steps and ramps . . . . . . . . . . . . . . . . 100
6.1.2 Tracking of sinusoidal signals . . . . . . . . . . . . . . 102
6.1.3 Asymptotic disturbance rejection . . . . . . . . . . . . 104
6.2 Performance specifications on the transient regime . . . . . . . 105
6.3 Translation of the time-domain performance specifications: a
brief summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.4 Control system design . . . . . . . . . . . . . . . . . . . . . . 111
6.4.1 The choice of the control architecture . . . . . . . . . . 112
6.4.2 The choice of the sampling period . . . . . . . . . . . . 117

7 Digital controller synthesis: Emulation methods 127


7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.2 Emulation method: the digital conversion of a continuous time
controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
7.3 P.I.D. controllers . . . . . . . . . . . . . . . . . . . . . . . . . 142
7.4 Review of the phase margin based synthesis . . . . . . . . . . 144
7.5 P.D. - P.I. - P.I.D. design based on the phase margin . . . . . 146
7.5.1 P.D. design . . . . . . . . . . . . . . . . . . . . . . . . 146
7.5.2 P.I. design . . . . . . . . . . . . . . . . . . . . . . . . . 146
7.5.3 P.I.D. design . . . . . . . . . . . . . . . . . . . . . . . 147

8 Digital Controllers Synthesis: Direct Synthesis Methods 149


8.1 Discrete-time direct synthesis by “canceling” . . . . . . . . . . 149
8.1.1 Assigning the transient properties . . . . . . . . . . . . 156
8.1.2 Dahlin’s method . . . . . . . . . . . . . . . . . . . . . 159
8.1.3 Second order W (z) . . . . . . . . . . . . . . . . . . . . 161

8.2 Direct Synthesis from a different perspective . . . . . . . . . . 162


8.3 Smith’s predictor for the delay compensation . . . . . . . . . . 163
8.4 Controller design via Diophantine equations . . . . . . . . . . 168
8.4.1 Review of Diophantine equations . . . . . . . . . . . . 170
8.4.2 The controller design . . . . . . . . . . . . . . . . . . . 170
8.5 Digital controller synthesis: Deadbeat tracking . . . . . . . . . 173
8.6 Examples of dead-beat tracking for constant signals . . . . . . 178
8.7 Dead-beat control for P (s) derived from a sampling/holding . 180

A Table of Most Common Z-Transforms 183

B Table of Most Common Laplace Transforms 187

C Notions of Control in Continuous-Time 191


C.1 Routh Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
C.2 Root Locus . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
C.3 Nyquist Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Chapter 1

Introduction

1.1 Digital Control


Nowadays the vast majority of control systems, especially in industrial ap-
plications, are implemented on digital devices (µP, DSP, FPGA, etc.), which
use and process sampled and quantized signals (see Fig.1.1). It is thus of
paramount importance for any control system engineer to have a deep un-
derstanding of the functioning principles of digital control systems. This
book aims to introduce the main approaches to the design of digital control
systems, as well as the principles and mathematical tools that are needed to
master them.
Digital control systems are sometimes misleadingly regarded as simple
approximations of analog control systems (as traditionally designed for
electrical, hydraulic, chemical systems and plants): this view is however
unsatisfactory, as it prevents one from fully exploiting the potential of
digital control implementations.
We shall see that when the controller design is obtained by approximation
(or emulation) of an analog design, the performance of the controlled system
is, in the best-case scenario, close to that of a continuous-time controller.
On the other hand, when the controller is directly designed to operate in
the digital domain, its performance can potentially surpass that of an analog
controller. The possibility of obtaining dynamical behaviours that have no
analogue in the analog world, such as finite impulse response systems, opens
up new opportunities in terms of control design.
In the light of these considerations, it is only natural that a book in

digital control be of hybrid nature, borrowing from the approaches of both


methodological and applied courses. The first part of the book will be ded-
icated to the study of mathematical methods for the modeling and analysis
of sampled-data signals and systems, mimicking the familiar route typically
taken to introduce classical control in the frequency domain. In particular,
the Z-transform will be introduced and used to manipulate discrete-time lin-
ear systems, developing a dedicated input-output transfer function approach
and modal analysis. A particular emphasis will be on the effect of sampling,
not only on signals but also on systems and their interconnections. As
almost always happens in engineering, the theoretical foundations will be
interspersed with good-practice considerations or approximate formulas that
stem from experience and provide tentative, first-attempt choices in achieving
the desired controlled performance. In the second part the focus will be on
design techniques for digital feedback controllers for both intrinsic discrete-
time systems and sampled continuous-time systems. Emulation techniques
will be introduced first, followed by direct digital design ones. The con-
trol problems that will be treated will include not only regulation (set-point
control), but also asymptotic tracking or finite-time, dead-beat control.

1.2 Advantages and Disadvantages of Digital Control
The main advantages connected to the use of digital control systems include:

1. On the hardware level:

• flexibility (the same type of controller can be interfaced to different
types of physical systems, even simultaneously);
• standardization and robustness of the components;
• reliability;
• time invariance (negligible aging of digital components);
• less noise;
• smaller dimensions and lower costs (due to the use of mass-produced
standard components);
• easy maintenance and substitution;

[Figure 1.1 is a block diagram with three parts: a synthesis block (control algorithm design, model), an information acquisition/processing block (state estimation, process/signal analysis, control algorithm adjustment) and a physical block (actuator, process, sensors), connected through the signals u, y and the noise n.]

Figure 1.1: A paradigmatic architecture for a digital control system, including subsystems dedicated to signal processing, identification and adaptation.

2. On the design and software level:


• better performance achievable by the controlled system;
• simpler algorithms to be implemented;
• possible use of Finite Impulse Response – FIR systems;
• easily generated reference signals;
• ease of reconfiguration;
• operation scheduling;
• easier monitoring and data acquisition;
• possible inclusion of a user interface.
On the other hand, there are some disadvantages, to be carefully considered
in the design and implementation phases, related to:
1. the technology and its functioning environment:
• digital devices need power;

• digital devices do not work in extreme conditions (extreme temperatures,
pressures, radiation, etc.);

2. digital signal processing:

• quantization is nonlinear and potentially introduces noise and other
undesirable effects that are hard to model and account for (e.g. limit
cycles);
• there can be problems introduced by sampling (e.g. aliasing);
• digital devices typically exhibit some delay in the acquisition and
processing of signals;
• difficult real-time processing for high-frequency signals;

3. installation and maintenance:

• testing can be difficult;


• state-of-the-art technologies change quickly;
• the development of dedicated software requires significant resources
in terms of time and costs;
• these devices require dedicated programming languages;
• difficult to re-train personnel that is accustomed to analog sys-
tems.
Chapter 2

Discrete-time Signals and Systems and Z-Transform

2.1 Input-output analysis of discrete-time LTI systems
In basic control courses, continuous-time systems and their interconnections
have been analyzed. In particular, the input-output (I/O) behaviour of
continuous-time systems may be conveniently described by using the Laplace
transform. The success of the continuous-time I/O analysis is a consequence
of the following assumptions and facts:

1. We consider linear, time-invariant, causal systems with lumped parameters
that are described by ordinary differential equations (ODE).
These equations are homogeneous for autonomous systems (i.e. systems
without inputs), while in the general case when inputs are present, the
corresponding ODE is not homogeneous.

2. By employing the Laplace transform, the solution of the ODE and


the analysis of the structure of said solution can be handled by simple
algebraic techniques.

3. Linearity of the systems implies that the solution can be additively


decomposed as the sum of two signals: the free response depending
only on the initial conditions and the forced response depending only
on the input signal.

4. The I/O behaviour of the system, i.e. the properties of the function
mapping a certain input signal to the corresponding forced response can
be analyzed by considering the transfer function (TF) of the system.
For the linear, time-invariant, causal systems with lumped parameters
the TF is a rational function of the complex variable. Such variable is
usually denoted by s.

5. The position of the poles of the TF in the complex plane accounts for
many important properties of the systems including BIBO stability,
and important features of the step response of the system.

The first part of this book is devoted to introducing the tools needed
for deriving a similar approach for discrete-time signals and systems and for
sampled signals.

2.2 Discrete-time signals


Definition 2.1. A discrete-time signal x(k) is a sequence of real or complex
values (or of samples), i.e. a function x : Z (or Z+) → R (or C), k ↦ x(k).¹

Some relevant sets of signals are:

1. Bounded signals: this is the vector space

ℓ∞ := {x(k) : ∃M < +∞, s.t. |x(k)| ≤ M ∀k ∈ Z}.

This vector space is endowed with the norm:

∥x∥∞ := inf M = sup_{k} |x(k)|.

With this norm ℓ∞ is a Banach space, i.e. all the Cauchy sequences in
ℓ∞ converge to an element of ℓ∞ .

2. Finite energy signals: this is the vector space


ℓ2 := {x(k) : Σ_{k} |x(k)|² < ∞}.

¹The symbol Z+ denotes the set of nonnegative integers (so that 0 ∈ Z+).

This vector space is endowed with the inner product:

⟨x, y⟩ = Σ_{k} x*(k) y(k),

which induces the norm:

∥x∥2 = ( Σ_{k} x*(k) x(k) )^{1/2} = ( Σ_{k} |x(k)|² )^{1/2}.

Again, all the Cauchy sequences in ℓ2 converge to an element of ℓ2 so


that ℓ2 is a Hilbert space.

3. Absolutely summable signals: this is the vector space


ℓ1 := {x(k) : Σ_{k} |x(k)| < ∞}.

It is endowed with the norm:


∥x∥1 = Σ_{k} |x(k)|

and is a Banach space with respect to this norm.

Remark 2.1. More generally, we can define the vector space of the sequences
with summable p-th power, with 1 ≤ p < ∞ (for p = ∞ we recover the space ℓ∞ of bounded signals defined above):

ℓp := {x(k) : Σ_{k} |x(k)|^p < ∞}.

Each of these spaces is endowed with the norm:

∥x∥p := ( Σ_{k} |x(k)|^p )^{1/p}

and is a Banach space with respect to this norm. An interesting feature,
which does not have a counterpart for continuous-time signals, is the following
strict inclusion:

ℓp ⊊ ℓs, ∀ 1 ≤ p < s ≤ ∞.
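As a quick numerical illustration of this inclusion (added here, not part of the original text), the sequence f(k) = 1/k has finite ℓ2 norm but its ℓ1 partial sums diverge; the following Python sketch shows this on a truncation of the sequence.

    import numpy as np

    # Minimal sketch: f(k) = 1/k (k >= 1) is in l2 but not in l1,
    # illustrating the strict inclusion l1 ⊊ l2.
    k = np.arange(1, 100001)
    f = 1.0 / k

    print(np.sqrt(np.sum(f ** 2)))   # approaches pi/sqrt(6) ≈ 1.2825
    print(np.sum(np.abs(f)))         # keeps growing (like log k): not summable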

2.3 Discrete-time linear SISO systems


Let us consider a system understood as a transformation mapping an input
sequence {u(k)}_{k=−∞}^{k=+∞} to an output sequence {y(k)}_{k=−∞}^{k=+∞}. By imposing that
the system is linear we get that the following relation holds

Σ_{j} a_j(k) y(j) = Σ_{l} b_l(k) u(l).   (2.1)

If we further impose that the system is causal, i.e. the output at each “time” k
only depends on the input at times l ≤ k, we get the following representation
Σ_{j=−∞}^{k} a_j(k) y(j) = Σ_{l=−∞}^{k} b_l(k) u(l).   (2.2)

In practice, to effectively implement any signal processing algorithm only a


finite number of samples can be stored on the limited memory of the pro-
cessor. By imposing this constraint to (2.2) we limit the class of systems
to those for which the output at each “time” k only depends on the last n
samples of the output itself and on the last m+1 samples of the input (where
n and m are natural numbers and for the input we consider m + 1 samples
because the sample at “the present time” k of the input may be used). In
this case, in place of (2.2) we have the equation
Σ_{j=k−n}^{k} a_j(k) y(j) = Σ_{l=k−m}^{k} b_l(k) u(l).   (2.3)

Finally, if we impose that the system is time-invariant, i.e. its behaviour does
not change in time, then the coefficients aj (k) only depend on the difference
k − j and the coefficients bl (k) only depend on the difference k − l, so that
Σ_{j=k−n}^{k} a_{k−j} y(j) = Σ_{l=k−m}^{k} b_{k−l} u(l)

Σ_{j=0}^{n} a_j y(k − j) = Σ_{l=0}^{m} b_l u(k − l).   (2.4)

We will develop the I/O analysis for systems of the form (2.4). These systems
are obtained by discretizing continuous-time equations modeling, for exam-
ple, classical mechanical or electrical systems. We shall see, however, that

there are discrete-time systems described by (2.4) that cannot be obtained


by discretizing a continuous-time linear system of dynamical equations.
Example 2.1. Let us consider the following linear differential equation:
a2 (d²/dt²) y(t) + a1 (d/dt) y(t) = b0 u(t).   (2.5)
We now describe how this equation can be “discretized”, i.e. approximated
by a discrete-time system. To this aim we sample the continuous-time signals
u(·), y(·) and derive a difference equation that is approximately satisfied by
the sampled signals: the key point being that when the sampling time tends
to 0 also the approximation error tends to zero.
Let T be the sampling time and assume that u(·) and y(·) are sufficiently
smooth signals (more precisely, u(·), y(·) are of class at least C^n, n being
the order of the ODE so that, in this case, n = 2). Then, for T sufficiently
small, we can approximate arbitrarily well the differential operator d/dt with
the difference quotient ∆/T, where the discrete difference operator ∆ acts on a
function f(t) as follows:

∆f(t) = f(t) − f(t − T).   (2.6)
We define the discrete-time signals
ỹ(k) := y(kT ), ũ(k) := u(kT ), k ∈ Z, (2.7)
and with straightforward algebraic manipulations we get
(d/dt) y(t)|_{t=kT} ≃ (∆/T) y(t)|_{t=kT} = ( ỹ(k) − ỹ(k − 1) )/T   (2.8)

(d²/dt²) y(t)|_{t=kT} ≃ (∆²/T²) y(t)|_{t=kT} = ( ỹ(k) − 2ỹ(k − 1) + ỹ(k − 2) )/T².   (2.9)

By plugging these expressions into (2.5) we get the following difference equation
ã0 ỹ(k) + ã1 ỹ(k − 1) + ã2 ỹ(k − 2) = b0 ũ(k), (2.10)
where
ã0 := a2/T² + a1/T,   ã1 := −2a2/T² − a1/T,   ã2 := a2/T².
Notice, in passing, that the coefficients of the discrete-time system ob-
tained by discretizing a continuous-time one, depend on the sample time T
which has therefore a relevant impact on numerical properties of the discrete-
time system. ♢
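As an illustration (added here, not part of the original text), the following Python sketch implements the difference equation (2.10) by the recursion it defines, for arbitrarily chosen values of a2, a1, b0 and of the sampling time T, and simulates its response to a sampled unit step.

    import numpy as np

    # Minimal sketch: simulate the backward-difference discretization (2.10) of
    #   a2*y''(t) + a1*y'(t) = b0*u(t),
    # with hypothetical coefficients and sampling time (not taken from the text).
    a2, a1, b0 = 1.0, 0.5, 2.0
    T = 0.01

    atil0 = a2 / T**2 + a1 / T          # coefficients of (2.10)
    atil1 = -2 * a2 / T**2 - a1 / T
    atil2 = a2 / T**2

    N = 500
    u = np.ones(N)                      # sampled unit step input
    y = np.zeros(N)
    for k in range(2, N):
        # atil0*y[k] + atil1*y[k-1] + atil2*y[k-2] = b0*u[k]
        y[k] = (b0 * u[k] - atil1 * y[k - 1] - atil2 * y[k - 2]) / atil0

    print(y[-1])                        # approximate response at t = N*T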

2.4 Z-Transform
The Z-Transform may be viewed as the discrete-time counterpart of the
Laplace Transform. It is a linear operator mapping sequences in Z to func-
tions of the complex variable z ∈ C.
Definition 2.2. Let f : Z → R (or C), k ↦ f(k) be a discrete-time signal.
We define the (unilateral)2 Z-Transform of f to be the sum of the following
series for the values of the complex variable z for which the series converges:

Z[f] = F(z) := Σ_{k=0}^{∞} f(k) z^{−k},   (2.11)

Example 2.2. Let f (k) be the discrete impulse also known as Kronecker
delta:

f(k) = δ(k) = { 1, k = 0; 0, otherwise. }   (2.12)
The corresponding Z-Transform is

Z[δ] = 1 (2.13)

which converges, and hence is well defined, for all complex z. ♢

We now focus on the convergence of the Z-Transform defined in (2.11). The


following result holds:
Theorem 2.1. Let f : Z → R (or C), k ↦ f(k) be a discrete-time sequence
and z ∈ C. Then there exists ϱ0 ∈ [0, +∞] such that the series

Σ_{k=0}^{∞} f(k) z^{−k}

1. is absolutely convergent outside a circle of radius ϱ0 centered in the


origin of C i.e. ∀z ∈ {z ∈ C : |z| > ϱ0 }.
²There also exists the bilateral Z-Transform, defined by Z[f] = F(z) := Σ_{k=−∞}^{+∞} f(k) z^{−k}. We will only use the unilateral Z-Transform as we mainly consider causal signals.

2. is divergent ∀z ∈ {z ∈ C : |z| < ϱ0}.

The “radius of convergence”³ ϱ0 ∈ [0, +∞] is given by:

ϱ0 = lim sup_{k→∞} |f(k)|^{1/k}.   (2.14)

Some remarks on the previous result are in order:


1. Recall that the lim sup, or superior limit, of a sequence g(k) is always well
defined (also when the sequence does not admit a limit) and is given by
the following procedure: given g(k), define the new sequence

l(k) := sup_{h≥k} g(h).

Then

lim sup_{k→∞} g(k) := lim_{k→∞} l(k).

Notice that l(k) is by construction monotonically non-increasing, so that
it necessarily admits a limit, which can be finite or infinite. Therefore,
formula (2.14) always provides a value of ϱ0.
2. When the sequence |f(k)|^{1/k} has a limit, this limit necessarily coincides
with the superior limit so that, in these cases, we can use the simpler
formula

ϱ0 = lim_{k→+∞} |f(k)|^{1/k}.   (2.15)

In other words, ϱ0 can be computed by using the simpler formula (2.15)


if and only if the limit in the right-hand side of (2.15) exists.
3. Formulas (2.14) and (2.15) hinge on the root test. An alternative for-
mula, based on the ratio test, is the following:

ϱ0 = lim_{k→+∞} |f(k + 1)/f(k)|.   (2.16)

Also this formula is usually much easier to compute than (2.14) but
holds if and only if the limit in its right-hand side exists.
³Notice that ϱ0 may well be +∞: this means that the series does not converge for any complex value, so that the Z-Transform is not defined for sequences corresponding to ϱ0 = +∞.

4. Theorem 2.1 discusses the convergence of the series Σ_{k=0}^{∞} f(k) z^{−k} only
for |z| > ϱ0 and for |z| < ϱ0. Indeed, there are no general results on
the convergence of the series on the circle of radius ϱ0: depending on
the specific sequence and on the specific z such that |z| = ϱ0, the series
can be convergent or non-convergent.

Definition 2.3. The radius of convergence (r.c.) of the Z-Transform in


(2.11) is the constant ϱ0 defined by (2.14). The region of convergence (see
Figure 2.1) of the Z-Transform is the set

{z ∈ C : |z| > ϱ0 }.

The radius of convergence can be computed by the simplified formulas


(2.15) and (2.16) if and only if the corresponding limits in the right-hand
side exist.

Figure 2.1: Convergence region (highlighted in cyan) Rc = {z ∈ C : |z| > ϱ0} of the Z-Transform, in the complex plane with axes ℜ(z) and ℑ(z).

Example 2.3. Next we provide some examples of computation of Z-Transforms


and of the corresponding radius of convergence:

• Let f (k) be the discrete unit step (also known as Heaviside function)
f(k) = δ−1(k) := { 1, k ≥ 0; 0, k < 0. }   (2.17)

The corresponding Z-Transform is a geometric series of ratio z −1 ; there-


fore, we have
Z[f] = Σ_{k=0}^{+∞} z^{−k} = 1/(1 − z^{−1}) = z/(z − 1),   (2.18)

ϱ0 = 1.   (2.19)
In this case we obtained ϱ0 by using the results on the geometric series.
The same result can be obtained by using formulas (2.15) and (2.16).
• Let f(k) = p^k, p ∈ C. The corresponding Z-Transform is a geometric
series of ratio p z^{−1}; therefore, we have

F(z) = Σ_{k=0}^{+∞} p^k z^{−k} = 1/(1 − p z^{−1}) = z/(z − p),   (2.20)

ϱ0 = |p|.   (2.21)

Observe that sequences of this kind are obtained by sampling continuous-time
exponential signals e^{λt} with fixed sampling time T: e^{λTk} = p^k with
p = e^{λT}.
• Let f(k) = 2^{k²}. The corresponding Z-Transform Z[f] is not defined.
In fact, ϱ0 = +∞, as can be checked by using formula (2.16).
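As a sanity check (added, not part of the original notes), the ratio test (2.16) and the root test (2.15) can be evaluated numerically; the sketch below does so for the geometric sequence f(k) = p^k with an assumed value of p.

    import numpy as np

    # Minimal sketch: numerical estimates of the radius of convergence rho_0
    # for f(k) = p^k, using the ratio test (2.16) and the root test (2.15).
    p = 0.7                               # assumed value
    k = np.arange(1, 200)
    f = p ** k

    ratio = np.abs(f[1:] / f[:-1])        # |f(k+1)/f(k)|
    root = np.abs(f) ** (1.0 / k)         # |f(k)|^(1/k)
    print(ratio[-1], root[-1])            # both tend to |p| = 0.7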

Challenge 2.1. Provide a sequence f (k) for which the radius of convergence
of the corresponding Z-Transform cannot be computed with the simplified
formulas (2.15) and (2.16) so that formula (2.14) must be employed. In
particular, prove that for such f (k) the limits in (2.15) and (2.16) do not
exist and compute the limit in (2.14).

2.5 Properties of the Z-Transform


Next we describe the main properties of the Z-Transform. To this aim we
use the notation f(k) −Z→ F(z) or f −Z→ F(z) to indicate that F(z) is the
Z-Transform of f (k). Also, when possible, we denote discrete-time signals
with lower-case letters and use the corresponding upper-case letters for the
relative Z-Transforms.

1. Symmetry: If f : Z → C, and f −Z→ F(z), then⁴

f*(k) −Z→ Σ_{k} f*(k) z^{−k} = F*(z*).   (2.22)

If f is real-valued then (2.22) implies F (z) = F ∗ (z ∗ ) or, equivalently,


F (z ∗ ) = F ∗ (z).

2. Linearity: Let F1 (z), F2 (z) be the Z-Transforms of f1 , f2 : Z → C,


and ϱ1 , ϱ2 be the corresponding radii of convergence. Then, for all
c1 , c2 ∈ C we have
f := c1 f1 + c2 f2 −Z→ c1 F1(z) + c2 F2(z),   (2.23)

and the corresponding radius of convergence ϱ0 satisfies

ϱ0 ≤ max{ϱ1 , ϱ2 }. (2.24)

Example 2.4. Let f(k) = cos(ϑk) = (1/2)( e^{jϑk} + e^{−jϑk} ). By employing
(2.20) and the linearity of the Z-Transform, we get

Z[f] = (1/2) [ z/(z − e^{jϑ}) + z/(z − e^{−jϑ}) ] = z(z − cos ϑ)/(z² − 2 cos ϑ z + 1).   (2.25)

Similarly, for g(k) = sin(ϑk) we get

Z[g] = z sin ϑ/(z² − 2 cos ϑ z + 1).   (2.26)
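A quick numerical check of (2.25) (added here, not in the original text): truncating the defining series at a large index and comparing with the closed form at an assumed point outside the unit circle.

    import numpy as np

    # Minimal sketch: compare the truncated series sum_k cos(theta*k) z^-k
    # with the closed form (2.25) at a point with |z| > rho_0 = 1.
    theta = 0.8                       # assumed value
    z = 1.5 + 0.3j                    # any point outside the unit circle
    k = np.arange(5000)

    series = np.sum(np.cos(theta * k) * z ** (-k))
    closed = z * (z - np.cos(theta)) / (z**2 - 2 * np.cos(theta) * z + 1)
    print(abs(series - closed))       # small truncation error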

3. Translation in k: We now discuss two fundamental properties that


will be used several times in this book. Let f(k) : Z → R (or C) and let
Z[f] = F(z):
⁴We use the notation f* to denote the complex conjugate of f. If f is a vector (or a vector-valued function) f* denotes the conjugate transpose of f.

• Time advance: Let a ≥ 0 and let g(k) := f (k + a). The Z-


Transform of g is
Z[g] = Σ_{k=0}^{+∞} f(k + a) z^{−k} = z^a Σ_{k=0}^{+∞} f(k + a) z^{−(k+a)} = z^a F(z) − Σa,   where Σa := Σ_{j=0}^{a−1} f(j) z^{a−j}.   (2.27)

Notice that Σa is a polynomial in z that cancels the first a terms
of z^a F(z) = z^a f(0) + z^{a−1} f(1) + · · · = Σa + f(a) + f(a + 1) z^{−1} + f(a + 2) z^{−2} + · · ·. In the specific case a = 1, we have

Z[f(k + 1)] = z[F(z) − f(0)]

• Time delay: Let g(k) := f(k − r), r ≥ 0. We have

Z[g] = Σ_{k=0}^{+∞} f(k − r) z^{−k} = z^{−r} Σ_{k=0}^{+∞} f(k − r) z^{−(k−r)} = z^{−r} F(z) + Σr,   where Σr := Σ_{l=1}^{r} f(−l) z^{−(r−l)}.   (2.28)

Notice that Σr ≡ 0 if and only if f (k) is causal, i.e. f (k) = 0 for


all k < 0. Otherwise, Σr is non-zero and it is a polynomial in z −1
of degree equal to r − 1. In the specific case r = 1, we have
Z[f (k − 1)] = z −1 F (z) + f (−1)
Remark 2.2. We may formally introduce the temporal translation
operator q as the map from the set of sequences f : Z → R (C) to
itself, defined by:
q[f (k)] := f (k + 1). (2.29)

Clearly this map is invertible and we have


q −1 [f (k)] = f (k − 1). (2.30)
This map corresponding to a one-step time advance is illustrated
in Figure 2.2. Its inverse q −1 is the one-step time delay operator
and, clearly, q ◦ q −1 is the identity operator.
Figure 2.2: Temporal translation of the sequence f(k) (plots of q^{−1}[f(k)] and q[f(k)]).

In view of (2.27), and (2.28), we may think that the action of


the operator q corresponds, in the Z-Transform domain, to mul-
tiplication by z and, similarly, that the action of the operator q −1
corresponds, in the Z-Transform domain, to multiplication by z −1 .
This is wrong and would lead to gross errors. In fact, we are con-
sidering the unilateral Z-Transform which is defined by summing
only on positive time instants and hence only accounting for the
causal part of signals whose domain is the whole Z.
For example, we easily see that the Z-Transform F (z) of discrete-
time signal f (k), does not contain any information about the term
f (−1). On the other hand, g(k) defined by g(k) := q −1 f (k) is
such that g(0) = f (−1) so that its Z-Transform (that contains
the information on g(0)) cannot be obtained from F (z) unless, as
observed before, we know that f is causal so that g(0) = f (−1) =
0. Otherwise we need to include the term Σr defined in (2.28).
In conclusion, when computing the Z-Transform of translated signals
we need to take great care over the samples entering or exiting
the domain Z+ of the transform. For this reason, the terms Σa
and Σr appear in formulas (2.27) and (2.28), respectively.
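The following small sketch (added here, not from the original) makes Remark 2.2 concrete on a finite-support signal with assumed sample values: for a non-causal f, the unilateral transform of the delayed signal equals z^{−1}F(z) + f(−1), as in (2.28), and not z^{−1}F(z).

    # Minimal sketch illustrating Remark 2.2 on a finite-support signal with
    # assumed sample values f(-1)=2, f(0)=1, f(1)=3, f(2)=-1.
    def uni_z(samples, k0, z):
        """Unilateral Z-transform of a finite-support signal whose first
        sample samples[0] sits at time k0; only times k >= 0 contribute."""
        return sum(s * z ** (-k) for k, s in enumerate(samples, start=k0) if k >= 0)

    f = [2.0, 1.0, 3.0, -1.0]          # f(-1), f(0), f(1), f(2)
    z = 1.7 + 0.2j

    F = uni_z(f, -1, z)                # F(z): the sample f(-1) is ignored
    G = uni_z(f, 0, z)                 # transform of g(k) = f(k-1)
    print(G, z ** -1 * F + f[0])       # equal: (2.28) with Sigma_1 = f(-1)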

4. Periodic repetition: Let f (k) be a discrete-time signal defined in


Z. Assume that f(k) is causal, i.e. f(k) = 0 for k < 0. Let N be
a positive integer and let g(k) be the periodic repetition of f(k) with
period N, i.e.

g(k) := Σ_{i=0}^{∞} f(k − iN).   (2.31)

By causality of f (k) we obtain that


g(k) = Σ_{i=0}^{⌊k/N⌋} f(k − iN)

and hence we do not have to worry about the convergence of the sum.
An interesting special case is the one in which f (k) is supported in
[0, N − 1] namely it is zero also for all k ≥ N (see Figure 2.3). In this
case g(k) is clearly periodic of period N .
The Z-Transform of the periodic repetition can be easily obtained by
using linearity:

G(z) := Z[g(k)] = Σ_{i=0}^{∞} z^{−iN} F(z) = 1/(1 − z^{−N}) F(z) = z^N/(z^N − 1) F(z) =: ΘN(z) F(z).   (2.32)
This formula shows that the action of the operator of periodic repetition
corresponds, in the transform domain, to multiplication by ΘN (z).


Figure 2.3: Signal g(k) vanishing for k < 0 and for k ≥ N = 5. The
corresponding periodic repetition of period N = 5, gives a causal signal f (k)
that equals g(k) for 0 ≤ k ≤ N − 1 = 4 and for which these 5 samples are
repeated periodically for k ≥ 5.

Example 2.5. Let f (k) : Z → {0, 1} be the periodic signal


f(k) := { 1, k even; 0, k odd. }

This signal can be seen as the periodic repetition of the discrete impulse
(2.12) with period N = 2. Therefore, from (2.32) we get

F(z) := Z[f(k)] = Z[ Σ_{i=0}^{∞} δ(k − 2i) ] = z²/(z² − 1).


Figure 2.4: Signal f(k) used in Example 2.5.

5. Scaling in the z-domain C: Given the signal f (k), let F (z) =


Z[f (k)] and ϱ0 be the corresponding radius of convergence. Then
∞  
Z
X
k −k z
k
p f (k) −
→ f (k)p z = F , r.c. ϱ′0 = |p|ϱ0 . (2.33)
k=0
p

This property will be very useful for the analysis of modes of discrete-


time systems.
Example 2.6. Let f(k) := λ^k cos(ϑk). By combining formula (2.33)
with the Z-Transform of the cosine function (2.25), denoted here by F(z), we get

Z[f(k)] = F(z/λ) = (z/λ)(z/λ − cos ϑ) / ( (z/λ)² − 2 cos ϑ (z/λ) + 1 ) = z(z − λ cos ϑ)/(z² − 2 λ cos ϑ z + λ²).   (2.34)
Notice that if λ is real and 0 < λ < 1 then f (k) may be viewed as the
sampled version of a continuous-time damped oscillation. ♢

6. Discrete integration: Given the signal f (k), let F (z) = Z[f (k)].
The discrete integral of f (k) is defined as

g(k) := Σ_{l=0}^{k} f(l).

The Z-Transform of this discrete integral is easily obtained as⁵

Σ_{l=0}^{k} f(l) −Z→ Σ_{k=0}^{∞} z^{−k} ( Σ_{l=0}^{k} f(l) ) = Σ_{l=0}^{∞} z^{−l} f(l) ( Σ_{k=l}^{∞} z^{l−k} ) = 1/(1 − z^{−1}) F(z) = z/(z − 1) F(z).   (2.35)

Example 2.7. Let f(k) = k be the discrete-time ramp signal. Consider
the discrete unit step δ−1(k) defined in (2.17) and its discrete integral
g(k) := Σ_{l=0}^{k} δ−1(l): we easily see that g(k) = 0 for k < 0 and g(k) =
k + 1 for k ≥ 0, and hence f(k) = g(k − 1) for all k. Therefore, from
(2.35) and by taking the one-step delay into account, it follows:

F(z) = z^{−1} ( z/(z − 1) ) ( z/(z − 1) ) = z/(z − 1)².   (2.36)
z−1 z−1 (z − 1)2

Notice that here g(k) is a causal signal so that Z[g(k − 1)] = z −1 G(z).

7. Discrete derivative: Given the signal f (k), let F (z) = Z[f (k)].
Let ∆ be the discrete derivative operator defined in (2.6) and g(k) :=
∆f (k) = f (k) − f (k − 1). By using (2.28) we get

Z[g(k)] = Z[∆f(k)] = Z[f(k) − f(k − 1)] = F(z) − z^{−1} F(z) − f(−1) = (1 − z^{−1}) F(z) − f(−1).   (2.37)

⁵Notice that the formula for the Z-Transform of the discrete integral is the same as (2.32) for the periodic repetition with period N = 1. The reader is invited to explain this fact.

In general, the Z-Transform of the discrete derivative of order n may


be obtained by a similar computation:
n
"   #
l n
X
n
Z[∆ f (k)] = Z (−1) f (k − l)
l=0
l
n−1 n  
−1 n
X
−h
X
l n
= (1 − z ) F (z) + z (−1) f (h − l).
h=0 l=h+1
l
(2.38)

Remark 2.3. Notice that the (double) sum

S(z) := Σ_{h=0}^{n−1} z^{−h} Σ_{l=h+1}^{n} (−1)^l \binom{n}{l} f(h − l)

is a polynomial in z −1 of degree at most n − 1. Its coefficients are


linear combinations of the terms f (−ℓ), ℓ = 1, . . . , n, that play the
same role of the initial conditions in the (unilateral) Laplace Transform
of a continuous-time signal. For the n-th order discrete derivative ∆n ,
the sum S(z) depends on the n samples f (−ℓ), for ℓ = 1, . . . , n. This
corresponds to the formula for the Laplace transform of the derivative
of order n of a continuous-time signal where n initial conditions are
required (i.e. the limits for t → 0− of the signal and of its first n − 1
derivatives). In the discrete-time, the idea is similar but to specify
the first difference in k = 0 we need the sample f (−1), to specify the
second difference in k = 0 we need the two samples f (−1) and f (−2),
and so on. In general, to specify the n-th difference in k = 0 we need
the n samples f (−ℓ), for ℓ = 1, . . . , n.

8. Derivation of the Z-Transform: Consider a signal f (k), and let


F (z) = Z[f (k)] be its Transform and ϱ0 be its convergence radius.
Then F (z) is an analytic 6 function in the region of convergence. By
6
We recall that a function of the complex variable z is analytic in an open set if in
this set it can be locally represented as the sum of a convergent power series. We recall
also that F (z) is analytic in an open set if and only if it is derivable (as a function of the
complex variable z) in this set and, in this case, F (z) is derivable infinitely many times in
the same set.

computing the derivative of F (z), we find



(d/dz) F(z) = Σ_{k=1}^{∞} (−k) f(k) z^{−k−1} = −z^{−1} Z[k f(k)].   (2.39)

As a consequence, we have:

Z[k f(k)] = −z (d/dz) F(z).   (2.40)
Example 2.8. We now show how some relevant Z-Transforms can be
computed by using (2.40).

• Let f (k) = k 2 for k ≥ 0; then


Z[f(k)] = Z[k · k] = −z (d/dz) [ z/(z − 1)² ] = −z [ 1/(z − 1)² − 2z/(z − 1)³ ] = z(z + 1)/(z − 1)³.   (2.41)

• The Z-Transform of f(k) = k^n, where n is any integer greater than
or equal to zero, does not have a simple closed-form expression. It can
be computed inductively as follows. Let Qn(z) be such that

Z[k^n] = z Qn(z)/(z − 1)^{n+1}.   (2.42)
(z − 1)n+1
Observe that Q0 (z) = 1. Moreover
 
n d  n−1
 d zQn−1 (z)
Z[k ] = −z Z[k ] = −z
dz dz (z − 1)n
" #
d
Qn−1 (z)(z − 1)n + z dz Qn−1 (z)(z − 1)n − zQn−1 (z)n(z − 1)n−1
= −z
(z − 1)2n
" #
d
nzQn−1 (z) − (z − 1)Qn−1 (z) − z(z − 1) dz Qn−1 (z)
=z
(z − 1)n+1
 
z d
= ((n − 1)z + 1)Qn−1 (z) − z(z − 1) Qn−1 (z)
(z − 1)n+1 dz

In this way we can see that


d
Qn (z) = ((n − 1)z + 1)Qn−1 (z) − z(z − 1) Qn−1 (z) (2.43)
dz
From this iterative formula we obtain that Q0 (z) = 1, Q1 (z) = 1,
Q2 (z) = z + 1, Q3 (z) = z 2 + 4z + 1 and that in general Qn (z) is a
polynomial of degree less than n.
• Let f (k) = k n pk ; in view of (2.33) and (2.42), we get the expression
Z[f(k)] = (z/p) Qn(z/p)/(z/p − 1)^{n+1} = z p^n Qn(z/p)/(z − p)^{n+1}.   (2.44)

• Recall the definition of the binomial coefficient

\binom{k}{l} = k!/( l!(k − l)! ),

which is well defined only when 0 ≤ l ≤ k. With a slight abuse of
notation we assume that the binomial coefficient is defined to be
zero otherwise. Consider first f(k) := \binom{k}{2}. Notice that f(k) = 0
for k < 2 so, in particular, f(k) is causal. We have

Z[f(k)] = Z[ k(k − 1)/2 ] = Z[ k²/2 − k/2 ] = (1/2) [ z(z + 1)/(z − 1)³ − z/(z − 1)² ] = z/(z − 1)³.   (2.45)

• Consider now the general case f(k) = \binom{k}{l}, so that f(k) = 0 for
k < l. We have

Z[f(k)] = z/(z − 1)^{l+1}.   (2.46)

Proof. The proof of (2.46) will be carried out by induction on l.
Base case: for l = 0, 1 relation (2.46) holds, as Z[\binom{k}{0}] =
Z[δ−1(k)] = z/(z − 1).

Induction step: assume that (2.46) holds for l − 1. We can write

Z[\binom{k}{l}] = Z[ (k/l) \binom{k−1}{l−1} ] = (1/l) Z[ k \binom{k−1}{l−1} ],

and by using the discrete derivative formula (2.40) and the translation
formula (2.28),⁷ we get

(1/l) Z[ k \binom{k−1}{l−1} ] = −(z/l) (d/dz) [ z^{−1} z/(z − 1)^l ] = z/(z − 1)^{l+1}.

• Let f(k) = \binom{k}{l} p^{k−l}; in view of (2.46) and of (2.33), we get the
important expression

Z[f(k)] = (1/p^l) (z/p)/(z/p − 1)^{l+1} = z/(z − p)^{l+1}.   (2.47)
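As mentioned above, the recursion (2.43) is easy to iterate symbolically; here is a short sympy sketch (added, not part of the original notes) that reproduces Q0, ..., Q3.

    import sympy as sp

    # Minimal sketch: iterate the recursion (2.43),
    #   Q_n(z) = ((n-1)z + 1) Q_{n-1}(z) - z(z-1) dQ_{n-1}/dz,   Q_0(z) = 1,
    # which gives the polynomials in Z[k^n] = z Q_n(z) / (z-1)^(n+1).
    z = sp.symbols('z')

    def Q(n):
        q = sp.Integer(1)
        for m in range(1, n + 1):
            q = sp.expand(((m - 1) * z + 1) * q - z * (z - 1) * sp.diff(q, z))
        return q

    print([Q(n) for n in range(4)])   # [1, 1, z + 1, z**2 + 4*z + 1]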

9. Asymptotic behaviour (of the Z-Transform): Consider a sig-


nal f (k), and let F (z) = Z[f (k)] be its Z-Transform and ϱ0 be its
convergence radius. If ϱ0 ∈ R, i.e. if it is finite, then

lim F (z) = f (0). (2.48)


|z|→+∞

Remark 2.4. Formula (2.48) must be understood as follows: the limit


in the left-hand side is independent of the way in which |z| diverges
and is equal to the “first sample” f (0) of the sequence f (k).
Remark 2.5. Notice the analogy between the previous formula and the
Initial Value Theorem for the Laplace Transform.
Example 2.9. Let f(k) := cos(2k) + 3^k; in view of (2.20) and (2.25),
its Z-Transform is

F(z) = z(z − cos 2)/(z² − 2 cos 2 z + 1) + z/(z − 3).
⁷Notice that \binom{k}{l−1} is, by definition, causal.

We have

lim_{z→+∞} F(z) = 1 + 1 = 2 = f(0).

Remark 2.6. Relation (2.48) may be generalized as follows: if f (k)


vanishes for all k < r, then

lim z r F (z) = f (r). (2.49)


|z|→+∞

10. Final Value Theorem: Consider a signal f(k), and let F(z) =
Z[f(k)] be its Z-Transform. If lim_{k→∞} f(k) exists and is finite,
then

lim_{k→∞} f(k) = lim_{z→1} (1 − z^{−1}) F(z).   (2.50)

Example 2.10. Next we show how the Final Value Theorem can be
used and that, if the assumptions of the theorem do not hold, formula
(2.50) provides a wrong result!

• Let f(k) := 0.9^k + 1. Since lim_{k→∞} f(k) = 1, we can use formula
(2.50), which gives

lim_{z→1} ( (z − 1)/z ) [ z/(z − 0.9) + z/(z − 1) ] = 1.
z→1 z z − 0.9 z − 1

• Let f(k) := sin(ϑk) with ϑ ≠ hπ, h ∈ Z. The limit lim_{k→∞} f(k)
does not exist, and formula (2.50) (which cannot be used) wrongly
gives

lim_{k→∞} f(k) = lim_{z→1} (1 − z^{−1}) F(z) = 0.
k→∞ z→1

• Let f(k) := k 2^k. In this case the limit lim_{k→∞} f(k) exists but
is +∞ (and so is not finite). Again, formula (2.50) (which
cannot be used) wrongly gives

lim_{z→1} ( (z − 1)/z ) (−z) (d/dz) [ z/(z − 2) ] = lim_{z→1} (z − 1) 2/(z − 2)² = 0.
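A quick symbolic check of the first item above (added, not in the original) using sympy:

    import sympy as sp

    # Minimal sketch: Final Value Theorem (2.50) applied to f(k) = 0.9^k + 1,
    # whose Z-Transform is F(z) = z/(z - 0.9) + z/(z - 1).
    z = sp.symbols('z')
    F = z / (z - sp.Rational(9, 10)) + z / (z - 1)
    print(sp.limit((1 - 1 / z) * F, z, 1))   # 1, the correct limit of f(k)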



11. Convolution: Let f (k), g(k) be two causal signals and F (z), G(z) be
the corresponding Z-Transforms. We define the discrete convolution of
f (k) and g(k) as
h(k) := f(k) ⊗ g(k) := Σ_{l=−∞}^{+∞} f(l) g(k − l).   (2.51)

Notice that, due to the causality of the signals f(k), g(k), we have that

f(k) ⊗ g(k) = Σ_{l=0}^{k} f(l) g(k − l)

and hence we don't have to worry about the convergence of the sum.
Let H(z) be the Z-Transform of h(k). Then

H(z) = F (z)G(z). (2.52)

Proof. Since f is causal, we have:


h(k) = Σ_{l=−∞}^{+∞} f(l) g(k − l) = Σ_{l=0}^{+∞} f(l) g(k − l).

By computing the Z-Transform, we get

H(z) = Σ_{k=0}^{+∞} Σ_{l=0}^{+∞} f(l) g(k − l) z^{−k}
     = Σ_{l=0}^{+∞} f(l) ( Σ_{k=0}^{+∞} g(k − l) z^{−k} )
     = Σ_{l=0}^{+∞} f(l) ( Σ_{k′=−l}^{+∞} g(k′) z^{−k′−l} )
     = Σ_{l=0}^{+∞} f(l) z^{−l} ( Σ_{k′=0}^{+∞} g(k′) z^{−k′} )
     = F(z) G(z),

where k′ := k − l and the last-but-one equality is a consequence of the fact that g is causal.
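A small numerical confirmation of (2.52) (added, not from the original text), comparing the transform of a convolution with the product of the transforms at one assumed point:

    import numpy as np

    # Minimal sketch: Z[f * g] = F(z) G(z), checked on two short causal signals.
    f = np.array([1.0, 2.0, 0.5])
    g = np.array([3.0, -1.0])
    h = np.convolve(f, g)                           # discrete convolution (2.51)

    z = 1.4 + 0.1j
    Zt = lambda x: sum(v * z ** (-k) for k, v in enumerate(x))
    print(Zt(h), Zt(f) * Zt(g))                     # equal up to round-off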

2.6 Inverse Z-Transform


Next we focus on the following inverse problem: given a function F (z) of the
complex variable, compute a signal f (k) such that F (z) = Z[f (k)]. The first
issue is clearly existence of the inverse.

2.6.1 Existence of the inverse Z-Transform


Before computing inverse Z-Transforms we must confirm that the inverse
indeed exists, i.e. that the map is bijective or, equivalently, both injective
and surjective. In order to prove this we need to define precisely the
domain and the codomain of the operator associated with the Z-Transform.
Recall first that the Z-Transform is a linear map that neglects the values
of the signal at negative time instants. In other words, two signals f1(k)
and f2(k) such that f1(k) = f2(k) for all k ≥ 0 have the same Z-Transform,
and hence the Z-Transform cannot be injective on the domain consisting of
all signals. For this reason we restrict the Z-Transform to the smaller
domain of the causal signals. Moreover, the domain of the Z-Transform
contains only the signals that are Z-transformable. It can be shown that a
signal f(k) admits the Z-Transform if and only if it grows at most exponentially
fast, namely if there exist two positive constants A, ϱ such that

|f(k)| ≤ A ϱ^k   ∀k.

Then we define the domain to be the vector space

D := {f(k) : f(k) = 0 ∀k < 0, |f(k)| ≤ A ϱ^k ∀k ≥ 0 for some A, ϱ > 0}.

The codomain is a subset of the complex functions that are "regular" outside
a circle centred at the origin of big enough radius. Precisely, we define
the codomain to be

D̃ := {F(z) : F(z) admits derivative for all z such that |z| > ϱ, for some ϱ > 0}.⁸
⁸The fact that F(z) admits derivative for all z such that |z| > ϱ for some ϱ > 0 is a very strong assumption, since taking the derivative with respect to the complex variable is different from taking the partial derivatives with respect to the two real variables (the real and the imaginary part) that determine the complex variable. Indeed, it can be shown that, if F(z) admits the derivative in a neighbourhood of a point z, then it admits derivatives of any order. A complex function satisfying this property is said to be holomorphic in z. Hence D̃ is, more formally, the set of the complex functions that are holomorphic outside a circle centred at the origin of big enough radius.

Theorem 2.2. The Z-Transform seen as a map from D to D̃ is injective


and surjective.
The proof of this theorem requires tools of complex analysis and is
outside the scope of these notes. However, the proof of injectivity is simpler
and is also instrumental to one of the methods we will present for
building the signal f(k) that is the inverse Z-Transform of a function F(z) ∈ D̃.
Injectivity: To show that the Z-Transform restricted to D is injective we
must prove that, if f1(k) and f2(k) are two causal sequences having the same
Z-Transform F (z), then f1 (k) = f2 (k) for all k. Since the Z-Transform is a
linear map, we may simplify the proof by observing that the Z-Transform of
d(k) := f1 (k) − f2 (k) will be D(z) = 0 for all z such that |z| > ϱ. Therefore
it is sufficient to show that if the Z-Transform of a causal signal d(k) is D(z)
and D(z) = 0 for all z such that |z| > ϱ, then d(k) = 0 for all k. To this end,
we first observe that, by causality, d(k) = 0 for all k < 0. Moreover, by using
(2.48), we get:
d(0) = lim_{|z|→+∞} D(z) = 0.   (2.53)

Hence,

0 ≡ D(z) = Σ_{k=1}^{∞} d(k) z^{−k} = z^{−1} Σ_{k=1}^{∞} d(k) z^{−k+1} = z^{−1} Σ_{h=0}^{∞} d(h + 1) z^{−h}.   (2.54)

Therefore, by setting D1 (z) := zD(z) ≡ 0 and d1 (k) to be the causal sequence


d1 (k) := d(k + 1) we have

0 ≡ D1(z) := z D(z) = Σ_{k=0}^{∞} d1(k) z^{−k},   (2.55)

or, equivalently, 0 ≡ D1 (z) is the Z-Transform of d1 (k). We can now use


again (2.48), to obtain 0 = d1 (0) := d(1). By iterating this argument, we
obtain d(2) = 0, d(3) = 0, and so on, so inductively d(k) = 0 for all k which
concludes the proof.
We remark that the previous argument can be adapted to find the first
l samples (l being an arbitrary integer) of the inverse Z-Transform.

2.6.2 Computation of the inverse Z-Transform


We present three different methods:

Iterative method: this method draws inspiration from the previous


proof of injectivity and allows one to iteratively compute the samples f(k) of
the inverse Z-Transform of an analytic function F(z). To this aim we iteratively
use formula (2.48) and get:

f(0) = lim_{|z|→+∞} F(z),   F1(z) := z[F(z) − f(0)]
f(1) = lim_{|z|→+∞} F1(z),   F2(z) := z[F1(z) − f(1)]
...
f(k) = lim_{|z|→+∞} Fk(z),   F_{k+1}(z) := z[Fk(z) − f(k)]
...   (2.56)
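As a small illustration (added, not part of the notes), the iteration (2.56) can be carried out symbolically; the sketch below recovers f(k) = (1/2)^k from the assumed example F(z) = z/(z − 1/2).

    import sympy as sp

    # Minimal sketch: iterative inversion (2.56) of F(z) = z/(z - 1/2),
    # whose inverse Z-Transform is f(k) = (1/2)^k.
    z = sp.symbols('z')
    F = z / (z - sp.Rational(1, 2))

    samples = []
    for _ in range(5):
        fk = sp.limit(F, z, sp.oo)        # f(k) = lim_{|z|->oo} F_k(z)
        samples.append(fk)
        F = sp.simplify(z * (F - fk))     # F_{k+1}(z) = z (F_k(z) - f(k))
    print(samples)                        # [1, 1/2, 1/4, 1/8, 1/16]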
Integral method: This method is very general and has great conceptual
importance, but it is seldom practically viable. Take F(z) ∈ D̃. Then
F(z) admits derivative for all z such that |z| > ϱ. Take any r > ϱ and let
G(z) := F(rz). It is clear that G(z) ∈ D̃ and that G(z) admits derivative
for all z such that |z| > ϱ/r. Since ϱ/r < 1, G(z) admits derivative
for all z = e^{jθ}, θ ∈ [−π, π]. Observe moreover that, if g(k) = Z^{−1}[G(z)]
and f(k) = Z^{−1}[F(z)], then from (2.33) we can argue that g(k) = f(k)/r^k.
Finally, if we define

ϕ(θ) := G(e^{jθ})

we see that this function is periodic of period 2π and admits derivative for
all θ. Let ck be its Fourier coefficients, namely

ck = (1/2π) ∫_{−π}^{+π} ϕ(θ) e^{jθk} dθ.

Then

G(e^{jθ}) = ϕ(θ) = Σ_{k=−∞}^{+∞} ck e^{−jθk},

but also

G(e^{jθ}) = Σ_{k=−∞}^{+∞} g(k) e^{−jθk},

from which we can argue that g(k) = ck and consequently

f(k) = (r^k/2π) ∫_{−π}^{+π} ϕ(θ) e^{jθk} dθ = (r^k/2π) ∫_{−π}^{+π} F(r e^{jθ}) e^{jθk} dθ.

Inverse Z-Transform of proper rational functions: If F(z) is
a proper rational function, i.e. if F(z) = N(z)/D(z) is the ratio of two polynomials
with deg[D(z)] ≥ deg[N(z)], we can compute explicitly its inverse
Z-Transform as shown below. We hasten to observe that, as a consequence
of the following argument, we can also characterize the set of the sequences
f(k) whose Z-Transform is rational.
Let us start by writing F(z) in the form

F(z) = ( Σ_{l=0}^{m} b_l z^l ) / ( Σ_{l=0}^{n} a_l z^l ),   (2.57)

and let r := n − m ≥ 0 be the difference between the degree of the denominator
and that of the numerator, i.e. the so-called relative degree of
F(z); moreover, without loss of generality, we assume that the denominator
is monic, i.e. an = 1. Divide both sides of (2.57) by z and define

F1(z) := F(z)/z = ( Σ_{l=0}^{m} b_l z^l ) / ( z Σ_{l=0}^{n} a_l z^l ).   (2.58)

By factoring the denominator of F1(z) into polynomials of first degree, we get:

F1(z) = N(z) / Π_{i=0}^{N} (z − pi)^{ni},   (2.59)

where p0 = 0 is the zero at 0 of z Σ_{l=0}^{n} a_l z^l. Notice that p0 is not present if
F(0) = 0. Otherwise, the multiplicity n0 of p0 is equal to the multiplicity of
the pole at 0 of F(z) plus 1, so that if F(z) does not have poles or zeros at
0 then the multiplicity n0 of p0 is equal to 1.
Let us now compute the partial fraction decomposition of F1(z):

F1(z) = Σ_{i=0}^{N} Σ_{l=0}^{ni−1} Ai,l / (z − pi)^{l+1}.   (2.60)

The coefficients Ai,l , called the residuals, can be computed, for example, as:

A_{i,ni−1} = lim_{z→pi} (z − pi)^{ni} F1(z);   (2.61)

A_{i,ni−2} = lim_{z→pi} (z − pi)^{ni−1} [ F1(z) − A_{i,ni−1}/(z − pi)^{ni} ];   (2.62)

A_{i,ni−3} = lim_{z→pi} (z − pi)^{ni−2} [ F1(z) − A_{i,ni−1}/(z − pi)^{ni} − A_{i,ni−2}/(z − pi)^{ni−1} ];   (2.63)

...

A_{i,l} = lim_{z→pi} (z − pi)^{l+1} [ F1(z) − Σ_{k=l+2}^{ni} A_{i,k−1}/(z − pi)^{k} ].   (2.64)

We now multiply (2.60) by z and we write separately the terms corre-


sponding to the pole at 0. We get:
N ni −1 0 n −1
X X z X 1
F (z) = Ai,l + A 0,l . (2.65)
i=1 l=0
(z − pi )l+1 z l
| {z } l=0 | {z }
Fpi ,l F0,l

By recalling equations (2.13) and (2.47) we immediately recognise by inspec-


tion the sequences whose transforms are the functions Fpi ,l (z) and F0,l (z):
Z −1 Ai,l k k
Fpi ,l (z) = Ai,l (z−pzi )l+1 −−→ pli l
pi , (2.66)
Z −1
F0,l (z) = A0,l z −l −−→ A0,l δ(k − l). (2.67)

By taking into account that the inverse Z-Transform is linear, from (2.66),
(2.67) and (2.65), it follows:

f(k) = Σ_{i=1}^{N} Σ_{l=0}^{ni−1} (Ai,l/pi^l) \binom{k}{l} pi^k + Σ_{l=0}^{n0−1} A0,l δ(k − l),   k ≥ 0.   (2.68)

Remark 2.7. Notice that as the inverse Z-Transform is by construction a


causal sequence, the expression (2.68) holds only for k ≥ 0 while for k < 0,
f (k) = 0.
Remark 2.8. Notice that the relative degree r of F (z) has an important
interpretation. In fact, we easily see that exactly the first r samples of f (k),

i.e., f (0), f (1), . . . , f (r − 1) are zero. In other words f (r) is the first non-zero
sample of f (k). Thus, r can be viewed as the “inner delay” of f (k). This
fact may be used as a convenient “sanity check” after computing the inverse
Z-Transform.
Remark 2.9. Notice that if F (z) is, as it will always be in our setting, a
real rational function (i.e. the coefficients of the numerator and denomina-
tor of F are real) then for any complex pole p = ϱejϑ of F (z), its complex
conjugate p∗ = ϱe−jϑ is also a pole of F (z) and p and p∗ have the same mul-
tiplicity. Moreover, in the partial fraction expansion (2.60) the terms Al/(z − p)^{l+1}
and A′l/(z − p*)^{l+1} have complex conjugate coefficients, i.e. A′l = A*l. Therefore,
the inverse Z-Transforms of the two complex conjugate terms Al z/(z − p)^{l+1} and
A*l z/(z − p*)^{l+1} sum to a real signal. We can compute this signal explicitly. In fact,
by denoting by ℜ and ℑ the real and the imaginary part, we easily get:

Z^{−1}[ Al z/(z − p)^{l+1} + A*l z/(z − p*)^{l+1} ]   (2.69)
= (Al/p^l) \binom{k}{l} p^k + (A*l/p*^l) \binom{k}{l} p*^k = ϱ^{k−l} \binom{k}{l} ( Al e^{j(k−l)ϑ} + A*l e^{−j(k−l)ϑ} )
= (ϱ^k/ϱ^l) \binom{k}{l} ( 2ℜ(Al e^{−jlϑ}) cos(ϑk) − 2ℑ(Al e^{−jlϑ}) sin(ϑk) ).   (2.70)
In particular, if p is a simple pole (np = 1), the two terms Az/(z − p) and A*z/(z − p*)
sum to

Az/(z − p) + A*z/(z − p*) = z( α(z − ϱ cos ϑ) − βϱ sin ϑ ) / ( z² − 2zϱ cos ϑ + ϱ² ),   (2.71)

with

2A = α + jβ = M e^{jφ}.   (2.72)

Therefore, we have

Z^{−1}[ Az/(z − p) + A*z/(z − p*) ] = M ϱ^k cos(kϑ + φ).   (2.73)
Let us see an example of computation of the inverse Z-Transform of a
rational proper function.
Example 2.11. Let
F(z) = (3z⁴ + 8z³ + 7z² − 26z + 26) / ( z(z − 1)(z + 2)²(z² − 2z + 2) ).

We immediately see that the relative degree of F (z) is r = 2 and that F (z)
has a pole at 0. Since this pole corresponds to a one-step delay, we can
simplify the procedure by decoupling this delay. More precisely:
1. We define the new function
F0(z) := zF(z) = (3z⁴ + 8z³ + 7z² − 26z + 26) / ( (z − 1)(z + 2)²(z² − 2z + 2) ).

2. We compute the inverse Z-Transform f0 (k) of F0 .


3. The inverse Z-Transform of F will be obtained simply by f (k) = f0 (k−1).
For the second step, we define

F1(z) = F0(z)/z (= F(z)) = A/z + B/(z − 1) + C1/(z + 2) + C2/(z + 2)² + D/(z − p) + D*/(z − p*),

where p := 1 + j = √2 exp(jπ/4) and p* = 1 − j = √2 exp(−jπ/4) are the
roots of (z² − 2z + 2) and, by using (2.61), (2.62), (2.63), and (2.64), we get:

A = lim_{z→0} z F1(z) = −13/4,
B = lim_{z→1} (z − 1) F1(z) = 2,
C2 = lim_{z→−2} (z + 2)² F1(z) = 3/2,
C1 = lim_{z→−2} (z + 2) [ F1(z) − C2/(z + 2)² ] = 5/4,
D = lim_{z→p} (z − p) F1(z) = −j.

Thus
F0(z) = z F1(z) = A + B z/(z − 1) + C1 z/(z + 2) + C2 z/(z + 2)² + D z/(z − p) + D* z/(z − p*)

so that

f0(k) = A δ(k) + B δ−1(k) + C1 (−2)^k + C2 k(−2)^{k−1} + D p^k + D* (p*)^k.

We know that f0 is a real-valued signal but the previous representation does


not highlight this fact. Hence, we regroup the two complex conjugate terms

D p^k and D*(p*)^k as: D p^k + D*(p*)^k = 2Re[D p^k]. By writing D as D = α + jβ
and p as p = ρ exp(jθ), we get:

D p^k + D*(p*)^k = 2Re[D p^k] = 2Re[(α + jβ) ρ^k (cos(kθ) + j sin(kθ))] = 2αρ^k cos(kθ) − 2βρ^k sin(kθ).

In this specific case, α = 0, β = −1, ρ = √2 and θ = π/4, so that:

f0(k) = −(13/4) δ(k) + 2 δ−1(k) + (5/4)(−2)^k + (3/2) k(−2)^{k−1} + 2(√2)^k sin(kπ/4).   (2.74)

Notice that this expression holds only for k ≥ 0. Notice also that the relative
degree of F0 is equal to 1 so that we must have f0(0) = 0: by plugging k = 0
in (2.74), we easily get f0(0) = −13/4 + 2 + 5/4 = 0, which is a reassuring sanity
check.
Eventually, we obtain f(k) as f0(k − 1):

f(k) = f0(k − 1) = −(13/4) δ(k − 1) + 2 δ−1(k − 1) + (5/4)(−2)^{k−1} + (3/2)(k − 1)(−2)^{k−2} + 2(√2)^{k−1} sin((k − 1)π/4),   k > 0.

Clearly, for k = 0 (and for k < 0) we have f (k) = 0. Indeed, the expression
(2.74) of f0 (k) only holds for k ≥ 0.
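As a numerical cross-check of Example 2.11 (added here, not in the original), one can compare the closed form (2.74) with the impulse response of a discrete-time system whose transfer function is F0(z), e.g. using scipy:

    import numpy as np
    from scipy import signal

    # Minimal sketch: check (2.74) against the impulse response of F0(z).
    num = [3, 8, 7, -26, 26]                                   # numerator of F0(z)
    den = np.polymul(np.polymul([1, -1], np.polymul([1, 2], [1, 2])),
                     [1, -2, 2])                               # (z-1)(z+2)^2(z^2-2z+2)
    _, (f0,) = signal.dimpulse(signal.dlti(num, den, dt=1), n=8)
    f0 = f0.ravel()

    k = np.arange(8)
    closed = (-13/4) * (k == 0) + 2 + (5/4) * (-2.0) ** k \
             + (3/2) * k * (-2.0) ** (k - 1) + 2 * np.sqrt(2) ** k * np.sin(k * np.pi / 4)
    print(np.allclose(f0, closed))                             # True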
Chapter 3

Analysis of Discrete-Time
Systems

As discussed before, we are interested in LTI systems described by difference


equations of the form
Σ_{i=0}^{n} a_i y(k − i) = Σ_{i=0}^{m} b_i u(k − i).   (3.1)

Next we will find the solution of this equation by resorting to the Z-Transform.
To this aim we assume that u(k) is a causal input, i.e. u(k) = 0
for all k < 0, and that a0 = 1. By taking the Z-Transform of both sides of
(3.1) and using the backward translation property (2.28), we get:

( Σ_{i=0}^{n} a_i z^{−i} ) Y(z) + Σ_{i=1}^{n} Σ_{l=1}^{i} a_i y(−l) z^{−i+l} = ( Σ_{l=0}^{m} b_l z^{−l} ) U(z),   (3.2)

where Ã(z) := Σ_{i=0}^{n} a_i z^{−i}, −C̃(z) := Σ_{i=1}^{n} Σ_{l=1}^{i} a_i y(−l) z^{−i+l} and B̃(z) := Σ_{l=0}^{m} b_l z^{−l},

where we denoted Y (z) := Z[y(k)] and U (z) := Z[u(k)]. Notice that C̃(z) is
a polynomial in z −1 whose coefficients are linear combinations of the “initial
conditions” y(k), k = −1, −2, . . . , −n of the output. Moreover, since u(k)
is assumed to be causal, the term analogous to C̃(z) involving the “initial
conditions” of u(k) is missing.

Now we can solve (3.2) for Y (z) and we get:

Y(z) = ( B̃(z)/Ã(z) ) U(z) + C̃(z)/Ã(z) = H(z) U(z) + Yl(z) = Yf(z) + Yl(z),   (3.3)

where H(z) := B̃(z)/Ã(z), Yf(z) := H(z) U(z) and Yl(z) := C̃(z)/Ã(z).

The rational function H(z) is known as the transfer function (T.F.) of the
discrete-time system.
In (3.3) the Z-Transform Y(z) of the output of the system has been
decomposed as the sum of two terms: Yf(z) and Yl(z). The former, known as the
forced response, only depends (linearly) on the input of the system, while the
latter, known as the free response, only depends (linearly) on the initial conditions
{y(−l)}_{l=1}^{N}.
The forced response in the time domain is defined by yf (k) := Z −1 [Yf (z)] =
Z −1 [H(z)U (z)] and can be easily obtained by using the convolution result
(2.52):

yf (k) = h(k) ⊗ u(k), (3.4)

where
h(k) = Z −1 [H(z)], (3.5)
is the so-called impulse response of the system (the reason is obvious as when
u(k) = δ(k) is the discrete impulse, U (z) = 1, so that Yf (z) = H(z) and,
finally yf (k) = h(k)).
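As an illustration (added, not from the original text), the forced response of an assumed first-order example system can be computed either by filtering with zero initial conditions or by convolving the input with the impulse response, confirming (3.4):

    import numpy as np
    from scipy import signal

    # Minimal sketch: forced response (3.4) of the assumed example system
    #   y(k) - 0.5*y(k-1) = u(k) + u(k-1),  i.e.  H(z) = (z + 1)/(z - 0.5).
    b, a = [1.0, 1.0], [1.0, -0.5]

    u = np.ones(20)                                     # causal step input
    yf = signal.lfilter(b, a, u)                        # zero initial conditions

    h = signal.lfilter(b, a, np.r_[1.0, np.zeros(19)])  # impulse response h(k)
    print(np.allclose(yf, np.convolve(h, u)[:20]))      # True: yf = h (*) u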
By taking N := max{n, m}, we can write

H(z) = B(z)/A(z),

where A(z) := z^N Ã(z) and B(z) := z^N B̃(z) are two polynomials
in z. Notice that A(z) is of degree N by the assumption we made that
a0 = 1. Hence H(z) is a proper rational function, so that its inverse Z-Transform
h(k) can be obtained by the procedure described in §2.6.2. Next,
we recall the main steps of this procedure: we first compute the partial
fraction decomposition of H(z)/z:

H(z)/z = Σ_{i=1}^{h} Σ_{l=0}^{ni−1} Ai,l / (z − pi)^{l+1} + Σ_{l=0}^{n0−1} A0,l / z^{l+1},

where p1 , . . . , ph are the non-zero poles of H(z), n1 , . . . , nh are their multi-


plicities and Ai,l are the residuals. In the second sum we have isolated the
terms associated with the pole in zero that might be missing in case z = 0 is
not a pole of H(z). Then we have
H(z) = Σ_{i=1}^{h} Σ_{l=0}^{ni−1} Ai,l z/(z − pi)^{l+1} + Σ_{l=0}^{n0−1} A0,l z^{−l}

and, finally, by taking the inverse Z-Transform:

h(k) = Σ_{i=1}^{h} Σ_{l=0}^{ni−1} (Ai,l/pi^l) \binom{k}{l} pi^k + Σ_{l=0}^{n0−1} A0,l δ(k − l),   (3.6)

where the first (double) sum collects the IIR modes and the second sum the FIR modes.
Some comments are in order:
1. The expression in (3.6) shows that the impulse response of the system
can be additively decomposed as a linear combination of discrete functions
of the form \binom{k}{l} pi^k and δ(k − l), where
pi are the non-zero poles of the system's transfer function. The modes
δ(k − l) are associated with the poles of H(z) at the origin and are
non-zero only at the discrete time instant l: for this reason they are
called the modes of the Finite Impulse Response (FIR) part of the
transfer function H(z). On the contrary, the modes of the form \binom{k}{l} pi^k
are associated with the non-zero poles pi of H(z) and are non-zero for
all k ≥ l: for this reason they are called the modes of the Infinite
Impulse Response (IIR) part of the transfer function H(z).
The position of the poles in the complex plane is associated with the
asymptotic character of the corresponding modes as follows:

• if |pi| < 1 ⇒ the corresponding modes are convergent (i.e. they
decay to zero as k diverges);
• if |pi| > 1 ⇒ the corresponding modes are divergent (i.e. they
explode as k diverges);
• if |pi| = 1 and ni > 1 ⇒ the corresponding modes are divergent;

• if |pi| = 1 and ni = 1 ⇒ the corresponding modes are bounded but
not convergent (i.e. as k diverges, they remain bounded but do
not converge to zero).

2. The transfer function H(z) has only the pole p = 0 if and only if the difference equation (3.1) is of the following form:

   y(k) = \sum_{i=0}^{m} b_i u(k-i).

   In this case the impulse response is

   h(k) = \sum_{i=0}^{m} b_i \delta(k-i).    (3.7)

   The sum in (3.7) is finite, so that h(k) is identically zero after m steps. In this case the system is purely FIR. Notice that this is a situation that has no analogue in continuous time.

3. The transfer function H(z) is rational and proper; in fact, it can be written as the ratio H(z) = B(z)/A(z), with deg[A(z)] ≥ deg[B(z)]. The relative degree r := deg[A(z)] − deg[B(z)] of H(z) has a very important meaning. Indeed, let us consider an input u(k) and let U(z) be its Z-Transform. Since U(z) is differentiable outside a circle centred in the origin, we have that lim_{|z|→∞} U(z) exists and is finite, and it is zero if and only if u(0) = 0. Therefore, by using the argument in (2.56) we immediately see that the forced response y_f(k) of the system to the input u(k) is such that

   y_f(0) = y_f(1) = \cdots = y_f(r-1) = 0;

   moreover, y_f(r) ≠ 0 if and only if u(0) ≠ 0. Therefore r has the meaning of the intrinsic delay of the system. We recall that in continuous time a delay is only produced by a non-rational transfer function.
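The modal decomposition (3.6) can be checked numerically. The following is a minimal Python sketch (not part of the original notes; the transfer function below is an arbitrary stable example): it computes the impulse response by running the difference equation driven by a discrete impulse and compares the observed decay with the pole magnitudes.

```python
import numpy as np

# H(z) = B(z)/A(z) with A(z) = z^2 - 1.1 z + 0.3 and B(z) = z (coefficients, highest power first)
a = np.array([1.0, -1.1, 0.3])
b = np.array([0.0, 1.0, 0.0])

def impulse_response(b, a, n):
    """h(k), k = 0..n-1, obtained by running the difference equation with u = delta."""
    u = np.zeros(n); u[0] = 1.0                      # discrete impulse
    y = np.zeros(n)
    for k in range(n):
        acc = sum(b[i] * (u[k - i] if k - i >= 0 else 0.0) for i in range(len(b)))
        acc -= sum(a[i] * (y[k - i] if k - i >= 0 else 0.0) for i in range(1, len(a)))
        y[k] = acc / a[0]
    return y

h = impulse_response(b, a, 30)
poles = np.roots(a)
print("poles:", poles, " max |p_i| =", np.max(np.abs(poles)))   # 0.6 and 0.5, both inside the unit disk
print("tail of h(k):", h[-3:])                                  # decays to zero, as the IIR modes predict
```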

For determining the free response y_l(k) in the time domain we need to find the inverse Z-Transform of

Y_l(z) = \frac{-\sum_{i=1}^{n}\sum_{l=1}^{i} a_i\, y(-l)\, z^{-i+l}}{\sum_{i=0}^{n} a_i z^{-i}}
       = -\frac{z^n \left[\sum_{i=1}^{n}\sum_{l=1}^{i} a_i\, y(-l)\, z^{-i+l}\right]}{z^n \left[\sum_{i=0}^{n} a_i z^{-i}\right]}
       = \frac{-\sum_{j=0}^{n-1}\sum_{l=1}^{n-j} a_{n-j}\, y(-l)\, z^{j+l}}{\sum_{j=0}^{n} a_{n-j}\, z^{j}}.

Since this is rational and strictly proper, its inverse Z-Transform can be found similarly to the inverse Z-Transform of the transfer function H(z). Notice that, since H(z) and Y_l(z) have the same denominator, they have the same poles with the same multiplicities and hence the same modes. This is only partially true, since there might be pole/zero cancellations in the rational functions which would cancel some of the modes. This fact is better clarified in the following simple example, which highlights an important issue that, if neglected, may lead to blunders.

Example 3.1. Consider the following system:

y(k) + ay(k − 1) = u(k) + au(k − 1).

Its transfer function is:

H(z) = \frac{z+a}{z+a} = 1.    (3.8)

One may be led to think that y(k) = u(k), but this is wrong! This is true only for the forced response y_f(k) = u(k); for the full response of the system we need to take into account also the free response y_l(k) and hence the initial condition y(−1). In this case the free response is y_l(k) = (−a)^{k+1} y(−1). When |a| > 1 the output diverges whenever y(−1) ≠ 0! We can conclude that

• When we consider the transfer function, we are restricting attention to the part of the behaviour of the system associated with the forced response y_f(k).

• If the polynomials A(z) := z N [1 − Ã(z)] and B(z) = z N B̃(z), where


Ã(z) and B̃(z) are defined in (3.2), have common zeros, then part of the
system’s dynamics does not appear in the input-output behaviour of
the system; in fact, the only relevant factors for the latter are the poles
of the transfer function, i.e. the zeros of A(z) that are not “canceled”
by zeros of B(z). On the other hand, also the zeros of A(z) that are
canceled by zeros of B(z) are relevant for the system’s dynamics as
they correspond to modes appearing in the free response of the system.

• Even if we restrict attention to the input-output behaviour of the sys-


tem, zeros of A(z) that are canceled by zeros of B(z) are problematic
if those zeros have magnitude greater than or equal to 1. In fact, if as
a consequence of small variations of the parameters, the cancellation is
not perfect, new modes emerge in the input-output behaviour that do
not converge to zero.
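A small numerical check (not part of the notes) makes the danger of Example 3.1 concrete: with y(k) + a y(k−1) = u(k) + a u(k−1) and |a| > 1, the forced response equals u(k), but any nonzero initial condition excites the cancelled mode.

```python
import numpy as np

a = 2.0                      # |a| > 1: the cancelled pole/zero pair is unstable
n = 15
u = np.ones(n)               # any bounded input, e.g. a step

def simulate(y_minus1):
    y = np.zeros(n)
    for k in range(n):
        u_prev = u[k - 1] if k >= 1 else 0.0         # u is causal, so u(-1) = 0
        y_prev = y[k - 1] if k >= 1 else y_minus1
        y[k] = -a * y_prev + u[k] + a * u_prev
    return y

print(simulate(0.0)[:5])     # y(-1) = 0: y(k) = u(k), exactly as H(z) = 1 predicts
print(simulate(0.1)[:5])     # tiny y(-1) != 0: the free response (-a)^(k+1) y(-1) blows up
```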
Chapter 4

Properties of Discrete-Time
Systems

4.1 Stability of Discrete-Time Systems


Definition 4.1 (Asymptotic Stability). An LTI system (Σ) is said to be
asymptotically stable if

\lim_{k\to+\infty} y_l(k) = 0 \quad \forall \ \text{initial conditions } \{y(-1), y(-2), \ldots\},    (4.1)

where y_l(k) is the free response of the system.

As a consequence of (3.3) we have that (Σ) is asymptotically stable if and


only if all the zeros of the polynomial A(z) have magnitude strictly smaller
than 1 i.e. they are inside the unit disk S1 = {z ∈ C : |z| < 1}. In fact,
we can always find initial conditions of y such that the polynomial C(z) is
constant so that it cannot cancel any of the zeros of A(z).

Definition 4.2 (BIBO Stability). An LTI system (Σ) is said to be BIBO (Bounded Input Bounded Output) stable if for any (causal) bounded input signal u(k), the corresponding forced response y_f is bounded, i.e.

\forall u \in \ell^{\infty}_{+}, \quad y_f(k) = h(k) \otimes u(k) \in \ell^{\infty}_{+}.    (4.2)
The forced response y_f(k) = \sum_{l=0}^{k} h(l)u(k-l) = \sum_{l=-\infty}^{+\infty} h(l)u(k-l) is a linear functional of the input u(k). Therefore, BIBO stability may be mathematically formulated as the fact that this functional maps \ell^{\infty} to \ell^{\infty}, i.e. \ell^{\infty} is an invariant subspace of the functional. It can be shown that this is equivalent to the fact that the impulse response of the system is in \ell^{1}, i.e. \sum_{k=0}^{+\infty} |h(k)| < +\infty. The latter condition is clearly equivalent to the fact that all the poles of the transfer function H(z) are inside the unit disk, i.e. |p_i| < 1 for all p_i that are poles of H(z).
As a consequence, we have the following implications:
• (Σ) asymptotically stable =⇒ (Σ) BIBO stable: in fact, the poles of
H(z) are necessarily a subset of the zeros of A(z).
• If the common zeros of A(z) and B(z), if any, have all magnitude less
than 1 then: (Σ) asymptotically stable ⇐⇒ (Σ) BIBO stable.

4.2 Criteria for Stability


As much as in continuous time BIBO-stability and asymptotic stability of an LTI finite-dimensional system can be tested by checking whether or not a certain polynomial is Hurwitz stable (i.e. all its zeros are in the open left half-plane), in discrete time a similar condition holds. In fact, we have seen that the system is asymptotically stable if and only if all the zeros of the polynomial A(z) are strictly inside the unit disk {z ∈ C : |z| < 1}, and the system is BIBO-stable if and only if all the poles of the transfer function H(z) are strictly inside the unit disk {z ∈ C : |z| < 1}. Clearly, if we have a coprime representation of the rational function, namely H(z) = B(z)/A(z) where A(z), B(z) have no common zeros, then H(z) is Schur stable if and only if the polynomial A(z) at the denominator is Schur stable. Therefore the only difference with respect to the continuous-time case is that the region of the complex plane where the zeros must lie is the open unit disk instead of the left half-plane. To check stability of a discrete-time system it would then be useful to have a discrete-time counterpart of the Routh-Hurwitz test, i.e. a test that, given the coefficients of a polynomial, checks whether or not all its zeros are in the open unit disk without computing said zeros. To discuss this issue the following definition comes in handy.
Definition 4.3. A polynomial A(z) is said to be Schur stable if all its zeros are inside the unit disk {z ∈ C : |z| < 1}. A rational function H(z) is said to be Schur stable if all its poles are inside the unit disk {z ∈ C : |z| < 1}.
We now describe two different approaches to check whether or not a given polynomial and/or rational function is Schur stable.

1. Jury test: it is an algebraic test that, given the coefficients of a polyno-


mial, provides a necessary and sufficient condition for the polynomial
to be Schur stable.

2. Bilinear transform: it is a transformation that, given a rational function H(z), provides a new rational function H_c(s) such that H(z) is Schur stable and proper if and only if H_c(s) is Hurwitz stable and proper.

4.2.1 Jury Test


Let A(z) = a_0 z^n + a_1 z^{n-1} + \cdots + a_{n-1} z + a_n = \sum_{k=0}^{n} a_k z^{n-k} and, without loss of generality, assume that a_0 > 0. The following is known as the Jury table associated with A(z):

a_0        a_1        a_2        ...    ...        ...        a_n
a_n        a_{n-1}    a_{n-2}    ...    ...        ...        a_0
b_0        b_1        b_2        ...    ...        b_{n-1}    0
b_{n-1}    b_{n-2}    b_{n-3}    ...    ...        b_0        0
c_0        c_1        c_2        ...    c_{n-2}    0          0
c_{n-2}    c_{n-3}    c_{n-4}    ...    c_0        0          0
...        ...        ...        ...    ...        ...        ...
r_0        0          ...        ...    ...        ...        0

Here, the coefficients b_i are obtained from the coefficients a_i by

b_i := \frac{1}{a_0} \det \begin{bmatrix} a_0 & a_{n-i} \\ a_n & a_i \end{bmatrix}.    (4.3)

The coefficients c_i are obtained from the coefficients b_i by following the same procedure, and so on for the subsequent rows of the table. Notice that from (4.3) we get b_n = 0, as shown in the table. If the first element of a row is zero, then the procedure is blocked and we say that the Jury table cannot be completed.

The following result provides a simple test to check whether or not a given
polynomial is Schur stable.
Theorem 4.1. Consider a polynomial A(z) and the corresponding Jury table built as just shown. Then the polynomial A(z) is Schur stable if and only if the Jury table can be completed and all the coefficients a_0, b_0, c_0, \ldots, r_0 of the first column have the same sign.
Example 4.1. Let us consider the polynomial

A(z) := z^3 + z^2 + z + \frac{1}{2},

and the corresponding Jury table is
1 1 1 1/2
1/2 1 1 1
3/4 1/2 1/2 0
1/2 1/2 3/4 0
5/12 1/6 0 0
1/6 5/12 0 0
7/20 0 0 0
In this case, all the relevant coefficients of the first column of the table
are positive and we can conclude that A(z) is a Schur stable polynomial. By
computing explicitly the zeros of A(z) we get the three zeros p1,2 = −0.1761±
j0.8607, p3 = −0.6487, with magnitudes |p1,2 | = 0.878 and |p3 | = 0.647 which
are, as expected, smaller than 1. ♢
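The recursion that generates the Jury table lends itself to a compact implementation. The following Python sketch (not part of the notes; the helper name jury_test is our own) collects the leading coefficients a_0, b_0, c_0, ..., r_0 and returns the resulting verdict, reproducing the computation of Example 4.1.

```python
import numpy as np

def jury_test(coeffs):
    """coeffs = [a0, a1, ..., an] of A(z) = a0 z^n + ... + an, with a0 > 0."""
    row = np.asarray(coeffs, dtype=float)
    leading = [row[0]]
    while len(row) > 1:
        if row[0] == 0.0:
            return leading, False              # the table cannot be completed
        m = len(row) - 1
        rev = row[::-1]
        row = (row[0] * row[:m] - row[m] * rev[:m]) / row[0]   # next (shorter) row
        leading.append(row[0])
    schur = all(x > 0 for x in leading) or all(x < 0 for x in leading)
    return leading, schur

lead, ok = jury_test([1, 1, 1, 0.5])           # the polynomial of Example 4.1
print(lead)                                    # [1.0, 0.75, 0.4166..., 0.35]
print("Schur stable:", ok)                     # True, in agreement with the table above
```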

4.2.2 Bilinear (or Möbius) Transform


The Möbius transform, that will be denoted by M(·), is a map from C \ {−1} to C defined by:

M : C \ \{-1\} \to C, \qquad z \mapsto M(z) := \frac{z-1}{z+1}.    (4.4)

Notice that this map is injective and its image is C \ {1}. Hence its inverse exists and is given by

M^{-1} : C \ \{1\} \to C, \qquad s \mapsto M^{-1}(s) := \frac{1+s}{1-s}.    (4.5)

The importance of this map rests on the fact that |z| < 1 if and only if ℜ(M(z)) < 0. To prove this fact, let z = a + jb, so that |z| = \sqrt{a^2 + b^2}. We have

M(z) = \frac{z-1}{z+1} = \frac{a^2 - 1 + b^2 + j2b}{(a+1)^2 + b^2},

so that

ℜ(M(z)) = \frac{a^2 - 1 + b^2}{(a+1)^2 + b^2}.
Therefore, we have

• |z| < 1 if and only if ℜ (M (z)) < 0.

• |z| > 1 if and only if ℜ (M (z)) > 0 and M (z) ̸= 1.

• |z| = 1 and z ̸= −1 if and only if ℜ (M (z)) = 0.

• z = −1 if and only if M (z) = ∞.

Figure 4.1: Bilinear (or Möbius) Transform.

By using the bilinear transform, we can analyze the BIBO-stability of a rational function H(z), understood as the transfer function of a discrete-time system.
More precisely, it is possible to prove the following result.

Proposition 4.1. Let H_c(s) := H(M^{-1}(s)). The rational function H(z) is proper and has poles in {z ∈ C : |z| < 1} if and only if the rational function H_c(s) is proper and has poles in {s ∈ C : ℜ[s] < 0}.

Proof. Observe preliminarily that

H(z) is proper ⇔ \lim_{z\to\infty} H(z) < ∞,
H_c(s) is proper ⇔ \lim_{s\to\infty} H_c(s) < ∞,
H(z) has Schur stable poles ⇔ \lim_{z\to\bar{z}} H(z) < ∞ for all \bar{z} such that |\bar{z}| ≥ 1,
H_c(s) has Hurwitz stable poles ⇔ \lim_{s\to\bar{s}} H_c(s) < ∞ for all \bar{s} such that ℜ(\bar{s}) ≥ 0.

Assume now that H(z) is proper and has Schur stable poles. We first prove that H_c(s) is proper, which is equivalent to proving that \lim_{s\to\infty} H_c(s) < ∞. Indeed,

\lim_{s\to\infty} H_c(s) = \lim_{s\to\infty} H(M^{-1}(s)) = \lim_{z\to -1} H(z) < ∞,

where we used the fact that \lim_{s\to\infty} M^{-1}(s) = −1 and that the Schur stability of the poles of H(z) implies that H(−1) is finite. We then prove the Hurwitz stability of the poles of H_c(s). To this aim take \bar{s} ∈ C such that ℜ(\bar{s}) ≥ 0.
We distinguish two cases:
1. If \bar{s} = 1, then

   \lim_{s\to\bar{s}} H_c(s) = \lim_{s\to\bar{s}} H(M^{-1}(s)) = \lim_{z\to\infty} H(z) < ∞,

   where we used the fact that \lim_{s\to 1} M^{-1}(s) = ∞ and that the properness of H(z) implies that H(∞) is finite.

2. If \bar{s} ≠ 1, then we let \bar{z} := M^{-1}(\bar{s}). Observe that |\bar{z}| ≥ 1. Then

   \lim_{s\to\bar{s}} H_c(s) = \lim_{s\to\bar{s}} H(M^{-1}(s)) = \lim_{z\to\bar{z}} H(z) < ∞,

   where we used the fact that, being |\bar{z}| ≥ 1, the Schur stability of the poles of H(z) implies that H(\bar{z}) is finite.
The proof of the converse is similar.
From this result we see that the stability of H(z) can be checked by employing the Routh-Hurwitz test on the denominator of the rational function H_c(s) (see Appendix C.1). The previous proposition can be used to check whether a polynomial P(z) = \sum_{i=0}^{n} p_i z^i is Schur stable.
Corollary 4.1. Let P(z) = \sum_{i=0}^{n} p_i z^i be a polynomial and let

P_c(s) := P(M^{-1}(s))\,(1-s)^n = \sum_{i=0}^{n} p_i (1+s)^i (1-s)^{n-i}.

Then

P(z) is Schur stable and has degree n ⇔ P_c(s) is Hurwitz stable and has degree n.

Proof. Assume first that P(z) is Schur stable and has degree n, namely p_n ≠ 0. Then H(z) := 1/P(z) is proper and has Schur stable poles. By the previous proposition this implies that

H_c(s) := H(M^{-1}(s)) = \frac{(1-s)^n}{\sum_{i=0}^{n} p_i (1+s)^i (1-s)^{n-i}} = \frac{(1-s)^n}{P_c(s)}

is proper and has Hurwitz stable poles. The properness of H_c(s) implies that the degree of P_c(s) must be n. Moreover, since P_c(1) = p_n 2^n ≠ 0, we can argue that the numerator (1-s)^n and the denominator P_c(s) of H_c(s) are coprime and hence the poles of H_c(s) coincide with the roots of P_c(s), which must be Hurwitz stable.
The proof of the converse is similar.

Example 4.2. Consider the same polynomial as in the previous example, namely A(z) := z^3 + z^2 + z + \frac{1}{2}. Then

A_c(s) = (1+s)^3 + (1+s)^2(1-s) + (1+s)(1-s)^2 + \frac{1}{2}(1-s)^3
       = \frac{1}{2}s^3 + \frac{5}{2}s^2 + \frac{3}{2}s + \frac{7}{2} = \frac{1}{2}(s^3 + 5s^2 + 3s + 7),

which has degree 3. We check the Hurwitz stability of A_c(s) by applying the Routh-Hurwitz test to the polynomial s^3 + 5s^2 + 3s + 7. The Routh table is

3 | 1    3
2 | 5    7
1 | 8/5
0 | 7

which proves that A_c(s) is Hurwitz stable and hence that A(z) is Schur stable.
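Corollary 4.1 is also easy to automate. The following Python sketch (not from the notes; the helper bilinear_poly is our own) builds the coefficients of P_c(s) = \sum_i p_i (1+s)^i (1-s)^{n-i} and checks its roots numerically instead of running the Routh table.

```python
import numpy as np

def bilinear_poly(p_desc):
    """Coefficients of Pc(s), given p_desc = [p_n, ..., p_1, p_0] of P(z), highest power first."""
    p_asc = list(p_desc)[::-1]                 # p_asc[i] = p_i
    n = len(p_asc) - 1
    pc = np.zeros(n + 1)
    for i, coef in enumerate(p_asc):
        term = np.array([1.0])
        for _ in range(i):
            term = np.polymul(term, [1.0, 1.0])     # factor (s + 1)
        for _ in range(n - i):
            term = np.polymul(term, [-1.0, 1.0])    # factor (1 - s)
        pc = np.polyadd(pc, coef * term)
    return pc

Ac = bilinear_poly([1.0, 1.0, 1.0, 0.5])       # A(z) = z^3 + z^2 + z + 1/2, as in Example 4.2
print(Ac)                                      # [0.5, 2.5, 1.5, 3.5] = (1/2)(s^3 + 5s^2 + 3s + 7)
print(np.all(np.roots(Ac).real < 0))           # True -> Hurwitz -> A(z) is Schur stable
```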

4.3 Interconnection of discrete time systems


Interconnections between discrete-time systems can be handled exactly as those for continuous-time systems. In fact, in view of linearity and of the convolution result, all the block-diagram representations of continuous-time and discrete-time systems follow exactly the same rules. For example, consider the top diagram of Figure 4.2 whose closed-loop transfer function is, as is well known, W(s) = \frac{C(s)P(s)}{1+C(s)P(s)}. In the discrete-time case, we can consider the same diagram (see the bottom diagram of Figure 4.2) whose closed-loop transfer function can be easily seen to be

W(z) = \frac{C(z)P(z)}{1 + C(z)P(z)},    (4.6)

where the structure of W(z) is obtained by following exactly the same steps as those used to obtain W(s) in continuous time.
Figure 4.2: Feedback interconnections of continuous-time systems (top diagram) and of discrete-time systems (bottom diagram).

Notice that, if we start from transfer functions C(z) and P(z) that are both proper and hence associated with causal systems, it might happen that the closed-loop transfer function is not proper. In the discrete-time case this means that the closed-loop transfer function cannot be associated with a causal system. It can be seen that this happens if and only if

\lim_{z\to\infty} C(z)P(z) = -1,

since in this case \lim_{z\to\infty} W(z) = ∞, showing that the numerator of W(z) has degree larger than its denominator.

4.3.1 Stability of a feedback loop


Consider the block diagram depicted in Figure 4.3, and let C(z) = \frac{N_c(z)}{D_c(z)} and P(z) = \frac{N_p(z)}{D_p(z)} be rational proper transfer functions. The closed-loop transfer function is

W(z) = \frac{kC(z)P(z)}{1 + kC(z)P(z)} = \frac{k N_c(z)N_p(z)}{\underbrace{D_c(z)D_p(z)}_{=:D(z)} + k \underbrace{N_c(z)N_p(z)}_{=:N(z)}}.    (4.7)

Figure 4.3: Negative feedback with a gain k in the control block.

To analyze the stability of W(z) we can resort to the Jury stability criterion or to the bilinear transform. This can be done also in the case in which the parameter k varies and must be selected. An alternative way, that is in some cases easier, resorts to the use of the root locus. To draw the root locus we use the same rules of the continuous-time case (clearly the form of the locus does not depend on the fact that the complex variable is named z instead of s). The important difference, however, is in the interpretation of the result. Indeed, the critical points are no longer those in which the locus intersects the imaginary axis; now they are the intersections between the locus and the unit circle {z : |z|^2 = ℜ(z)^2 + ℑ(z)^2 = 1}. In fact, now the stability region is the one contained inside the unit circle, i.e. {z : |z|^2 = ℜ(z)^2 + ℑ(z)^2 < 1}.
To compute the critical values k_cr of the gain k, we need to solve the equation

k_cr N(e^{jφ_cr}) + D(e^{jφ_cr}) = 0    (4.8)

in the two unknowns k_cr and φ_cr. This is particularly easy when the form of the locus allows us to conclude that z = ±1 are the only critical points.
Example 4.3. Let N(z) = c ∈ R_+ and D(z) = z + \frac{1}{2}. The relative degree of N(z)/D(z) is r = 1, so that the locus has an asymptote lying on the negative real half-line. The (positive) root locus is depicted in Figure 4.5.

Figure 4.4: Root locus (in red) in the z and s domains with the corresponding critical points.

Figure 4.5: Root locus (in red) for the example 4.3.

In this case, equation (4.8) is easy to solve and gives

k_cr\, c + \left(-1 + \frac{1}{2}\right) = 0 \;\Rightarrow\; k_cr = \frac{1}{2c}.

Another case in which the use of the root locus to study the stability of
D(z) + kN (z) is particularly simple is when this polynomial has degree 2.
Indeed, in this case, denoting by z1 , z2 the roots of this polynomial, we can
argue that

D(z) + kN (z) = az 2 + bz + c = a(z − z1 )(z − z2 ) = az 2 − a(z1 + z2 )z + az1 z2



In case z1 , z2 are not real, then z2 = z1∗ and hence the degree zero coefficient
of D(z) + kN (z) is c = az1 z2 = a|z1 |2 . We can argue that the roots of
D(z) + kN (z) are critical (namely |z1 | = 1) if and only if c = a.
Example 4.4. Let N(z) = z − 2 and D(z) = (z − 1)\left(z − \frac{1}{2}\right). The (positive) root locus is depicted in Figure 4.6.

Figure 4.6: Root locus (in red) for the example 4.4.

Since in this case

D(z) + kN(z) = (z - 1)\left(z - \frac{1}{2}\right) + k(z - 2) = z^2 + \left(k - \frac{3}{2}\right)z + \frac{1}{2} - 2k,

the critical point can be obtained by solving the equation

\frac{1}{2} - 2k_{cr} = 1 \;\Rightarrow\; k_{cr} = \frac{1}{4}.

In general, to compute the points where the root locus crosses the unit circumference {z : |z|^2 = ℜ(z)^2 + ℑ(z)^2 = 1} we can use the bilinear transform (4.5) which, as we have seen, bijectively maps the unit circumference deprived of the point −1 to the imaginary axis of the complex plane. Notice that the missing point −1 can be treated separately by using (4.8). By resorting to the bilinear transform, equation (4.8) may be rewritten as

k N(z) + D(z) = 0 \;\Rightarrow\; k N\left(\frac{1+w}{1-w}\right) + D\left(\frac{1+w}{1-w}\right) = 0.    (4.9)

By setting N'(w) := N\left(\frac{1+w}{1-w}\right) and D'(w) := D\left(\frac{1+w}{1-w}\right), we get

k N'(w) + D'(w) = 0,

and we can compute the critical values k_cr of k by using the Routh criterion. Once k_cr has been obtained, we can compute the values jω_cr such that k_cr N'(jω_cr) + D'(jω_cr) = 0, and hence the values of z for which the root locus crosses the unit circumference are easily obtained as z_cr = e^{jφ_cr} = \frac{1 + jω_cr}{1 - jω_cr}.
Alternatively, we can compute the points where the root locus crosses the unit circumference by resorting to the Jury criterion (after constructing a Jury table whose elements are functions of k).
Example 4.5. Consider the closed-loop system depicted in Figure 4.7. Com-
pute the corresponding critical values of the gain k.
Figure 4.7: Closed-loop system of Example 4.5: negative feedback loop with open-loop transfer function \frac{k}{(z-1)(2z-1)}.

Figure 4.8: Root locus (in red) for the example 4.5.

In this case the denominator of the closed-loop transfer function is Q(z) := 2z^2 − 3z + k + 1. By using the bilinear transform, we get

Q\left(\frac{1+w}{1-w}\right) = 2\frac{(1+w)^2}{(1-w)^2} - 3\frac{(1+w)(1-w)}{(1-w)^2} + (k+1)\frac{(1-w)^2}{(1-w)^2},

whose numerator

NQ (w) = (6 + k)w2 + 2(1 − k)w + k,

is a Hurwitz polynomial if and only if 0 < k < 1. Therefore, kcr,1√ = 0 ⇒


wcr,1 = 0 ⇒ zcr,1 = 1 and kcr,2 = 1 ⇒ wcr,2,3 = ±j √17 ⇒ zcr,2,3 = 3±j4 7 . ♢
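A brute-force numerical cross-check (not part of the notes) of Example 4.5: for a few values of k we simply look at the magnitudes of the roots of Q(z) = 2z^2 − 3z + (k + 1).

```python
import numpy as np

for k in [-0.1, 0.0, 0.5, 1.0, 1.1]:
    mags = np.abs(np.roots([2.0, -3.0, k + 1.0]))
    print(f"k = {k:4.1f}  |roots| = {np.round(mags, 3)}  stable: {bool(np.all(mags < 1))}")
# Only 0 < k < 1 gives both roots strictly inside the unit circle, in agreement with
# the critical values k_cr,1 = 0 and k_cr,2 = 1 found above.
```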

4.3.2 Internal stability of an interconnection


BIBO stability of an interconnection is a very weak and sneaky property. Indeed, one is led to think that the behaviour of a BIBO-stable system is docile, but if this stability is achieved thanks to pole-zero cancellations in the instability region, disasters will almost certainly happen in practice. For this reason, when dealing with interconnections a much stronger property is always imposed:
Definition 4.4. Let us consider an interconnection with p blocks (see Figure
4.9) and let Pl (z), l = 1, 2, . . . , p, be the transfer function of the l−th block.
Let us perturb additively the input of each block by adding an auxiliary input
ul , l = 1, 2, . . . , p. Let yl denote the output of the l-th block. The original
interconnection is said to be internally stable if for all i, l = 1, 2, . . . , p, the
overall transfer function Wil (z) from the input ul to the output yi is BIBO-
stable.

Figure 4.9: General interconnection.

If we consider the standard feedback interconnection with a controller block with transfer function C(z) and a process-to-control block with transfer function P(z), shown in Figure 4.10, then we have four transfer functions whose stability needs to be checked, namely

            y(k)                                   u(k)

r(k)    \frac{C(z)P(z)}{1 + C(z)P(z)}       \frac{C(z)}{1 + C(z)P(z)}

n(k)    \frac{P(z)}{1 + C(z)P(z)}           \frac{-C(z)P(z)}{1 + C(z)P(z)}

In the previous table the rows are associated with the inputs while the columns are associated with the outputs of the transfer functions.
Figure 4.10: Feedback interconnection (the disturbance n(k) enters at the input of P(z)).

Example 4.6. Consider the block diagram in Figure 4.10, where C(z) := \frac{z+2}{z-1/2} and P(z) := \frac{z-1/2}{(z+2)(z-1/3)}. Notwithstanding the fact that the transfer function from r to y is BIBO-stable:

\frac{Y(z)}{R(z)} = \frac{1}{z + \frac{2}{3}},

the transfer function from n to y is not stable because of the pole in −2:

\frac{Y(z)}{N(z)} = \frac{z - \frac{1}{2}}{(z+2)\left(z + \frac{2}{3}\right)}.

Therefore, the interconnection is not internally stable and an unstable behaviour would almost surely emerge in practice. ♢

Testing internal stability may be a long process, as the BIBO-stability of p^2 transfer functions must be checked. There is a very interesting case in which this process can be simplified thanks to the following result.

Proposition 4.2. Consider an interconnection made with a single negative feedback loop. Let P_l(z) = \frac{N_l(z)}{D_l(z)}, l = 1, 2, \ldots, p, be the transfer function of the l-th block of the interconnection and assume that N_l(z) and D_l(z) are coprime polynomials for each l = 1, 2, \ldots, p. Then the interconnection is internally stable if and only if

1. \bar{D}(z) := \prod_l N_l(z) + \prod_l D_l(z) is a Schur polynomial (i.e. all its roots are inside the unit circle of the complex plane),

2. deg[\bar{D}(z)] = deg[\prod_l D_l(z)].
Remark 4.1. The previous proposition has an important consequence. Notice that the BIBO stability from one of the inputs to one of the outputs can be lost because of a perturbation of the transfer functions of arbitrarily small size. Take for instance the block diagram in Figure 4.10, with C(z) := \frac{z+2}{z} and P(z) = \frac{1}{2(z+2)}. The transfer function from r to y is W(z) = \frac{1}{2z+1}, which is BIBO stable. However, if we perturb the pole of P(z) so that it becomes \tilde{P}(z) = \frac{1}{2(z+2+\epsilon)}, then the resulting closed-loop transfer function becomes

\tilde{W}(z) = \frac{1 + z/2}{(z + 1/2)(z + 2) + \epsilon z},

which is BIBO stable only when \epsilon = 0. This shows that the BIBO stability is lost no matter how small the perturbation \epsilon is.
It can be shown that internally stable systems do not suffer from this type of fragility. In fact, from Proposition 4.2 it follows that if the feedback interconnection in Figure 4.10 is such that N_C(z)N_P(z) + D_C(z)D_P(z) is Schur stable and deg[N_C(z)N_P(z) + D_C(z)D_P(z)] = deg[D_C(z)D_P(z)], then these two properties continue to hold true for perturbed versions of C(z) and P(z), under the condition that the perturbation size is small enough.
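The check of Proposition 4.2 is easy to script. The following Python sketch (not part of the notes) applies it to the loop of Example 4.6, using NumPy polynomial arithmetic.

```python
import numpy as np

Nc, Dc = [1.0, 2.0], [1.0, -0.5]                       # C(z) = (z+2)/(z-1/2)
Np = [1.0, -0.5]                                       # numerator of P(z)
Dp = np.polymul([1.0, 2.0], [1.0, -1.0/3.0])           # denominator (z+2)(z-1/3)

Dbar = np.polyadd(np.polymul(Nc, Np), np.polymul(Dc, Dp))
deg_ok = len(np.trim_zeros(Dbar, 'f')) == len(np.trim_zeros(np.polymul(Dc, Dp), 'f'))
schur_ok = bool(np.all(np.abs(np.roots(Dbar)) < 1))
print("roots of Dbar:", np.roots(Dbar))                # contains z = -2: the cancelled pole is unstable
print("internally stable:", deg_ok and schur_ok)       # False, as concluded in Example 4.6
```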

4.4 Frequency response


One of the key tools for the design of continuous-time controllers and for
the closed-loop stability analysis are the frequency response and its graphical
representations, i.e. the Bode plots and the Nyquist plot.
In discrete time similar results hold. Let G(z) be the transfer function of an LTI system and fix a positive frequency ϑ_0. Consider the input

u(k) = u_0 \cos(k ϑ_0 + Ψ_0), \qquad u_0 > 0,

and we want to determine the corresponding forced output. Observe preliminarily that the frequency ϑ_0 has to belong to the finite interval [0, π]. Indeed, it is easy to see that \cos(kϑ'_0 + Ψ_0) = \cos(kϑ''_0 + Ψ_0) if ϑ'_0 − ϑ''_0 is a multiple of 2π.
If the system is BIBO-stable then the corresponding forced output tends to become a pure oscillation with the same frequency ϑ_0 of the input:

y(k) \xrightarrow{k\to\infty} y_0 \cos(kϑ_0 + χ_0),    (4.10)

where

y_0 = |G(e^{jϑ_0})|\, u_0,    (4.11)
χ_0 = Ψ_0 + \arg\left(G(e^{jϑ_0})\right).    (4.12)
Similarly to the continuous-time case we set

M(ϑ_0) := |G(e^{jϑ_0})|, \qquad φ(ϑ_0) := \arg\left(G(e^{jϑ_0})\right),

and the function

G(e^{jϑ}) = M(ϑ)\, e^{jφ(ϑ)}    (4.13)

is called the frequency response of the system.
If the system is asymptotically stable (instead of only BIBO-stable) then the result holds for the whole output of the system (with arbitrary initial conditions) instead of the sole forced output.
We prove (4.10). Observe that

u(k) = \frac{u_0}{2} e^{j(ϑ_0 k + Ψ_0)} + \frac{u_0}{2} e^{-j(ϑ_0 k + Ψ_0)} = \frac{u_0 e^{jΨ_0}}{2} e^{jϑ_0 k} + \frac{u_0 e^{-jΨ_0}}{2} e^{-jϑ_0 k}.

Then the Z-transform U(z) of u(k) is

U(z) = \frac{u_0 e^{jΨ_0}}{2} \frac{z}{z - e^{jϑ_0}} + \frac{u_0 e^{-jΨ_0}}{2} \frac{z}{z - e^{-jϑ_0}},

and the Z-transform Y(z) of the corresponding forced response is

Y(z) = G(z)U(z).

As usual, in order to obtain the inverse Z-transform of Y(z), we need to find the partial fraction decomposition of Y(z)/z. Notice that the potential poles
of Y(z)/z are the poles p_i of G(z)/z, which are inside the stable unit disk, and the poles of U(z), which are e^{±jϑ_0}. Hence

\frac{Y(z)}{z} = \frac{A}{z - e^{jϑ_0}} + \frac{B}{z - e^{-jϑ_0}} + \sum_i \sum_j \frac{C_{ij}}{(z - p_i)^j}.

Observe that B = A^*. Then

Y(z) = \frac{Az}{z - e^{jϑ_0}} + \frac{A^* z}{z - e^{-jϑ_0}} + \sum_i \sum_j \frac{C_{ij}\, z}{(z - p_i)^j},

and hence

y(k) = A e^{jϑ_0 k} + A^* e^{-jϑ_0 k} + (\text{signals converging to } 0)
     = 2ℜ[A e^{jϑ_0 k}] + (\text{signals converging to } 0)
     = 2ℜ[|A| e^{j\arg(A)} e^{jϑ_0 k}] + (\text{signals converging to } 0)
     = 2|A| \cos(ϑ_0 k + \arg(A)) + (\text{signals converging to } 0),

where we did not compute the inverse Z-transforms of the exponential signals associated with the poles p_i, since it is enough to know that they all converge to zero due to the BIBO stability of G(z). It remains to obtain the residual A, that is
A = \frac{Y(z)}{z}\,(z - e^{jϑ_0})\Big|_{z=e^{jϑ_0}}
  = \frac{G(z)}{z}\left[\frac{u_0 e^{jΨ_0}}{2}\frac{z}{z - e^{jϑ_0}} + \frac{u_0 e^{-jΨ_0}}{2}\frac{z}{z - e^{-jϑ_0}}\right](z - e^{jϑ_0})\Big|_{z=e^{jϑ_0}}
  = \left[G(z)\,\frac{u_0 e^{jΨ_0}}{2}\right]_{z=e^{jϑ_0}} + \left[G(z)\,\frac{u_0 e^{-jΨ_0}}{2}\,\frac{z - e^{jϑ_0}}{z - e^{-jϑ_0}}\right]_{z=e^{jϑ_0}}
  = \frac{u_0 e^{jΨ_0}}{2}\, G(e^{jϑ_0}).
Since

|A| = |G(e^{jϑ_0})|\,\frac{u_0}{2}, \qquad \arg(A) = \arg(G(e^{jϑ_0})) + Ψ_0,

we obtain that

y(k) = |G(e^{jϑ_0})|\, u_0 \cos\left(ϑ_0 k + Ψ_0 + \arg(G(e^{jϑ_0}))\right) + (\text{signals converging to } 0),

which coincides with (4.10).



Remark 4.2. Since in (4.13) ϑ enters G(·) via the periodic function e^{jϑ}, the frequency response is clearly a periodic function of period 2π. Moreover, it is also symmetric: G(e^{-jϑ}) = G(e^{jϑ})^*. Therefore, we only need to study the function in the interval [0, π].
The graphs of M(ϑ) and φ(ϑ) in the frequency interval [0, π] are the discrete-time Bode plots and are analogous to the continuous-time ones.
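In practice the frequency response can be evaluated by direct substitution z = e^{jϑ}. A minimal Python sketch (not from the notes; the transfer function is an arbitrary stable example) is the following.

```python
import numpy as np

num, den = [1.0, 0.0], [1.0, -0.8]            # G(z) = z/(z - 0.8)
theta = np.linspace(0.0, np.pi, 512)          # frequency grid on [0, pi]
z = np.exp(1j * theta)
G = np.polyval(num, z) / np.polyval(den, z)   # G(e^{j theta})

M, phi = np.abs(G), np.angle(G)               # magnitude and phase of the frequency response
print(M[0], phi[0])                           # at theta = 0: G(1) = 1/0.2 = 5, phase 0
```

Plotting M(ϑ) (possibly in dB) and φ(ϑ) against ϑ gives the discrete-time Bode plots discussed above.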

4.5 Nyquist plot and Nyquist criterion


Consider the feedback interconnection (4.10) and let G(z) := C(z)P (z). As
in the continuous-time case, the Nyquist plot is the representation in the
complex plane of the parametric curve G(ejϑ ) (parametrized in ϑ):

G(ejϑ ) = ℜ(G(ejϑ )) + jℑ(G(ejϑ )), ϑ ∈ [0, π]. (4.14)

The Nyquist criterion holds verbatim in the discrete-time case:

Proposition 4.3 (Nyquist Criterion). Consider a discrete-time LTI system corresponding to a negative feedback loop where the open-loop transfer function is G(z). Assume that the poles of G(z) all have magnitude ≠ 1 and that the Nyquist plot of G(e^{jϑ}) does not cross the point −1.
Let

• N : number of times that the Nyquist contour of G(ejϑ ) encircles clock-


wise the point −1 + j0;

• P : number of poles of G(z) having magnitude larger than 1 (counted


with multiplicity).

Then the closed loop system is BIBO-stable if and only if N = −P .

The proof of this result may be obtained by invoking the corresponding continuous-time result (the classical Nyquist criterion). In fact, by using the bilinear transform, we can set \hat{G}(w) := G(z)\big|_{z=\frac{1+w}{1-w}}, so that the discrete Nyquist plot of G(e^{jϑ}) coincides with the classical Nyquist plot of \hat{G}(jω).

Corollary 4.2. Under the assumptions of Proposition 4.3, if we have P = 0


the closed loop system is BIBO-stable if and only if N = 0.
Chapter 5

Interconnections of
continuous-time and
discrete-time systems

5.1 The sampler and the interpolator


We consider in this section operators that translate a continuous-time signal into a discrete-time one and vice versa. Let C = {f(t), t ∈ R} be the set of all continuous-time signals and D = {f̃(k), k ∈ Z} be the set of all discrete-time signals. Given T > 0 it is possible to build an operator from C to D mapping the continuous-time signal f(t) into the discrete-time signal f̃(k),

S_T : C \longrightarrow D, \qquad f(t) \mapsto f̃(k),

by letting
f˜(k) := f (kT ), ∀k ∈ Z.
This operator is called Sampler and it is denoted by the symbol ST so that
we can write
f˜(k) = ST [f (t)]
Figure 5.1 shows its representation in the block diagrams. According to this
sampling method the sampling time T is assumed to be constant. It is clear
that sampling is a linear operator.

Figure 5.1: Sampling block.

Vice versa, an interpolator is an operator mapping discrete-time signals into continuous-time signals,

Int : D \longrightarrow C, \qquad f̃(k) \mapsto f̂(t).

In this case we will write

f̂(t) = Int[f̃(k)].

There are many ways to build a continuous-time signal from a discrete-time one. However, if we limit ourselves to linear and time-invariant operators, then the characterisation of all possible interpolators becomes easier. The definition of interpolator linearity is clear. Time invariance means that there exists T > 0 such that, if Int[f̃(k)] = f̂(t), then Int[f̃(k − 1)] = f̂(t − T) and hence, more in general, Int[f̃(k − ℓ)] = f̂(t − ℓT). In this case we say that the interpolator is time invariant of period T.
Now, if Int is any linear, period-T time-invariant interpolator, let h(t) be its impulse response, namely the image of the discrete impulse signal:

h(t) = Int[δ(k)].

Then for any discrete-time signal f̃(k) we have that

Int[f̃(k)] = Int\left[\sum_{ℓ=-\infty}^{+\infty} f̃(ℓ)\, δ(k - ℓ)\right] = \sum_{ℓ=-\infty}^{+\infty} f̃(ℓ)\, Int[δ(k - ℓ)] = \sum_{ℓ=-\infty}^{+\infty} f̃(ℓ)\, h(t - ℓT).

Hence a time-invariant interpolator is determined by its period T and its impulse response h(t), and we will use the notation Int_T^h. A linear, period-T time-invariant interpolator is said to be causal if the value of f̂(t) depends only on f̃(k) for k such that kT ≤ t. This occurs if and only if h(t) = 0 for all t < 0.
We see now some examples of linear, period T time invariant interpolators.

1. The Impulsive interpolator corresponds to the choice h(t) = δ(t), so that f̂(t) = Int_T^h[f̃(k)] has the following form:

   f̂(t) = \sum_{k=-\infty}^{+\infty} f̃(k)\, δ(t - kT).

2. The Zero-order holder interpolator (ZOH) corresponds to the choice h(t) = Rect_T(t), where

   Rect_T(t) := \begin{cases} 1 & \text{if } 0 ≤ t < T \\ 0 & \text{otherwise} \end{cases}

   is a signal that is always zero except on the interval [0, T), where it is equal to one. Hence f̂(t) = Int_T^h[f̃(k)] has the following form:

   f̂(t) = \sum_{k=-\infty}^{+\infty} f̃(k)\, Rect_T(t - kT).    (5.1)

   This is the most typical choice for the interpolator interface in control architectures, usually denoted by a block H_0. If the discrete-time signal f̃(k) is the input of a zero-order holder, the corresponding output is a piecewise-constant continuous-time signal f̂(t) that takes the value f̂(t) = f̃(k) for all t ∈ [kT, (k+1)T). Notice that the zero-order holder is a causal interpolator.

3. The One-holder interpolator corresponds to the choice h(t) = Trian_T(t), where

   Trian_T(t) := \begin{cases} 1 + t/T & \text{if } -T ≤ t ≤ 0 \\ 1 - t/T & \text{if } 0 ≤ t ≤ T \\ 0 & \text{otherwise.} \end{cases}

   In this case the interpolated signal is continuous and piecewise affine, namely it is affine in any interval [kT, (k+1)T]. Notice that the one-holder is not a causal interpolator. A delayed version of the one-holder, namely the one in which we take h(t) = Trian_T(t − T), is causal.
4. The Shannon interpolator corresponds to the choice h(t) = sinc_T(t), where

   sinc_T(t) := \frac{\sin\left(\pi \frac{t}{T}\right)}{\pi \frac{t}{T}}.

   We will see that this interpolator plays an important role for the reconstruction of finite-bandwidth signals from their sampled versions. This is particularly relevant in telecommunication applications. Notice that the Shannon interpolator is not causal and that no delayed version of this interpolator is causal.

Notice that any causal interpolator Int_T^h can be built as the series of the impulsive interpolator and a continuous-time system with transfer function H(s) = L[h(t)].
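A small Python sketch (not part of the notes; the test signal, sampling time and horizon are arbitrary) of the sampler S_T and of the zero-order holder defined above:

```python
import numpy as np

T = 0.5
f = lambda t: np.sin(2 * np.pi * 0.4 * t)             # an arbitrary continuous-time signal

# Sampler S_T: f~(k) = f(kT)
f_tilde = np.array([f(k * T) for k in range(9)])

# Zero-order holder: f_hat(t) = f~(k) for t in [kT, (k+1)T)
def zoh(f_tilde, T, t):
    k = np.clip((t // T).astype(int), 0, len(f_tilde) - 1)
    return f_tilde[k]

t = np.linspace(0.0, 4.0, 801)                         # fine "continuous" time grid
f_hat = zoh(f_tilde, T, t)                             # piecewise-constant reconstruction of f
```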
It is clear that the sampler is not an injective operator and hence it is not possible to reconstruct the continuous-time signal from its sampled version, because sampling always causes an information loss. This is clarified by the following example, which considers a particularly important case.
Example 5.1. Take f_1(t) = A\cos(ω_1 t + ϕ) and f_2(t) = A\cos(ω_2 t + ϕ). It is easy to see that if ω_1 − ω_2 = \frac{2π}{T} h for some h ∈ Z, then S_T[f_1(t)] = S_T[f_2(t)]. Indeed,

f̃_1(k) = A\cos(ω_1 T k + ϕ) = A\cos\left(\left(ω_2 + \frac{2π}{T}h\right)T k + ϕ\right) = A\cos(ω_2 T k + 2πhk + ϕ) = A\cos(ω_2 T k + ϕ) = f̃_2(k).

From the previous example we see that a sinusoidal signal with frequency ω belonging to the low-frequency interval [−\frac{π}{T}, \frac{π}{T}] yields, after sampling, the same signal as the sinusoidal signal with the possibly high frequency ω + \frac{2π}{T}h, where h ∈ Z. In other words, the high-frequency signal is translated by the sampling operation into a low-frequency signal. This is called the aliasing effect.
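A quick numerical illustration (not from the notes) of Example 5.1: two sinusoids whose frequencies differ by 2π/T produce identical samples.

```python
import numpy as np

T = 0.1
k = np.arange(20)
w1 = 3.0
w2 = w1 + 2 * np.pi / T          # aliased frequency (h = 1)
s1 = np.cos(w1 * T * k)
s2 = np.cos(w2 * T * k)
print(np.allclose(s1, s2))       # True: after sampling the two signals are indistinguishable
```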
To prevent this lack of injectivity we need to restrict the domain of the operator S_T, that is, the set of continuous-time signals, as shown in the following example.
Example 5.2. Let

C := \left\{ f(t) : f(t) = A\cos(ωt + ϕ), \text{ for some } A, ϕ ∈ R \text{ and } ω ∈ \left]-\frac{π}{T}, \frac{π}{T}\right] \right\}

be the set of all sinusoidal signals with frequency belonging to ]−\frac{π}{T}, \frac{π}{T}]. It can be seen that the sampling operator S_T restricted to this set is injective.
The well-known theory by Shannon is based on the intuition provided by the previous two examples, extending it to superpositions of sinusoidal signals, namely to signals admitting a Fourier transform. The Fourier transform of a continuous-time signal f(t) is a function \mathcal{F}(ω) of ω ∈ R defined as follows:

\mathcal{F}(ω) := \int_{-\infty}^{+\infty} f(t)\, e^{-jωt}\, dt.

The signal f(t) can be obtained back from its Fourier transform through the formula

f(t) := \frac{1}{2π} \int_{-\infty}^{+\infty} \mathcal{F}(ω)\, e^{jωt}\, dω,

which is called the inverse Fourier transform. We can interpret the previous formula by saying that the signal f(t) is the superposition (as a sum or an integral) of complex exponentials e^{jωt}, each having amplitude \mathcal{F}(ω). Notice that, in case f(t) is causal and its (unilateral) Laplace transform F(s) is well defined on the imaginary axis, then the Fourier transform of f(t) coincides with its Laplace transform evaluated on the imaginary axis, namely

\mathcal{F}(ω) = F(s)|_{s=jω}.

Theorem 5.1 (Shannon). Let ω̄ > 0 and define the set of signals with bandwidth less than ω̄, namely

C_{ω̄} := \{ f(t) : \mathcal{F}(ω) = 0 \ \forall ω \text{ such that } |ω| > ω̄ \}.    (5.2)

Let moreover

Ω := \frac{2π}{T}

be the sampling frequency of the sampler S_T. For any ω̄ < Ω/2 the sampling operator S_T restricted to C_{ω̄} is injective.

We see that in the previous theorem the sampling frequency Ω plays a crucial role. Indeed, the frequency Ω/2 is known as the Nyquist frequency. Since the sampling operator S_T restricted to C_{ω̄} is injective, it should be possible to reconstruct f(t) ∈ C_{ω̄} from its sampled version f̃(k) = S_T[f(t)]. In fact, it can be proved that f(t) can be obtained by applying the Shannon interpolator to f̃(k).

Remark 5.1. We show now that it is possible, in principle, to choose other subsets of the continuous-time signals C in order to obtain that the sampler becomes injective when restricted to them. The general principle stems from the following question:

We know that in principle it is not possible to reconstruct a continuous-time signal from its sampled version. But is it possible instead to reconstruct a discrete-time signal from its interpolation? In other words, is an interpolator always an injective operator?

Observe that an interpolator Int_T^h is injective if its impulse response h(t) is such that its sampled version is the discrete-time delta, namely S_T[h(t)] = δ(k). Indeed, it is easy to see that in this case, if f̂(t) = Int_T^h[f̃(k)], then S_T[f̂(t)] = f̃(k), namely sampling the interpolation of f̃(k) gives back f̃(k). The interpolators with this property will be called 0-normal. More in general, an interpolator Int_T^h whose sampled impulse response is a delayed version of the delta, namely S_T[h(t)] = δ(k − k̄), is injective, since in this case, if f̂(t) = Int_T^h[f̃(k)], then S_T[f̂(t)] = f̃(k − k̄), namely sampling the interpolation of f̃(k) gives a time translation of f̃(k). The interpolators with this property will be called k̄-normal.
This has the following consequence. If we denote by R_h the range of a 0-normal interpolator Int_T^h, that is, the subset of continuous-time signals

R_h := \{ f̂(t) = Int_T^h[f̃(k)] ∈ C \mid f̃(k) \text{ varies over all discrete-time signals} \},

then sampling restricted to R_h, namely S_T : R_h → D, is surjective and injective, hence invertible, and the interpolator Int_T^h is its inverse. In other words, if we know that the continuous-time signal f(t) belongs to R_h, then sampling it will not cause any information loss, because the interpolator Int_T^h is able to reconstruct it correctly.
From the previous arguments we can reinterpret the Shannon theorem by saying simply that, for the Shannon interpolator, R_h coincides with the set of finite-bandwidth signals defined in (5.2).

5.2 Shannon sampling theory


For completeness we will recall here the arguments yielding the Shannon
theorem.

Let f(t) be a causal continuous-time signal and consider the signal

f_δ(t) = \sum_{k=0}^{+\infty} f(t)\, δ(t - kT) = \sum_{k=0}^{+\infty} f(kT)\, δ(t - kT).    (5.3)

The signal f_δ(t) can be viewed as the modulation of the input signal f(t) with a Dirac comb carrier signal, i.e. a train of pulses of the form

φ(t) = \sum_{k=-\infty}^{+\infty} δ(t - kT),

namely we have that

f_δ(t) = φ(t) f(t).    (5.4)

Observe that f_δ(t) contains the same information carried by f̃(k) = S_T[f(t)]. Indeed, f_δ(t) can be obtained from f̃(k) by (5.3), and vice versa f̃(k) can be obtained from f_δ(t), for example by taking the following integrals for all k:

f̃(k) = \int_{kT - T/2}^{kT + T/2} f_δ(τ)\, dτ.    (5.5)

We will see that f_δ provides a powerful tool to connect the Laplace transform of f(t) with the Z-transform of f̃(k).
Remark 5.2. It is clear that the continuous-time signal fδ (t) and the discrete-
time signal f˜(k) = f (kT ) contain exactly the same information: we can
obtain one from the other by using (5.3) or (5.5). Therefore fδ is an intu-
itive and mathematically consistent way of representing the information of a
discrete-time signal by using a continuous-time one.
Remark 5.3. Notice that technically the Dirac "delta function" is not a function but a distribution, defined by the following integral action on the test functions f (i.e. infinitely differentiable functions having compact support):

\int_{-\infty}^{+\infty} δ(t - τ)\, f(τ)\, dτ = f(t).

Remark 5.4. Observe that the signal f_δ(t) can be seen as the limit of a Pulse Amplitude Modulation (PAM) f_h(t) of f(t), using as carrier signal the periodic repetition, with period T, of a box signal having height 1/h and support [−h/2, h/2], with h < T, namely

φ_h(t) = \sum_{k=-\infty}^{+\infty} δ_h(t - kT),

where δ_h(t) = 1/h for t ∈ [−h/2, h/2] and δ_h(t) = 0 otherwise. In this way we have

f_h(t) := f(t)\, φ_h(t),

so that f_h(t) can be understood as a signal that is equal to the input scaled by the factor 1/h in each of the intervals [kT − h/2, kT + h/2], k ∈ Z, and is zero outside these intervals. Therefore, for all k ∈ Z, \int_{kT - T/2}^{kT + T/2} f_h(t)\, dt is equal to the average value of f(t) on the interval [kT − h/2, kT + h/2]. This value converges to f(kT) as h → 0. In this sense we can say that δ_h(t) converges to the Dirac delta δ(t) and hence that f_h(t) converges to f_δ(t).
We want now to obtain the Laplace transform of f_δ(t) and to see how it is connected with the Z-transform of f̃(k). Observe that

L[δ(t)] = 1, \qquad L[δ(t - kT)] = e^{-kTs} \quad \text{for all } k ≥ 0.

Then we have

F_δ(s) := L[f_δ(t)] = \sum_{k=0}^{\infty} f(kT)\, e^{-kTs} = \left[\sum_{k=0}^{\infty} f(kT)\, z^{-k}\right]_{z:=e^{Ts}} = \left[\mathcal{Z}[f̃(k)]\right]_{z:=e^{Ts}}.    (5.6)

In other words, the Laplace transform of f_δ(t) is the Z-transform of f̃(k) = f(kT) evaluated at z = e^{Ts}.
Remark 5.5. To analyze the map z = e^{sT}, let us consider s = σ + jω. First of all we observe that e^{(s + j2lπ/T)T} = e^{sT} e^{j2lπ} = e^{sT} for all l ∈ Z. In other words, the map z = e^{sT} is periodic of period Ω := 2π/T along any vertical axis of the complex plane. Therefore, to study the map z = e^{sT} we can restrict attention to the primary stripe

S := \left\{ s : s = σ + jω, \ -\frac{Ω}{2} < ω ≤ \frac{Ω}{2} \right\}.

It is easy to see that, once restricted to S, the map z = e^{sT} is injective, i.e. if s_1, s_2 ∈ S and s_1 ≠ s_2 then e^{s_1 T} ≠ e^{s_2 T}. Moreover, the image of the map is
the whole complex plane except for the origin, i.e. for all z̄ ∈ C \ {0} there exists s̄ ∈ S such that z̄ = e^{s̄T}. The origin z̄ = 0 can be reached as the image of the map only in the limit for s tending to −∞. More precisely, we have

\lim_{ℜ(s)\to-\infty} e^{sT} = 0.

Figure 5.2: Correspondence between some points on the primary stripe and their images according to the map e^{sT}.

It is easy to check that the intersection of S with the left half complex plane is mapped to the open unit disk, the intersection of S with the imaginary axis is mapped to the unit circle S^1, and the intersection of S with the right half complex plane is mapped to the open region outside the unit circle, as depicted in Figure 5.2.
Next we analyze in greater detail the connections between F (s) = L[f (t)],
Fδ (s) = L[fδ (t)] and F̃ (z) = Z[f˜(k)]. In particular, we look for an explicit
connection between a continuous-time causal signal f (t) (or its Laplace trans-
form F (s)) and the Laplace transform Fδ (s) of the corresponding modulated
signal fδ (t). In this way we can complete the following diagram.

f(t)    <-- L / L^{-1} -->   F(s)
  |  ·φ(t)                      ?
f_δ(t)  <-- L / L^{-1} -->   F_δ(s)
  |  ≈ (S_T)                    |  z = e^{sT}
f̃(k)    <-- Z / Z^{-1} -->   F̃(z)

Recall that f_δ(t) = f(t)φ(t), where φ(t) := \sum_{k=-\infty}^{+\infty} δ(t - kT). Since φ(t) is periodic of period T, it can be written as a Fourier series as

φ(t) = \sum_{k=-\infty}^{+\infty} a_k e^{jkΩt},    (5.7)

where Ω = \frac{2π}{T} and

a_k = \frac{1}{T} \int_{-T/2}^{T/2} φ(t)\, e^{-jkΩt}\, dt = \frac{1}{T} \int_{-T/2}^{T/2} δ(t)\, e^{-jkΩt}\, dt = \frac{1}{T}.

Notice that a_k is independent of k. By plugging these values in (5.7), we get

φ(t) = \frac{1}{T} \sum_{k=-\infty}^{+\infty} e^{jkΩt}.    (5.8)

By taking the latter into account, (5.4) becomes

f_δ(t) = \sum_{k=0}^{\infty} f(t)\, δ(t - kT) = f(t)φ(t) = \frac{1}{T} \sum_{k=-\infty}^{+\infty} f(t)\, e^{jkΩt}.    (5.9)

We can now compute the Laplace transform of both sides of (5.9). We get:

F_δ(s) := L[f_δ(t)] = \frac{1}{T} \sum_{k=-\infty}^{+\infty} F(s - jkΩ).    (5.10)

We can now connect the previous reasoning with the Shannon theorem. Indeed, recall that the Fourier transforms coincide with the Laplace transforms evaluated on the imaginary axis, namely

\mathcal{F}(ω) = F(jω), \qquad \mathcal{F}_δ(ω) = F_δ(jω),

and hence from (5.10) we obtain that

\mathcal{F}_δ(ω) = \frac{1}{T} \sum_{k=-\infty}^{+\infty} \mathcal{F}(ω - kΩ).

Therefore, \mathcal{F}_δ(ω) is periodic of period Ω and it is obtained by periodic repetition of \mathcal{F}(ω). For this reason the conditions imposed by the Shannon theorem on the support of \mathcal{F}(ω) allow the reconstruction of the original signal from the sampled one by means of a low-pass filter. This is better clarified by Figure 5.3.
When the conditions of the Shannon theorem are satisfied, f(t) can be reconstructed from f_δ(t) simply by means of a convolution. Namely, by taking a signal a_0(t) such that the corresponding Fourier transform A_0(ω) is an ideal low-pass filter,

A_0(ω) = \begin{cases} T & \text{if } |ω| ≤ Ω/2, \\ 0 & \text{otherwise,} \end{cases}    (5.11)

we have

A_0(ω)\,\mathcal{F}_δ(ω) = \mathcal{F}(ω).

Since the Fourier transform, like the Laplace transform, maps convolution in the time domain into multiplication in the frequency domain,
Figure 5.3: The solid line represents |F(jω)| for a band-limited signal (with band ω_B smaller than Ω/2). The dashed line represents |F(jω)| for a signal with unlimited band: we see that in this case the multiple copies of the signal generated by the sampling (corresponding to periodic repetition in the frequency domain) interfere with one another, so that the original signal F(jω) can no longer be recovered from the periodic repetition.

then we can conclude that

f(t) = a_0(t) * f_δ(t) = a_0(t) * \left( \sum_{k=-\infty}^{+\infty} f̃(k)\, δ(t - kT) \right) = \sum_{k=-\infty}^{+\infty} f̃(k)\, (a_0(t) * δ(t - kT)) = \sum_{k=-\infty}^{+\infty} f̃(k)\, a_0(t - kT).

In this way we see that the fact that the Shannon interpolator is able to reconstruct the continuous-time signal is simply a consequence of the fact that the inverse Fourier transform a_0(t) of the A_0(ω) defined in (5.11) is

a_0(t) = sinc_T(t) = \frac{\sin\left(\pi \frac{t}{T}\right)}{\pi \frac{t}{T}}.

5.3 Anti-Aliasing Filters for control


In telecommunication it is crucial to be able to reconstruct the original signal
from its sampled version as precisely as possible. Hence, in order to minimize
the reconstruction error, it is convenient to preprocess the continuous-time
signal f (t) by filtering it through a so called Anti-Aliasing filter. This is

a low-pass filter designed in order to reduce the effects of aliasing. The anti-aliasing filter is a convolution filter, namely a map

f(t) \mapsto \hat{f}(t) := a(t) * f(t),

where a(t) is chosen so that its Fourier transform A(ω) is the ideal low-pass filter, namely a perfect "box":

A(ω) := \begin{cases} 1 & |ω| ≤ Ω/2 \\ 0 & |ω| > Ω/2. \end{cases}

In this way, in the frequency domain, the anti-aliasing filter operates through a multiplication, namely

\mathcal{F}(ω) \mapsto \hat{\mathcal{F}}(ω) := A(ω)\mathcal{F}(ω),

which clearly provides a filtered signal satisfying the hypothesis of the Shannon theorem.
In control, however, this approach is not the right one for a number of
reasons.
1. This ideal anti-aliasing filter is not causal and hence, in order to obtain \hat{f}(t), we need to know the signal f(τ) for all times τ, even for τ > t, namely at time instants that are in the future.

2. This ideal anti-aliasing filter cannot be realized by any physical device. In fact, from electronic devices constituted by operational amplifiers, resistors, capacitors and inductors we can obtain only rational filters, namely a(t) that are inverse Laplace transforms of rational functions A(s).

3. The control objective is different from the typical objective that is pursued in telecommunications. Indeed, in control we are not worried about the accuracy of the reconstruction from the sampled signal, but rather about satisfying the control specifications. The most important negative impact that aliasing might have in control consists in the possibility of translating high-frequency disturbances into much more dangerous low-frequency disturbances.
Considering these facts, we propose now a method for designing an anti-aliasing filter suitable to our goals. Roughly speaking, this filter has to provide a significant reduction of the high-frequency components of the signals while introducing only a limited delay, since such a delay reduces the system phase margin and hence negatively affects the closed-loop transient behavior, or may even destabilize the system. Therefore, when designing the control system we need to take into account the presence of the A.A. filter in the control loop. Usually a first-order low-pass (Bessel) filter or, better, a second-order (Butterworth) filter is used.

Figure 5.4: Continuous-time controlled system.

More precisely, assume that we have the continuous-time system in Figure 5.4, which is controlled in closed loop. We assume that P(s) is the transfer function of the physical system that we want to control and that C(s) is the transfer function of the controller. We assume that the controller has been designed to satisfy some specifications in steady state and in transient. The steady-state specifications require that the closed-loop system is able to track asymptotically a step or a ramp or a parabolic ramp with a bounded asymptotic error. The transient specifications describe the admissible behavior of the step response in terms of rise time and overshoot.
According to the frequency-domain method (or Bode method), the steady-state specifications are translated into the number of poles at the origin and into the Bode gain of the open-loop transfer function L(s) := C(s)P(s) (see Section 6.1). The transient specifications are translated into prescriptions in the frequency domain, namely:

1. The rise time of the step response is translated into a prescribed crossover
frequency ωc∗ of L(s).

2. The overshoot of the step response is translated into a minimum phase


margin m∗φ of L(s).

We recall that the crossover frequency ω_c of L(s) is defined to be the frequency such that

|L(jω_c)| = 1,

while the phase margin m_φ of L(s) is defined as

m_φ := π + \arg(L(jω_c)),

where we recall that \arg(a) is the phase of the complex number a. Hence the transient specifications in the frequency domain are translated into the following mathematical constraints on L(s):

|L(jω_c^*)| = 1, \qquad m_φ = π + \arg(L(jω_c^*)) ≥ m_φ^*.

Notice that the previous conditions are easily checkable using the Bode plots of L(s).
Assume that we also have a sinusoidal disturbance d(t) = A\cos(ωt + ϕ) entering the system as depicted in Figure 5.4, and assume that we want to translate the previous controller into a digital controller. We are aware of the fact that, even if the frequency ω of the disturbance is high, due to the aliasing effect it could have an effect at low frequencies, a fact that can be very detrimental to the behavior of the digitally controlled system. For this reason, as shown in Figure 5.5, it is convenient to add an anti-aliasing filter A(s) in the loop.

Figure 5.5: Digitally controlled system with an anti-aliasing filter.

We need to design A(s) in such a way that:

1. The disturbance d(t) is attenuated at least a times by this filter, where a > 0, for all frequencies ω ≥ Ω/2, where Ω = 2π/T.

2. The crossover frequency ω_c' of the new open-loop transfer function L'(s) = C(s)P(s)A(s) is equal to the prescribed one, ω_c' = ω_c^*.

3. The phase margin m_φ' of the new open-loop transfer function L'(s) = C(s)P(s)A(s) is greater than or equal to the prescribed one, m_φ' ≥ m_φ^*.
We choose as candidate anti-aliasing filter the following second-order transfer function:

A(s) = \frac{K}{1 + 2ξ\frac{s}{ω_n} + \frac{s^2}{ω_n^2}}.    (5.12)

We need to find the three parameters K, ω_n, ξ. We will make the assumption

ω_c^* \ll Ω/2

and hence, to simplify the design, we will choose ω_n such that

ω_c^* \ll ω_n \ll Ω/2.

The design of A(s) is divided in three steps.
1. We start by imposing that ω_c' = ω_c^*, which is equivalent to imposing that

   |C(jω_c^*)P(jω_c^*)A(jω_c^*)| = 1.

   Since we know that |C(jω_c^*)P(jω_c^*)| = 1, we need to impose that |A(jω_c^*)| = 1. Using the fact that ω_c^* \ll ω_n, we can argue that |A(jω_c^*)| ≃ K, which yields K = 1.

2. We now impose that m_φ' ≥ m_φ^*. This is equivalent to imposing that

   m_φ^* ≤ m_φ' = π + \arg(C(jω_c^*)P(jω_c^*)A(jω_c^*)) = π + \arg(C(jω_c^*)P(jω_c^*)) + \arg(A(jω_c^*)) = m_φ + \arg(A(jω_c^*)),

   and hence

   -\arg(A(jω_c^*)) ≤ φ := m_φ - m_φ^*,

   where φ := m_φ - m_φ^* ≥ 0 is the extra phase margin obtained in the design of the continuous-time controller C(s). Observe that

   -\arg[A(jω_c^*)] = \arctan\left(\frac{2ξ\, ω_c^*/ω_n}{1 - ω_c^{*2}/ω_n^2}\right) ≃ 2ξ\, \frac{ω_c^*}{ω_n},

   where we used the fact that ω_c^*/ω_n is small and the facts that \arctan(x) ≃ x and \frac{x}{1-x^2} ≃ x when x is small. Hence the condition m_φ' ≥ m_φ^* is equivalent to

   2ξ\, \frac{ω_c^*}{ω_n} ≤ φ.    (5.13)
3. We finally impose the disturbance attenuation condition. This can be translated into the following frequency-domain constraint on A(s):

   |A(jω)| ≤ \frac{1}{a} \quad \forall ω ≥ Ω/2.    (5.14)

   By choosing ξ ≥ 1/\sqrt{2} we have that

   |A(jω)| = \frac{1}{|1 - ω^2/ω_n^2 + j2ξω/ω_n|}

   is monotonically decreasing. Then (5.14) is equivalent to imposing that |A(jΩ/2)| ≤ 1/a. We impose the minimum requirement, namely that

   |A(jΩ/2)| = \frac{1}{a}.    (5.15)

   Observe that, using the fact that ω_n \ll Ω/2, we can argue that

   |A(jΩ/2)| ≃ \frac{1}{|-(Ω/2)^2/ω_n^2|} = \frac{4ω_n^2}{Ω^2},

   and hence (5.15) is equivalent to

   ω_n = \frac{Ω}{2\sqrt{a}}.

In conclusion, considering the last equation and the condition (5.13), we obtain that

Ωφ ≥ 4ξ\, ω_c^* \sqrt{a},    (5.16)

which, choosing ξ = 1/\sqrt{2} (the smallest ξ ensuring monotonicity of |A(jω)|), yields

Ωφ ≥ 2\, ω_c^* \sqrt{2a},    (5.17)

which can be seen as a bound on the sampling frequency (and hence on the sampling time), or as a bound on the extra phase margin that has to be obtained by a better choice of C(s).
Example 5.3. Consider the following specs: a = 10, φ = 0.1, ξ = \frac{1}{\sqrt{2}} and ω_c^* = 10 rad/s. Compute the minimum sampling frequency Ω.
By using (5.17) we easily get

Ω ≥ \frac{2 \cdot 10 \cdot \sqrt{2 \cdot 10}}{0.1} = \frac{4\sqrt{5} \cdot 10}{0.1} ≃ 900 \ \text{rad/s}.
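The bound (5.17) is a one-line computation; the following Python snippet (not in the notes) reproduces the numbers of Example 5.3 and also reports the corresponding maximum sampling time.

```python
import numpy as np

a, phi, wc = 10.0, 0.1, 10.0
Omega_min = 2 * wc * np.sqrt(2 * a) / phi      # bound (5.17)
T_max = 2 * np.pi / Omega_min
print(Omega_min, T_max)                        # about 894 rad/s, i.e. a sampling time of about 7 ms
```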


Remark 5.6. If we take a first-order anti-aliasing filter

A(s) = \frac{K}{1 + sτ},

it can be proved that K = 1 and that the final condition that has to be satisfied is

Ω \tan(φ) ≥ 2\, ω_c^* \sqrt{a^2 - 1}.
Remark 5.7. Let us consider the circuit depicted in Figure 5.6. It realizes a second-order Butterworth filter.

Figure 5.6: Circuit realizing a second-order low-pass Butterworth filter.

In particular, by selecting C_1 = \frac{2ξC}{3}, C_2 = \frac{3C}{2ξ}, ω_n = \frac{1}{RC}, C = \sqrt{C_1 C_2}, ξ = \frac{1}{\sqrt{2}}, the transfer function of the filter is

\frac{V_{out}(s)}{V_{in}(s)} = \frac{-1}{s^2 R^2 C_1 C_2 + 3RC_1 s + 1} = \frac{-1}{\frac{s^2}{ω_n^2} + 2\frac{ξ s}{ω_n} + 1},    (5.18)

where ω_n is the 3dB bandwidth of the filter.

5.4 Comments on quantization and its effects


In digital systems not only time is discrete: the values of the signals are quantized. At each time k the quantized signal can only assume a finite number of values, known as quantization levels. While discretizing time is a linear process, quantization introduces a "nasty" non-linearity. To deal with it, we treat quantization as noise, i.e. we consider the quantized signal to be the sum of the original signal and a noise n(k) for which we have the bound −d/2 < n(k) ≤ d/2, with d being the difference between two consecutive quantization levels.
Figure 5.7: Quantization of the sampled signal x(k).
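A tiny Python sketch (not from the notes; the signal and the step d are arbitrary) of uniform quantization and of the induced "noise" n(k) = x_q(k) − x(k), which indeed stays within ±d/2:

```python
import numpy as np

d = 0.25                                   # quantization step (distance between levels)
x = np.sin(np.linspace(0, 2 * np.pi, 21))  # sampled signal
xq = d * np.round(x / d)                   # uniform quantizer
n = xq - x                                 # quantization "noise"
print(bool(np.max(np.abs(n)) <= d / 2))    # True
```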

5.5 Sampling signals with rational Laplace transform
As we have already observed in (5.6), a central role in the connection between the spectral representations of a continuous-time signal and of its sampled version is played by the map z = e^{sT}, which (once T is fixed) maps the complex plane where the Laplace variable s is defined to the complex plane where the Z-variable z is defined. This map is of crucial importance in digital control and it is therefore important to clarify its meaning. We have the following result, which clarifies that, when we look at the map

F(s) \;\xrightarrow{\;\mathcal{L}^{-1}\;}\; f(t) \;\xrightarrow{\;S_T\;}\; f̃(k) \;\xrightarrow{\;\mathcal{Z}\;}\; F̃(z)

and we start from a strictly proper rational function F(s), the associated F̃(z) is itself a proper rational function and the poles of the two rational functions are related through the map z = e^{sT}. We show this fact first by means of an example and then by providing the general result.
Example 5.4. Consider the continuous-time signal f(t) = e^{at}, t ≥ 0. The corresponding Laplace transform is F(s) = 1/(s − a). On the other hand, the sampled version of f(t) is f̃(k) = (e^{aT})^k, k ≥ 0, and its Z-transform is F̃(z) = z/(z − e^{aT}), where T is the sampling time. Thus, not only are both F(s) and F̃(z) rational functions, but the pole e^{aT} of F̃(z) is obtained by mapping the pole a of F(s) via the map z = e^{sT}.
Proposition 5.1. If F(s) is a strictly proper rational function, then

    F̃(z) := Z[S_T[L⁻¹[F(s)]]]

is a proper rational function. Moreover, if p₁, ..., p_N are the poles of F(s), then e^{p₁T}, ..., e^{p_N T} are the poles of F̃(z) and they have the same multiplicities of p₁, ..., p_N.

Proof. If F(s) is a strictly proper rational function, then it admits a partial fraction decomposition (see (2.60))

    F(s) = Σ_{i=1}^{N} Σ_{l=0}^{n_i−1} A_{i,l}/(s − p_i)^{l+1},

where p₁, ..., p_N are the poles of F(s), n₁, ..., n_N are their multiplicities, and the coefficients A_{i,l} are the residuals that can be computed using the formulas (2.61), (2.62), (2.63), (2.64). Then

    f(t) := L⁻¹[F(s)] = Σ_{i=1}^{N} Σ_{l=0}^{n_i−1} (A_{i,l}/l!) t^l e^{p_i t} δ₋₁(t)

and hence

    f̃(k) := S_T[f(t)] = f(kT) = Σ_{i=1}^{N} Σ_{l=0}^{n_i−1} (A_{i,l} T^l/l!) k^l (e^{p_i T})^k δ₋₁(k).

Taking into account (2.44), we can argue that

    F̃(z) := Z[f̃(k)] = Σ_{i=1}^{N} Σ_{l=0}^{n_i−1} (A_{i,l} (T e^{p_i T})^l/l!) · z Q_l(z/e^{p_i T})/(z − e^{p_i T})^{l+1},

which is a proper rational function with poles e^{p₁T}, ..., e^{p_N T} with multiplicities n₁, ..., n_N.
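The pole mapping stated in Proposition 5.1 is easy to check numerically. The sketch below is an illustration, not part of the original text; it assumes a SciPy version whose cont2discrete supports the impulse-invariant method, and uses an arbitrary strictly proper F(s):

```python
import numpy as np
from scipy.signal import cont2discrete

T = 0.3                                   # sampling time (arbitrary choice)
num = [2.0, 1.0]                          # F(s) = (2s+1)/((s+1)(s+2)(s+4)), strictly proper
den = np.poly([-1.0, -2.0, -4.0])

# Impulse-invariant discretization: sampling of L^-1[F(s)] (up to a scale factor
# used by some conventions, which does not affect the poles).
numd, dend, _ = cont2discrete((num, den), T, method='impulse')

poles_Fz = np.sort_complex(np.roots(dend))
predicted = np.sort_complex(np.exp(np.array([-1.0, -2.0, -4.0]) * T))
print(poles_Fz)
print(predicted)                          # the two lists coincide: z_i = exp(p_i * T)
```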
5.6 The zero-order holder interpolator
This section is dedicated to the analysis of interconnections of discrete-time and continuous-time systems, which can be obtained by using as interfaces a sampling block (A/D converter) or an interpolator (D/A converter). This will be instrumental for our objective, that is, to design closed-loop control systems where the controller is modelled as a discrete-time system and the to-be-controlled plant as a continuous-time one (see the block diagram depicted in Figure 5.8).

Figure 5.8: Interconnection of continuous-time and discrete-time systems with D/A and A/D converters.

The most typical choice for the interpolator interface in control architec-
tures is the Zero Order Holder (ZOH). Recall that we can write the relation
between the discrete-time signal ũ(k) and the corresponding continuous-time
signal ū(t) as:
    ū(t) = Σ_{k=−∞}^{+∞} ũ(k) Rect_T(t − kT),                          (5.19)

where Rect_T(t) = 1 if 0 ≤ t ≤ T and Rect_T(t) = 0 otherwise. Since Rect_T(t) = δ₋₁(t) − δ₋₁(t − T), then we have

    ū(t) = Σ_{k=−∞}^{+∞} ũ(k) (δ₋₁(t − kT) − δ₋₁(t − (k + 1)T)).       (5.20)

Since the input of H0 is a discrete-time signal and its output is a continuous-


time signal we cannot define a transfer function. We can, however, consider
a continuous-time version of H0 , that we denote by H0′ that has the same
output of H0 , but whose input is a continuous-time version of ũ(k) i.e. the


pulse signal defined as

    u_δ(t) := Σ_k ũ(k) δ(t − kT).

It is now easy to obtain the (continuous-time) transfer function of H0′ . In


fact, the Laplace transform of ū(t) is

    Ū(s) = Σ_{k=0}^{+∞} ũ(k) (1/s)(e^{−ksT} − e^{−(k+1)sT})
         = (1 − e^{−sT})/s · Σ_{k=0}^{+∞} ũ(k) e^{−ksT} = (1 − e^{−sT})/s · U_δ(s),      (5.21)

where U_δ(s) := L[u_δ(t)] and T is the sampling time. It immediately follows from (5.21) that the transfer function of the ZOH in the Laplace variable s is

    H0′(s) = (1 − e^{−sT})/s.                                                             (5.22)

This result will be useful below when we compute the overall discrete transfer
function of the series interconnection obtained by cascading H0 , P (s) and a
sampling block.
Remark 5.8. Some comments are in order.

• In place of the ZOH we could use higher order holders, for example the first order holder (FOH). These devices, however, perform a derivative action, with the disadvantage of amplifying the noise.

• ZOHs are fast, cheap and easy to implement, as they can be built by using op-amps and resistors.

• As we will see in more detail in the following, on average the ZOH introduces a delay corresponding to half of the sampling time T (see the numerical sketch below).
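The T/2 delay can be seen directly from (5.22): H0′(jω) = (1 − e^{−jωT})/(jω) = T e^{−jωT/2} sin(ωT/2)/(ωT/2), so the phase of the ZOH is exactly −ωT/2. A minimal numerical sketch (the value of T is an arbitrary choice):

```python
import numpy as np

T = 0.1                                        # sampling time (arbitrary)
w = np.linspace(1e-3, 0.8 * np.pi / T, 500)    # frequencies up to a fraction of 2*pi/T

H0 = (1 - np.exp(-1j * w * T)) / (1j * w)      # ZOH transfer function (5.22) at s = j*w

phase = np.angle(H0)
print(np.allclose(phase, -w * T / 2))                                      # True: pure delay of T/2
print(np.allclose(np.abs(H0), T * np.abs(np.sinc(w * T / (2 * np.pi)))))   # gain T*|sinc(wT/2)|
```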
5.7 Conversion between continuous and discrete systems
For digital control design we are interested in two opposite types of conver-
sions:
1. Discrete-time model of a continuous-time system: this is represented
in Figure 5.9 and is used when we need to design the controller in the
discrete domain but the to-be-controlled plant is a continuous-time one.

2. Continuous-time model of a discrete-time controller: this is represented in Figure 5.10 and is used when we need to design a discrete-time controller emulating the behaviour of a given continuous-time one.

Figure 5.9: Discrete-time model of a continuous-time system.

Figure 5.10: Continuous-time model of a discrete-time controller.

In other words, in the first case we are interested in a discrete-time transfer function F̃(z) describing the behaviour of the cascade of systems depicted at the bottom of Figure 5.9. In the second case we seek an approximation C(s) of the behaviour of the cascade of systems depicted at the bottom of Figure 5.10.
The remainder of this section will be dedicated to the solution of the first problem. Indeed, we shall compute the transfer function F̃(z) and show that it accounts exactly for the behaviour of the cascade of systems depicted at the bottom of Figure 5.9, i.e. no approximations are needed.
Consider the cascade depicted in Figure 5.11 and assume that F(s) is the transfer function of a causal system. We also assume (and this is a fundamental assumption) that the sampling and the hold devices are synchronous, i.e. that they work with the same clock of period T.
Figure 5.11: Block diagram of the cascade “ZOH”, F(s) and “Sampling”.

First derivation.
Let H(s) := F (s)H0′ (s), where H0′ (s) is the continuous-time version of
the ZOH as defined above. We first compute the forced output of the filter
with transfer function H(s) fed with the pulse input

    u_δ(t) = Σ_{l=−∞}^{+∞} ũ(l) δ(t − lT).

Let h(t) := L⁻¹[H(s)]; we have:

    y(t) = ∫_{−∞}^{+∞} h(τ) u_δ(t − τ) dτ
         = ∫_{−∞}^{+∞} h(τ) ( Σ_{l=−∞}^{+∞} ũ(l) δ(t − τ − lT) ) dτ
         = Σ_{l=−∞}^{+∞} ũ(l) ( ∫_{−∞}^{+∞} h(τ) δ(t − τ − lT) dτ ).        (5.23)

Notice that

    ∫_{−∞}^{+∞} h(τ) δ(t − τ − lT) dτ = h(t − lT).

Notice also that both the system with transfer function F(s) and the continuous-time version of the ZOH are causal, so that their cascade, whose transfer function is H(s) := F(s)H0′(s), is a causal system as well, i.e. its impulse response h(t) := L⁻¹[H(s)] is a causal function, i.e. it vanishes for all negative values of t. By plugging this expression in formula (5.23) and by setting h̃(k) := h(kT), we get the sampled version of y(t), which is given by

    ỹ(k) = y(kT) = Σ_{l=−∞}^{+∞} h̃(k − l) ũ(l).                             (5.24)

By taking the Z-Transform on both members of (5.24), in view of the convolution Theorem, we get:

    Ỹ(z) = H̃(z) Ũ(z),                                                       (5.25)

where H̃(z) is the Z-Transform of h̃ which, in turn, is the sampled version of h(t) = L⁻¹[H0′(s)F(s)].
Let S_T[·] denote the sampling operator with time T, which is defined by:

    S_T[f(t)] = f̃(k) = f(kT),   k ∈ Z.

By taking into account (5.22) and the linearity of the operators of inverse Laplace Transform, Z-Transform and sampling S_T[·], we now have

    F̃(z) = Z[S_T[L⁻¹[H0′(s)F(s)]]]
         = Z[S_T[L⁻¹[(1 − e^{−sT})/s · F(s)]]]
         = (1 − z⁻¹) Z[S_T[L⁻¹[F(s)/s]]].                                    (5.26)

Often, with abuse of notation, the expression F̃(z) = (1 − z⁻¹) Z[F(s)/s] is used.

Alternative derivation.
Observe from (5.20) that ū(t) is a linear combination of time-translated versions of the step signal δ₋₁(t) and that y(t) is the output of the system with transfer function F(s) with input ū(t). By linearity and time invariance of this system we have that

    y(t) = Σ_{l=−∞}^{+∞} ũ(l) (g(t − lT) − g(t − (l + 1)T))                  (5.27)

where

    g(t) = L⁻¹[F(s)/s].
Then

    ỹ(k) = y(kT) = Σ_{l=−∞}^{+∞} ũ(l) (g(kT − lT) − g(kT − (l + 1)T))
         = Σ_{l=−∞}^{+∞} ũ(l) (g̃(k − l) − g̃(k − 1 − l))
         = Σ_{l=−∞}^{+∞} ũ(l) g̃(k − l) − Σ_{l=−∞}^{+∞} ũ(l) g̃(k − 1 − l),  (5.28)

namely ỹ(k) is the difference of two convolutions. By taking the Z-Transform


on both members of (5.28), in view of the convolution Theorem, we get:

Ỹ (z) = G̃(z)Ũ (z) − z −1 G̃(z)Ũ (z) = (1 − z −1 )G̃(z)Ũ (z) (5.29)

In this way we proved that the input ũ(k) and the output ỹ(k) are related by the transfer function F̃(z), which can be computed as follows:

    F̃(z) = (1 − z⁻¹) G̃(z),
    G̃(z) = Z[g̃(k)],
    g̃(k) = S_T[g(t)],
    g(t) = L⁻¹[F(s)/s],                                                      (5.30)

where S_T[·] denotes the sampling operator with sampling time T defined as follows

    S_T[f(t)] = f̃(k) = f(kT),   k ∈ Z.

Often, with abuse of notation, the expression F̃(z) = (1 − z⁻¹) Z[F(s)/s] is used.
It is important to observe that the transfer function F̃(z) obtained in (5.26) does not involve any approximation, so that we have an exact formula for the discrete-time transfer function of the cascade depicted in Figure 5.11. As we shall discuss below, if F(s) is a rational function then F̃(z) turns out to be rational as well; this is an important and surprising result, as the transfer function H0′(s) is not rational.
Remark 5.9. Some comments are in order.

• It is important to observe that the cascade where we first hold a discrete-time signal and then sample the result performs the identity operator on the set of discrete-time signals. This is essentially the reason why the function F̃(z) obtained in (5.26) does not involve any approximation. On the contrary, the cascade of the same two operators in the opposite order is not the identity: its output, indeed, is always a piece-wise constant function, which is typically different from the input signal, which is, in general, not piece-wise constant. Indeed, the latter cascade is a projection whose output retains only the information corresponding to the values of the input at the sample points.

• In spite of the previous observation, the cascade in Figure 5.10 (in


which some of the information coded in the input is indeed lost and
hence does not appear in the output) can be used to approximate a
continuous-time system.

• Equation (5.26) has a very important interpretation. In fact, by taking into account that L[δ₋₁(t)] = 1/s and Z[δ₋₁(k)] = (1 − z⁻¹)⁻¹ = z/(z − 1), (5.26) can be rewritten as

      Z⁻¹[ z/(z − 1) · F̃(z) ] ≡ S_T[ L⁻¹[ F(s)/s ] ].                        (5.31)

  Thus the step-response of the discrete-time system of transfer function F̃(z) coincides with the sampled version of the step-response of the original continuous-time system of transfer function F(s) (a numerical check of this property is sketched at the end of this remark).

• As already observed, when F(s) is rational, F̃(z) is also rational and, by using the rules for the inverse Laplace Transform and for the Z-Transform, we easily see that the poles of F(s) are mapped into poles of F̃(z) by following the map

      s = p_i  →  z = e^{p_i T}

  that we have already seen in the sampling chapter. A more delicate question concerns the zeros: what is the relation between the zeros of F(s) and those of F̃(z)? Not only is there no simple way to describe how the zeros of F(s) are mapped into those of F̃(z), but it may also well happen that the number of zeros of F(s) is different from the number of zeros of F̃(z): typically, F̃(z) has more zeros than F(s); these extra zeros are called sampling zeros.

• The presence of the sampling zeros may be intuitively explained as follows: if the transfer function F(s) is rational and strictly proper, its step-response y(t) = L⁻¹[F(s)/s] is zero for t = 0 and is non-zero for almost all values of t > 0, as it is a linear combination of the modes of the system and of the step signal. Then, except for very special choices of the sampling time T, the relative degree of the transfer function F̃(z) is 1, independently of the relative degree of F(s). This notwithstanding, it could happen that

  – F(s) is not rational and has a delay, so that it has the form F(s) = e^{−T′s} F0(s) with T′ > T;

  – F(s) is rational but its step-response y(t) = L⁻¹[F(s)/s] is such that y(T) = 0.

  In these cases the relative degree is at least equal to 2.
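As anticipated, a quick numerical check of the step-invariance property (5.31) can be done with SciPy's ZOH discretization on an arbitrary first-order example (a sketch, not part of the original text):

```python
import numpy as np
from scipy.signal import cont2discrete, dstep, step

Ts = 0.2                                      # sampling time (arbitrary choice)
num, den = [1.0], [1.0, 1.0]                  # F(s) = 1/(s+1), an arbitrary example

# ZOH discretization: this returns exactly the F~(z) of (5.26)
numd, dend, _ = cont2discrete((num, den), Ts, method='zoh')

k = np.arange(30)
_, y_cont = step((num, den), T=k * Ts)        # continuous step response sampled at t = kT
_, y_disc = dstep((numd.ravel(), dend, Ts), n=len(k))   # step response of F~(z)

# The two sequences coincide (step-invariance of the ZOH discretization)
print(np.allclose(np.squeeze(y_cont), np.squeeze(y_disc[0]), atol=1e-8))
```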

Let us see an example of computation of (5.26).

Example 5.5. Let G(s) := 2/((s + 1)(s + 2)). Compute the corresponding discrete transfer function obtained by cascading a ZOH, G(s), and a sampling block of period T.
Solution. By developing the computations in (5.26), we have

    G(s)/s = 1/s − 2/(s + 1) + 1/(s + 2)   —L⁻¹→   (1 − 2e^{−t} + e^{−2t}) δ₋₁(t),

so that

    G̃(z) = (z − 1)/z · Z[ δ₋₁(k) − 2(e^{−T})^k + (e^{−2T})^k ]
         = (z − 1)/z · [ z/(z − 1) − 2z/(z − e^{−T}) + z/(z − e^{−2T}) ]
         = (1 − e^{−T})² (z + e^{−T}) / ((z − e^{−T})(z − e^{−2T})).

Notice that the relative degree of G(s) is 2 while the discrete-time counterpart
G̃(z) has relative degree equal to 1. One sampling zero z1 = −e−T is present
in G̃(z) while G(s) has no zeros. ♢
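The result of Example 5.5, including the sampling zero, can be double-checked numerically with SciPy's ZOH discretization (a sketch; T = 0.5 is an arbitrary choice):

```python
import numpy as np
from scipy.signal import cont2discrete

T = 0.5
num, den = [2.0], [1.0, 3.0, 2.0]            # G(s) = 2/((s+1)(s+2))

numd, dend, _ = cont2discrete((num, den), T, method='zoh')
numd = numd.ravel()

a = np.exp(-T)
# Predicted by Example 5.5: G~(z) = (1-e^-T)^2 (z + e^-T) / ((z - e^-T)(z - e^-2T))
print("zeros:", np.roots(numd), "   predicted sampling zero:", -a)
print("poles:", np.sort(np.roots(dend)), "   predicted:", np.sort([a, a**2]))
print("gain :", numd[np.nonzero(numd)[0][0]], "   predicted:", (1 - a)**2)
```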

Example 5.6. Let us consider the block diagram depicted at the top of Figure 5.12, where P̃(s) := 1/(s + 1) (we use the notation P̃(s) in such a way that P can be reserved for the discrete-time system). We want to compute the corresponding discrete-time system depicted at the bottom of Figure 5.12 and discuss the stability of the closed-loop system.

Figure 5.12: Block diagrams for Example 5.6.


By using equation (5.26), we get:
" " " # ##
P̃ (s)
P (z) = (1 − z −1 )Z S L−1 ;T
s
    
−1 −1 1 1
= (1 − z )Z S L − ;T
s s+1
= (1 − z −1 )Z S δ−1 (t) − e−t ; T
  
 
−1 z z
= (1 − z ) −
z − 1 z − e−T
1 − e−T
= .
z − e−T
The open-loop transfer function is

    k′ z/(z − 1) · P(z) = k′ z/(z − 1) · (1 − e^{−T})/(z − e^{−T}) = k z/((z − 1)(z − e^{−T})),        (5.32)

where we have defined k := k′(1 − e^{−T}). We want to analyze the stability of the closed-loop system in terms of k′ and T. To this aim we draw the root locus (in terms of the new parameter k) and, after computing the critical value of k, we will easily solve the problem in terms of the original parameters k′ and T.
The root locus is depicted in Figure 5.13, where we see that φcr = π, so that (4.8) implies

    0 = (−1 − 1)(−1 − e^{−T}) − kcr   ⇒   kcr = 2(1 + e^{−T})   ⇒   k′cr = 2(1 + e^{−T})/(1 − e^{−T}).   (5.33)

From (5.33), we immediately see that the critical value k′cr depends on T and gets smaller and smaller as T increases. On the contrary, as T tends to 0 the critical value k′cr tends to infinity, so that the closed-loop system tends to be stable for any positive value of the gain. ♢
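The critical gain (5.33) can be verified numerically by checking the magnitude of the closed-loop poles slightly below and above k′cr (a sketch; T = 0.5 is an arbitrary choice):

```python
import numpy as np

T = 0.5
a = np.exp(-T)
k_cr = 2 * (1 + a) / (1 - a)          # critical value of k' from (5.33)

def closed_loop_pole_radius(k_prime):
    # open loop: k'(1-e^-T) z / ((z-1)(z-a)); characteristic polynomial:
    # (z-1)(z-a) + k'(1-a) z = z^2 + (k'(1-a) - 1 - a) z + a
    p = [1.0, k_prime * (1 - a) - 1 - a, a]
    return np.abs(np.roots(p)).max()

print(closed_loop_pole_radius(0.95 * k_cr))   # < 1: stable
print(closed_loop_pole_radius(1.05 * k_cr))   # > 1: unstable
print(closed_loop_pole_radius(k_cr))          # = 1: a pole on the unit circle (at z = -1)
```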
Figure 5.13: Root locus (in red) for Example 5.6.
Chapter 6

Control problem and controller design
A control problem is specified by the following ingredients:

1. A nominal model for the system to be controlled and the uncertainty


associated to it, when applicable;

2. The control input variables and their constraints and the measured
output variables;

3. Performance specifications on the controlled system output variables.


Typically these include:

• Stability (asymptotic, BIBO, internal) of the controlled system;


• A regulation and/or tracking task (the central aim of the control design) and the associated specifications on the asymptotic regime;
• Specifications on the transient regime, expressed either in the time
or in the frequency domain. These address the required dynamic
precision and trade-off between the promptness and the filtering
properties of the controlled system.

A solution to the control problem, when possible, consists in finding con-


trol trajectories, possibly obtained from the knowledge of the available out-
puts, such that the performance specifications are met. Since we have already
discussed how to guarantee the controlled system stability in section §4.1, in

the next sections we will concentrate on methods allowing to satisfy the per-
formance specifications on the asymptotic and transient regime.

6.1 Specifications on the asymptotic regime: Tracking
There exists a very general approach to the design of control systems able to track certain classes of reference target signals. It is based on the so-called internal model principle. Consider the tracking problem for the control interconnection described in Fig. 6.1. Since the reasonings hold true both for continuous- and discrete-time systems, in the following we avoid making explicit the dependence of the transfer functions on s (for continuous-time systems) or on z (for discrete-time systems).

Figure 6.1: Control interconnection used in the tracking problem.

Let R be the Z- or L-transform of the reference signal to be tracked, which is assumed to be rational:

    R = N_R/D_R.                                                             (6.1)

The open loop transfer function is

    L := CP = N_L/D_L.                                                       (6.2)

In this way, the transform of the error variable E satisfies E = R − LE, and hence it can be written as

    E = N_E/D_E = 1/(1 + L) · R = (D_L N_R)/((N_L + D_L) D_R).               (6.3)
In the continuous-time case, asymptotic tracking is achieved if

    |y(t) − r(t)| → 0   as t → +∞,                                           (6.4)
and this is possible if and only if the polynomial D_E is Hurwitz stable (all its roots have negative real part). In discrete time, tracking is achieved if

    |y(k) − r(k)| → 0   as k → +∞,                                           (6.5)

and this is the case if and only if D_E is Schur stable (all its roots have absolute value strictly less than one). For the sake of simplicity, with a slight abuse of terminology, such a D_E will be said to be stable. Note that from (6.3) we have that

    roots(D_E) ⊆ roots(N_L + D_L) ∪ roots(D_R)

due to possible cancellations. Because of the closed-loop stability requirements, we know that N_L + D_L is stable. Thus, two possibilities have to be discussed:
1. DR is stable,

2. DR is not stable.
In the first case tracking is already guaranteed, while to address the second it is convenient to factorize D_R into a stable and an unstable part, D_R^S and D_R^U respectively:

    D_R = D_R^S D_R^U.                                                       (6.6)

In order to remove the effect of the non-convergent modes associated with D_R^U on the error E, it is necessary to cancel such a factor in (6.3). Assuming N_R and D_R coprime, the only possible cancellation is between D_L and D_R^U. Precisely, we need D_R^U to be a factor of D_L. In other words, asymptotic tracking is achieved if any unstable root p_{Ri} of the polynomial D_R is a root of D_L with equal or higher multiplicity.
This conclusion is a formulation of the so-called internal model principle in the input-output description, which we give in the following proposition for discrete-time systems.

Proposition 6.1 (Internal model principle). For a stable feedback interconnection as in Fig. 6.1, asymptotic tracking of a reference signal r(k) with rational Z-transform R(z) is achieved if and only if the unstable poles of R(z) are also poles of the open loop transfer function L(z) := C(z)P(z) with equal or higher multiplicities.

An equivalent result holds for continuous-time systems and their transfer functions.
6.1.1 Tracking steps and ramps


Consider again the control interconnection of Figure 6.1. In the continuous-time case, if the reference is the polynomial

    r(t) = r_l t^l/l! + r_{l−1} t^{l−1}/(l − 1)! + ··· + r_1 t + r_0,   t ≥ 0,

then

    R(s) = r_l/s^{l+1} + r_{l−1}/s^l + ··· + r_1/s² + r_0/s = N_R(s)/s^{l+1},

where N_R(s) = r_l + r_{l−1} s + ··· + r_1 s^{l−1} + r_0 s^l. Hence, in order to attain asymptotic tracking, it is necessary to choose C(s) so that L(s) = C(s)P(s) has at least l + 1 poles in s = 0. In case the poles in zero are only l, then using the final value theorem (see e.g. [?]) it can be shown that the asymptotic tracking error is not zero but it is finite. Precisely, let L(s) = L_0(s)/s^l, where L_0(s) has no poles and no zeros in s = 0. The value of L_0(0) is called the Bode gain of L(s). Then

    E(s) = 1/(1 + L_0(s)/s^l) · N_R(s)/s^{l+1} = N_R(s)/(s^l + L_0(s)) · 1/s.

By applying the final value theorem (this is possible since 1/(s^l + L_0(s)) is stable) we obtain

    lim_{t→∞} e(t) = N_R(0)/(1 + L_0(0)) = r_l/(1 + L_0(0))    if l = 0,
    lim_{t→∞} e(t) = N_R(0)/L_0(0) = r_l/L_0(0)                if l ≥ 1.     (6.7)

Hence, we can make the tracking error small by increasing the Bode gain.
We now treat the discrete-time case. Consider the discrete-time reference to be the degree-l polynomial

    r(k) = r_l k^l/l! + r_{l−1} k^{l−1}/(l − 1)! + ··· + r_1 k + r_0,   k ≥ 0.

Then by (2.42) we know that its Z-transform is

    R(z) = (r_l/l!) z Q_l(z)/(z − 1)^{l+1} + (r_{l−1}/(l − 1)!) z Q_{l−1}(z)/(z − 1)^l + ··· + r_1 z/(z − 1)² + r_0 z/(z − 1) = N_R(z)/(z − 1)^{l+1}.

From the internal model principle, in order to attain asymptotic tracking, it is necessary to choose C(z) so that L(z) = C(z)P(z) has at least l + 1 poles in z = 1. In case the poles in z = 1 are only l, also in this case we can use the final value theorem to show that the asymptotic tracking error is not zero but it is finite. Precisely, let L(z) = L_0(z)/(z − 1)^l, where L_0(z) has no poles and no zeros in z = 1. The value of L_0(1) is the Bode gain of L(z). Then

    E(z) = 1/(1 + L_0(z)/(z − 1)^l) · N_R(z)/(z − 1)^{l+1} = N_R(z)/((z − 1)^l + L_0(z)) · 1/(z − 1).

By applying the final value theorem (this is possible since 1/((z − 1)^l + L_0(z)) is stable) we obtain

    lim_{k→∞} e(k) = N_R(1)/(1 + L_0(1)) = (r_l Q_l(1)/l!)/(1 + L_0(1)) = r_l/(1 + L_0(1))    if l = 0,
    lim_{k→∞} e(k) = N_R(1)/L_0(1) = (r_l Q_l(1)/l!)/L_0(1) = r_l/L_0(1)                      if l ≥ 1,    (6.8)

where, from (2.43), we used the fact that Q_l(1) = l!.
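These conclusions are easy to verify by simulating a simple discrete-time loop. In the sketch below (the gain K and the references are arbitrary illustrative choices) the open-loop transfer function is L(z) = K/(z − 1), i.e. one pole in z = 1 and Bode gain L_0(1) = K; the error converges to 0 for a step reference and to r_1/K for the ramp r(k) = k, as predicted by (6.8):

```python
import numpy as np

K = 0.5          # Bode gain; closed-loop pole at 1-K = 0.5 (stable)
N = 400
k = np.arange(N)

def track(r):
    # unity feedback with L(z) = K/(z-1): x(j+1) = x(j) + K*e(j), y(j) = x(j)
    x, e = 0.0, np.zeros(N)
    for j in range(N):
        e[j] = r[j] - x
        x = x + K * e[j]
    return e

print(track(np.ones(N))[-1])        # step reference: error -> 0
print(track(k.astype(float))[-1])   # ramp reference r(k) = k: error -> 1/K = 2
```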


Figure 6.2: Architecture of a digital control system.

Remark 6.1. Consider the interconnection in Fig. 6.2, in which we use the discrete-time controller C̃(z) in order to track r(t) = t^l/l! with a prescribed error. In a continuous-time feedback interconnection the controller C(s) should be selected so that the open loop transfer function L(s) = C(s)P(s) has l poles in s = 0 and a Bode gain that can be derived from (6.7). In order to analyse the asymptotic behaviour of the error ẽ(k), we need to compute the sample/hold version P̃(z) of P(s). By letting L̃(z) := C̃(z)P̃(z), in order to have a finite asymptotic error we need to impose that L̃(z) has l poles in z = 1. Moreover, if we write L̃(z) = L̃_0(z)/(z − 1)^l, with L̃_0(1) being the Bode gain of L̃(z), then the asymptotic error ẽ(∞) can be obtained using formula (6.8). Notice however that in the interconnection in Fig. 6.2 the sampled version of r(t) is

    r̃(k) = r(kT) = T^l k^l/l!

and hence the asymptotic error is given by formula (6.8) multiplied by T^l. We can conclude that, in order to achieve the same asymptotic error, we need to impose the following relation between the Bode gains of the continuous- and discrete-time counterparts:

    L̃_0(1) = T^l L_0(0).                                                     (6.9)

Hence C̃(z) has to be chosen such that L̃(z) has globally l poles in z = 1 and such that the global Bode gain satisfies (6.9).

6.1.2 Tracking of sinusoidal signals


Consider again the control interconnection of Figure 6.1 and assume that r(k) = r_0 cos(ϑ_0 k + Ψ_0). If the closed loop system is BIBO stable, then from (4.10) we know that the system's forced response is still a sinusoidal signal with amplitude and phase determined by the harmonic response of the system:

    y(k) = y_0 cos(ϑ_0 k + χ_0).

It can be found that the Z-transform of r(k) is

    R(z) = r_0 z(cos(Ψ_0) z − cos(ϑ_0 − Ψ_0))/(z² − 2z cos(ϑ_0) + 1),

whose denominator has roots p_{1,2} = e^{±jϑ_0}.
Here, due to the internal model principle, if P(z) has no pole in e^{±jϑ_0}, then C(z) must be chosen of the form

    C(z) = N_C(z)/(D̃_C(z)(z² − 2z cos(ϑ_0) + 1)),

and such that

    N_C(z) N_P(z) + D̃_C(z)(z² − 2z cos(ϑ_0) + 1) D_P(z)

is stable.
Remark 6.2. Notice that the condition in Proposition 6.1 in principle guarantees asymptotic tracking also for signals r(k) whose Z-transforms R(z) have strictly unstable poles, namely poles with absolute value strictly larger than 1. However, since the internal model principle is based on an unstable pole/zero cancellation, we need the unstable poles to be known with infinite precision, otherwise a small difference between the unstable poles and zeros will prevent the cancellation and tracking will not be achieved, since the tracking error will grow exponentially. If instead the poles of R(z) are on the unit circle and are simple, it can be proved that, in case there is a mismatch between these poles and the poles of L(z), the tracking error will remain bounded and its size will be proportional to the poles mismatch.
Example 6.1. Consider the feedback interconnection in Fig. 6.3 and assume that

    r(k) = cos(πk/3)

so that

    R(z) = z(z − 1/2)/(z² − z + 1).

Figure 6.3: Control interconnection used in Example 6.1 (unity feedback with open-loop transfer function k/(z² − (1 + ϵ)z + 1)).

The value of the parameter ϵ represents the mismatch between the refer-
ence signal frequency and its estimate in the internal model. Indeed, when
ϵ = 0 we have perfect tracking while is ϵ ̸= 0 then the tracking error will not
converge to zero. We want to evaluate how big is this error as a function of
ϵ assuming that this is small. We can find that
E(z) = Wre (z)R(z)
where
z 2 − (1 + ϵ)z + 1
Wre (z) =
z 2 − (1 + ϵ)z + 1 + k
is the transfer function from the input r(k) and the output e(k). In case
ϵ = 0 the transfer function Wre (z) has denominator z 2 − z + 1 + k that can
be proved to be Schur stable if and only if −1 < k < 0. We choose k = −3/4
so that th denominator of Wre (z) is
z 2 − (1 + ϵ)z + 1 + k = z 2 − (1 + ϵ)z + 1/4 = (z − 1/2)2 − ϵz
It can be proved that this is stable for −9/4 < ϵ < 1/4 and hence it will be stable if ϵ is small enough.
Using the frequency response of W_re(z), we know that asymptotically e(k) tends to reduce to the following sinusoidal signal

    e(k) ≃ |W_re(e^{jπ/3})| cos(πk/3 + arg(W_re(e^{jπ/3}))).

We need to determine W_re(e^{jπ/3}), that is

    W_re(z)|_{z=e^{jπ/3}} = (z² − z + 1 − ϵz)/(z² − z + 1 − ϵz + k)|_{z=e^{jπ/3}}
                          = −ϵe^{jπ/3}/(−ϵe^{jπ/3} + k) = ϵe^{jπ/3}/(ϵe^{jπ/3} + 3/4) = 4ϵe^{jπ/3}/(4ϵe^{jπ/3} + 3) ≃ (4/3) ϵ e^{jπ/3},

where the last approximation holds since ϵ is small. We can then argue that asymptotically the error tends to reduce to the following sinusoidal signal

    e(k) ≃ (4/3) ϵ cos(πk/3 + π/3).

This formula shows how the error amplitude depends on the precision of our estimate of the reference signal frequency in the internal model.
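The approximation |W_re(e^{jπ/3})| ≃ 4ϵ/3 derived above is easy to check numerically (a sketch, not part of the original text):

```python
import numpy as np

k = -3.0 / 4.0
z = np.exp(1j * np.pi / 3)          # z^2 - z + 1 = 0 at this point

for eps in [1e-1, 1e-2, 1e-3]:
    W_re = (z**2 - (1 + eps) * z + 1) / (z**2 - (1 + eps) * z + 1 + k)
    # |W_re| approaches 4*eps/3 and its phase approaches pi/3 as eps -> 0
    print(eps, abs(W_re), 4 * eps / 3, np.angle(W_re))
```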

6.1.3 Asymptotic disturbance rejection


With the same techniques used above we can evaluate the response of the feedback system to disturbances having rational Z- or L-transform. Consider the feedback interconnection in Fig. 6.4. Let D be the Z- or L-transform of the disturbance, which is assumed to be rational:

    D = N_D/D_D = N_D/(D_D^S D_D^U),                                         (6.10)

where we factorized D_D into a stable and an unstable part, D_D^S and D_D^U respectively. Then the transform of the output signal Y is

    Y = N_Y/D_Y = 1/(1 + L) · D = (D_L N_D)/((N_L + D_L) D_D^S D_D^U),       (6.11)

where L is the open loop transfer function L := CP = N_L/D_L. In this way the effect of the disturbance tends to zero if and only if the polynomial D_Y is stable, and this is possible if and only if the unstable polynomial D_D^U is a factor of D_L, namely if and only if the unstable poles of D are poles of C or of P. This condition coincides with the one obtained for the asymptotic tracking of reference signals with rational transform.
Figure 6.4: Control interconnection used for the disturbance rejection problem.

Consider now the interconnection in Fig. 6.5, in which the disturbance enters at a different point of the scheme. In this case we find that

    Y = N_Y/D_Y = P/(1 + L) · D = (D_C N_P N_D)/((N_L + D_L) D_D^S D_D^U),   (6.12)

where C = N_C/D_C and P = N_P/D_P. Again the effect of the disturbance tends to zero if and only if the polynomial D_Y is stable, and in this case this happens if and only if the unstable polynomial D_D^U is a factor of D_C N_P, namely if and only if the unstable poles of D are poles of C or zeros of P. Notice that this condition is different from the previous one, which coincided with the one obtained for the asymptotic tracking of reference signals with rational transform.
We can argue that the reasonings for obtaining the conditions for exact disturbance rejection are similar to those used for reference tracking. However, for disturbance rejection these conditions depend on the position where the disturbance enters the feedback interconnection.

Figure 6.5: Control interconnection used for the disturbance rejection problem.

6.2 Performance specifications on the transient regime
The performance specifications on the transient regime aim to impose the desired promptness and precision on the response of the controlled system
to reference input changes. As it has been mentioned earlier, there are two
approaches for translating such qualities of the response into tunable param-
eters:

• In the time domain. According to this approach, we assume that the


reference input is a step signal and the transient performance is ex-
pressed in terms of a desired “shape” of the controlled system response
to this input. In principle we would like this response to be as close
as possible to the input. The specifications are subsumed in a number
of relevant parameters characterizing how this agrees with the desired
shape. These parameters are (see Fig. 6.6):

– rise time tr : the time the controlled system takes to move from the 10% to the 90% of the asymptotic value of the response to the step;
– settling time ts,5% : the time starting from which the response of
the controlled system is bounded within ±5% of its asymptotic
value1 ;
– overshoot mp : if in the transient regime the response surpasses
temporarily its asymptotic value, it is the maximum overshoot (in
percentage) that the response attains.

Intuitively it is possible to think that tr prescribes the desired prompt-


ness, mp the precision and ts,5% the duration of the transient itself.

• In the frequency domain. According to this approach, a desired “shape”


of the frequency response W (jω) of the controlled system is specified.
1
Values different from 5% might be used.
Figure 6.6: Specifications on the transient regime in the time domain: mp, tr and ts,5%.

In principle we would like to obtain a value of |W(jω)| ≃ 1 for ω < ωB, if the expected reference signals have bandwidth less than ωB, while |W(jω)| ≃ 0 is desirable for ω > ωA, the frequency beyond which only noise is expected. We allow some tolerances on these prescriptions and hence the specifications become

(i) −3dB-bandwidth ωB: the frequency (in rad/sec) such that |W(jω)| ≥ W(0)/√2 (that is, −3 decibel) for all ω ≤ ωB;

(ii) −20dB-attenuation ωA: the frequency (in rad/sec) such that |W(jω)| ≤ W(0)/10 (that is, −20 decibel) for all ω ≥ ωA;

(iii) resonant peak M: the normalized maximum absolute value (i.e. divided by W(0)) of the frequency response, which should not overly exceed 1.

Notice that for obtaining the asymptotic tracking of the step reference signal, we need to impose that W(0) ≃ 1 and hence the thresholds for ωB and ωA become 1/√2 and 1/10.

Remark 6.3. Other performance specifications can also be used:

• Internal stability.

Figure 6.7: Specifications on the transient regime in the frequency domain. The figure shows how the parameters M, ωA and ωB can be extracted from the plot of |W(jω)| and from the plot of |W(jω)|dB := 20 log10 |W(jω)|, that is, the Bode plot.

• Optimality: among all possible inputs or control laws, one is chosen so that a given cost functional J(y, u) is minimized. The cost functional is typically either a quadratic function that can be interpreted as an “energy” of the signals, or the time in which a certain task is achieved.

• Robustness: a control law is sought which ensures that a set of perfor-


mance specifications is guaranteed for a whole class of systems, or input
signals. The latter is used to model the uncertainty on the system.

6.3 Translation of the time-domain performance specifications: a brief summary
Assume that a controller C(s) has to be designed for a plant P (s) to be used
in a feedback interconnection, so that the closed-loop transfer function W (s)
satisfies certain performance indexes. A common approach is to approximate
W(s) with a first order or a second order transfer function Ŵ(s):

    Ŵ(s) = K/(1 + sτ),        Ŵ(s) = K/(1 + 2ξ s/ωn + s²/ωn²),               (6.13)

where
• K is the DC gain, since Ŵ(0) = K. Recall that for obtaining the asymptotic tracking of the step reference signal we need to impose K ≃ 1.

• τ is the first order system time constant.

• ωn > 0 is the second order system natural frequency.

• ξ ∈ [0, 1] is the second order system damping ratio.

The first order system in (6.13) has a pole in p = −1/τ, while the second order system has poles in

    p_{1,2} = −ωn (ξ ± j√(1 − ξ²)).                                          (6.14)

Let φ be such that ξ = cos φ. Then p_{1,2} = −ωn(cos φ ± j sin φ) = −ωn e^{±jφ}.
When the dominant pole of W(s) is real, W(s) is approximated by the first order system, while when the dominant poles are complex conjugate, W(s) is approximated by the second order system.

First order systems

Time domain analysis. From the step response of the first order system we can argue that

    tr ≃ 2.2τ = 2.2/|p|,        ts ≃ 3τ = 3/|p|,        mp = 0.

Frequency domain analysis. From the frequency response we can see that

    ωB ≃ 1/τ = |p|,        M = 1.

From these facts we can argue that

    1 + mp = 1 = M,        tr ≃ 2.2/ωB.

Second order systems


Time domain analysis. From the step response of the second order system we can argue that

    tr = f(ξ)/ωn,        ts = g(ξ)/ωn,        mp = e^{−π/tan φ},

where f(ξ) and g(ξ) are functions that do not admit closed form expressions. While g(ξ) ≃ 3/ξ is a simple and good approximation for all ξ, there exist various approximations² of f(ξ). A very simple but very rough one is f(ξ) ≃ 2.2, which is acceptable when ξ ∈ [0.5, 0.7]. A better approximation is f(ξ) ≃ 3.3ξ, which is valid for all ξ.
Summarizing the previous relations, we have that

    mp = e^{−π/tan φ}                                                        (6.15)
    tr ≃ 2.2/ωn        for ξ ∈ [0.5, 0.7]                                    (6.16)
    ts,5% ≃ 3/|σ|                                                            (6.17)

where σ is the real part and ωn is the absolute value of the dominant poles. All the previous formulas hold both in case the dominant pole is real and in case of complex conjugate dominant poles³. Given specifications on tr, ts,5%, mp, the region of the complex plane in which to place the two poles so that these requirements are satisfied corresponds to

    ωn ≥ 2.2/tr,        tan φ ≤ π/ln(1/mp),        |σ| ≥ 3/ts,5%              (6.19)

and it is illustrated in Fig. 6.8. Alternatively, only for the case of complex conjugate dominant poles, we can use the more general and precise formula

    tr ≃ 3.3ξ/ωn                                                             (6.20)

² In the literature many different approximations have been proposed.
³ Even formula (6.15) can be considered valid in case of a real dominant pole, since in this case the angle φ can be considered zero.
which would yield a much more complicated region of the complex plane. A similar analysis can be pursued for the 1% settling time, which yields

    ts,1% ≃ 4.6/|σ|.                                                         (6.21)

Frequency domain analysis. From the frequency response of the second order system it is possible to show that

    ωB = ωn √(1 − 2ξ² + √(4ξ⁴ − 4ξ² + 2)),        M = 1 if φ ≤ π/4,   M = 1/sin(2φ) if φ ≥ π/4.

A simple approximation of the first formula is

    ωB ≃ ωn,

which yields the following approximation

    tr ≃ 2.2/ωB.                                                             (6.23)

When comparing the overshoot and the resonant peak we can argue the following approximation

    1 + mp ≃ M.                                                              (6.24)
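The translation of a set of time-domain specifications into constraints on (ωn, φ, σ) can be carried out with a few lines of code (a sketch; the numerical specs are arbitrary illustrative values, not taken from the text):

```python
import numpy as np

# Hypothetical specifications (illustrative values)
mp = 0.10        # maximum overshoot (10%)
tr = 0.5         # rise time [s]
ts5 = 2.0        # 5% settling time [s]

# Region (6.19) for the dominant complex-conjugate poles -wn*exp(+/- j*phi)
wn_min = 2.2 / tr                                 # from tr ~= 2.2/wn
phi_max = np.arctan(np.pi / np.log(1.0 / mp))     # from mp = exp(-pi/tan(phi))
sigma_min = 3.0 / ts5                             # from ts,5% ~= 3/|sigma|

xi_min = np.cos(phi_max)                          # corresponding minimum damping ratio
print("wn      >=", wn_min, "rad/s")
print("phi     <=", np.degrees(phi_max), "deg   (xi >=", xi_min, ")")
print("|sigma| >=", sigma_min, "rad/s")

# Consistency check of (6.15): the overshoot attained at phi = phi_max equals mp
print(np.exp(-np.pi / np.tan(phi_max)), "~", mp)
```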

6.4 Control system design


The key steps for designing a digital control system are the following ones:

1. acquiring (or building) a model (either linear or linearized, continuous-


time or discrete-time);

2. acquiring and translating the requirements, like

• the required asymptotic behavior;


Figure 6.8: Admissible regions for the poles of a second-order system satisfying a given set of specifications in the time domain.
• |σ|, ξ, ωn, for a second order system, evaluated through either time-domain requirements (mp, tr, ts,5%) or frequency-domain specifications (M, ωB, ωA), as described in the previous section;
3. choosing both the control architecture and the controller structure;
4. choosing the sampling time;
5. designing/tuning of free parameters;
6. performance evaluation (through simulations);
7. choosing sensors, actuators, A/D and D/A converters;
8. engineering evaluation.

The first two points have been already discussed in the previous sections, so
in the following we’ll focus on phases 3, 4 and 5.

6.4.1 The choice of the control architecture


Four main control structures exist: open-loop control, closed-loop control, closed-loop control based on two degrees of freedom with a pre-filter, and closed-loop control based on two degrees of freedom with a pre-filter and an additional feedforward structure. We want to mention the pros and cons of each of these solutions, by referring to the following architecture characteristics (expressed in the frequency-domain, Fig. 6.9):

(i) Denoting with W (s) the whole scheme transfer function, the tracking
property requires |W (jω)| ≃ 1 for ω ∈ [0, ωB ].

(ii) For a good disturbance rejection, we need |W (jω)| small for ω ∈


[ωA , +∞).

(iii) Denoting with WNi (s) the i−th disturbance transfer function, we need
it to be as small as possible (internal stability and disturbances rejec-
tion).

(iv) We need system robustness: a useful parameter for quantifying robustness is the sensitivity function. This is defined as

    S_P^W(s) = P(s)/W(s) · ∂W/∂P

and quantifies how much W(s) changes when P(s) changes. Precisely, we have that

    ΔW/W ≃ S_P^W(s) ΔP/P,

where ΔW and ΔP are the (absolute) variations on W and P respectively, while ΔW/W and ΔP/P are the relative variations. Robustness is obtained if the sensitivity is small, since in this case a certain relative variation on P will cause a small relative variation on W.

(v) For avoiding actuator overloading, we need to keep “small”, in the reference bandwidth [0, ωB], the transfer function A(s) := U(s)/R(s) from the reference signal r(t) to the signal u(t) generated by the actuator and applied to the plant. Observe that A(s) = W(s)/P(s) and hence, since the tracking requirement imposes |W(jω)| ≃ 1 in [0, ωB], in any control structure we will have |A| ≃ 1/|P| in the frequency-range [0, ωB]. This shows that we risk overloading the actuator if P(jω) is “small” for some ω in [0, ωB], and this happens whenever P has zeros close to the imaginary axis in the reference signal bandwidth [0, ωB].
Figure 6.9: Desired characteristics in the frequency-domain.

1. Open-loop control (Fig. 6.10). This is the simplest control architecture, which we have to resort to whenever the output measurement is not available to the controller, so that the output can't be used as a feedback signal. With respect to the requirements listed above, we observe the following.

   (i) The tracking property translates into C ≃ 1/P in the frequency-range [0, ωB].

   (ii) The filtering property translates into |C| ≪ 1 in the frequency-range [ωA, +∞).

   (iii) The noise transfer functions are W_{N1} = P, W_{N2} = 1 and are not influenced by the controller.

   (iv) For the sensitivity function we see that S_P^W = 1 and hence the controller cannot contribute to the system's robustness.

Figure 6.10: Open-loop control architecture (with possible disturbances).

2. Closed-loop control (feedback) (Fig. 6.11). In this case the various transfer functions are given by, respectively,

    W = CP/(1 + CP),    W_{N1} = P/(1 + CP),    W_{N2} = 1/(1 + CP),
    S_P^W = S_C^W = 1/(1 + CP),    A = C/(1 + CP).

   Any of these transfer functions has to be BIBO-stable if internal stability of the whole system is required.

   (i) The tracking property translates into |CP| ≫ 1 in the frequency-range [0, ωB].

   (ii) The filtering property translates into |CP| ≪ 1 in the frequency-range [ωA, +∞).

   (iii) For ω ∈ [0, ωB] we have that W_{N1} ≃ 1/C and hence |W_{N1}| ≪ 1 if we choose |C| ≫ 1, while W_{N2} ≃ 1/(CP), and hence |W_{N2}| ≪ 1 follows from the tracking condition. For ω ∈ [ωA, +∞) we have that W_{N1} ≃ P and W_{N2} ≃ 1.

   (iv) For the sensitivity functions we see that S_P^W, S_C^W ≃ 0 in the frequency-range [0, ωB].

By summarizing, the closed-loop control leads to significant improve-


ments w.r.t. the open-loop structure. Furthermore, C can embed inter-
nal model components, so leading to an asymptotically exact tracking
of the reference signal.4

Figure 6.11: Closed-loop control architecture (with possible disturbances).

⁴ However, in case of unstable plants P, some limitations arise, above all w.r.t. disturbance rejection (cf. Bode's integrals).

3. Closed-loop control based on two degrees of freedom with a pre-filter (Fig. 6.12). The dynamic control action is split up between L, C, H. The various transfer functions are given by

    W = LCP/(1 + HCP),    W_{N1} = P/(1 + HCP),    W_{N2} = S_P^W = 1/(1 + HCP),
    A = LC/(1 + HCP).

   This way, L allows the freedom of suitably splitting the control tasks in such a way that A can remain small, C is devoted to embedding internal model components, while H's task is usually that of obtaining stabilization. A careful design of the block L is needed, as S_L^W = 1.

   (i) The tracking property can be obtained by imposing |HCP| ≫ 1 and H ≃ L in the frequency-range [0, ωB].

   (ii) The filtering property can be obtained in two different ways: either by imposing |HCP| ≫ 1 and |H| ≫ |L| in the frequency-range [ωA, +∞), or by imposing |HCP| ≪ 1 and |LC| ≪ 1/|P|, again in the frequency-range [ωA, +∞).

   (iii) For ω ∈ [0, ωB] we have that W_{N1} ≃ 1/(HC) and hence |W_{N1}| ≪ 1 if we choose |HC| ≫ 1, while W_{N2} ≃ 1/(HCP), and hence |W_{N2}| ≪ 1 follows from the tracking condition. For ω ∈ [ωA, +∞), if we choose the first filtering condition we get again W_{N1} ≃ 1/(HC) and W_{N2} ≃ 1/(HCP), which are both small. If we instead apply the second filtering condition we get W_{N1} ≃ P and W_{N2} ≃ 1.

Figure 6.12: Closed-loop control based on two degrees of freedom with a pre-filter (with possible disturbances).

4. Closed-loop control based on two degrees of freedom with a pre-filter and an additional feedforward (Fig. 6.13). Notice that if e(t) (the error which appears as the input of C) is zero, then the scheme works like an open-loop control through the feedforward block F, while the feedback takes over whenever “something is going bad” (presence of disturbances, unsatisfactory output behavior, etc.). Intuitively, F takes care of the nominal behavior, H takes care of the scheme stabilization, C takes care of the internal model components and L is useful for pre-filtering the reference. For a more precise analysis we determine the transfer functions, which are

    W = (F + LC)P/(1 + HCP),    W_{N1} = P/(1 + HCP),    W_{N2} = 1/(1 + HCP),
    A = (F + LC)/(1 + HCP).

   (i) The tracking property can be obtained by imposing F(jω) ≃ 1/P(jω) and H(jω) ≃ L(jω) in the frequency-range [0, ωB].

   (ii) The filtering property can be obtained by imposing F(jω) ≃ 0 and |H(jω)| ≫ |L(jω)| in the frequency-range [ωA, +∞).

Figure 6.13: Closed-loop control based on two degrees of freedom with a pre-filter and an additional feedforward (with possible disturbances).

6.4.2 The choice of the sampling period


This phase is typical of digital control for continuous-time plants. The sampling period T both has to satisfy some constraints and has to be chosen based on some criteria, which we deal with in the sequel.

Main constraints. If a micro-processor (µP) based system is used, let Tc denote the time required by the control algorithm for performing computations and other functions; then the obvious requirement is T ≥ Tc. The most common µP tasks to be considered are listed below:

• Input signal test (checking whether e(k) ∈ [emin, emax]).
• Digital filtering (e.g. “smoothing” in derivatives computation).
• Signal conditioning (BIAS elimination, values adaptation).

• Estimation (or extraction) of the signal values. This happens for in-
stance when the sensor does not provide the output y(kT ) but instead
a function ỹ(k) = f (y(kT )) where f (·) is a generic function. In this
case the computation of f −1 (ỹ(k)) is needed.

• I/O interfaces management (log’s creation, data storage, data visual-


ization).

In case the µP is shared (multitasking) for the control of more than one process, then the constraint becomes T ≥ Σ_i T_{c,i}, where T_{c,i} is the computation time needed to manage the control of process i.

Criteria for the choice of T. A detailed list follows, dealing with the various criteria to be taken into account when choosing the sampling time T.

1. Bounding variability (roughness) of the control signal. In digital control the control action is updated only at the sampling times, while in between the system behaves in open-loop. Consequently, if T is too large w.r.t. the bandwidths of P(s), N(s), R(s), or in case of instability of some of these transfer functions in the control loop (see Figure 6.14), then the error ẽ(k + 1) could become too large, and hence u(k + 1) could become large as well. In this case some troubles could arise:

• input signal saturation;

• arising of mechanical resonances;

• arising of limit cycles due to non-linearities.

These are often critical problems in many applications.


Figure 6.14: Architecture of a digital control system.


2. System dynamics and related delays. Assume that at a certain time instant tg either a step reference signal appears or a disturbance w starts to act. Let (k − 1)T and kT be two subsequent sampling instants. The controller “sees” the step reference (or the disturbance) only at time kT (Fig. 6.15). So a delay given by Δ = kT − tg is unavoidable and, in the worst case (the step arises immediately after the sample at (k − 1)T has been acquired), Δ = T. We already saw that the system promptness depends on the rise-time tr, and that delay phenomena have a negative impact on the system transient performance. In order to overcome that, we usually assume

    T ≤ tr/10,                                                               (6.25)

or, equivalently,

    Ω ≥ 2π · 10/tr.

Figure 6.15: Delay caused by the sampling period T on the response to the step reference signal (r(t) = δ₋₁(t − tg)).

Example 6.2 (first order system). Assume that, by considering only the dominant pole, the closed loop system can be described by the transfer function W(s) = 1/(1 + sτ), with τ ∈ R₊ the system time constant. We know that tr ≃ 2.2τ. So in order to obtain T ≤ tr/10 it should hold that T/τ ≤ 0.22. Recall that the −3dB-bandwidth of W(s) is ωB = 1/τ. Hence, in the frequency domain the previous inequality translates into Ω/ωB = 2πτ/T ≥ 28. ♢

Example 6.3 (second order system). Assume now that, by considering only the dominant poles, the closed loop system can be described by the transfer function W(s) = 1/(1 + 2ξ s/ωn + s²/ωn²). Using formula (6.23) we obtain the same constraint found in the previous example between the sampling frequency Ω and the −3dB-bandwidth of W(s), that is Ω/ωB ≥ 28. If instead we use the more precise formula (6.20), then

    T ≤ 3.3ξ/(10 ωn).

If, for instance, ξ = 1/√2, then 3.3ξ/10 ≃ 0.23 and hence the previous inequality is equivalent to Ω/ωn ≥ 27. If we take instead ξ = 1/2, then 3.3ξ/10 ≃ 0.17 and hence the previous inequality is equivalent to Ω/ωn ≥ 38. ♢

3. Anti-Aliasing filter effects. This issue has already been addressed in Chapter ??. We proved that adding an Anti-Aliasing filter in the feedback loop is possible only at the cost of a reduction of the phase margin. Hence this can be done only if the design of the continuous time controller prescribed an extra phase margin with respect to the one resulting from the satisfaction of the transient specifications. More precisely, assume that the transient specifications correspond to a minimum phase margin equal to m∗φ, but that we have designed the continuous time controller so that the resulting phase margin is mφ > m∗φ, so that the extra phase margin is φ := mφ − m∗φ. Assume that a part φAA of this extra phase margin is devoted to the Anti-Aliasing filter phase compensation. Referring to second order anti-aliasing filters, in order to obtain an attenuation a at frequency Ω/2 with damping factor ξ, an estimate of the required phase margin portion φAA is given by (5.16):

    φAA ≥ 4ξ√a · ωc/Ω   =⇒   Ω ≥ 4ξ√a · ωc/φAA                               (6.26)
4. Delay generated by the zero-order holder interpolator. As depicted in Fig. 6.16, the zero-order interpolator leads to a “delay” of T/2.
Referring to Fig. 6.14, a way to take this holder delay into account is to consider as plant transfer function P̃(s) ≃ e^{−sT/2} P(s). This implies a clockwise rotation of the Nyquist plot and, consequently, a decrease in the phase margin given by

    φH = ωc T/2,                                                             (6.27)
with ωc the open-loop transfer function crossover frequency. As for the Anti-Aliasing filter, in order to prevent this phase margin decrease from deteriorating the transient behaviour, we need to take it into account in the continuous time controller design. Hence some extra phase margin needs to be considered, and a portion φH has to be dedicated to the compensation of the delay introduced by the interpolator. Consequently, the following inequality has to hold:

    φH ≥ ωc T/2   =⇒   Ω ≥ π ωc/φH.                                          (6.28)

Figure 6.16: Delay induced by the zero-order holder interpolator on the input signal. The signal u(t) (blue) represents the non-delayed signal, while û(t) (red, dashed) is the approximation given by the sampled and interpolated signal.
Remark 6.4. Putting together the constraints (6.26) and (6.28), and since φAA + φH = φ, where φ is the total extra phase margin, we obtain that

    Ω ≥ ωc (π + 4ξ√a)/φ,        φ ≥ ωc (π + 4ξ√a)/Ω,                          (6.29)

which can be seen either as a constraint on the sampling frequency, once the extra phase margin is given, or as a constraint on the extra phase margin, once the sampling frequency is given.
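A small numerical sketch of how (6.29) is used in practice (the numbers are arbitrary illustrative choices, not taken from the text):

```python
import numpy as np

wc = 10.0                  # open-loop crossover frequency [rad/s] (illustrative)
a = 10.0                   # required attenuation of the anti-aliasing filter at Omega/2
xi = 1.0 / np.sqrt(2.0)    # damping ratio of the second order anti-aliasing filter
phi_extra = 0.15           # total extra phase margin available [rad] (illustrative)

Omega_min = wc * (np.pi + 4 * xi * np.sqrt(a)) / phi_extra   # first inequality in (6.29)
T_max = 2 * np.pi / Omega_min
print("Omega >=", Omega_min, "rad/s   ->   T <=", T_max, "s")

# Conversely, for a given sampling frequency, the extra phase margin that must be
# provided by the continuous-time design (second inequality in (6.29)):
Omega = 1500.0
print("phi >=", wc * (np.pi + 4 * xi * np.sqrt(a)) / Omega, "rad")
```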
5. Parameter variation sensitivity. Given the first order system P(s) = k/(1 + sτ), its discretized version takes the form P̃(z) = b/(z − a), with a = e^{−T/τ} and b = k(1 − e^{−T/τ}). If the knowledge of τ is imprecise, then we know that

    Δa/a ≃ S_τ^a Δτ/τ,

where S_τ^a is the sensitivity of a as a function of τ. We know that

    S_τ^a = τ/a · ∂a/∂τ = T/τ.                                               (6.30)

Hence this sensitivity increases linearly in T, and this can make the overall controlled system less robust to variations of τ. Precisely, if we think of P(s) as inserted in the control scheme in Figure 6.17, and if W(z) is the transfer function of the corresponding closed loop system, then it can be shown that the sensitivity S_τ^W of W(z) with respect to variations of τ admits the following decomposition

    S_τ^W = S_a^W S_τ^a

and hence a large sensitivity S_τ^a can make the sensitivity S_τ^W large as well.
Figure 6.17: Feedback control for a plant obtained via sample/hold.

6. Effect of the quantization in the derivative computation. One building block of some digital control algorithms is the computation of an approximate derivative of the signals. The quality of the approximation is affected not only by the sampling interval T but also by the roughness of the quantizer. To better understand this, assume that q[·] is a uniform quantizer with quantization interval Δ and let f(t) be a continuous time signal. Then the following approximation of the derivative can be used

    df(t)/dt |_{t=kT} ≃ (f(kT) − f((k − 1)T))/T                              (6.31)

and the quality of this approximation depends on the variability of the derivative of f(t) around kT. If only the quantized samples q[f(kT)] are available for all k, then we can use instead the following approximation

    df(t)/dt |_{t=kT} ≃ (q[f(kT)] − q[f((k − 1)T)])/T.                       (6.32)

To better understand the quality of this approximation, let us assume that f(t) = mt + q, so that the first approximation (6.31) is exact and hence the quality of the second approximation (6.32) depends only on the quantization. In this case df(t)/dt = m, constant for all t. Observe that

    q[f(kT)] − q[f((k − 1)T)] = ℓΔ,

where ℓ is the number of times that f(t) crosses the quantization levels ΔZ = {..., −3Δ, −2Δ, −Δ, 0, Δ, 2Δ, 3Δ, ...}. This number ℓ coincides with the number of elements of ΔZ that lie between f((k − 1)T) and f(kT), that is, the number of elements in [f((k − 1)T), f(kT)] ∩ ΔZ = [m(k − 1)T + q, mkT + q] ∩ ΔZ. It can be seen that (see the remark below)

    ℓ = ⌊Tm/Δ⌋ + δ_k                                                         (6.33)

where δ_k can be 0 or 1 according to the relative positions of the samples f((k − 1)T), f(kT) and of the quantization levels, as better understood from Figure 6.18. Then

    q[f(kT)] − q[f((k − 1)T)] = Δ(⌊Tm/Δ⌋ + δ_k).

We know that ⌊x⌋ = x − e(x), where e(x) ∈ [0, 1]. Hence

    (q[f(kT)] − q[f((k − 1)T)])/T = Δ/T (Tm/Δ + δ′_k) = m + Δ/T δ′_k = df(t)/dt + Δ/T δ′_k,

where δ′_k := δ_k − e(Tm/Δ) ∈ [−1, 1]. We can argue that the size of the approximation error is at most Δ/T, and hence, if T is too small, it can become unacceptable. Observe that this is the only case in which a too small sampling interval can cause problems. However, this problem can be solved simply by taking the following approximation of the derivative

    df(t)/dt |_{t=kT} ≃ (q[f(kT)] − q[f((k − s)T)])/(sT),

where s is a positive integer. Indeed, in this case the approximation error becomes at most Δ/(sT).
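The Δ/T bound on the error of the quantized difference quotient, and the improvement obtained by differencing over s steps, can be seen in a short simulation (a sketch; the slope, quantization step and sampling time are arbitrary illustrative values):

```python
import numpy as np

m, q = 0.37, 0.05           # f(t) = m*t + q (illustrative)
Delta = 0.01                # quantization step
T = 0.001                   # sampling time (deliberately small)

k = np.arange(1, 2001)
f = m * k * T + q
fq = Delta * np.floor(f / Delta)            # uniform quantizer q[.]

d1 = (fq[1:] - fq[:-1]) / T                 # one-step difference quotient (6.32)
s = 50
ds = (fq[s:] - fq[:-s]) / (s * T)           # s-step difference quotient

print("true derivative       :", m)
print("max error, 1 step     :", np.abs(d1 - m).max(), "  (bound Delta/T =", Delta / T, ")")
print("max error,", s, "steps :", np.abs(ds - m).max(), "  (bound Delta/(s*T) =", Delta / (s * T), ")")
```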
Remark 6.5. We detail here why formula (6.33) holds true. Consider any interval [a, b] and a positive constant c. We want to compute #([a, b] ∩ cZ), namely the number of elements of the set [a, b] ∩ cZ. Observe that

    #([a, b] ∩ cZ) = #([a, a + c[ ∩ cZ) + #([a + c, b] ∩ cZ) = 1 + #([a + c, b] ∩ cZ)
                   = 2 + #([a + 2c, b] ∩ cZ) = ··· = ℓ + #([a + ℓc, b] ∩ cZ),

where

    ℓ = ⌊(b − a)/c⌋.

Observe finally that, since the interval [a + ℓc, b] has length less than c, #([a + ℓc, b] ∩ cZ) can be 0 or 1 according to the position of this interval.

Figure 6.18: The signal f(t) = mt + q is quantized according to a quantizer q[·] with quantization intervals of size Δ. These quantization intervals are highlighted by the red horizontal lines. The signal f(t) is sampled at time instants (k − 1)T and kT, producing f((k − 1)T) and f(kT), and then it is quantized.

Remark 6.6. We are going to conclude this section by listing in Table 6.1 some
practical considerations regarding typical sampling-time choices in relation
to the physical system of interest.
to-be-controlled variables                     T magnitude order
tank liquid level, temperature                 seconds, fractions of a second
pressure, flow rate                            tens of msec
voltage, electric current                      ∼ msec
power electronics                              tens of µs
solid state electronics, QM applications       ∼ nsec

Table 6.1: Some examples of the order of magnitude of the sampling time T.
Chapter 7

Digital controller synthesis: Emulation methods

7.1 Introduction
We are going to deal with the problem of designing a discrete-time controller
C(z) for a given continuous-time plant P (s), as shown in Fig. 7.1.

r̃ + ẽ ũ u
r T

ց C̃(z) H0 P (s) b y

T
ց

Figure 7.1: Digital feedback interconnection.

Various methodologies exist for designing a suitable C(z):

• Emulation method. An approximate method based on the following


procedure:

– We start from the continuous transfer function of the process P̃ (s)


– C(s) is thereafter designed, by continuous-time reasonings. We
need to introduce an extra phase margin for taking care of delay
introduced by the ZOH and the phase decrement due to the anti
aliasing filter;
128CHAPTER 7. DIGITAL CONTROLLER SYNTHESIS: EMULATION METHODS

– C(s) is “translated” in the z domain, namely a C̃(z) is determined


in such a way that the series of the sampler, C̃(z) and the holder
behaves similarly to C(s).
• Synthesis via bilinear transformation. It requires the following
steps:
h h h i ii
−1 −1 P (s)
– P̃ (z) = (1 − z )Z S L s
; T is first of all evaluated;

– the Tustin (bilinear) transformation is adopted1 P̃1 (w) := P̃ (z) 1+ wT ;

2
z=
1− wT
2

– C̃1 (w) is designed in the continuous-time domain;



– the inverse transformation is finally used: C̃(z) = C̃1 (w) .

z−1
w= T2 z+1

This method allows to obtain an exact pole placement, as no approxi-


mations have been introduced.
• Direct synthesis in the discrete-time domain. This way the
required steps are the following ones:
h h h iii
−1 −1 P (s)
– a domain change is first done: P̃ (z) = (1−z )Z ST L s
;
– C̃(z) is designed directly in the discrete domain, by means of either
a direct synthesis formula or a diophantine equation.
In the following sections both the first and the third method will be discussed
in detail.

7.2 Emulation method: the digital conver-


sion of a continuous time controller
Given a desired C(s), we want to implement it in the z domain as C̃(z) in
such a way that the series of the sampler, C̃(z) and the holder provides an
input/output behavior similar to the one provided by C(s) (see Fig. 7.2).
This will be possible only at a cost of some approximations due to the finite
bandwidth related to both sampling and ZOH’s delay.
1
The “scaling factor” T2 has no relevance from a mathematical viewpoint, but the
reason why we introduced that will be clear in the sequel.
7.2. EMULATION METHOD: THE DIGITAL CONVERSION OF A CONTINUOUS TIME CONTRO

e(t) C(s) u(t)


ẽ(k) ũ(k)
e(t) T

ց C̃(z) H0 ū(t)

Figure 7.2: Emulation method: find C̃(z) such that the series interconnection
shown in the figure behaves similarly to the continuous time transfer function
C(s).

Remark 7.1. Consider the tracking problem treated in Section 6.1.1, in which
we have to design a controller for the digital control system in Fig. 7.1 able to
l
track with a prescribed asymptotic error a reference r(t) = tl! . By designing
the controller in continuous time, we will need to find a C(s) such that
L(s) = C(s)P (s) has l poles in s = 0, namely

L0 (s)
L(s) =
sl
and a Bode gain L0 (0) determined according formula (6.7). If P̃ (z) is the
sample/hold version of P (s), we have shown that, in order to obtain the
desired asymptotic error, we need to choose a discrete time controller C̃(z)
be such that L̃(z) := C̃(z)P̃ (z) has l poles in z = 1, namely

L̃0 (z)
L̃(z) =
(z − 1)l

and has Bode gain L̃0 (1) in the following relation with the Bode gain of L(s)

L̃0 (1) = T l L0 (0) (7.1)

In general when a continuous time transfer function F (s) that has l poles
in s = 0 and a discrete time transfer function F̃ (z) with l poles in z = 1
are such that their Bode gains satisfy F̃0 (1) = T l F0 (0), we say that they are
matched at zero frequency. In this way we can say that, in order to have the
same asymptotic error, we need that the continuous time open loop transfer
function L(s) and the discrete time open loop transfer function L̃(z) need
to be matched at zero frequency. To this respect it an be shown that the
130CHAPTER 7. DIGITAL CONTROLLER SYNTHESIS: EMULATION METHODS

sample/hold version P̃ (z) of P (s) are matched at zero frequency. Then in


order to have open loop transfer function L(s) and L̃(z) matched at zero
frequency we need to impose that also C(s) and C̃(z) are matched at zero
frequency.
We give now various discretization methods. We start from the invariance
methods.
1. Invariance methods. This method is an example within the class of
invariance methods. An invariant method is based on the requirement
that the series of C(s) and the sampler and the series of the sampler
and C̃(z) provide the same output when driven by the same input (see
Fig. 7.3). Given the driving inpute(t) with Laplace transform E(s) the
problem is to obtain C̃(z) such that the following equation is satisfied
C̃(z)Z[ST [e(t)]] = Z[ST [L−1 [C(s)E(s)]]]
The various invariance methods differ according to the selected driving
input.

e(t) T

ց C̃(z) ũ(k)

e(t) C(s) T

ց ũ(k)

Figure 7.3: Invariance method: find C̃(z) such that the two series intercon-
nections shown in the figure behaves identically, namely they provide the
same output when they are driven by the same input e(t).

(a) Impulse invariance method: In this case we choose the input


to be the impulse signal δ(t). We assume that the sampled version
of δ(t) is δ(k). Therefore we need to impose that
C̃(z) = Z ST L−1 [C(s)]
  

(b) Step invariance method: In this case choose the input to be


the step signal e(t) = δ−1 (t) and hence we need to impose that
   
−1 C(s)
C̃(z)Z[δ−1 (k)] = Z ST L
s
7.2. EMULATION METHOD: THE DIGITAL CONVERSION OF A CONTINUOUS TIME CONTRO

which yields

z−1
   
−1 C(s)
C̃(z) = Z ST L
z s

This method coincides with the one presented in Section 5.7.


(c) Ramp invariance method: In this case choose the input to be
the ramp signal tδ−1 (t) whose sampled version is T kδ−1 (k). Hence
we need to impose that
   
−1 C(s)
C̃(z)Z[T δ−2 (k)] = Z ST L
s2

which yields

(z − 1)2
   
−1 C(s)
C̃(z) = Z ST L
Tz s2
Example 7.1. We are now going to show the application of the invari-
K
ance methods. Let C(s) = s+p . Then

(a) Applying the impulse invariance method we see that L−1 [C(s)] =
Ke−pt and hence
Kz
C̃(z) = Z Ke−pT k =
 
z − e−pT

Observe that C(0) = K/p while C̃(1) = 1−eK−pT and hence C(s)
and C̃(z) are not matched at the zero frequency.
h i
−1 C(s)
(b) Applying the step invariance method we see that L s
=
K
p
(1 − e−pt ) and hence

z−1 Kz−1
   
K −pT k z z
C̃(z) = Z (1 − e ) = −
z p p z z − 1 z − e−pT
z−1 K 1 − e−pT
 
K
= 1− =
p z − e−pT p z − e−pT

Observe that C(0) = K/p = C̃(1) and hence in this case C(s) and
C̃(z) are matched at the zero frequency.
132CHAPTER 7. DIGITAL CONTROLLER SYNTHESIS: EMULATION METHODS

(c) Applying the ramp invariance method we see that

 
−1 C(s) K
L 2
= 2 (pt − 1 + e−pt )
s p

and hence

(z − 1)2
 
K −pT k
C̃(z) = Z 2 (pT k − 1 + e )
Tz p
K (z − 1)2
 
pT z z z
= − +
T p2 z (z − 1)2 z − 1 z − e−pT
(z − 1)2
 
K
= pT − (z − 1) +
T p2 z − e−pT
K z(pT − 1 + e−pT ) + (1 − pT e−pT − e−pT )
=
T p2 z − e−pT

Observe that C(0) = K/p = C̃(1) and hence in this case C(s) and
C̃(z) are matched at the zero frequency.

Remark 7.2. It can be shown that the step invariance and the ramp in-
variance methods provide C̃(z) that are matched at the zero frequency
with C(s) whenever C(s) has no zeros in s = 0.

2. Matched Pole-Zero method (MPZ). The invariance methods


provide transfer functions C̃(z) that behave well when driven by a
particular input. These methods however may not perform well when
the driving signals are different such as, for instance, sinusoidal signals.
Indeed these methods may provide a transfer function C̃(z) with fre-
quency response that may differ much from the one of C(s). The MPZ
method is preferable when we want to obtain a better agreement be-
tween the two frequency responses. Observe that the invariance meth-
ods provide transfer functions C̃(z) with poles epi T , where pi are the
poles of C(s). In order to obtain a better fitting of the frequency re-
sponse, the MPZ method uses the same mapping for the choice of the
zeros of C̃(z). Precisely, given C(s) we can
7.2. EMULATION METHOD: THE DIGITAL CONVERSION OF A CONTINUOUS TIME CONTRO

• Evaluate poles and zeros of C(s) and express it in Evans’ form


m
Y
(s − zi )
i=1
C(s) = K n−ν
, (7.2)
Y
ν
s (s − pi )
i=1

where the pole in the origin has been highlighted for reasons which
will be later clarified.
• map the computed poles and zeros through z = esT , so obtaining
C̃1 (z)
Ym
(z − ezi T )
i=1
C̃1 (z) := n−ν
. (7.3)
Y
(z − 1)ν (z − epi T )
i=1

• equip C̃1 (z) with further l zeros at −1, for diminishing the delay

C̃2 (z) = C̃1 (z)(z + 1)l . (7.4)

From a practical perspective, if r := n − m is the relative degree


of C̃1 (z), we can choose l = r, if no delay is desired, otherwise
l = r − 1, if a delay step in the controller is preferred, for allowing
the microprocessor to have time enough for implementing the con-
trol algorithm (l < r − 1 is almost never chosen). It’s worthwhile
to remark that zeros at z = −1 are needed whenever the relative
degree of C̃1 (z) (which is the same of C(s)) is greater than zero.
This way C(s) is endowed with “zeros at infinity”, i.e. some ze-
ros at infinite frequency. Zeros at z = −1 are the discrete-time
counterpart of that, as the maximum discrete frequency is given
by ejπ = −1.
• Let
C̃(z) = K̃m C̃2 (z), (7.5)
Find K̃m in such a way that C(s) and C̃(z) are matched at zero
frequency. We distinguish between two cases:
134CHAPTER 7. DIGITAL CONTROLLER SYNTHESIS: EMULATION METHODS

[ν = 0] This implies that C(s) is devoid of poles at s = 0, so


that C̃2 (z) hasn’t poles at z = 1, and we have simply to make
equal the zero-frequency gains C(0) = C̃(1).
[ν > 0] This happens whenever C(s) is endowed with a pole at
s = 0 (and therefore C̃2 (z) is endowed with a pole at z = 1
with the same multiplicity ν). For taking into account the
neglected pole (with the corresponding multiplicity ν), the
gain needs to be multiplied by T ν . Precisely, if we write
C(s) = C0s(s))
ν
C̃0 (z)
and C̃(z) = (z−1) ν and by imposing that

C̃0 (1) = T ν C0 (0)

Example 7.2. We are now going to show two applications of the MPZ
method.

• Let C(s) = K s+a


s+b
. Then

z − e−aT
C̃1 (z) = = C̃2 (z),
z − e−bT

Then C̃(z) = K̃m C̃2 (z) with K̃m such that

a 1 − e−aT a 1 − e−bT
C(0) = K = C̃(1) = K̃m ⇒ K̃ m = K .
b 1 − e−bT b 1 − e−aT

• Let C(s) = K s(s+b)


s+a
. If we want zero relative degree

(z + 1)(z − e−aT )
C̃2 (z) = .
(z − 1)(z − e−bT )

Then C̃(z) = K̃m C̃2 (z) with K̃m such that C̃0 (1) = T C0 (0) where

(z + 1)(z − e−aT ) s+a


C̃0 (z) = K̃m C0 (s) = K
z − e−bT s+b

Then
T a 1 − e−bT
K̃m = K .
2 b 1 − e−aT
7.2. EMULATION METHOD: THE DIGITAL CONVERSION OF A CONTINUOUS TIME CONTRO

Remark 7.3. We give now both a mathematical reasoning and an ex-


ample supporting the fact that the transfer function C̃(z) given by
MPZ method provides a good approximation of C(s) with respect to
the frequency response, namely that C(jω) ≃ C̃(ejωT ). First observe
that C(jω) is composed of factors such as jω − a, where a ∈ C, which
jωT aT
correspond to factors e T−e in C̃(ejωT ). Observe that Taylor expan-
jωT aT
sion in T of e T−e is
ejωT − eaT
 
jω + a
= (jω − a) 1 + T + ···
T 2
jωT aT
We can argue that, if T is small, then jω − a ≃ e T−e which justifies
the presence of the factors z − ezi T and z − epi T in C̃(z) 2 .
We give now instead a numerical evidence of the good behavior of the
MPZ method with respect to the frequency response approximation.
Take
s2 + 0.5s + 10
C(s) =
(s + 1)(s + 2)
In matlab the discretization of a continuous time transfer function can
be done by the command c2d. This allows different type of discretiza-
tion methods, impulse invariance method (’impulse’), step invariance
method (’zoh’), ramp invariance method (’foh’) and MPZ method
(’matched’). The bode plots of the different discretized transfer func-
tions compared with the Bode plot of the original continuous time
transfer function is shown in Fig. 7.4. We see that the MPZ is the
method that performs better than the others.
3. Methods based on the derivative approximation. A transfer
function implements a differential equation. So we need to approximate
d
the time derivative dt (in the s domain is the multiplication by s) with
a suitable operator in discrete time (and hence a suitable operator in
the z domain). Three main methods exist:
• Euler forward method (EF).
dx(t) x(t + T ) − x(t)
≃ , (7.6)
dt T
Indeed, it can be shown that the normalized factors (ejωT − eaT ) eaTa−1 are even better
2

approximation of jω − a since for them the approximation holds both when T is small or
when ω is small.
136CHAPTER 7. DIGITAL CONTROLLER SYNTHESIS: EMULATION METHODS

Bode Diagram
15

10

5
Magnitude (dB)

-5

-10

-15

-20
360

270

180
Phase (deg)

90

-90

-180
10 -2 10 -1 10 0 10 1 10 2
Frequency (rad/s)

2
Figure 7.4: The Bode plot of C(s) = s(s+1)(s+2)
+0.5s+10
(blu dashed line) compared
with Bode plots of the discretized transfer functions obtained by applying
the impulse invariant method (blu line), the step invariant method (black
line), the ramp invariant method (green line) and the MPZ method (red
line). Notice that all the discretized transfer functions are matched at the
zero frequency with C(s) except the one determined by the impulse invariant
method.

with T the sampling time. In the transform domain it becomes

z−1
s≃ . (7.7)
T
7.2. EMULATION METHOD: THE DIGITAL CONVERSION OF A CONTINUOUS TIME CONTRO

Since z = esT (in time domain they both correspond to delay) it


follows that we are actually resorting to the approximation

esT ≃ 1 + sT ; (7.8)

• Euler backward method (EB).

dx(t) x(t) − x(t − T )


≃ , (7.9)
dt T
which corresponds to
1 − z −1
s≃ . (7.10)
T
and to the approximation

1
esT ≃ ; (7.11)
1 − sT

• Tustin’s method. It corresponds to a trapezoidal approximation of


the integral. Indeed
Z t+T  
dx(τ ) T dx(t + T ) dx(t)
x(t + T ) − x(t) = dτ ≃ +
t dτ 2 dt dt
(7.12)
which corresponds to

T
z−1≃ (zs + s). (7.13)
2
which yields
2 z−1
s≃ . (7.14)
T z+1
According to this approach z = esT is approximated by
sT
sT 1+ 2
z=e ≃ sT
, (7.15)
1− 2

Let’s remark that the Tustin’s transformation is exactly the bilin-


ear one, except for a “scaling” factor.
138CHAPTER 7. DIGITAL CONTROLLER SYNTHESIS: EMULATION METHODS

After having chosen one of the previous approximations, we can com-


pute

C̃b (z) = C(s) z−1 C̃f (z) = C(s) 1−z−1 C̃t (z) = C(s) 2 z−1

s= T
s= T
s= T z+1
(7.16)
Remark 7.4. It can be shown that all the presented discretizations
based on the derivative approximation leads to discrete time transfer
functions that are matched at zero frequency with the original contin-
uous time transfer function. We prove this fact only for the Tustin
discretization. Write C(s) = C0sν(s) with C0 (s) without poles and zeros
in s = 0 so that C0 (0) is the Bode gain of C(s). Then
T ν (z + 1)ν
C̃t (z) = ν C (s)

ν 0
2 (z − 1)
2 z−1
s= T z+1

C̃0 (z)
so that we can write C̃t (z) = (z−1)ν
, where

Tν ν

C̃0 (z) = ν (z + 1) C0 (s) 2 z−1

2 s= T z+1

Then the Bode gain of C̃t (z) is C̃0 (1) = T ν C0 (0).


Remark 7.5. Notice that, if we start from a C(s) expressed in the
following form
Ym
(s − zi )
i=1
C(s) = K n−ν
,
Y
ν
s (s − pi )
i=1
then after some computations it is possible to find that the EF approx-
imated transfer function is

m
Y
(z − z̃if )
i=1
C̃f (z) = K̃f n−ν
(7.17)
Y
(z − 1)ν (z − p̃if )
i=1
where
7.2. EMULATION METHOD: THE DIGITAL CONVERSION OF A CONTINUOUS TIME CONTRO

(a) The generic zeros and poles are z̃if := 1 + zi T and p̃if := 1 + pi T .
(b) The poles in s = 0 are mapped into poles z = 1.
(c) The relative degrees of Cf (z) and C(s) coincide, namely rdeg(Cf (z)) =
rdeg(C(s)) or in other words they share the multiplicity of the pole
at infinity.
(d) The gain is K̃f := KT n−m .

Considering instead the EB approximated transfer function, we obtain


m
Y
n−m
z (z − z̃ib )
i=1
C̃b (z) = K̃b n−ν
(7.18)
Y
(z − 1)ν (z − p̃ib )
i=1

where
1 1
(a) The generic zeros and poles are z̃ib := 1−zi T
and p̃it := 1−pi T
.
(b) The poles in s = 0 are again mapped into poles in z = 1.
(c) Cb (z) has relative degree zero, namely rdeg(Cb (z)) = 0 since the
poles in s = ∞ are mapped into poles in z = 0.
(d) The gain is
m
Y
(1 − zi T )
i=1
K̃b := KT n−m n−ν .
Y
(1 − pi T )
i=1

We consider finally the Tustin approximation which yields


m
Y
(z + 1)n−m (z − z̃it )
i=1
C̃t (z) = K̃t n (7.19)
Y
(z − 1)ν (z − p̃it )
i=1

where
140CHAPTER 7. DIGITAL CONTROLLER SYNTHESIS: EMULATION METHODS

1+zi T /2 1+pi T /2
(a) The generic zeros and poles are z̃it := 1−zi T /2
and p̃it := 1−pi T /2
.
(b) The poles in s = 0 are again mapped into poles in z = 1.
(c) Cf (z) has relative degree zero, namely rdeg(Cf (z)) = 0 since the
poles in s = ∞ are mapped into poles in z = −1.
(d) The gain is
m
Y
 n−m (1 − zi T /2)
T i=1
K̃t := K n−ν
. (7.20)
2 Y
(1 − pi T /2)
i=1

Notice that
Remark 7.6. As far as the stability of the approximated transfer func-
tions, observe that, if p is a pole of C(s), then the poles p̃f , p̃b , p̃t of
the corresponding discrete time transfer functions obtained according
to the EF, EB and Tustin approximations are
pT
1 1+ 2
p̃f = 1 + pT p̃b = p̃t = pT
1 − pT 1− 2

This shows that for the analysis of the Schur stability we have that
|p̃f | < 1 ⇔ |p + 1/T | < 1/T
|p̃b | < 1 ⇔ |p − 1/T | > 1/T
|p̃t | < 1 ⇔ ℜ[p] < 0

Hence, particular care has to be devoted to stability in case of EB


method since in that case only a subset of the stability domain for p
yields stability of p̃f and hence for that method a Hurwitz stable C(s)
yields a Schur stable C̃(z) only is T if small enough. These stability
properties are illustrated for Fig. 7.5. Notice moreover that the Taylor
expansions of the p̃f , p̃b , p̃t are
p2 T 2 p3 T 3
p̃f = 1+pT, p̃b = 1+pT +p2 T 2 +p3 T 3 +· · · p̃t = 1+pT +
+ +· · ·
2 4
(7.21)
Hence they all coincides until the first order term. Similar arguments
hold for the Taylor expansions of the zeros.
7.2. EMULATION METHOD: THE DIGITAL CONVERSION OF A CONTINUOUS TIME CONTRO

=(z)

<(z)

=(s) =(s) =(s)


p p

<(s) <(s) <(s)

Figure 7.5: The different mappings of the Schur stability region into the
corresponding regions in the s domain resulting from the three discretization
methods.

Remark 7.7. Notice that for each pole of C(s) the Taylor expansion of
the associated pole p̃m = epT (similar arguments hold for the zeros) of
Cm (z) is
p 2 T 2 p3 T 3
p̃m = 1 + pT + + ···
2 6
which coincides with the expansion of the Tustin approximation poles
(see (7.21)) until the second order term. Indeed, it can be shown that
the Tustin approximation behaves well with respect to the frequency
response.
Remark 7.8. Let’s list some important remarks:

• The MPZ method is equivalent to the approximate Tustin’s one


from a performance perspective, while both EF and EB are worst;
• depending on the values assumed by the sample frequency Ω, we
usually get
142CHAPTER 7. DIGITAL CONTROLLER SYNTHESIS: EMULATION METHODS

– Ω ≤ 5ωb , with ωb the system bandwidth, instability often


appear;
– Ω ≤ 10ωb , the system if significantly underdamped;
– Ω ≥ 20ωb , good performances are usually obtained;
– Ω ≥ 30ωb , the performances become almost those of the con-
tinuous system as well;
• The most significant contribution to the errors is due to ZOH. This
contribution can be included in the design procedure by resorting
T
to a Padé approximation of e−s 2 , e.g. an approximation of degree
2/T
(0, 1): GZOH (s) = s+2/T . This way allows us to design a controller
for P̃ (s) = GZOH (s)P (s), instead of P (s).

QUESTION: In the emulation method we start from C(s) such that the
closed loop with P (s) is stable. Can you prove that the digital closed loop
system with C̃(z) determined with the emulation method is stable as well?

7.3 P.I.D. controllers


P.I.D.’s are controllers endowed with three actions (proportional, integral
and derivative) which can be implemented both in an analogic structure and
in a digital one. These are widely spread regulators because of the existence
of simple “tuning” procedures. They have the following transfer function
 
KI 1
CP ID (s) = KP + + KD s = KP 1 + + sTD (7.22)
s sTI
KI
= (1 + TI s + TI TD s2 ) (7.23)
s
with KP the proportional gain and

KD
TD := , advance time, (7.24)
KP
KP
TI := , integral action time. (7.25)
KI

By being (7.22) an improper transfer function, because of the term sTD , we


need to introduce another pole at high frequencies both for physical realiz-
7.3. P.I.D. CONTROLLERS 143

ability and for noise filtering:


 
KI KD s 1 sTD
CP′ ID (s) = KP + + = KP 1+ + ,
s 1 + sTL sTI 1 + sTL
= KP (1 + CI (s) + CD (s)) ,

where KP is called the proportional action, CI (s) = sT1I is called the integral
sTD
action and CD (s) = 1+sTL
is called the derivative action. The typical choice
of TL is
TD TD
≤ TL ≤
10 3
Now we have to extend the previous considerations to the discrete-time
case, where often discrete-time versions of the three actions are considered.

• Proportional action: CP (s) = KP is mapped into C̃P (z) = KP .

• Integral action: by the EB again, CI (s) = sT1I becomes, after some


computations,
T 1
C̃I (z) = . (7.26)
TI 1 − z −1
which in time-domain can be rewritten as

T
uI (k) = uI (k − 1) + e(k), (7.27)
TI

• Derivative action: by the EB method CD (s) = sTD


1+sTL
becomes, after
some computations,

TD 1 − z −1
C̃D (z) = TL
. (7.28)
T + TL 1 − T +T z −1
L

which in time-domain can be rewritten as

TL TD
uD (k) = uD (k − 1) + (e(k) − e(k − 1)). (7.29)
T + TL T + TL
144CHAPTER 7. DIGITAL CONTROLLER SYNTHESIS: EMULATION METHODS

7.4 Review of the phase margin based syn-


thesis
In the continuous-time case, an effective technique is available for the con-
troller design, that is based on the phase margin of the open loop transfer
function. This technique will be extended to the discrete-time case through
the techniques developed in §7.2. We know how to translate closed-loop
(W ) time-requirements (e.g. tr , ts,5% , mp ) in terms of open-loop (L := CP )
frequency-requirements (crossover frequency ωc∗ and phase margin m∗φ ). Let’s
the following assumptions hold true (cf. Fig. 7.6):
• the Nyquist plot of P exhibits a sole crossing point on the (negative)
real axis;
• the Nyquist plot of P exhibits a sole crossing point on the unit circle
(corresponding to the crossover frequency ωc );
• the Nyquist plot of P ends at s = 0, which means that P (s) is a
strictly-proper rational function.

ℑ(s)

mφ ωc ℜ(s)

Figure 7.6: P (s)’s Nyquist plot.

Assume we want the open-loop system to have both a (desired) crossover


frequency ωc∗ and a (desired) phase margin m∗φ . In case P doesn’t satisfy
these requirements, we need to design C in such a way to obtain those to
be satisfied for the open-loop CP . So the C(s)’s design can be explained in
terms of a steps sequence:
7.4. REVIEW OF THE PHASE MARGIN BASED SYNTHESIS 145

1. translation of requirements on tr and mp in terms of ωc∗ and m∗φ , via


the useful formulas:
2
ωc∗ ≃ , (7.30)
tr
m∗φ ≃ 1.04 − 0.8mp ; (7.31)

2. from steady-state requirements (or even from internal model compo-


nents which need to be added) a first version of the controller is de-
signed:
KC
C(s) = ν C ′ (s). (7.32)
sC
We can imagine this term being a “fictitious” part of the plant P (s)

KC KC
P ′ (s) = ν
P (s) = ν P (s); (7.33)
s C sC

3. by either analytical reasonings or Bode plots, we can evaluate the terms:


arg(P ′ (ωc∗ )) and |P ′ (ωc∗ )|;

4. thereafter, the remaining part of the controller C ′ (s) is designed in such


a way that
C ′ (jωc∗ ) = M ejφ , (7.34)
with
1
M :=
|P ′ (jωc∗ )|
φ := m∗φ − π − arg(P ′ (jωc∗ ).

Therefore, it easily follows that the open-loop transfer function now


is satisfying the requirements about both phase margin and crossover
frequency (clearly, closed-loop stability must be preserved first of all).
C(s) can be now designed by resorting either to one of the elementary
compensators or to a standard controller (namely PID controllers), as
explained in the next section. Obviously, the whole controller will be
the product of the steady-state requirements part and of the transient
requirements (phase margin and crossover frequency adjustment) one
as in (7.32).
146CHAPTER 7. DIGITAL CONTROLLER SYNTHESIS: EMULATION METHODS

7.5 P.D. - P.I. - P.I.D. design based on the


phase margin
Standard controllers can be employed to achieve the requirements on the
steady state (when possible) and on the transient given in (7.34). In partic-
ular P.D. and P.I. controllers have only two degrees of freedom that are both
needed to satisfy the two constraints the equation (7.34) impose. Hence with
P.D. and P.I. controllers there is no possibility to satisfy specific steady state
requirements. Instead P.I.D. controllers have three degrees of freedom and
hence steady state requirements can be treated in this case.

7.5.1 P.D. design


Starting from
CP D (s) = KP (1 + TD s), (7.35)
and after an evaluation of the crossover frequency, (7.35) leads to

CP D (jωc∗ ) = KP + jωc∗ KP TD = M ejφ . (7.36)

Again, by equating both real and imaginary parts we get

ℜ : KP = M cos φ, (7.37)
tan φ
ℑ : TD = ,. (7.38)
ωc∗

Since we need that TD > 0 then φ ∈ [0, π/2] is needed. The region in the M ,
φ plane for which the P.D. controller design is possible is shown in Figure
7.7.

7.5.2 P.I. design


Starting from  
1
CP I = KP 1+ , (7.39)
sTI
and after the evaluation of ωc∗ we obtain
1
CP I (jωc∗ ) = KP + KP = M ejφ . (7.40)
jωc∗ TI
7.5. P.D. - P.I. - P.I.D. DESIGN BASED ON THE PHASE MARGIN 147

Again, by equating both real and imaginary parts we get


ℜ : KP = M cos φ, (7.41)
1
ℑ : TI = − ∗ . (7.42)
ωc tan φ
Since we need that TI > 0 then φ ∈ [−π/2, 0] has to be fulfilled. The region
in the M , φ plane for which the P.I. controller design is possible is shown in
Figure 7.7. Notice that the presence of the pole in the origin improves the
steady state performance of the closed loop system.

7.5.3 P.I.D. design


Starting from
KI  KI ′
CP ID (s) = 1 + TI s + TI TD s2 = C (s), (7.43)
s s P ID
we can choose the term KI accordingly to the steady-state requirements,
while the other two parameters have to be designed according to the above
procedure. Indeed, letting P ′ (s) := KsI P (s), the values of M and φ (defined
in (7.34)) are given by (the added integrator causes a further phase delay of
π/2):
ω∗ 1 ∗ π
M= c ∗
, φ = m φ − − arg(P (jωc∗ )). (7.44)
KI |P (jωc )| 2
By equating both real and imaginary parts of
CP′ ID (jωc∗ ) = 1 + TI jωc∗ − TI TD (ωc∗ )2 = M ejφ , (7.45)
the values of TI , TD easily follow:
M sin φ
ℑ : TI ωc∗ = M sin φ ⇒ TI = , (7.46)
ωc∗
1 − M cos φ
ℜ : 1 − TI TD (ωc∗ )2 = M cos φ ⇒ TD = , (7.47)
ωc∗ M sin φ
Remark 7.9. In order to make it possible the C(s)’s design, we need what
follows:
• TD > 0, which implies M < 1
cos φ
;
• TI > 0, which implies φ ∈ [0, π];
The region in the M , φ plane for which the P.I.D. controller design is possible
is shown in Figure 7.7.
148CHAPTER 7. DIGITAL CONTROLLER SYNTHESIS: EMULATION METHODS

M M M
PD PI PID

φ φ φ
−π/2 π/2 −π/2 π/2 −π/2 π/2 π

Figure 7.7: The regions in the parameters M and φ for which the design of
P.D. or P.I. or P.I.D. controller is possible.
Chapter 8

Digital Controllers Synthesis:


Direct Synthesis Methods

8.1 Discrete-time direct synthesis by “can-


celing”

n(k)
+ u(k)
r(k) C(z) P (z) b
y(k)

Figure 8.1: Closed loop system in presence of disturbances.

Let’s consider the closed-loop control architecture depicted in Fig.8.1,


and assume the controlled system desired behavior and requirements have
been translated into some closed-loop transfer function W (z) that hence is
assumed to be given. A simple way for exactly obtaining W (z) passes through
expressing C(z) in terms of both W (z) and P (z) as follows. From
C(z)P (z)
W (z) = ,
1 + C(z)P (z)
we get W (z)(1 + C(z)P (z)) = C(z)P (z) which implies W (z) = C(z)P (z) −
W (z)C(z)P (z) = C(z)P (z)(1 − W (z)) and so the desired controller is given
150CHAPTER 8. DIGITAL CONTROLLERS SYNTHESIS: DIRECT SYNTHESIS METHOD

by the formula
W (z) 1
C(z) = . (8.1)
1 − W (z) P (z)
However this naive solution can’t be considered satisfactory as some troubles
may arise, namely:

• C(z) could result to be too complex. Indeed, although the computation


capabilities of actual µP’s could overcome that problem, it might be
preferable to deal with simpler controllers.

• C(z) could result to be not a proper, and hence not a causal controller.
The conditions on W (z) (our degree of freedom in the design) which
might prevent this problem are not clear.

• Some zero-pole cancellations could arise, leading to internal stability


problems.

CAUSALITY First of all let’s investigate what conditions need to be satis-


fied in order to obtain a causal controller, i.e. a C(z) that is a proper transfer
function. We will obviously assume that both P (z) and W (z) are causal.
Causality of a transfer function is equivalent to the fact that its relative
degree is non-negative. Hence we will study for which W (z) we obtain a
controller C(z) with non-negative relative degree.
To this aim, we use the notation rdeg(·) as the relative degree of a rational
function. Observe that if A, B are two rational functions, then rdeg(AB) =
rdeg(A) + rdeg(B) and rdeg(A/B) = rdeg(A) − rdeg(B). Hence, from

W 1
C=
1−W P
we can argue that

rdeg(C) = rdeg(W ) − rdeg(1 − W ) − rdeg(P )

Observe now that, since both 1 and W are proper rational functions, then
1 − W is proper as well and hence rdeg(1 − W ) ≥ 0. We distinguish two
cases

1. Assume that W (∞) = 1. In this case rdeg(W ) = 0 and rdeg(1−W ) > 0


and hence rdeg(C) < 0 that would yield a non-causal controller.
8.1. DISCRETE-TIME DIRECT SYNTHESIS BY “CANCELING” 151

2. Assume that W (∞) ̸= 1. In this case rdeg(1 − W ) = 0 and hence


rdeg(C) = rdeg(W ) − rdeg(P ). So we have that C is causal if and only
if rdeg(W ) ≥ rdeg(P ).
In conclusion, admissible W (z) yielding causal controllers has to satisfy
this two conditions
W (∞) ̸= 1
(8.2)
rdeg(W ) ≥ rdeg(P )
In the typical case of P (z) obtained from a continuous time system via
sampling and holding, we know that rdeg(P ) = 1 that yields the condition
rdeg(W ) ≥ 1 that ensures also that W (∞) = 0 ̸= 1.
INTERNAL STABILITY Now let’s switch to the controlled system inter-
nal stability. This is obtained if and only if all the following transfer functions
are Schur stable (see Fig. 8.1)
Y CP
= = W, (8.3)
R 1 + CP
Y P 1 NW DC
= =W = , (8.4)
N 1 + CP C DW NC
U −CP
= = −W, (8.5)
N 1 + CP
U C 1 NW DP
= =W = . (8.6)
R 1 + CP P DW NP
So the conditions we are searching for are the following:
• From (8.3) and (8.5) it follows that W has to be stable and hence DW
has to be Schur stable. This, on the other hand, is a fundamental
requirement on the selected W .
• From (8.6) all the unstable roots of NP must be canceled by the roots
of NW that means that all the unstable zeros of P must be zeros of W
as well. Namely, if we write NP = NPS NPU , where NPS is Schur stable
and NPU is Schur unstable, then NW = XNPU for some polynomial X.
• From (8.4) since
NW DC NW (DW − NW )NP DW − NW NP
= = , (8.7)
DW NC DW NW DP DW DP
152CHAPTER 8. DIGITAL CONTROLLERS SYNTHESIS: DIRECT SYNTHESIS METHOD

which implies that all the unstable roots of DW must be canceled by


the roots of DW − NW , that means that all the unstable poles of P
must be roots of DW − NW . Namely, if we write DP = DPS DPU , where
DPS is Schur stable and DPU is Schur unstable, then DW − NW = Y DPU
for some polynomial Y .

In conclusion, admissible W (z) yielding internally stabilizing controllers


has to satisfy this three conditions

W BIBO stable

NW = XNPU for some polynomial X (8.8)

DW − NW = Y DPU for some polynomial Y

Remark 8.1. As final remarks:

• The direct synthesis method is simple to apply, in case P does not


contain unstable poles and zeros. In that case, except for the relative
degree constraint, any W can be obtained.

• If we are interested in including internal model components, they can be


embedded in a fictitious way in the plant P as follows: Let R = DN
S D U be
R
R R
S U
the signal we want to track, with DR stable and DR unstable. It suffices
to define P̃ = D1U P (z) and thereafter to apply the above procedure;1
R

• In the discrete-time case a pole at z = 0 is equivalent to a continuous-


time pole at s = −∞. This way, by allocating all poles of W (z) in zero,
we can obtain the so called dead-beat controller, which allows to obtain
the exact (after a finite number of steps) reference signal tracking.

• The fact that the proposed method is based on unstable zero/pole can-
cellations could suggest that this yields controllers that might become
unstable under an arbitrarily small perturbation of the to be controlled
system. Indeed, this is exactly the opposite! Indeed, it can be shown
that, without this cancellation procedure, we would obtain controllers
yielding BIBO stable closed loop transfer functions involving unstable
1
In this respect we have to be somewhat careful, as the situation corresponds to the
“difficult” case which requires to include the P unstable roots into DW − NW .
8.1. DISCRETE-TIME DIRECT SYNTHESIS BY “CANCELING” 153

poles/zeros cancellations. The canceling done according to the method


is needed exactly to avoid the cancellation in the control process. Ob-
serve indeed that
XNPU
CP =
Y DPU
Hence the series of C and P does preserve the unstable poles and zeros
of P provided we choose X that is coprime with DPU and Y that is
coprime with NPU . Indeed, we need to consider this extra constraint in
the design.
Remark 4.1 shows indeed that the proposed method does not require
the perfect knowledge of the to be controlled transfer function P (z). In
fact, if the design of C(z) is based on a sufficiently good estimate P̂ (z)
of the true P (z), we have that the feedback interconnection with C(z)
and P̂ (z) will be internally stable. Consequently, since P (z) can be
seen as a perturbation of P̂ (z), we can apply the reasoning of Remark
4.1 to conclude that the resulting feedback interconnection will remain
internally stable under the condition that P̂ (z) is sufficiently close to
P (z).
Example 8.1. Assume that
1
P (z) =
z+2
and we want to design a controller C(z) using the direct method to obtain
W (z) = z1 , transfer function that has been chosen considering the condi-
tions for the controller causality but not the conditions ensuring the internal
stability. We would get
W (z) 1 z+2
C(z) = =
1 − W (z) P (z) z−1
We notice that there is an unstable pole/zero cancellation in the product
CP that would disappear in case the true to be controlled transfer function
1
would be slightly different such as, for example, P̃ (z) = z+2+ϵ . In fact with
this perturbed transfer function the closed loop transfer function would be
BIBO unstable no matter how small is ϵ .
If we instead apply the cancelling requirements ensuring the internal sta-
bility, a possible admissible transfer function is
−2
W (z) =
z
154CHAPTER 8. DIGITAL CONTROLLERS SYNTHESIS: DIRECT SYNTHESIS METHOD

that would yield C(z) = −2. Notice that this controller is constant and does
not involve any unstable pole/zero cancellation in the product CP . Indeed
1
it can be shown that in case of a perturbed transfer function P̃ (z) = z+2+ϵ
the resulting perturbed closed loop transfer transfer function would remain
BIBO stable for ϵ small enough.
We consider now a more complex example.
Example 8.2. Assume that
(z − 2)(z + 1/2)
P (z) =
z(z + 3)
Assume we want to design a controller C(z) able to track the step signals
with zero asymptotic error. To this aim we have to impose that W (1) = 1
or equivalently that C(z) has a simple pole in 1. As far as the causality,
since rdeg(P ) = 0, then any W with rdeg(W ) ≥ 0 under the condition that
W (∞) ̸= 1. As far as the internal stability, by (8.6) the numerator NW (z)
of W (z) has to be of the form

NW (z) = (z − 2)X(z)

where X(z) is a polynomial. Instead by (8.7) the denominator DW (z) of


W (z) has to be such that

DW (z) − NW (z) = (z + 3)Y (z)

where Y (z) is a polynomial. Hence DW (z) = (z − 2)X(z) + (z + 3)Y (z).


Since W (1) = 1, then
NW (1) −X(1)
1= =
DW (1) −X(1) + 4Y (1)
which implies that Y (1) = 0 and hence Y (z) = (z − 1)Y ′ (z) for some poly-
nomial Y ′ (z). The last condition to be satisfied is that DW (z) = (z −
2)X(z) + (z + 3)(z − 1)Y ′ (z) is Shur stable. Let’s try to understand whether
degree zero polynomials X(z), Y ′ (z) exist satisfying that constraint. Let
X(z) = α, Y ′ (z) = β with α, β ∈ R. We need to understand if there exist
α, β ∈ R making α(z − 2) + β(z + 3)(z − 1) Schur stable. The roots of that
polynomial coincides with the roots of

(z + 3)(z − 1) + K(z − 2) = z 2 + 2z − 3 + K(z − 2)


8.1. DISCRETE-TIME DIRECT SYNTHESIS BY “CANCELING” 155

where K = α/β. A way to understand which values of K make this poly-


nomial Schur stable is by using the root locus. The positive locus eas-
ily shows that there does not exist a positive K satisfying that condition.
Let’s try with negative K’s. The negative locus is shown in Fig. 8.2. We
see that the locus enters the unit disk in z = −1 and this occurs when
z 2 + 2z − 3 + K(z − 2)|z=−1 = 0 that √ yields K = −4/3. Then the two
roots meets at the double point z = 2 − 5 and then they become complex
conjugate. Hence their product equals to the degree zero coefficient that
is −3 − 2K. We can argue that these complex conjugate roots have unit
absolute value when that coefficient is 1, namely −3 − 2K = 1 that yields
K = −2. We can argue that the Schur stability of DW is obtained for any
value of α, β such that K = α/β ∈] − 2, −4/3[ and the resulting W (z) and
C(z) are
K(z − 2)
W (z) =
(z + 3)(z − 1) + K(z − 2)
NW DP K(z − 2) z(z + 3) z
C(z) = = =K
DW − NW NP (z + 3)(z − 1) (z − 2)(z + 1/2) (z − 1)(z + 1/2)
Notice finally that rdeg(W ) = 1 and hence W (∞) = 0. This shows that the
condition W (∞) ̸= 1 mentioned above is fulfilled.
If we want to obtain a W (z) with relative degree equal to zero, we need
to find a polynomial X(z) of degree one while keeping Y ′ (z) of degree zero.
We try with Y ′ (z) = β and X(z) = γz + δ. This yields
(γz + δ)(z − 2)
W (z) =
β(z + 3)(z − 1) + (γz + δ)(z − 2)
By dividing numerator and denominator by β e defining δ̄ = δ/β and γ̄ = γ/β
we obtain
(γ̄z + δ)(z − 2) (γ̄z + δ)(z − 2)
W (z) = = 2
(z + 3)(z − 1) + (γ̄z + δ̄)(z − 2) (1 + γ̄)z + (2 + δ̄ − 2γ̄)z + (−3 − 2δ̄)
In deciding the poles we notice that we tske advantage of two degrees of
freedom that in principle allow to choose freely two of the three coefficients
of the denominator. In this way we can allocate both poles in the origin
by finding δ̄, γ̄ such that the degree 1 and the degree 0 coefficients are both
equal to zero, namely
2 + δ̄ − 2γ̄ = 0
− 3 − 2δ̄ = 0
156CHAPTER 8. DIGITAL CONTROLLERS SYNTHESIS: DIRECT SYNTHESIS METHOD

We obtain δ̄ = −3/2 and γ̄ = 1/4 that yields


1 2
4
z − 2z + 3 z 2 − 8z + 12
W (z) = 5 2 =
4
z 5z 2
that has relative degree zero and has twp poles in the origin. Observe finally
that W (∞) = 1/5 ̸= 1.

Figure 8.2: Root locus of example 8.2.

8.1.1 Assigning the transient properties


In the previous example we used the root locus to study the stability of the
stability of the closed loop transfer function. As in the continuous time case
we can use root locus to predict the transient of the closed loop system.
To this aim it would be useful to understand how the position of the poles
are related to the transient characteristics (overshoot mp , rise time tr and
settling time ts ). This relation is well-known for second order continuous
time systems (and for systems that have complex conjugate dominant poles)
like
1
W (s) =  2 (8.9)
s s
1 + 2ξ ωn + ωn
8.1. DISCRETE-TIME DIRECT SYNTHESIS BY “CANCELING” 157

Since the discrete time transfer function


   
−1 −1 W (s)
Wd (z) = (1 − z )Z ST L (8.10)
s
has step response that is the sampling of the step response of the transfer
function W (s), then the transient of the two
psystems have similar properties.
The poles of W (s) are p1,2 = −ωn (ξ ± j 1 − ξ 2 ). The transfer function
Wd (z) is the following

K(z − z d )
Wd (z) = , (8.11)
(z − pd1 )(z − pd2 )

where the poles are pd1,2 = ep1,2 T , the gain K is such that Wd (z)|z=1 = 1 and
where the zero z d is a complicate function of the position of p1,2 and hence
of pd1,2 .
Observe that the transient is mostly determined by the position of the
poles pd1,2 . Hence the transient specifications on the overshoot, the rise time
and the settling time

mp ≤ m∗p , tr ≤ t∗r , ts ≤ t∗s . (8.12)

are related to the position of the poles p1,2 and these are then are related to
the position of the poles pd1,2 . The question now is: how does the admissible
region modify by passing from continuous case to the discrete one?

• From the rise-time requirement t∗r , equation (6.16) implies that ωn ≥


2.2
t∗r
. The corresponding (to this inequality) region is mapped into (see
Fig. 8.3)

• From the settling-time requirement t∗s,5% , equation (6.17) implies that


|σ| = ωn ξ ≥ t∗ 3 . The corresponding situation is depicted in Fig. 8.4.
s,5%

• From the overshoot requirement m∗p , equation (6.15) implies that tan φ ≤
π 2
ln(1/m∗p )
which can be graphically expressed in the z domain like
Fig. 8.5 does.

By intersecting the three regions described above, the admissible region


for placing the W (z)’s poles is obtained (see Fig. 8.6).
2
Recall φ = cos(ξ).
158CHAPTER 8. DIGITAL CONTROLLERS SYNTHESIS: DIRECT SYNTHESIS METHOD

=(s) =(z)

ωn
1
z


<(s) <(z)

Figure 8.3: Mapping from s plane to z one of the region related to t∗r .

=(s) =(z)

1
σ z


<(s) <(z)
eσT

Figure 8.4: Mapping from s plane to z one of the region related to t∗s .

=(s) =(z)

1
ϕ z


<(s) <(z)

Figure 8.5: Mapping from s plane to z one of the region related to m∗p .
8.1. DISCRETE-TIME DIRECT SYNTHESIS BY “CANCELING” 159

ℑ(z)

ℜ(z)

Figure 8.6: An example of admissible region for pole placement in the z


plane.

Example 8.3. We continue the example 8.2. The closed loop system has
transfer function
K(z − 2)
W (z) =
(z + 3)(z − 1) + K(z − 2)

The root locus of the denominator is shown in Fig. 8.7 with the regions
associated with the transient characteristics.

8.1.2 Dahlin’s method


In case both the poles and the zeros of P (z) are inside the open unit disk,
we have not to worry about the internal stability requirements in the choice
of W (z) but we have only to take care of the controller causality yielding
constraints on W (z) relative degree.
The idea underlying Dahlin’s method is that of choosing the closed loop
transfer function W (z) which, in some sense, is similar to a continuous time
transfer function W (s) of the first order with eventually a delay, namely

e−std
W (s) = , (8.13)
1 + sτ
160CHAPTER 8. DIGITAL CONTROLLERS SYNTHESIS: DIRECT SYNTHESIS METHOD

Root Locus
1
0.5 /T
0.6 /T 0.4 /T
0.8 0.1
0.7 /T 0.3 /T
0.2
0.6 0.3
0.8 /T 0.4 0.2 /T
0.5
0.4 0.6
0.7
0.9 /T 0.8 0.1 /T
0.2
Imaginary Axis

0.9

1 /T
0
1 /T

-0.2
0.9 /T 0.1 /T

-0.4

0.8 /T 0.2 /T
-0.6

-0.8 0.7 /T 0.3 /T

0.6 /T 0.4 /T
0.5 /T
-1

-1 -0.5 0 0.5 1
Real Axis

Figure 8.7: Root locus of example 8.2.

with td the closed-loop system delay and − τ1 a pole related to the system
rise-time (the smaller τ is, the quicker the system is). Let’s assume td being
an integer multiple of the sampling time T , i.e. td = nT . We obtain the
sample and hold version of W (s) by (5.26)
   
−1 −1 W (s)
W (z) = (1 − z )Z ST L
s
−T /τ
1−e
= z −n . (8.14)
z − e−T /τ
It is clear that the delay n will be chosen as small as possible and has to take
into account the controller causality, so that condition (8.8) has to hold true,
that means
rdeg(W ) = n + 1 ≥ rdeg(P ) ⇒ n ≥ rdeg(P ) − 1. (8.15)
This way the controller C(z) takes the form
W (z) 1 1 − e−T /τ DP (z)
C(z) = = n −T /τ −T /τ
, (8.16)
1 − W (z) P (z) z (z − e )−1+e NP (z)
8.1. DISCRETE-TIME DIRECT SYNTHESIS BY “CANCELING” 161

It can be shown that, not only this controller internally stabilize the feedback
system, but also that C(z) itself is stable except for a pole z = 1.
Indeed, since we assumed that NP (z) is Schur stable, the stability of
the controller C(z) depends on the Schur stability of the polynomial z n (z −
e−T /τ ) − 1 + e−T /τ . It can be shown that this is the case, except for a root
in z = 1. In order to prove this fact, we have to show that any z̄ ∈ C such
that z̄ ̸= 1 and |z̄| ≥ 1 can not be a root of that polynomial. Observe that
|1−e−T /τ |
|z̄−e−T /τ |
< 1 which implies that

|1 − e−T /τ |
−T /τ
< 1 ≤ |z̄|n
|z̄ − e |
1−e−T /τ
showing that z̄−e−T /τ
̸= z̄ n and hence z̄ can not be a root.

8.1.3 Second order W (z)


According to the previous method W (z) is chosen to be a delayed first order
transfer function. We can instead select W (z) so that its step response is the
sampling of the step response of a second order transfer function
1
W (s) =  2 (8.17)
1+ 2ξ ωsn + s
ωn

so that we can take advantage of the well known relations between its tran-
sient characteristics (overshoot mp , rise time tr and settling
p time ts ) and the
position of its complex conjugate poles p1,2 = −ωn (ξ ± j 1 − ξ 2 ). We have
already noticed that discrete-time transfer function whose step response is
the sampling of the step response of (8.17) is given by
   
−1 −1 W (s)
Wd (z) = (1 − z )Z ST L
s
d
K(z − z )
= , (8.18)
(z − pd1 )(z − pd2 )

where pd1,2 = ep1,2 T , K is such that Wd (z)|z=1 = 1 and where z d is a complicate


function of the position of p1,2 and hence of pd1,2 .
Then Wd (z) can be chosen from the requirements about mp , tr , and ta
that yield the admissible region in the s domain and choosing a continuous
162CHAPTER 8. DIGITAL CONTROLLERS SYNTHESIS: DIRECT SYNTHESIS METHOD

time second order transfer function with poles in that region. Then, by using
formula (8.18) we determine Wd (z). If P (z) has an unstable zero, we can
eventually modify the position of the zero z d in order to cancel it.

8.2 Direct Synthesis from a different perspec-


tive
Let’s consider a different viewpoint for deriving the direct synthesis for-
mula starting from a different interconnection, which holds true both for
continuous-time and discrete-time systems. Given the systems connection
depicted in Fig. 8.8, let P be the (real) plant to be controlled, P̂ be a suit-
able approximation of P that is available to the control designer, and C ′ be
a given preliminary controller. It is straightforward to see that whenever the
plant model would be equal to the real plant, P̂ = P , the signal b would be
equal to zero, so the open-loop control with C ′ = W/P̂ would lead to the
desired controlled behavior W . The feedback signal b is needed to correct
the output in case P̂ ̸= P or in case of disturbances.
e u
+
r C′ b
P b y

− +

b

Figure 8.8: Systems connection for the direct synthesis.

The blocks connection in Fig. 8.8 can be equivalently replaced with that in
Fig. 8.9, where it’s easier to notice that, denoting by C the positive-feedback
connection of C ′ and P , it holds true

W
C′ W 1
C= = P̂ = , (8.19)
1 − C ′ P̃ W 1 − W P̂
1−

which represents the well-known direct synthesis formula.
8.3. SMITH’S PREDICTOR FOR THE DELAY COMPENSATION 163

e u
+
r C ′ b
P b y


C

Figure 8.9: Equivalent systems connection for the controller direct synthesis.

8.3 Smith’s predictor for the delay compen-


sation
A wide class of plants obeys to
P (s) = e−std P ′ (s), (8.20)
where P ′ (s) is rational. This means that a delay td is present. In most cases
the tracking target is adapted to the delay, in the following sense
lim |r(t) − y(t − td )| = 0. (8.21)
t→+∞

As a case study (see [?]), a delay system, for which a controller leading
to (8.21) is needed, is now presented.
Example 8.4 (Temperature control for a fluid in a duct). Let the physical
scheme described in Fig.8.10 be given.
Let RT be the thermistor for measuring the duct temperature, and y(t)
be proportional to the measured temperature. The duct is endowed with a
resistor R for controlling its temperature, thanks to an input voltage u(t)
which modulates the delivered power. An approximate model for the to-be-
controlled plant transfer function is (without taking into account any delay),
K
P ′ (s) = .
1 + sτ
If the fluid speed in the duct is assumed to be constant, the temperature
measure appears to be delayed of td = l/v. So, taking into account this
delay, the to-be-controlled plant becomes
Ke−std
P (s) = P ′ (s)e−std = .
1 + sτ
164CHAPTER 8. DIGITAL CONTROLLERS SYNTHESIS: DIRECT SYNTHESIS METHOD

R v RT

l
u(t)
y(t)

Figure 8.10: Temperature control for a fluid in a duct.


e u y′
+
r C′ P′ e−std b y

Figure 8.11: Control scheme for the delay system.

A proportional controller C(s) = KP leads to the systems interconnection


expressed by the scheme depicted in Fig. 8.11. From Nyquist plots of both
P ′ (s) and P (s) (8.19) the arising of instability for KP large enough is self-
evident. More precisely, we know that the feedback system is stable if and
only if the phase margin is positive. Observe that the phase margin is

mφ = π + arg(C(jωc )P (jωc )) = π + arg(C(jωc )) + arg(P (jωc ))


= π + arg(e−jωc td ) − arg(1 + jωc τ ) = π − ωc td − arctan(ωc τ )

where ωc is the crossover frequency that satisfies the constraint |C(jωc )P (jωc )| =
1 that, for KP ≥ 1/K, implies

KP K KP K 1 KP K
q
1= =p ⇒ ωc = KP2 K 2 − 1 ≃ for large KP
|1 + jωc τ | 1 + ωc2 τ 2 τ τ

Then
td KP K
mφ ≃ π − − arctan(KP K)
τ
8.3. SMITH’S PREDICTOR FOR THE DELAY COMPENSATION 165

ℑ(s) ℑ(s)

ℜ(s) ℜ(s)

Figure 8.12: Nyquist plot of P (s) (left) and of its delayed version P ′ (s)
(right). For large enough gain values the P ′ (s) diagram encircles the point
−1 + j0, leading to instability (red dotted line).

that becomes negative if KP is big enough.


As shown in Fig. 8.13, to avoid instability it would be better to use the


undelayed output y ′ instead of y, that is not possible since y ′ is not available.
The solution would be to build and estimator of y ′ as shown in Fig. 8.14. The
first simple choice is shown in Fig. 8.15 that however is inefficlient because it
does not take advantage of the output measurement and it is in fact an open
loop control architecture. A better solution is given in Fig. 8.16 in which we
have to choose D(z) such that

Ŷ ′ (s) = P̂ ′ (s)U (s) = Y (s) + D(s)U (s)

where P̂ ′ (s) is an estimate of the transfer function P ′ (s). If we have an


estimate t̂d of the delay td , then we have that P̂ ′ (s)e−st̂d U (s) is an estimate
of Y (s) that we substitute in the previous equation obtaining P̂ ′ (s)U (s) =
P̂ ′ (s)e−st̂d U (s) + D(s)U (s) that yields

P̂ ′ (s) = P̂ ′ (s)e−st̂d + D(s) ⇒ D(s) = P̂ ′ (s)(1 − e−st̂d ) = P̂ ′ (s) − P̂ ′ (s)e−st̂d .


(8.22)
The control scheme in Fig. 8.16 can be reduced to that in Fig. 8.17. This
is called Smith’s predictor. The same figure shows as the external feedback
loop “disappears” if the estimates would be exact ones.
166CHAPTER 8. DIGITAL CONTROLLERS SYNTHESIS: DIRECT SYNTHESIS METHOD

e u y′
+
r C′ P′ b
e−std y

Figure 8.13: Control scheme with the undelayed output.

e u y′
+
r C′ b
P′ e−std b y

estimator
ŷ ′

Figure 8.14: Control scheme with the estimator of undelayed output.

e u y′
+
r C′ b
P′ e−std y

ŷ ′
P̂ ′

Figure 8.15: Control scheme with an open loop estimator of undelayed out-
put. P̂ ′ (s) is an estimate of the transfer function P ′ (s).

e u
+
r C′ b
P′ e−std b y

+ +
D
ŷ ′

Figure 8.16: Smith’s predictor control scheme.

The controlled system transfer function is


Y (s) C ′ (s)P (s)
W (s) = =
R(s) 1 + C ′ (s)(P (s) + D(s))
C ′ (s)P ′ (s)
= e−std , (8.23)
′ ′ ′
1 + C (s)P (s) + C (s)(P (s)e ′ −st d ′
− P̂ (s)e −s t̂d )
8.3. SMITH’S PREDICTOR FOR THE DELAY COMPENSATION 167

+ +
r C′ b
P′ e−std b y
− −
b
− +
P̂ ′ e−st̂d

Figure 8.17: Equivalent Smith’s predictor control scheme.

In case P̂ ′ (s) and t̂d are good approximations W (s) becomes


This way, C ′ (s) could be designed regardless of the delay P (s) is endowed
with. By evaluating the whole system transfer function

C ′ (s)P ′ (s)
W (s) ≃ e−std , (8.24)
1 + C ′ (s)P ′ (s) + C ′ (s)

it follows that (8.24) can be factorized into two terms: the first one without
any delay, and the second one expressing a ′delay equal to the desired one.
C (s)P ′ (s)
So, a correct design of C ′ (i.e. satisfying 1+C ′ (s)P ′ (s) ≃ 1) leads to reach the

tracking target (8.21).


The control can be reduced back to the usual architecture in Fig. 8.11
by taking the following controller transfer function

C ′ (s) C ′ (s)
C(s) = =
1 + C ′ (s)D(s) 1 + (1 − e−st̂d )C ′ (s)P̂ ′ (s)

Remark 8.2. The previous analysis holds true both for continuous-time and
discrete-time systems. However, substantial differences arise in the practical
situations:

• A delay estd is far from trivial to be realized by means of electric net-


works, so in the continuous-time case an exponential approximation is
1−std /2
preferred (e.g., the Padé approximation (1, 1) e−std = 1+st d /2
);

• On the contrary, in the discrete-time case delays are easily built. So


we can resort to the scheme in Fig. 8.18, with e−st̂d ∼ z −N , N T = t̂D
and
C ′ (z)
C(z) = .
1 + (1 − z −N )C ′ (z)P̂ ′ (z)
168CHAPTER 8. DIGITAL CONTROLLERS SYNTHESIS: DIRECT SYNTHESIS METHOD

C(z)
+ +
r C ′ (z) b
P (z) b y
− −
− + b
P̂ ′ (z)

z −N

Figure 8.18: Discrete-time version of the Smith’s predictor structure.

8.4 Controller design via Diophantine equa-


tions
So far an important aspect of direct synthesis has been neglected: whenever
the to-be-controlled plant P (z) is equipped with unstable poles, they must
be roots of DW (z) − NW (z) in order to ensure internal stability. The so
far considered techniques apply to simple W (z)’s, and in general how to
implement this constraint hasn’t been clarified.
The denominator of the closed-loop transfer function W , when cancella-
tions are lacking, takes the form:

NC NP + DC DP = DW ,

which is nothing more than a Diophantine (or Bézout) equation in the (poly-
nomial) unknowns X = DC , Y = NC , which are the compensator parameters
to be evaluated. Observe that all the four transfer functions related to the
internal stability analysis of the feedback system have NC NP + DC DP at the
denominator. Hence, if we impose that DW is Schur stable, we obtain the
interconnection internal stability.
We give up any requirement about the zeros, as the direct synthesis
method we are going to explain (see below) focuses only on poles. In other
words, no zeros allocation will be taken into account in order to ease the
interconnection stability. Nevertheless, it is worthwhile to recall that zeros
can dramatically influence the system performances, in particular when P is
not minimum phase.
8.4. CONTROLLER DESIGN VIA DIOPHANTINE EQUATIONS 169

The direct synthesis based on Diophantine equations works as follows:


given the interconnection in Fig. 8.19, we have to impose that the closed-
loop transfer function is

Yo (z) NC NP ! NW (z)
W (z) = = = , (8.25)
R(z) NC NP + DC DP D∗ (z)

where D∗ (z) takes into account the desired pole placement and X := DC and
Y := NC are unknowns polynomials to be evaluated. As a counterpart, NW
is a free parameter, the structure of is completely unconstrained and which
will be a by-product of the synthesis procedure.
+
NC (z) NP (z)
r(k) C(z) = DC (z)
P (z) = DP (z)
b
yo (k)

Figure 8.19: Closed-loop control interconnection.

So, we have to investigate the existence of polynomials X and Y which


solve
NP Y + DP X = D∗ . (8.26)
Remark 8.3. Some remarks:

• As previously observed, once X and Y have been found so that D∗


Y
is stable, the compensator C = X makes internally stable the whole
interconnection;

• (8.26) doesn’t allow to place the W (z)’s zeros.

• From (8.8) we see that the direct method based on canceling yields a
closed loop transfer function that is

XNPU
W =
XNPU + Y DPU

Hence the final mathematical problem that we need to solve is also


in this case a Diophantine equation, that is in general simpler since
XDS
DPU , NPU have lower degree than DP , NP . The controller is C = Y N PS .
P
170CHAPTER 8. DIGITAL CONTROLLERS SYNTHESIS: DIRECT SYNTHESIS METHOD

8.4.1 Review of Diophantine equations


Diophantine equations equations are well-known, widely studied in the litera-
ture, and necessary and sufficient conditions for their solvability are available.
The following theorem is a fundamental building block of this theory.

Theorem 8.1. Let A, B two coprime polynimials (devoid of common zeros).


Then for any polynomial C, there exists polynomials X, Y such that

AX + BY = C

In order to obtain causal controllers, we will need to have instruments


that enable us to control the degrees of the polynomial X, Y . The following
proposition is useful to this aim.

Proposition 8.1. Let A, B two coprime polynimials and let C such that deg(C) ≥
deg(A)+deg(B). Then there exists polynomials X, Y such that AX+BY = C
and such that

deg(X) = deg(C) − deg(A) and deg(Y ) < deg(A).

Proof. We start from any solution X, Y of AX +BY = C. Then find the rest
Y ′ of the division of Y by A, namely Y ′ = Y − AQ with deg(Y ′ ) < deg(A).
Letting X ′ := X + BQ, we see that X ′ , Y ′ is still a solution. The condition
on the degree of Y ′ is satisfied. Observe finally that deg(BY ′ ) < deg(B) +
deg(A). Hence, deg(AX ′ + BY ′ ) = deg(C) can hold only if deg(AX ′ ) =
deg(C) and hence only if deg(X ′ ) = deg(C) − deg(A).
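The constructive step used in the proof (finding any solution, then reducing Y modulo A and compensating X) can be carried out symbolically. The following is a minimal sketch, assuming sympy is available; the polynomials A, B, C are arbitrary illustrative choices, not taken from the text.

```python
import sympy as sp

z = sp.symbols('z')
# Illustrative coprime polynomials (hypothetical example data)
A = sp.Poly(z**2 - z, z)          # A(z)
B = sp.Poly(z - 2, z)             # B(z), coprime with A
C = sp.Poly(z**3, z)              # deg(C) >= deg(A) + deg(B)

# Extended Euclidean algorithm: s*A + t*B = h, with h a nonzero constant (coprimality)
s, t, h = sp.gcdex(A.as_expr(), B.as_expr(), z)
X0 = sp.Poly(sp.expand(s * C.as_expr() / h), z)   # particular solution
Y0 = sp.Poly(sp.expand(t * C.as_expr() / h), z)

# Degree reduction as in the proof: Y' = Y0 mod A, X' = X0 + B*Q
Q, Yp = Y0.div(A)
Xp = X0 + B * Q

# Check the Bezout identity and print the reduced-degree solution
assert sp.expand(A.as_expr()*Xp.as_expr() + B.as_expr()*Yp.as_expr() - C.as_expr()) == 0
print(Xp, Yp)   # deg(Xp) = deg(C) - deg(A), deg(Yp) < deg(A)
```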

8.4.2 The controller design


By applying the previous result we obtain the following proposition.

Proposition 8.2. Assume that P(z) is strictly proper and let NP and DP be coprime polynomials such that P = NP/DP. Let n := deg(DP). Choose any polynomial D* with deg(D*) = 2n − 1. Then there exist solutions X, Y of (8.26) such that deg(X) = n − 1 and deg(Y) ≤ n − 1. In this way it follows that the controller C = Y/X is proper.

The proof follows from Prop. 8.1 by observing that deg(D*) ≥ deg(DP) + deg(NP) and hence deg(X) = deg(D*) − deg(DP) = n − 1 and deg(Y) < deg(DP) = n.

Remark 8.4. A difficulty here is that deg(D*) = 2n − 1 could be "too large", since we would then have to specify requirements on 2n − 1 poles. However, a solution is easily found by assuming

    D*(z) = Ddom(z)∆(z),        (8.27)

with Ddom(z) containing the dominant poles, while ∆(z) has only "fast" poles (e.g. poles at z = 0).

Once we have fixed the degrees, the only unknowns in equation (8.26) are the coefficients of the polynomials X(z), Y(z). It can be shown that the equations on the coefficients are linear. Precisely, let

DP (z) = an z n + an−1 z n−1 + · · · + a0 , (8.28)


NP (z) = bn z n + bn−1 z n−1 + · · · + b0 , (8.29)
X(z) = xn−1 z n−1 + xn−2 z n−2 + · · · + x0 , (8.30)
Y (z) = yn−1 z n−1 + yn−2 z n−2 + · · · + y0 , (8.31)
D∗ (z) = c2n−1 z 2n−1 + c2n−2 z 2n−2 + · · · + c0 . (8.32)

where we know that bn = 0 by the strict properness of P(z), and DP(z) is assumed to be monic without loss of generality. Solving the Diophantine equation (8.26) is equivalent to solving a linear system of the form

\[
\underbrace{\begin{bmatrix}
a_n & 0 & \cdots & 0 & b_n & 0 & \cdots & 0\\
a_{n-1} & a_n & \ddots & \vdots & b_{n-1} & b_n & \ddots & \vdots\\
\vdots & \ddots & \ddots & 0 & \vdots & \ddots & \ddots & 0\\
a_1 & \cdots & a_{n-1} & a_n & b_1 & \cdots & b_{n-1} & b_n\\
a_0 & \cdots & a_{n-2} & a_{n-1} & b_0 & \cdots & b_{n-2} & b_{n-1}\\
0 & a_0 & \ddots & \vdots & 0 & b_0 & \ddots & \vdots\\
\vdots & \ddots & \ddots & a_1 & \vdots & \ddots & \ddots & b_1\\
0 & \cdots & 0 & a_0 & 0 & \cdots & 0 & b_0
\end{bmatrix}}_{=:A\;(2n\times 2n)}
\underbrace{\begin{bmatrix}
x_{n-1}\\ x_{n-2}\\ \vdots\\ x_0\\ y_{n-1}\\ y_{n-2}\\ \vdots\\ y_0
\end{bmatrix}}_{=:x}
=
\underbrace{\begin{bmatrix}
c_{2n-1}\\ c_{2n-2}\\ \vdots\\ c_n\\ c_{n-1}\\ \vdots\\ c_0
\end{bmatrix}}_{=:b}
\qquad (8.33)
\]

Under the coprimeness assumption on NP and DP, the matrix A is non-singular.
So (8.33) implies that the coefficients of the polynomials X and Y are given by
x = A−1 b. (8.34)
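As a sanity check of this construction, the linear system (8.33) can be assembled and solved numerically. The sketch below, assuming numpy is available, builds the two Toeplitz blocks of A as convolution matrices and solves for the stacked coefficient vector; the numeric coefficients in the example call are purely illustrative (they happen to be those of the sampled plant in Example 8.5 below).

```python
import numpy as np

def conv_block(p, m):
    """Toeplitz block mapping the m coefficients of a degree-(m-1) polynomial
    to the coefficients of its product with p (all ordered highest degree first)."""
    p = np.asarray(p, dtype=float)
    T = np.zeros((len(p) + m - 1, m))
    for j in range(m):
        T[j:j + len(p), j] = p
    return T

def solve_diophantine(DP, NP, Dstar):
    """Solve DP*X + NP*Y = Dstar with deg X = deg Y = n-1, where n = deg(DP).
    DP has n+1 coefficients, NP is padded to n+1 (leading zero by strict properness),
    Dstar has 2n coefficients (degree 2n-1)."""
    n = len(DP) - 1
    A = np.hstack([conv_block(DP, n), conv_block(NP, n)])   # the 2n x 2n matrix of (8.33)
    x = np.linalg.solve(A, np.asarray(Dstar, dtype=float))
    return x[:n], x[n:]                                     # coefficients of X and Y

# Illustrative data: DP = (z-1)(z-0.368), NP = 0.368 z + 0.264,
# desired characteristic polynomial D* = z^3 (all closed-loop poles at the origin).
X, Y = solve_diophantine([1.0, -1.368, 0.368], [0.0, 0.368, 0.264], [1.0, 0.0, 0.0, 0.0])
print("X:", X, "Y:", Y)
```

The returned coefficient vectors then give the compensator C = Y/X as in Proposition 8.2.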
Remark 8.5 (Asymptotic tracking via Diophantine equations). The problem of introducing the internal model dynamics is easily handled: it suffices to replace DP in (8.26) with D̃P = DP DR^U, where DR^U accounts for the g unstable poles of the reference signal. This way, the degree of D̃P increases up to n + g. To prove that for any polynomial D* with deg(D*) = 2n + g − 1 there exist solutions X, Y of

    DP DR^U X + NP Y = D*

such that deg(X) = n − 1 and deg(Y) ≤ n + g − 1, we apply Proposition 8.1. Indeed, we have that deg(D*) ≥ deg(DP DR^U) + deg(NP) and hence we can argue that deg(X) = deg(D*) − deg(DP DR^U) = n − 1 and deg(Y) < deg(DP DR^U) = n + g. This way, the increase in the degree of Y is compensated by a corresponding increase in the degree of DR^U X, still leading to a proper compensator

    C = Y / (X DR^U).
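As an illustration of this remark, the sketch below (using sympy, with an arbitrarily chosen first-order plant and a step reference, i.e. DR^U = z − 1; none of these numbers come from the text) sets up and solves the modified Diophantine equation symbolically and prints the resulting compensator, which contains the internal model.

```python
import sympy as sp

z = sp.symbols('z')
# Hypothetical example data: P(z) = 1/(z - 1/2) (strictly proper, n = 1)
# and a step reference, so DR^U = z - 1 (g = 1).
DP, NP, DRu = z - sp.Rational(1, 2), sp.Integer(1), z - 1
n, g = 1, 1

# Desired closed-loop denominator of degree 2n + g - 1 = 2 (all poles at z = 0)
Dstar = z**(2*n + g - 1)

# Unknowns: X of degree n - 1, Y of degree n + g - 1
xs = sp.symbols(f'x0:{n}')
ys = sp.symbols(f'y0:{n + g}')
X = sum(c * z**i for i, c in enumerate(xs))
Y = sum(c * z**i for i, c in enumerate(ys))

# Match the coefficients of DP*DR^U*X + NP*Y - D* = 0 (a linear system)
eqs = sp.Poly(sp.expand(DP*DRu*X + NP*Y - Dstar), z).all_coeffs()
sol = sp.solve(eqs, xs + ys, dict=True)[0]

C = (Y / (X * DRu)).subs(sol)
print(sp.simplify(C))   # proper compensator with the internal-model factor (z - 1) in its denominator
```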

8.5 Digital controller synthesis: Deadbeat tracking
Given the usual control scheme in Fig. 8.20, let us assume that:

• the reference signal transform is a proper rational function R(z) = NR(z)/DR(z), with DR(z) monic;

• DR(z) is known and its degree is nR, while NR(z) is unknown.

Moreover, let nP be the degree of DP, with DP the denominator of P.
Figure 8.20: Closed-loop control scheme: unity feedback of C(z) and P(z), with reference r(k), tracking error E(z) and output y(k).

Consider the following definitions.

Definition 8.1. A controller is deadbeat (DB) with respect to a given class of reference signals if the tracking error for any signal within the class vanishes after a finite number of steps.

Definition 8.2. A DB controller is minimum time if the number of steps required to annihilate the error is minimum.

The Z-transform of the error is

    E(z) = [1/(1 + C(z)P(z))] R(z) = (1 − W(z))R(z),        (8.35)

and it vanishes after a finite number of steps if and only if it is a polynomial in z^{-1} (recall that Z^{-1}[Σ_{l=0}^{r} a_l z^{-l}] = Σ_{l=0}^{r} a_l δ(k − l)). In other words,

    E(z) = NE(z)/z^k.        (8.36)

Two possibilities are available, associated with the equivalent decompositions


of E(z) discussed in (8.35), respectively:

Diophantine equation synthesis: The decomposition

      E = [1/(1 + CP)] R = [DC DP/(DC DP + NC NP)] · [NR/DR] = NE/z^k,        (8.37)

  suggests resorting to the asymptotic tracking method via Diophantine equations already seen. But this requires both setting DC DP + NC NP = z^k and including the whole reference signal denominator DR in DC DP. If we assume that DP and DR are coprime, so that we need to impose DC = DR D̃C, then we have to solve the Diophantine equation

      DP DR X + NP Y = z^k,        (8.38)

  in the unknowns X = D̃C and Y = NC. If we assume that P is strictly proper and we take k = 2nP + nR − 1, then by applying Proposition 8.1 we can argue that there exist solutions X, Y such that deg(X) = nP − 1 and deg(Y) ≤ nP + nR − 1. We thus obtain a proper controller

      C = Y/(X DR),        (8.39)

  and we have that

      E = [1/(1 + CP)] R = [DP DR X/(DP DR X + NP Y)] · [NR/DR] = DP NR X / z^{2nP + nR − 1}.

Synthesis by canceling: The second approach, which resorts to the direct synthesis by canceling, also allows us to shape the numerator of the closed-loop transfer function W. In this respect, it appears to be more convenient and instructive, but careful attention must be paid whenever P has unstable zeros/poles.

We first need to express R(z) in a more manageable form:

    R(z) = [z^{-nR} NR(z)] / [z^{-nR} DR(z)] = ÑR(z^{-1}) / D̃R(z^{-1}).        (8.40)

So (8.35) is equivalently rewritten as

    E(z) = (1 − W(z)) ÑR(z^{-1}) / D̃R(z^{-1}).        (8.41)

Requiring that E(z) is a polynomial in z^{-1} for all ÑR(z^{-1}) is equivalent to

    (1 − W(z)) / D̃R(z^{-1}) = Q̃(z^{-1}),        (8.42)

where

    Q̃(z^{-1}) = Q0 + Q1 z^{-1} + · · · + Qq z^{-q}

is a polynomial in z^{-1} of degree q, namely we can assume that Qq ≠ 0. This implies

    E(z) = ÑR(z^{-1}) Q̃(z^{-1}).        (8.43)

It is easily seen that the smaller q is, the faster the deadbeat response (in the sense of the minimum number of steps).
From (8.42) it follows that

    W(z) = NW(z)/DW(z) = 1 − D̃R(z^{-1})Q̃(z^{-1})
         = [z^κ − z^κ D̃R(z^{-1})Q̃(z^{-1})] / z^κ
         = [z^κ − DR(z)Q(z)] / z^κ,        (8.44)

where κ := nR + q and

    Q(z) := z^q Q̃(z^{-1}) = Q0 z^q + Q1 z^{q−1} + · · · + Qq.
Observe that Q(z) has degree q if and only if Q0 ≠ 0. Therefore, from (8.44) a necessary condition for the DB response is expressed in terms of the poles of W(z), which all have to be zero. From the direct synthesis formula (8.1) the desired controller is

    C(z) = [W(z)/(1 − W(z))] · [1/P(z)] = [NW(z)/(DW(z) − NW(z))] · [DP(z)/NP(z)] = NW(z)DP(z) / [DR(z)Q(z)NP(z)].

It is easy to see that the open-loop transfer function C(z)P(z) = NW(z)/[DR(z)Q(z)] contains, as expected, the internal model components (related to the whole reference signal denominator).

Only one question remains open: which W(z) are obtainable by resorting to DB controllers? From (8.44) it follows that

    NW(z) = z^κ − DR(z)Q(z),        (8.45)

with Q an arbitrary polynomial. So we can obtain a suitable desired NW(z) if we are able to solve the following equation in the unknown Q(z)

    DR(z)Q(z) = z^κ − NW(z),        (8.46)

without losing either the controller causality or the closed-loop internal stability.
Remark 8.6. Let

    DR(z) = a_{nR} z^{nR} + a_{nR−1} z^{nR−1} + · · · + a0,        (8.47)
    NW(z) = bκ z^κ + bκ−1 z^{κ−1} + · · · + b0,        (8.48)
    Q(z) = xq z^q + xq−1 z^{q−1} + · · · + x0,        (8.49)

and note that (8.46) can be rewritten in terms of a linear system of the form

\[
\underbrace{\begin{bmatrix}
a_{n_R} & 0 & \cdots & 0\\
a_{n_R-1} & a_{n_R} & \ddots & \vdots\\
\vdots & \ddots & \ddots & 0\\
a_0 & \ddots & \ddots & a_{n_R}\\
0 & a_0 & \ddots & \vdots\\
\vdots & \ddots & \ddots & \vdots\\
0 & \cdots & 0 & a_0
\end{bmatrix}}_{(\kappa+1)\times(q+1)}
\begin{bmatrix}
x_q\\ x_{q-1}\\ \vdots\\ x_0
\end{bmatrix}
=
\begin{bmatrix}
1-b_\kappa\\ -b_{\kappa-1}\\ \vdots\\ -b_0
\end{bmatrix}
\qquad (8.50)
\]

However, a problem remains, as the previous system is not always solvable: the q + 1 unknowns are in general not sufficient to match κ + 1 = nR + q + 1 arbitrary coefficients.

Other constraints on W (z)



• Causality. Observe that W(∞) = 1 if and only if deg(DR Q) < κ, which is possible if and only if deg(Q) < q, namely if and only if Q0 = 0. Then we need to impose that Q0 ≠ 0. In this case the relative degree of W(z) has to be evaluated, in order to guarantee realizability (i.e., causality) of C(z). Recalling that

  – the causality condition on C(z) is satisfied if W(z) has relative degree at least equal to that of P(z);

  – P(z) (in case it has been obtained by sampling/holding) typically has relative degree equal to 1,

  from (8.44), since DR(z) is assumed monic w.l.o.g., it follows that W(z) has relative degree (at least) 1 if and only if Q(z) is monic too (indeed, in that case NW(z) = z^κ − DR(z)Q(z) = z^κ − z^{nR+q} + (terms of degree ≤ κ − 1) = z^κ − z^κ + (terms of degree ≤ κ − 1)).
Remark 8.7. A monic Q(z) ensures properness of C(z), but not strict properness. A monic Q(z), together with a suitable choice of the remaining coefficients, can guarantee strict properness of C(z): this implies that Q(z) has to contain more terms, which in turn increases the number of steps required to annihilate the error (recall E(z) = ÑR(z^{-1})Q̃(z^{-1})).
• Internal Stability. We have to satisfy two constraints:

  (1) the unstable zeros of P(z) must be zeros of W(z), namely

      NW(z) = z^κ − DR(z)Q(z) = NP^U(z)X(z);

  (2) the unstable poles of P(z) must be zeros of DW(z) − NW(z) = DR(z)Q(z), namely

      DR(z)Q(z) = DP^U(z)Y(z).

  Hence the unstable poles of P(z) that are not already included among the poles of R(z) simply have to be inserted as zeros of Q(z).
Remark 8.8. Notice that we need to take particular care of the roots of DR(z), since at those values of z the choice of Q has no effect. This implies that neither zeros of W nor roots of DW − NW can be freely placed at the poles of the reference signal we want to track in the dead-beat sense.
Some examples follow, referring to simple situations.

8.6 Examples of dead-beat tracking for constant signals

In this particular case R(z) = z/(z − 1). We have NR = z, DR = z − 1 and nR = 1. We have

    E(z) = NR Q / z^{nR + q} = Q / z^q = Q̃(z^{-1}),        (8.51)

so the degree q of Q determines the number of steps required to annihilate the error. As previously mentioned, it is better to take Q of the minimum possible degree, in order to obtain the "best" DB controller (in the sense of the minimum number of steps).

• Case q = 0. It holds

      Q(z) = a        (8.52)
      ⇒ NW(z) = z − (z − 1)a = (1 − a)z + a        (8.53)
      ⇒ W(z) = [(1 − a)z + a] / z,        (8.54)

  and, among the constraints to be satisfied, we only consider the properness of C(z). As usual, we assume rdeg(P(z)) = 1. Then rdeg(W(z)) ≥ 1, which, from (8.54), implies a = 1. In conclusion

      W(z) = 1/z,        (8.55)

  i.e., W(z) becomes the pure unit delay, and the controller assumes the expression

      C(z) = [W(z)/(1 − W(z))] · [1/P(z)] = DP(z) / [(z − 1)NP(z)].        (8.56)

  However, the previous W(z) is obtainable only for P(z) with stable poles and zeros and with relative degree rdeg(P) = 1.
• Case q = 1. In this case

      Q(z) = az + b        (8.57)
      ⇒ NW(z) = z^2 − (z − 1)(az + b)        (8.58)
      ⇒ W(z) = [z^2 − (z − 1)(az + b)] / z^2.        (8.59)

  As far as the properness of C(z) is concerned, we require rdeg(W(z)) ≥ 1, and so Q(z) has to be monic, namely a = 1. Now we can choose b with different goals in mind:

  1. to obtain relative degree of W(z) equal to 2: from NW(z) = z^2 − (z − 1)(z + b) = z(1 − b) + b, this holds if and only if b = 1;

  2. to place an unstable zero in W (for internal stability purposes): that zero is zb = b/(b − 1), for b ≠ 1;

  3. to place an unstable root in DW − NW (same purpose as above): this root is zb = −b.

  (A symbolic check of this case is sketched right after this list.)
• Case q = 2. Now

      Q(z) = az^2 + bz + c.        (8.60)

  As in the previous cases, let a = 1 to ensure the causality of C(z). Then

      NW(z) = z^3 − (z − 1)(z^2 + bz + c) = z^2(1 − b) + z(b − c) + c,        (8.61)

  so that

  – b = 1 and c = 1 imply a relative degree of W(z) equal to 3;

  – b = 1 and c ≠ 1 imply a relative degree of W(z) equal to 2, and c can be used either to cancel possible unstable zeros or poles of P(z) for closed-loop internal stability, or to allocate the zero of W(z);

  – b ≠ 1 implies a relative degree of W(z) equal to 1, and b, c can be used either to cancel possible unstable zeros or poles of P(z) for closed-loop internal stability, or to allocate the zeros of W(z).
For q > 2 a similar reasoning applies.
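The algebra of the q = 1 case is easy to verify symbolically. The following sketch (using sympy; the plant is left out on purpose, since only W(z) and R(z) are involved) computes the error transform E(z) = (1 − W(z))R(z) and shows that it reduces to a polynomial in z^{-1} with two terms, so the error vanishes after two steps.

```python
import sympy as sp

z, b = sp.symbols('z b')

# q = 1, Q(z) = z + b (monic, as required for properness of C(z))
Q  = z + b
NW = z**2 - (z - 1)*Q            # eq. (8.58)
W  = NW / z**2                   # eq. (8.59)
R  = z / (z - 1)                 # unit-step reference

E = sp.simplify((1 - W) * R)     # error transform E(z) = (1 - W(z)) R(z)
print(sp.expand(E))              # -> 1 + b/z, i.e. E(z) = 1 + b z^{-1}

# The inverse transform is delta(k) + b*delta(k - 1): the error is zero for k >= 2,
# consistent with E(z) = N_R~(z^{-1}) Q~(z^{-1}) and deg Q = q = 1.
print(sp.simplify(W.subs(z, 1)))  # -> 1, so the step reference is tracked exactly
```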
Remark 8.9. If different canonical reference signals are considered,

    R(z) = NR(z) / (z − 1)^{l+1},        (8.62)

it follows that

    W(z) = [z^{l+1+q} − (z − 1)^{l+1} Q(z)] / z^{l+1+q}.        (8.63)

Once again, since DR(z) is monic, Q(z) has to be monic too, in order to obtain a relative degree of W(z) greater than 0. Furthermore, the condition W(1) = 1 is guaranteed thanks to the term (z − 1)^{l+1}. If Q(z) is desired to be constant (zero degree), then Q(z) = 1.

8.7 Dead-beat control for P(s) derived from a sampling/holding

In Section 8.5 we dealt with the problem of designing a DB compensator for a given plant P(z). What happens between two subsequent samples in case P(z) is the sample/hold version of a continuous-time plant P(s)? We are going to discuss this with reference to a significant example, recalling the scheme depicted in Fig. 8.21.

Figure 8.21: Feedback control for a plant obtained via sample/hold: the reference r(k) and the sampled output y(k) form the error e, the digital controller C(z) produces u, which drives the hold H0 and the continuous-time plant P(s); the plant output is sampled with period T.

Example 8.5. Assume P(s) = 1/[s(s + 1)], and let the goal be that of designing a DB compensator for the step response of the corresponding sample/hold plant. Without loss of generality, assume T = 1. From (5.26)

    P̃(z) = (1 − z^{-1}) Z[ST ◦ L^{-1}[P(s)/s]]
         = (1 − 2e^{-1} + z e^{-1}) / [(z − 1)(z − e^{-1})]
         = (0.264 + 0.368z) / [(z − 1)(z − 0.368)],

the relative degree of P̃(z) is 1. Moreover, P̃(z) has no unstable zeros, while it has an unstable pole at z = 1. Internal stability is therefore lost, unless W(z) = NW(z)/DW(z) is chosen with DW(z) − NW(z) having a zero at z = 1. The DB requirement leads to W(z) = z^{-1}, and the controller, from the direct synthesis formula (8.1), becomes

    C(z) = [z^{-1}/(1 − z^{-1})] · [(z − 1)(z − 0.368)/(0.264 + 0.368z)]
         = (z − 0.368) / (0.264 + 0.368z).
The behavior of the variable y(t) (the output of P(s)) is shown in Fig. 8.22 (upper graph), together with the sampled response y(k). Note that y(t) exhibits oscillations (ripples), even though the correct value (1) is attained at the sampling instants. We can also note that

• in general, stability outside the sampling times can’t be guaranteed;

• by decreasing the sampling period, greater inputs are required;

• if the physical limitations of the actuator (i.e., the saturation levels) are exceeded, the ideal behavior is no longer attained and, in particular, the dead-beat property is definitely lost.

Conditions ensuring DB control without ripples exist. They can be obtained by resorting either to state-space representations or to the conversion of elementary modes from continuous time to discrete time.
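The discretization and the sampled closed-loop response of Example 8.5 can be reproduced numerically. The sketch below, assuming scipy is available, computes the zero-order-hold discretization of P(s) = 1/(s(s+1)) with T = 1, builds the deadbeat controller obtained above, and simulates the sampled step response (equal to 1 from the first step on). It does not show the intersample ripple, which would require simulating P(s) in continuous time.

```python
import numpy as np
from scipy import signal

T = 1.0
# Zero-order-hold discretization of P(s) = 1/(s(s+1))
numd, dend, _ = signal.cont2discrete(([1.0], [1.0, 1.0, 0.0]), T, method='zoh')
P_num, P_den = np.squeeze(numd), np.squeeze(dend)   # ~ [0, 0.368, 0.264], [1, -1.368, 0.368]

# Deadbeat controller C(z) = (z - 0.368)/(0.368 z + 0.264), built from the computed plant data
C_num = np.array([1.0, -np.exp(-T)])                # z - e^{-T}
C_den = np.trim_zeros(P_num, 'f')                   # 0.368 z + 0.264

# Closed-loop transfer function W = CP / (1 + CP) via polynomial algebra
L_num = np.polymul(C_num, P_num)
L_den = np.polymul(C_den, P_den)
W_num, W_den = L_num, np.polyadd(L_den, L_num)

# Sampled step response: after the (stable) cancellations W(z) = 1/z,
# so y(k) = 1 for every k >= 1
t, y = signal.dstep((W_num, W_den, T), n=8)
print(np.squeeze(y[0]))   # approximately [0, 1, 1, 1, ...]
```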


Figure 8.22: Qualitative behavior of y(t) (grey dashed) and of y(k) (blue) for the step response (upper figure), and behavior of the actuating input u(t), which drives the plant P(s) (lower figure).
Appendix A

Table of Most Common Z-Transforms

f(k), k ∈ Z+                              Z[f(k)] = F(z)                              RoC

δ(k)                                      1                                           ∀z ∈ C

δ(k − n), n ∈ Z+                          z^{-n}                                      ∀z ∈ C, z ≠ 0

δ−1(k)                                    z/(z − 1)                                   |z| > 1

k δ−1(k)                                  z/(z − 1)^2                                 |z| > 1

k^2 δ−1(k)                                z(z + 1)/(z − 1)^3                          |z| > 1

binom(k, l), l ≥ 0                        z/(z − 1)^{l+1}                             |z| > 1

p^k δ−1(k), p ∈ C                         z/(z − p)                                   |z| > |p|

binom(k, l) p^{k−l}, l ≥ 0, p ∈ C         z/(z − p)^{l+1}                             |z| > |p|

cos(ϑk) δ−1(k)                            z(z − cos ϑ)/(z^2 − 2 cos ϑ z + 1)          |z| > 1

sin(ϑk) δ−1(k)                            z sin ϑ/(z^2 − 2 cos ϑ z + 1)               |z| > 1

p^k cos(ϑk) δ−1(k), p ∈ C                 z(z − p cos ϑ)/(z^2 − 2p cos ϑ z + p^2)     |z| > |p|

p^k sin(ϑk) δ−1(k), p ∈ C                 z p sin ϑ/(z^2 − 2p cos ϑ z + p^2)          |z| > |p|
Appendix B

Table of Most Common Laplace Transforms

f(t), t ∈ R+                              L[f(t)] = F(s)                              RoC

δ(t)                                      1                                           ∀s ∈ C

δ(t − τ), τ ∈ R+                          e^{−τs}                                     ∀s ∈ C

δ−1(t)                                    1/s                                         ℜ(s) > 0

t δ−1(t)                                  1/s^2                                       ℜ(s) > 0

(t^n/n!) δ−1(t), n ∈ N                    1/s^{n+1}                                   ℜ(s) > 0

e^{αt} δ−1(t), α ∈ R                      1/(s − α)                                   ℜ(s) > α

(t^n/n!) e^{αt} δ−1(t), α ∈ R             1/(s − α)^{n+1}                             ℜ(s) > α

cos(ϑt) δ−1(t)                            s/(s^2 + ϑ^2)                               ℜ(s) > 0

sin(ϑt) δ−1(t)                            ϑ/(s^2 + ϑ^2)                               ℜ(s) > 0

e^{αt} cos(ϑt) δ−1(t), α ∈ R              (s − α)/((s − α)^2 + ϑ^2)                   ℜ(s) > α

e^{αt} sin(ϑt) δ−1(t), α ∈ R              ϑ/((s − α)^2 + ϑ^2)                         ℜ(s) > α
Appendix C

Notions of Control in
Continuous-Time

C.1 Routh Test


Let

    A(s) = an s^n + an−1 s^{n−1} + · · · + a1 s + a0

be a given polynomial. It is said to be a Hurwitz polynomial if all its zeros are in the open left half-plane. The Routh Test is based on a table (Routh table) with n + 1 rows: the first and second rows are defined by:

    an     an−2   an−4   . . .
    an−1   an−3   an−5   . . .

Each of the subsequent rows is obtained as a function of the elements in the two rows before it, as shown below. Consider the three consecutive rows:

    pi+2   pi     pi−2   . . .
    qi+1   qi−1   qi−3   . . .
    ri     ri−2   ri−4   . . .

Then rj is given by

    rj = − det[ pi+2  pj ; qi+1  qj−1 ] / qi+1 = pj − (pi+2/qi+1) qj−1.        (C.1)

This expression also holds for j = 0, 1, with the proviso that all the elements of the two preceding rows having negative index are taken to be zero.
With reference to the table just defined we have the following result
(known as Routh Theorem):

Theorem C.1 (Routh). The following hold:

1. A(s) is Hurwitz if and only if the construction of the table can be com-
pleted (i.e. none of the elements in the first column of the table except
the last one is zero) and all the elements in the first column of the table
have the same sign (strictly).

2. If the construction of the table can be completed, then the number nn of zeros of A(s) (counted with multiplicities) that are in the open left half-plane is equal to the number of consecutive pairs of elements having (strictly) the same sign in the first column of the table.

3. If the construction of the table can be completed, then the number np of zeros of A(s) (counted with multiplicities) that are in the open right half-plane is equal to the number of sign changes in the first column of the table.

4. If the construction of the table can be completed, then A(s) has at most a simple zero on the imaginary axis. This zero, if present, is 0, and it is present if and only if the last element in the first column of the table is zero.
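The construction of the table and the stability check of point 1 are easy to code. The following is a minimal numpy sketch (it does not handle the degenerate case in which an element of the first column vanishes, where the construction stops).

```python
import numpy as np

def routh_table(coeffs):
    """Routh table of A(s) = a_n s^n + ... + a_0 (coefficients highest degree first)."""
    a = np.asarray(coeffs, dtype=float)
    n = len(a) - 1
    cols = n // 2 + 1
    rows = np.zeros((n + 1, cols))
    rows[0, :len(a[0::2])] = a[0::2]         # first row:  a_n, a_{n-2}, ...
    rows[1, :len(a[1::2])] = a[1::2]         # second row: a_{n-1}, a_{n-3}, ...
    for i in range(2, n + 1):
        if rows[i - 1, 0] == 0:
            raise ZeroDivisionError("table construction cannot be completed")
        for j in range(cols - 1):            # rule (C.1), column by column
            rows[i, j] = rows[i - 2, j + 1] - rows[i - 2, 0] / rows[i - 1, 0] * rows[i - 1, j + 1]
    return rows

def is_hurwitz(coeffs):
    """Point 1 of Routh's theorem: all first-column entries with the same strict sign."""
    first_col = routh_table(coeffs)[:, 0]
    return bool(np.all(first_col > 0) or np.all(first_col < 0))

print(is_hurwitz([1, 3, 3, 1]))    # (s + 1)^3: True
print(is_hurwitz([1, 1, 1, 10]))   # two right half-plane roots: False
```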

C.2 Root Locus


The root locus is a useful tool to analyze the stability of a closed-loop system and how its poles move in the complex plane as the controller gain changes. Mathematically this can be reduced to the problem of determining how the zeros of the polynomial

    pK(s) := D(s) + K N(s),        (C.2)

vary as the parameter K varies in R. Here D(s) = ∏_{i=1}^{n} (s − pi) and N(s) = ∏_{i=1}^{m} (s − zi) are given monic polynomials with n := deg[D(s)] ≥ m := deg[N(s)].

With reference to pK(s) in (C.2), we define the positive root locus to be the set

    L+ := {s ∈ C : ∃K ≥ 0 s.t. pK(s) = 0}.        (C.3)

Similarly, we define the negative root locus to be the set

    L− := {s ∈ C : ∃K < 0 s.t. pK(s) = 0}.        (C.4)

The complete root locus is finally defined as

    L := L+ ∪ L− = {s ∈ C : ∃K ∈ R s.t. pK(s) = 0}.        (C.5)

The following result provides some simple rules that allow one to draw a qualitative sketch of L+ and L−; a numerical sketch is given after the theorem.
Theorem C.2. With reference to L+ , the following properties hold:
1. L+ is symmetric with respect to the real axis.

2. L+ is formed by n curves (branches) originating, for K = 0, from the


zeros pi of the polynomial D(s). These branches are continuous curves
in the complex plane.

3. As K → ∞, m of the branches tend to the zeros zi of the polynomial


N (s) and the remaining n − m diverge to infinity.

4. The n − m diverging branches tend to infinity along n − m asymptotes. All such asymptotes originate from the same point σc defined by

       σc = (Σ_{i=1}^{n} pi − Σ_{i=1}^{m} zi) / (n − m).

   Moreover, the angles that the asymptotes form with the real axis are

       φk = (2k + 1)π / (n − m),   k = 0, 1, . . . , n − m − 1.

5. Let zj be a zero of N(s) and µ be its multiplicity. Then, as K → ∞, µ branches tend to zj with the following tangent angles at zj:

       βj = (1/µ) [ Σ_{i=1}^{n} arg(zj − pi) − Σ_{i=1, zi≠zj}^{m} arg(zj − zi) − (2k + 1)π ],   k = 0, 1, . . . , µ − 1.

   Let pj be a zero of D(s) and µ be its multiplicity. Then µ branches originate from pj with the following tangent angles at pj:

       αj = (1/µ) [ − Σ_{i=1, pi≠pj}^{n} arg(pj − pi) + Σ_{i=1}^{m} arg(pj − zi) + (2k + 1)π ],   k = 0, 1, . . . , µ − 1.

6. The intersection between L+ and the real axis is the set of all real points
having to their right an overall odd number of zeros of D(s) and of N (s)
(counted with multiplicity).

7. s⋆ is a multiple point of L+ with multiplicity µ ≥ 2 if and only if there


exists K ≥ 0 such that pK (s) and its first µ−1 derivatives (with respect
to s) vanish for s = s⋆ .
With reference to L− , the following properties hold:
1. L− is symmetric with respect to the real axis.

2. L− is formed by n curves (branches) originating, for K = 0, from the


zeros pi of the polynomial D(s).

3. As K → −∞, m of the branches tend to the zeros zi of the polynomial


N (s) and the remaining n − m diverge to infinity.

4. The n − m diverging branches tend to infinity along n − m asymptotes. All such asymptotes originate from the same point σc defined by

       σc = (Σ_{i=1}^{n} pi − Σ_{i=1}^{m} zi) / (n − m).

   Moreover, the angles that the asymptotes form with the real axis are

       φk = 2kπ / (n − m),   k = 0, 1, . . . , n − m − 1.

5. Let zj be a zero of N(s) and µ be its multiplicity. Then, as K → −∞, µ branches tend to zj with the following tangent angles at zj:

       βj = (1/µ) [ Σ_{i=1}^{n} arg(zj − pi) − Σ_{i=1, zi≠zj}^{m} arg(zj − zi) − 2kπ ],   k = 0, 1, . . . , µ − 1.

   Let pj be a zero of D(s) and µ be its multiplicity. Then µ branches originate from pj with the following tangent angles at pj:

       αj = (1/µ) [ − Σ_{i=1, pi≠pj}^{n} arg(pj − pi) + Σ_{i=1}^{m} arg(pj − zi) + 2kπ ],   k = 0, 1, . . . , µ − 1.

6. The intersection between L− and the real axis is the set of all real points
having to their right an overall even number of zeros of D(s) and of
N (s) (counted with multiplicity).
7. s⋆ is a multiple point of L− with multiplicity µ ≥ 2 if and only if there
exists K ≤ 0 such that pK (s) and its first µ−1 derivatives (with respect
to s) vanish for s = s⋆ .
8. l := deg[D(s) − N(s)] of the branches are continuous curves in the complex plane, while the other n − l diverge to infinity as K → −1.
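A qualitative root locus is easy to obtain numerically by sweeping the gain and computing the roots of pK(s) = D(s) + K N(s). The following sketch assumes numpy and matplotlib are available and uses arbitrary illustrative polynomials; it plots the positive root locus together with the open-loop poles and zeros.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative data: D(s) = s(s+1)(s+2), N(s) = s + 3 (monic, n = 3 >= m = 1)
D = np.array([1.0, 3.0, 2.0, 0.0])
N = np.array([1.0, 3.0])

roots = []
for K in np.logspace(-3, 3, 600):            # sweep of the (positive) gain
    pK = D.copy()
    pK[-len(N):] += K * N                    # D(s) + K N(s), aligned at the constant term
    roots.append(np.roots(pK))
roots = np.array(roots)

plt.plot(roots.real, roots.imag, 'b.', markersize=2)               # branches of L+
plt.plot(np.roots(D).real, np.roots(D).imag, 'kx', label='poles (K = 0)')
plt.plot(np.roots(N).real, np.roots(N).imag, 'ko', label='zeros (K -> infinity)')
plt.axhline(0, color='gray', linewidth=0.5)
plt.xlabel('Re'); plt.ylabel('Im'); plt.legend(); plt.show()
```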

C.3 Nyquist Plot


Consider a transfer function W (s). Its Nyquist plot is a parametric curve in
the complex plane indexed in ω ∈ R. The real and imaginary coordinates
of this curve are ℜ(W(jω)) and ℑ(W(jω)), respectively. Since W(jω) is conjugate symmetric, i.e. W(−jω) = W(jω)* (for a real-coefficient W), once the Nyquist plot has been drawn for ω ≥ 0, it is easy to complete the curve by symmetry with respect to the real axis.
Notice that |W (jω)| and arg[W (jω)] are the polar coordinates of the points
of the Nyquist plot. Hence the Nyquist plot may be easily obtained from the
Bode plot by taking into account the following rules:
1. The points for which the Nyquist plot crosses the unit circle correspond
to the values of the frequency ω for which the Bode magnitude plot
crosses the x-axis (0 dB).
2. The points for which the Nyquist plot crosses the positive real axis
correspond to the values of the frequency ω for which the Bode phase
plot crosses the horizontal lines with ordinate 2πk, k ∈ Z.
3. The points for which the Nyquist plot crosses the negative real axis
correspond to the values of the frequency ω for which the Bode phase
plot crosses the horizontal lines with ordinate π + 2πk, k ∈ Z.

4. The points for which the Nyquist plot crosses the positive imaginary
axis correspond to the values of the frequency ω for which the Bode
phase plot crosses the horizontal lines with ordinate π2 + 2πk, k ∈ Z.

5. The points for which the Nyquist plot crosses the negative imaginary
axis correspond to the values of the frequency ω for which the Bode
phase plot crosses the horizontal lines with ordinate − π2 + 2πk, k ∈ Z.

6. If W (s) has poles on the imaginary axis then |W (jω)| diverges for some
values of the frequency ω so that the Nyquist plot is an open curve.

7. If W (s) is a rational function without poles at the origin, then the


Nyquist plot starts, for ω = 0, from the real point of abscissa KB , with
KB being the Bode gain of W (s).

8. If W (s) is a strictly proper rational function, then the Nyquist plot


tends to the origin for ω → ∞.

9. If W(s) is a proper rational function and all its poles and zeros have strictly negative real part (minimum phase system), then the phase arg[W(jω)] tends, for ω → ∞, to −(n − m)π/2, where n and m are the numbers (counted with multiplicity) of poles and zeros of W(s), respectively.

10. If W (s) is a proper rational function but it is not strictly proper then
the Nyquist plot tends, for ω → ∞, to the real point of abscissa KE ,
with KE being the Evans gain of W (s).

11. Both for ω → 0 and for ω → ∞, the tangent direction of the Nyquist plot of a rational function tends to an integer multiple of π/2.
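These rules can be checked against a numerically computed Nyquist plot. A minimal sketch, assuming numpy and matplotlib and an arbitrary illustrative transfer function, is the following.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative transfer function W(s) = 10 / (s^2 + 2 s + 10)
num = np.array([10.0])
den = np.array([1.0, 2.0, 10.0])

omega = np.logspace(-2, 3, 2000)
W = np.polyval(num, 1j * omega) / np.polyval(den, 1j * omega)

plt.plot(W.real, W.imag, label='omega >= 0')
plt.plot(W.real, -W.imag, '--', label='omega <= 0 (mirror image)')
plt.plot([-1], [0], 'rx', label='critical point -1')   # used by the Nyquist criterion
plt.xlabel('Re W(j omega)'); plt.ylabel('Im W(j omega)')
plt.legend(); plt.axis('equal'); plt.show()
```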
