Analog and Digital Control System Design
Analog and Digital Control System Design
Chi-Tsong Chen
State Universi1y of New York at Stony Brook
Contents
Chapter 1 lntroduction 1
1.1 Empirical and Analytical Methods
1.2 Control Systems 2
1.2.1 Position Control Systems 2
1.2.2 Velocity Control Systems 4
1.2.3 Temperature Control Systems 6
1.2.4 Trajectory Control and Autopilot 6
1.2.5 Miscellaneous Examples 7
1.3 Problem Formulation and Basic Terminology 9
1.4 Scope of the Text 11
XX CONTENTS
References 590
lndex 595
Introduction
the model can be more complex and realistic. Modeling is the most critical step in
analytical design. lf a physical system is incorrectly modeled, subsequent study will
be useless. Once a model is chosen, the rest of the analytical design is essentially a
mathematical problem.
Repeated experimentation is indispensable in the empirical method. It is also
important in the analytical method. In the former, experiments must be carried out
using physical devices, which might be expensive and dangerous. In the latter, how-
ever, experiments can be carried out using models or mathematical equations. Many
computer-aided design packages are available. We may use any of them to simulate
the equations on a digital computer, to carry out design, and to test the result on the
computer. lf the result is not satisfactory, we repeat the design. Only after a design
is found to be satisfactory, will we implement it using physical de vices.
lf the model is adequately chosen, the performance of the implemented system
should resemble the performance predicted by analytical design or computer simu-
lation. However, because ofunavoidable inaccuracy in modeling, discrepancies often
exist between the performance of the implemented physical system and that predicted
by the analytical method. Therefore, the performance of physical systems can often
be improved by fine adjustments or tunings. This is why a physical system often
requires lengthy testing after it is implemented, before it is put into actual operation
or mass production. In this sense, experience is also important in the analytical
method.
In the analytical approach, experimentation is needed to set up models, and
experience is needed (due to the inaccuracy of modeling) to improve the performance
of actual physical systems. Thus, experience and experimentation are both used in
the empirical and analytical approaches. The major difference between these two
approaches is that in the latter, we gain, through modeling, understanding and insight
into the structure of systems. The analytical approach also provides systematic pro-
cedures for designing systems and reduces the likelihood of designing flawed or
disastrous systems.
In this text, we study analytical methods in the analysis and design of control
systems.
This text is concemed with the analysis and design of control systems; therefore, it
is pertinent to discuss first what control systems are. Before giving a formal defini-
tion, we discuss a number of examples.
A possible arrangement of such a system is shown in Figure 1.1(a). This system can
indeed be designed using the empirical method. If it is to be designed using the
ana1ytical method, we must first develop a model for the system, as shown in Figure
1.1 (b ). The model actually consists of a number of blocks. 1 Each block represents
a model of a physical device. Using this model, we can then carry out the design.
A large number of systems can be similarly modeled. For example, the system that
aims the antennas shown in Figure 1.2 at communication satellites and the systems
that control various antennas and solar panels shown in Figure 1.3 can be similarly
modeled. These types of systems are called position control systemf.
There are other types of position control systems. Consider the simplified nu-
clear power plant shown in Figure 1.4. The intensity of the reaction inside the reactor
(and, consequently, the amount of heat generated)1 is controlled by the vertical po-
sitian of the control rods. The more deeply the control rods are submerged, the more
heat the reactor will generate. There are many other control systems in the nuclear
power plant. Maintenance of the boiler's water level and maintenance or regulation
of the generated voltage ata fixed voltage all call for control systems. Position control
is also needed in the numerical control of machine too1s. For example, it is possible
to program a machine tool so that it will automatically drill a number of boles, as
shown in Figure 1.5.
Control
knob
~ u e
~
Potentiometer
Potentiometer
(b)
1
At this point, the reader need not be concerned with the equation inside each block. It will be developed
in Chapter 3.
4 CHAPTER 1 INTRODUCTION
3.3-rn,
20-GHz
transmitting
antenna
2.2-rn, 30-GHz
receiving antenna
Solar arra y
Figure 1.3 Communications satellite. (Courtesy of IEEE Spectrum.)
Pump
Figure 1.4 Nuclear power plant.
o o
5
6 CHAPTER 1 INTRODUCTION
(a)
Tachometer
(b)
Potentiometer
temperature
8 Actual temperature
(a)
Potentiometer
~·~
Shuttle
C:>)~ Rpnway
/---------------------:;::··
6
Figure 1.8 Desired landing trajectory of space shuttle.
(a)
Water
Preset or
desired Actual
water leve! water leve!
(b)
of controlling the valve is understood, the water level can easily be controlled by
trial and error.
As a final example, a schematic diagram of the control of clothes dryers is shown
in Figure 1.10. Presently there are two types of clothes dryers-manual and auto-
matic. In a manual clothes dryer, depending on the amount of clothes and depending
Electricity
~Actual
~dryness
Experience
(a)
Desired Actual
dryness dryness
(b)
on experience, we set the timer to, say, 40 minutes. At the end of 40 minutes, this
dryer will automatically turn off even if the clothes are still damp or wet. lts sche-
matic diagram is shown in Figure 1.10(a). In an automatic dryer, we selecta desired
degree of dryness, and the dryer will automatically turn off when the clothes reach
the desired degree of dryness. If the load is small, it will take less time; if the load
is large, it will take more time. The amount of time needed is automatically deter-
mined by this type of dryer. lts schematic diagram is shown in Figure l:lO(b).
Clearly, the automatic dryer is more convenient to use than a manual one, but it is
more expensive. However, in using a manual dryer, we may overset the timer, and
electricity may be wasted. Therefore, if we include the energy saved, an automatic
dryer may turn out to be more economical.
From the examples in the preceding section, we may deduce that a control system
is an interconnection of components or devices so that the output of the overall
system will follow as closely as possible a desired signal. There are many reasons
to design control systems:
l. Automatic control: The temperature of a house can be automatically main-
tained once we set a desired temperature. This is an automatic control system.
Automatic control systems are used widely and are essential in automation in
industry and manufacturing.
2. Re mote control: The quality of reception of a TV channel can be improved by
pointing the antenna toward the emitting station. If the antenna is located at the
rooftop, it is impractical to change its direction by hand. If we install an antenna
rotator, then we can control the direction remotely by turning a knob sitting in
front of the TV. This is much more convenient. The Hubble space telescope,
which is orbiting over three hundred miles above the earth, is controlled from
the earth. This remote control must be done by control systems.
3. Power amplification: The antennas used to receive signals sent by Voyager 2
have diameters over 70 meters and weights over several tons. Clearly, it is
impossible to turn these antennas directly by hand. However, using control sys-
tems, we can control them by turning knobs or by typing in command signals
on computers. The control systems will then generate sufficient power to turn
the antennas. Thus, power amplification is often implicit in many control
systems.
In conclusion, control systems are widely used in practice because they can be
designed to achieve automatic control, remote control, and power amplification.
We now formulate the control problem in the following. Consider the the po-
sition control problem in Figure 1.1, where the objective is to control the direction
of the antenna. The first step in the design is to choose a motor to drive the antenna.
The motor is called an actuator. The combination of the object to be controlled and
the actuator is called the plant. In a home heating system, the air inside the home is
1Q CHAPTER 1 INTRODUCTION
r-----------------------------,
1
1
Plant
1
L-----------------------------~
Figure 1.11 Control design problem.
the controlled object and the bumer is the actuator. A space shuttle is a controlled
object; its actuator consists of a number of thrustors. The input of the plant, denoted
by u(t), is called the control signa! or actuating signa!; the output of the plant,
denoted by y(t), is called the controlled variable or plant output. The problem is to
design an overall system as shown in Figure 1.11 so that the plant output will follow
as closely as possible a desired or reference signa!, denoted by r(t). Every example
in the preceding section can be so formulated.
There are basically two types of control systems: the open-loop system and the
closed-loop or feedback system. In an open-loop system, the actuating signal is
predetermined by the desired or reference signal; it does not depend on the actual
plant output. For example, based on experience, we set the timer of the dryer in
Figure l.lO(a). When the time is up, the dryer will stop even if the clothes are still
damp or wet. This is an open-loop system. The actuating signal of an open-loop
system can be expressed as
u(t) = f(r(t))
where f is sorne function. If the actuating signal depends on the reference input and
the plant output, or if it can be expressed as
u(t) = h(r(t), y(t))
where h is sorne function, then the system is a closed-loop or feedback system. All
systems in the preceding section, except the one in Figure 1.10(a), are feedback
systems. In every feedback system the plant output must be measured and used to
generate the actuating signal. The plant output could be a position, velocity, tem-
perature, or something else. In many applications, it is transformed into a voltage
and compared with the reference signal, as shown in Figure 1.12. In these transfor-
mations, sensing devices or transducers are needed as shown. The result of the
comparison is then used to drive a compensator or controller. The output of the
controller yields an actuating signal. lf the controller is designed properly, the ac-
tuating signal will drive the plant output to follow the desired signal.
In addition to the engineering problems discussed in the preceding section, a
large number of other types of systems can also be considered as control systems.
1.4 SCOPE OF THE TEXT 11
Our body is in fact a very complex feedback control system. Maintaining our body
temperature at 37° Celsius requires perspiration in summer and contraction of blood
vessels in winter. Maintaining an automobile in a lane (plant output) is a feedback
control system: Our eyes sense the road (reference ~ignal), we are the controller, and
the plant is the automobile together with its engine and steering system. An economic
system is a control system. Its health is measured by the gross national product
(GNP), unemployment rate, average hourly wage, and inflation rate. If the inftation
rate is too high or the unemployment rate is not acceptable, economic policy must
be modified. This is achieved by changing interest rates, monetary policy, and gov-
emment spending. The economic system has a large number of interrelated factors
whose cause-and-effect relationships are not exactly known. Furthermore, there are
many uncertainties, such as consumer spending, labor disputes, or intemational
crises. Therefore, an economic system is a very complex system. We do not intend
to solve every control problem; we study in this text only a very limited class of
control problems.
This text is concemed with the analysis and design of control systems. As it is an
introductory text, we study only a special class of control systems. Every system-
in particular, every control system-is classified dichotomously as linear or nonlin-
ear, time-invariant or time-varying, lumped or distributed, continuous-time or
discrete-time, deterministic or stochastic, and single-variable or multivariable.
Roughly speaking, a system is linear if it satisfies the additivity and homogeneity
properties, time-invariant if its characteristics do not change with time, and lumped
if it has a finite number of state variables or a finite number of initial conditions that
can summarize the effect of past input on future output. A system is continuous-time
if its responses are defined for all time, discrete-time if its responses are defined only
at discrete instants of time. A system is deterministic if its mathematical description
does not involve probability. lt is called a single-variable system if it has only one
input and only one output; otherwise it is called a multivariable system. For a more
detailed discussion of these concepts, see References [15, 18]. In this text, we study
only linear, time-invariant, lumped, deterministic, single-variable systems. Although
12 CHAPTER 1 INTRODUCTION
this class of control systems is very limited, it is the most important one. lts study
is a prerequisite for studying more general systems. Both continuous-time and
discrete-time systems are studied.
The class of systems studied in this text can be described by ordinary differential
equations with real constant coefficients. This is demonstrated in Chapter 2 by using
examples. We then discuss the zero-input response and the zero-state response. The
transfer function is developed to describe the zero-state response. Since the transfer
function describes only the zero-state response, its use in analysis and design must
be justified. This is done by introducing the concept of complete characterization.
The concepts of properness, potes, and zeros of transfer functions are also intro-
duced. Finally, we introduce the state-variable equation and its discretization. lts
relationship with the transfer function is also established.
In Chapter 3, we introduce sorne control components, their models, and their
transfer functions. The loading problem is considered in developing the transfer
functions. Electrical, mechanical, and electromechanical systems are discussed. We
then discuss the manipulation of block diagrams and Masan' s formula to conclude
the chapter.
The quantitative and qualitative analyses of control systems are studied in Chap-
ter 4. Quantitative analysis is concemed with the response of systems due to sorne
specific input, whereas qualitative analysis is concemed with general properties of
systems. In quantitative analysis, we also show by examples the need for using
feedback and tachometer feedback. The concept of the time constant is introduced.
In qualitative analysis, we introduce the concept of stability, its condition, and a
method (the Routh test) of checking it. The problems of pole-zero cancellation and
complete characterization are also discussed.
In Chapter 5, we discuss digital and analog computer simulations. We show
that if the state-variable description of a system is available, then the system can be
readily simulated on a digital computer or built using operational amplifier circuits.
Because it is simpler and more systematic to simulate transfer functions through
state-variable equations, we introduce the realization problem-the problem of ob-
taining state-variable equations from transfer functions. Minimal realizations of vec-
tor transfer functions are discussed. The use of MATLAB, a commercially available
computer-aided design package, is discussed throughout the chapter.
Chapters 2 through 5 are concemed with modeling and analysis problems; the
remaining chapters are concemed with the design problem. In Chapter 6, we discuss
the choice of plants. We then discuss physical constraints in the design of control
systems. These constraints lead to the concepts of well-posedness and total stability.
The saturation problem is also discussed. Finally, we compare the merits of open-
loop and closed-loop systems and then introduce two basic approaches-namely,
outward and inward-in the -design of control systems. In the outward approach,
we first choose a configuration and a compensator with open parameters and then
adjust the parameters so that the resulting overall system will (we hope) meet the
design objective. In the inward approach, we first choose an overall system to meet
the design objective and then compute the required compensators.
Two methods are available in the outward approach: the root-locus method and
the frequency-domain method. They were developed respectively in the 1950s and
1.4 SCOPE OF THE TEXT 13
This text is concemed with analytical study of control systems. Roughly speaking,
it consists of four parts:
l. Modeling
2. Development of mathematical equations
3. Analysis
4. Design
This chapter discusses the first two parts. The distinction between physical systems
and models is fundamental in engineering. In fact, the circuits and control systems
studied in most texts are models of physical systems. For example, a resistor with a
constant resistance is a model; the power limitation of the resistor is often disre-
garded. An inductor with a constant inductance is also a model; in reality, the in-
ductance may vary with the amount of current ftowing through it. An operational
amplifier is a fairly complicated device; it can be modeled, however, as shown in
Figure 2.1. In mechanical engineering, an automobile suspension system may be
modeled as shown in Figure 2.2. In bioengineering, a human arm may be modeled
as shown in Figure 2.3(b) or, more realistically, as in Figure 2.3(c). Modeling is an
extremely important problem, because the success of a design depends upon whether
or not physical systems are adequately modeled.
Depending on the questions asked and depending on operational ranges, a phys-
ical system may have different models. For example, an electronic amplifier has
14
2.1 PHYSICAL SYSTEMS AND MODELS 15
Shock
Spring absorber
Wheel mass
Muscle
force
Arm w w w
(a) (b) (e)
u y
System )
The choice of a model for a physical device depends heavily on the mathematics to
be used. 1t is useless to choose a model that closely resembles the physical device
but cannot be analyzed using existing mathematical methods. It is also useless to
choose a model that can be analyzed easily but does not resemble the physical device.
Therefore, the choice of models is not a simple task. 1t is often accomplished by a
compromise between ease of analysis and resemblance to real physical systems.
The systems to be used in this text will be limited to those that can be described
by ordinary linear differential equations with constant real coefficients such as
du(t)
2 - - - 3u(t)
dt
or, more generally,
dn-ly(t) dy(t)
an-1 dtn-1 + + a1 dt + a0 y(t)
(2.1)
dm-lu(t) du(t)
bm-1 dtm-~ + ··· + b1 dt + b0 u(t)
where a; and b; are real constants, and n ;:::: m. Such equations are called nth arder
linear time-invariant lumped (LT/L) differential equations. In order to be describable
by such an equation, the system must be linear, time-invariant, and lumped. Roughly
speaking, a system is linear if it meets the additivity property [that is, the response
of u 1(t) + u2 (t) equals the sum of the response of u 1(t) and the response of u2 (t)],
and the homogeneity property [the response of au(t) equals a times the response of
u(t)]. A system is time-invariant if its characteristics-such as mass or moment of
inertia for mechanical systems, or resistance, inductance or capacitance for electrical
systems--do not change with time. A system is lumped if the effect of any past
input u(t), for t :5 t0 , on future output y(t), for t ;:::: t0 , can be summarized by afinite
number of initial conditions at t = t0 . For a detailed discussion of these concepts,
2.2 LINEAR TIME-INVARIANT LUMPED SYSTEMS 17
Viscous k1
q,c
Static- ""-
dyldt y
k, 1-y
(a) (b) (e)
see References [15, 18]. We now discuss how these equations are developed to
describe physical systems.
where k1 is called the viscous friction coefficient. This is a linear equation. Most
texts on general physics discuss only static and Coulomb frictions. In this text,
however, we consider only viscous friction; static and Coulomb frictions will be
disregarded. By so doing, we can model the friction as a linear phenomenon.
In- general physics, Hooke's law states that the displacement of a spring is pro-
portional to the applied force, that is
Spring force = k2 X Displacement (2.3)
where k2 is called the spring constant. This equation is plotted in Figure 2.5(c) with
the dotted line. It implies that no matter how large the applied force is, the displace-
ment equals force/k2 • This certainly cannot be true in reality; if the applied force is
larger than the elastic limit, the spring will break. In general, the characteristic of a
physical spring has the form of the solid line shown in Figure 2.5(c). 1 We see that
1
This is obtained by measurements under the assumption that the mass of the spring is zero and that the
spring has no dPifting and no hysteresis. See Reference [18].
18 CHAPTER 2 MATHEMATICAL PRELIMINARY
if the applied force is outside the range [A', B'], the characteristic is quite different
from the dotted line. However, if the applied force lies inside the range [A', B'],
called the linear operational range, then the characteristic can very well be repre-
sented by (2.3). We shall use (2.3) as a model for the spring.
We now develop an equation to describe the system by using (2.3) and consid-
ering only the viscous friction in (2.2). The applied force u(t) must overcome the
friction and the spring force, and the remainder is used to accelerate the mass. Thus
we have
T(t) - k dO(t)
1
dt
or
J d28(t) + k dO(t) + k O(t) = T(t) (2.5a)
dt 2
1 2
dt
T(t) .
Load
(a) (b)
Exercise 2.2. 1
Exercise 2.2.2
Show that the system shown in Figure 2.6(b) where the shaft is assumed to be rigid
is described by
T(t) (2.5b)
20 CHAPTER 2 MATHEMATICAL PRELIMINARY
1 11
~Tl'
R v(t) L
i{t)
j i(t)
i(t) =e dv(t)
dt
Figure 2.8 Electrical components.
dv(t)
i(t) e- (2.6b)
dt
di(t)
v(t) = L - (2.6c)
dt
Now we shall use (2.6) to develop differential equations to describe RLC net-
works. Consider the network shown in Figure 2.10. The input is a current source
u(t) and the output y(t) is the voltage across the capacitor as shown. The current of
the capacitor, using (2.6b), is
e dy(t) 2 dy(t)
dt dt
f
2.2 LINEAR TIME-INVARIANT lUMPED SYSTEMS 21
Charge (flux)
Voltage (current)
This current also passes through the 1-!1 resistor. Thus the voltage drop across A
and Bis
dy(t)
VAB = ic(t) · 1 + y(t) = 2 dt + y(t)
or
[Q
A
+T
u(t) y(t)
B
-1
Figure 2.10 RC network.
22 CHAPTER 2 MATHEMATICAL PRELIMINARY
Exercise 2.2.3
Find differentia1 equations to describe the networks in Figure 2.11. The network in
Figure 2.11 (b) is called a phase-lag network.
R L
(a) (b)
and (2.8)
--- -T-
- -
q]
Figure 2.12 Control of liquid levels.
2.2 LINEAR TIME-INVARIANT iUMPED SYSTEMS 23
They are proportional to relative liquid levels and inversely proportional to ftow
resistances. The changes of liquid levels are govemed by
and
which imply
dh¡
A -
1 dt
(2.9a)
and
(2.9b)
These equations are obtained by linearization and approximation_ In reality, the ftow
of liquid is very complex; it may involve turbulent ftow, which cannot be described
by linear differential equations. To simplify analysis, turbulent ftow is disregarded
in developing (2.9). Let q¡ and q 2 be the input and output of the system. Now we
shall develop a differential equation to describe them. The differentiation of (2.8)
yields
and
q¡ - Q¡
= A
1
(R 1
dq¡
dt
+ R 2 dq2)
dt (2.10a)
dq2
Q¡ - q2 = A2R2 dt (2.10b)
+A R dq2
1 2 dt
This second-order differential equation describes the input q¡ and output q 2 of the
system in Figure 2.12.
To conclude this section, we mention that a large number of physical systems
can be modeled, after simplification and approximation, as linear time-invariant
lumped (LTIL) systems over limited operational ranges. These systems can then be
.,..,
24 CHAPTER 2 MATHEMATICAL PRELIMINARY
described by LTIL differential equations. In this text, we study only this class of
systems.
The response of linear, in particular LTIL, systems can always be decomposed into
the zero-input response and zero-state response. In this section we shall use a simple
example to illustrate this fact and then discuss sorne general properties of the zero-
input response. The Laplace transform in Appendix A is needed for the following
discussion.
Consider the differential equation
Many methods are available to solve this equation. The simplest method is to use
the Laplace transform. The application of the Laplace transform to (2.12) yields,
using (A.9),
s 2 Y(s) - sy(O~) - y(O~) + 3[sY(s) - y(O~)] + 2Y(s)
= 3[sU(s) - u(O~)] - U(s)
12 ·13)
where y(t) : = dy(t)/ dt and capitalletters denote the Laplace transforms of the cor-
responding lowercase letters. 2 Equation (2.13) is an algebraic equation and can be
manipulated using addition, subtraction, multiplication, and division. The grouping
of Y(s) and U(s) in (2.13) yields
(s 2 + 3s + 2)Y(s) = sy(O~) + y(O~) + 3y(O~) - 3u(O~) + (3s - 1)U(s)
which implies
Y(s) = (s + 3)y(O~) + y(O~) - 3u(O~) + 3s - 1
s2 + 3s + 2 s2 + 3s + 2 U(s) (2.14)
This equation reveals that the solution of (2.12) is partly excited by the input u(t),
t 2:: O, and partly excited by the initial conditions y( O~), y(O ~ ), and u(O ~ ). These
initial conditions will be called the initial state. The initial state is excited by the
input applied before t = O. In sorne sense, the initial state summarizes the effect of
the past input u(t), t < O, on the future output y(t), for t 2:: O. If different past inputs
u 1(t), u 2 (t), ... , t~ O, excite the same initial state, then their effects on the future
output will be identical. Therefore, how the differential equation acquires the initial
state at t = O is immaterial in studying its solution y(t), for t 2:: O. We mention that
2
We use A:= B to denote that A. by definition. equals B, andA =: B to denote that B, by definition,
equals A.
p
2.3 ZERO-INPUT RESPONSE AND ZERO-STATE RESPONSE 25
the initial time t = O is not the absolute time; it is the instant we starl to study the
system.
Consider again (2.14). The response can be decomposed into two parts. The
first part is excited exclusively by the initial state and is cálled the zero-input re-
sponse. The second part is excited exclusively by the input and is called the zero-
state response. In the study of LTIL systems, it is convenient to study the zero-input
response and the zero-state response separately. We first study the zero-input re-
sponse and then the zero-state response.
This is called the homogeneous equation. We now study its response dueto a nonzero
initial state. The application of the Laplace transform yields, as in (2.13 ),
s 2 Y(s) - sy(O-) - y(O-) + 3[sY(s) - y(O-)] + 2Y(s) = O
which implies
(s + 3)y(O-) + y(O-) (s + 3)y(O-) + y(O-)
Y(s) 2
(2.15)
s + 3s + 2 (s + l)(s + 2)
This can be expanded as
Y(s) - _k_l_ k2
+- - (2.16)
s+l s+2
with
(s + 3)y(O-) + y(0-)1
S + 2 s~ -1
and
..:..._(s_+_3):..::..Y..:..._(O_-..:..._)_+-=y:....:.·(0_---'-) 1
kz =-
S + 1 s~ -2
No matter what the initial conditions y(O-) and y(O-) are, the zero-input response
is always a linear combination of the two functions e- 1 and e- 21 • The two functions
e- 1 and e- 21 are the inverse Laplace transforms of 1/(s + 1) and 1/(s + 2). The
two roots - 1 and - 2-or, equivalently, the two roots of the denominator of (2.15)
26 CHAPTER 2 MATHEMATICAl PREliMINARY
are called the modes of the system. The modes govem the form of the zero-input
response of the system.
We now extend the preceding discussion to the general case. Consider the nth
order LTIL differential equation
any<nl(t)+ an _ 1y<n-l)(t) + · · · + a 1/ll(t) + a 0 y(t)
bmu<ml(t) + bm_ 1u<m-l)(t) + · · · + h 1u 0 >(t) + b 0 u(t) (2.18)
where
di
y(il(t) : = ---: y(t),
dt'
and
N(p) := bmpm + bm-JPm-1 + ... + b¡p + ho (2.19b)
In the study of the zero-input response, we assume u(t) =O. Then (2.21) reduces to
D(p)y(t) = O (2.22)
This is the homogeneous equation. Its solution is excited exclusively by initial con-
ditions. The app1ication of the Lap1ace transform to (2.22) yields, as in (2.15),
Y(s) = /(s)
D(s)
3
In the literature, they are also called the natural frequencies. However the wn in D(s) = s 2 + 2/;wns
+ w~ is also called the natural frequency in this and sorne other control texts. To avoid possible confusion,
we call the roots of D(s) the modes.
2.4 ZERO-STATE RESPONSE-TRANSFER FUNCTION 27
Y(s) = -k -
1
+ k2 + --= k --- 3 + _c_1~ + c2
s - 2 S + 2 - j3 S + 2 + j3 S + 1· (s + 1?
This is the general form of the zero-input response and is determined by the modes
of the system.
Exercise 2.3. 1
Exercise 2.3.2
Find the modes and the general form of the zero-input responses of
D(p)y(t) = N(p)u(t) (2.23)
where
and N(p) = 3p2 - 10
[Answers: O, O, 2, 2, -2 + }2, and -2 - j2; k1 + k 2t + k 3e 2t + k 4 te 2t +
kse- (2- j2)t + k6e- (2 + j2)t]
The response of (2.24) is partly excited by the initial conditions and partly excited
by the input u(t). If all initial conditions equa1 zero, the response is excited exclu-
sively by the input and is called the zero-state response. In the Laplace transform
domain, the zero-state response of (2.24) is govemed by, setting all initial conditions
in (2.14) to zero,
3s -
Y(s) s2 + 3
s + 2 U(s) =: G(s)U(s) (2.25)
28 CHAPTER 2 MATHEMATICAL PREUMINARV
where the rational function G(s) = (3s - 1)/(s 2 + 3s + 2) is called the transfer
function. It is the ratio of the Laplace transforms of the output and input when all
initial conditions are zero or
G(s) =-
Y(s) 1
=
5E[Output] 1 (2.26)
U(s) Initial conditions~O 5E[Input] Initial conditions~O
The transfer function describes only the zero-state responses of LTIL systems.
The application of the Laplace transform yields, assuming zero initial conditions,
ms 2Y(s) + k 1sY(s) + k2Y(s) = U(s)
or
(ms 2 + k 1s + k2)Y(s) = U(s)
Thus the transfer function from u to y of the mechanical system is
Y(s) 1
G(s) = - = ---::------
U(s) ms 2
+ k 1s + k2
This example reveals that the transfer function of a system can be readily ob-
tained from its differential-e.quation description. For example, if a system is de-
scribed by the differential equation
D(p)y(t) = N(p)u(t)
where D(p) and N(p) are defined as in (2.19), then the transfer function of the system
is
N(s)
G(s)
D(s)
Exercise 2.4. 1
Find the transfer functions from u to y of the networks shown in Figures 2.1 O and
2.11.
[Answers: 1/(6s + 2), 1/(LCs 2 + RCs + 1), (CR 2 s + 1)/(C(R 1 + R 2)s + 1).]
2.4 ZERO-STATE RESPONSE-TRANSFER FUNCTION 29
RLC Networks
Although the transfer function of an RLC network can be obtained from its
differential-equation description, it is generally simpler to compute it by using the
concept of the Lap1acian impedance or, simply, the impedance. If all initia1 condi-
tions are zero, the application of the Laplace transforms to (2.6) yields
V(s) = Rl(s) (resistor)
1
V(s) = - l(s) (capacitar)
Cs
and
V(s) = Lsl(s) (inductor)
These re1ationships can be written as V(s) = Z(s)l(s), and Z(s) is called the
(Lap1acian) impedance. Thus the impedances of the resistor, capacitor, and inductor
are respectively R, 1/Cs, and Ls. If we consider l(s) the input and V(s) the output,
then the impedance is a special case of the transfer function defined in (2.26). When-
ever impedances are used, all initial conditions are implicitly assumed to be zero.
The manipulation involving impedances is purely algebraic, identical to the
manipulation of resistances. For example, the resistance of the series connection of
two resistances R 1 and R 2 is R 1 + R 2 ; the resistance of the parallel connection of
R 1 and R 2 is R 1R 2 /(R 1 + R 2 ). Similarly, the impedance of the series connection of
two impedances Z 1(s) and Z2 (s) is Z 1(s) + Z2 (s); the impedance of the parallel
connection of Z 1(s) and Z2 (s) is Z 1(s)Z2 (s)/(Z 1(s) + Z2 (s)). The only difference is
that now we are dealing with rational functions, rather than real numbers as in the
resistive case.
Example 2.4.2
Compute the transfer function from u to i of the network shown in Figure 2.13(a).
Its equivalent network using impedances is shown in Figure 2.13(b). The impedance
of the parallel connection of 1/2s and 3s + 2 is
1
- (3s + 2)
2s + 2
3s
1 6s + 4s + 1
2
- + (3s + 2)
2s
30 CHAPTER 2 MATHEMATICAL PRELIMINARY
2Q 2
(a) (b) (e)
as shown in Figure 2.13(c). Hence the current/(s) shown in Figure 2.13 is given by
U(s) 6s 2 + 4s + 1
l(s) = 6s 2 + 7s + 3 U(s)
3s + 2
1 +
6s 2 + 4s +
Thus the transfer function from u to i is
l(s) 6s 2 + 4s + 1
G(s) = - = - -2 - = - - - -
U(s) 6s + 7s + 3
Exercise 2.4.3
Find the transfer functions from u to y of the networks in Figures 2.10 and 2.11
using the concept of impedances.
where N(s) and D(s) are two polynomials with real coefficients. We use deg to denote
the degree of a polynomial. If
deg N(s) > deg D(s)
G(s) is called an improper rational function. For example, the rational functions
s2 + 1
S and
S + 1
are all improper. If
deg N(s) :::::: deg D(s)
---
2.4 ZERO-STATE RESPONSE-TRANSFER FUNCTION 31
G(s) is called a proper rational function. lt is strictly proper if deg N(s) < deg D(s);
biproper if deg N(s) = deg D(s). Thus proper rational functions include both strictly
proper and biproper rational functions. If G(s) is biproper, sois G - 1(s) = D(s)/N(s).
This is the reason for calling it biproper.
Exercise 2.4.4
The propemess of a rational function G(s) can also be determined from the
value of G(s) at s = oo. It is clear that G(s) is improper if G(oo) = ± oo, proper if
G(oo) is a finite nonzero or zero constant, biproper if G(oo) is finite and nonzero, and
strictly proper if G(oo) = O.
The transfer functions we will encounter in this text are mostly proper rational
functions. The reason is twofold. First, improper transfer functions are difficult, if
not impossible, to build in practice, as will be discussed in Chapter 5. Second,
improper transfer functions will amplify high-frequency noise, as will be explained
in the following.
Signals are used to carry information. However, they are often corrupted by
noise during processing, transmission, or transformation. For example, an angular
position can be transformed into an electrical voltage by using the wirewound po-
tentiometer shown in Figure 2.14. The potentiometer consists of a finite number of
tums of wiring, hence the contact point moves from tum to tum. Because of brush
(a) (b)
where k is a constant and n(t) is noise. Therefore, in general, every signal is of the
forrn
v(t) = i(t) + n(t) (2.28)
where i(t) denotes inforrnation and n(t) denotes noise. Clearly in order for v(t) to
be useful, we require
v(t) = i(t)
where = denotes ''roughly equal to.'' If the response of a system excited by v(t) is
drastically different from that excited by i(t), the system is generally useless in
practice. Now we show that if the transfer function of a system is improper and if
the noise is of high frequency, then the system is useless. Rather than discussing the
general case, we study a system with transfer function s and a system with transfer
function 1/s. A system with transfer function s is called a dif.ferentiator because it
perforrns differentiation in the time domain. A system with transfer function 1/s is
called an integrator because it perforrns integration in the time domain. The forrner
has an improper transfer function, the latter has a strictly proper transfer function.
We shall show that the differentiator will amplify high-frequency noise; whereas the
integrator will suppress high-frequency noise. For convenience of discussion, we
as sume
i(t) = sin 2t n(t) = 0.01 sin lOOOt
and
v(t) = i(t) + n(t) = sin 2t + 0.01 sin 1000t (2.30)
The magnitude of the noise is very small, so we have v(t) = i(t). If we apply this
signal to a differentiator, then the output is
dv(t)
- - = 2 cos 2t + 0.01 X 1000 cos 1000t = 2 cos 2t + 10 cos 1000t
dt
Because the amp1itude of the noise terrn is five times larger than that of the infor-
mation, we do not have dv(t)/dt = di(t)/dt as shown in Figure 2.15. Thus a differ-
entiator-and, more generally, systems with improper transfer functions-cannot be
used if a signal contains high-freque_ncy noise.
2.4 ZERO-STATE RESPONSE-TRANSFER FUNCTION 33
di(t) dv(t)
dt dt
15 15
10 10
n n
5 5
o
-5 -5
V
-10 -10
-15 -15
o 2 4 6 8 10 12 14 o 2 4 6 8 10 12 14
Figure 2. 15 Responses of differentiator.
i t
o
u( r)dr = --
1
2
0.01
cos 2t - - - cos lOOOt
1000
L u( r)d'T = L i( r)d'T
Exercise 2.4.5
Consider
u(t) = i(t) + n(t) = cos 2t + 0.01 cos O.OOlt
Note that the frequency of the noise n(t) is much smaller than that of the information
i(t). Do we have u(t) = i(t)? Do we have du(t)/ dt = di(t)/ dt? Is it true that a
differentiator amplifies any type of noise?
[Answers: Yes, yes, no.]
34 CHAPTER 2 MATHEMATICAL PRELIMINARY
G(s) = N(s)
D(s)
where N(s) and D(s) are polynomials with real coefficients and deg N(s) ~ deg D(s).
o Definition
A finite real or complex number Á is a po/e of G(s) if IG(A)I = oo, where 1·1
denotes the absolute value. It is a zero of G(s) if G(A) = O. •
We have
N( -2) 2[( -2? + 3( -2? - ( -2) - 3] 6
G(-2) = ---= =- = 00
D(- 2) (- 3) · O · (- 1) o
Therefore -2 is a pole of G(s) by definition. Clearly -2 is a root of D(s).
Does this imply every root of D(s) is a pole of G(s)? To answer this, we check
s = 1, which is also a root of D(s). We compute G(1):
G( ) _ N(l) _ 2(1 + 3 - 1 - 3) O
1
D(1) O· 3 · 8 O
• _ N(s)l _ N'(s)l
G( 1) - D(s) - D'(s)
s~l s~l
2
+
-si =-""
2(3s 6s - 1) 16
24 00
Thus s = 1 is nota pole of G(s). Therefore not every root of D(s) is a pole of G(s).
Now we factor N(s) in (2.31) and then cancel the common factors betweenN(s)
and D(s) to yield
2(s + 3)(s - 1)(s + 1) 2(s + 3)
G(s) (2.32)
(s - 1)(s + 2)(s + V (s + 2)(s + 1) 2
We see immediately that s = 1 is nota pole of G(s). Clearly G(s) has one zero, -3,
2.4 ZERO-STATE RESPONSE-TRANSFER FUNCTION 35.
and three poles, - 2, - 1, and - l. The pole - 2 is called a simple pole and the
pole - 1 is called a repeated pole with multiplicity 2. 4
From this example, we see that if polynomials N(s) and D(s) have no common
factors, 5 then all roots of N(s), and all roots of D(s) are, respectively, the zeros and
poles of G(s) = N(s)/D(s). If N(s) and D(s) have no common factor, they are said
to be coprime and G(s) = N(s)/D(s) is said to be irreducible. Unless stated other-
wise, every transfer function will be assumed to be irreducible.
We now discuss the computation of the zero-state response. The zero-state re-
sponse of a system is govemed by Y(s) = G(s)U(s). To compute Y(s), we first
compute the Laplace transform of u(t). We then multiply G(s) and U(s) to yield Y(s).
The inverse Laplace transform of Y(s) yields the zero-state response. This is illus-
trated by an example.
Example 2.4.3
Find the zero-state response of (2.25) dueto u(t) = 1, for t :2:: O. This is called the
unit-step response of (2.25). The Laplace transform of u(t) is 1/s. Thus we have
3s - 1 1
Y(s) = G(s)U(s) = · - (2.33)
(s + l)(s + 2) s
To compute its in verse Laplace transform, we carry out the partial fraction expansion
as
3s - 1
+~
_k_¡_ k3
Y(s) +-
(s + l)(s + 2)s S + 1 S + 2 S
where
2~s~s=-1
3s -4
k¡ Y(s) · (s + 4
l)ls=-1 (s + (1)(-1)
3s - 1 1 -7
k2 Y(s) · (s + 2)1s= _ -3.5
2 (s + l)s s= -2 (-1)(-2)
and
3s - 1 -1
k3 - -0.5
Y(s). sls=O (s + l)(s + 2) 1s=O 2
4
If s is very large, (2.32) reduces to G(s) = l/s 2 and G(oo) = O. Thus oo can be considered as a repeated
zero with multiplicity 2. Unless stated otherwise, we consider only finite poles and zeros.
5
Any two polynomials, such as 4s + 2 and 6s + 2, have a constan! as a common factor. Such a common
factor, a polynomial of degree O, is called a trivial commonfactor. We consider only nontrivial common
factors-that is, common factors of degree 1 or higher.
t.··
36 CHAPTER 2 MATHEMATICAL PRELIMINARY
for t ::::::: O. Thus, the use of the Laplace transform to compute the zero-state response
is simple and straightforward.
This example reveals an important fact of the zero-state response. We see from
(2.34) that the response consists of three terms. Two are the inverse Laplace trans-
forms of 1/(s + 2) and 1/(s + 1), which are the poles ofthe system. The remaining
term is dueto the step input. In fact, for any u(t), the response of (2.33) is generally
of the form
y(t) = k 1e-t + k2 e- 2 t + (terms dueto the poles of U(s)) (2.35)
(see Problem 2.20). Thus the poles of G(s) determine the basic form of the zero-
state response.
Exercise 2.4.6
Example 2.4.4
Consider the system in (2.33). Find a bounded input u(t) so that the pole -1 will
not be excited. If U(s) = s + 1, then
3s - 1
Y(s) = G(s)U(s)
(s + 1)(s + 2
) · (s + 1)
3s - 1 3(s + 2) - 7 7
3 ---
S + 2 S + 2 S + 2
which implies
y(t) 38(t) - 7e- 2t
jiiiiP
This response does not contain e-t, thus the pole - 1 is not excited. Therefore if
we introduce a zero in U(s) to cancel a pole, then the pole will not be excited by the
input u(t).
If U(s) is biproper or improper, as is the case for U(s) = s + 1, then its inverse
Laplace transform u(t) will contain an impulse and its derivatives and is not bounded.
In order for u(t) to be bounded, we choose, rather arbitrarily, U(s) = (s + 1)/
s(s + 3), a strictly proper rational function. Its inverse Laplace transform is
u(t) = 31 + 32 e-3t
for t 2::: O and is bounded. The application of this input to (2.33) yields
3s - 1 s + 1 3s - 1
Y(s) = · = -----
(s + 2)(s + 1) s(s + 3) (s + 2)(s + 3)s
7 10
2(s + 2) 3(s + 3) 6s
which implies
7 10 1
y(t) = -e- 2t
- - e -3t
2 3 6
for t 2::: O. The second and third terms are due to the input, the first term is due to
the pole - 2. The term e- 1 does not appear in y(t), thus the pole - 1 is not excited
by the input. Similarly, we can show that the input (s + 2)/s(s + 1) or (s + 2)/
(s + 3) 2 will not excite the pole -2 and the input (s + 2)(s + 1)/s(s + 3) 2 will
not excite either pole.
From this example, we see that whether or not a pole ·will be excited depends
on whether u(t) or U(s) has a zero to cancel it. The Laplace transforms of the unit-
step function and sin w 0 t are
and
S
They have no zero. Therefore, either input will excite all poles of every L TIL system.
The preceding discussion can be extended to the general case. Consider, for
example,
(s + 10)(s + 2)(s - 1?
Y(s) = G(s)U(s) : = s3(s 2?(s + 2 - j2)(s + 2 + j2) U(s)
The transfer function G(s) has poles atO, O, O, 2, 2, and -2 ± j2. The complex
poles - 2 ± j2 are simple poles, the poles O and 2 are repeated poles with multi-
plicities 3 and 2. If G(s) and U(s) have no pole in common, then the zero-state
38 CHAPTER 2 MATHEMATICAL PRELIMINARY
Example 2.4.5
Consider
2
G 1(s)
(s + 1)(s + 1 + j)(s + - j)
0.2(s+ 10)
G2 (s)
(s + 1)(s + 1 + j)(s + - j)
-0.2(s - 10)
G3 (s)
(s + 1)(s + 1 + j)(s + - j)
2
10(s + 0.1s + 0.2)
G4 (s) =
(s + 1)(s + 1 + j)(s + 1 - j)
The transfer function G 1(s) has no zero, G2 (s) and G 3 (s) have one zero, and G4 (s)
has a pair of complex conjugate zeros at - 0.05 ± 0.444}. They all have the same
2.5.---~-/--~~----~--~----~--~----~--~----~--~
' 1 "
'x "~2
1.5 1' "
1 \~4 ~'
1 \
1
0.5 1 1
o
-0.5
-1
-1.5
-2L----L----L---~----~--~----~--~----~--~----~
o 2 3 4 5 6 7 8 9 10
set of poles, and their unit-step responses are all of the form
y(t) = k¡e-t + kze-(l+jl)t + k3e-(l-jl)t + k4
with k3 equal to the complex conjugate of k2 • Their responses are shown in Figure
2.16, respectively, with the so lid line (G 1(s)), dashed line (Gz(s)), dotted line (G 3 (s)),
and dash-and-dotted line (G 4(s)). They are quite different. In conclusion, even though
the poles of G(s) determine the basic form of responses, exact responses are detei-
mined by the poles, zeros, and the input. Therefore, the zeros of a transfer function
cannot be completely ignored in the analysis and design of control systems.
In the analysis and design of control systems, every device is represented by a block
as shown in Figure 2.4 or 2.17(a). The block is then represented by its transfer
function G(s). If the input is u(t) and the output is y(t), then they are related by
Y(s) = G(s)U(s) (2.37)
where Y(s) and U(s) are respectively the Laplace transforms of y(t) and u(t). Note
that we have mixed the time-domain representation u(t) and y(t) and the Laplace
transform representation G(s) in Figure 2.17(a). This convention will be used
throughout this text. It is important to know that it is incorrect to write y(t) =
G(s)u(t). The correct expression is Y(s) = G(s)U(s). 6
Equation (2.37) is an algebraic equation. The product of the Laplace transform
of the input and the transfer function yields the Laplace transform of the output. The
advantage of using this algebraic representation can be seen from the tandem con-
nection oftwo systems shown in Figure 2.17(b). Suppose the two systems are rep-
resented, respectively, by
Y1(s) = G 1(s)U 1(s)
In the tandem connection, we have u2 (t) = y 1 (t) or Uz(s) = Y1 (s) and
Y2 (s) = G 2 (s)Y1(s) = G2 (s)G 1(s)U1(s) (2.38)
u~
-~
(a) (b) (e)
Figure 2.17 (a) A system. (b) Tandem connection of two systems. (e) Reduction of (b ).
6
1t can also be expressed as y(t) = fb g(t - r)u( r)dr, where g(t) is the inverse Laplace transforrn of
G(s). See Reference [18]. This forrn is rarely used in the design of control systems.
40 CHAPTER 2 MATHEMATICAL PRELIMINARY
Thus the tandem connection can be represented by a single block, as shown in Figure
2017(c), with transfer function G(s) : = G2 (s)G 1(s), the product of the transfer func-
tions of the two subsystemso If we use differential equations, then the differential
equation description of the tandem connection will be much more complexo Thus
the use of transfer functions can greatly simplify the analysis and design of control
systemso
The transfer function describes only the zero-state response of a systemo There-
fore, whenever we use the transfer function in analysis and design, the zero-input
response (the response dueto nonzero initial conditions) is completely disregardedo
However, can we really disregard the zero-input response? This question is studied
in this sectiono
Consider a linear time-invariant lumped (LTIL) system described by the differ-
ential equation
D(p)y(t) = N(p)u(t) (2039)
where
D(p) anpn + an~JPn~J + ooo + a¡p + ao
N(p) bmpm + brn~ JPrn~ 1 + ooo + b¡p + bo
and the variable p is the differentiator defined in (2019) and (2020)0 Then the zero-
input response of the system is described by
D(p)y(t) = O (2040)
and the response is dictated by the roots of D(s), called the modes of the system
(Section 20301 )o The zero-state response of the system is described by the transfer
function
N(s)
G(s) : = -
D(s)
and the basic form of its response is govemed by the poles of G(s) (Section 2.4o2)o
The poles of G(s) are defined as the roots of D(s) after canceling the common factors
of N(s) and D(s)o Thus if D(s) and N(s) have no common factors, then
The set of the poles = The set of the modes (2o4l)
In this case, the system is said to be completely characterized by its transfer functiono
If D(s) and N(s) have common factors, say R(s), then the roots of R(s) are nodes of
the system but not poles of G(s)o In this case the roots of R(s) are called the missing
poles of the transfer function, and the system is said to be not completely charac-
terized by its transfer functiono We use examples to illustrate this concept and discuss
its implicationso
.....
2.5 BLOCK REPRESENTATION--COMPLETE CHARACTERIZATION 41
Example 2.5. 1
Consider the system shown in Figure 2.18. The input is a current source. The output
y is the vo1tage across the 2-!1 resistor as shown. The system can be described by
the LTIL differential equation
dy(t) du(t)
- - 0.75y(t) = - - 0.75u(t) (2.42a)
dt dt
(Problem 2.21). The equation can also be written, using p = d/dt, as
(p - 0.75)y(t) = (p - 0.75)u(t) (2.42b)
The mode of the system is the root of (s - 0.75) or 0.75. Therefore its zero-input
response is of the form
y(t) = keo.75t (2.43)
where k depends on the initia1 voltage of the capacitar in Figure 2.18. We see that
if the initial voltage is different from zero, then the response will approach infinity
as t ~ oo.
We will now study its zero-state response. The transfer function of the system
is
S 0.75
G(s) = ---= 1 (2.44)
S 0.75
Because of the common factor, the transfer function reduces to l. Thus the system
has no pole and the zero-state response is y(t) = u(t), for all t. This system is not
completely characterized by its transfer function because the mode 0.75 does not
appear as a pole of G(s). In other words, the transfer function has missing pole 0.75.
If we use the transfer function to study the system in Figure 2.18, we would
conclude that the system is acceptable. In reality, the system is not acceptable, be-
cause if, for any reason, the voltage of the capacitar becomes nonzero, the response
il
u-i 1
+T
2Q y
t
u
(Current
source)
-1
IQ
will grow without bound and the system will either become saturated or bum out.
Thus the system is of no use in practice.
The existence of a missing pole in Figure 2.18 can easily be explained from the
structure of the network. Because of the symmetry of the four resistors, if the initial
voltage of the capacitor is zero, its voltage will remain zero no matter what current
source is applied. Therefore, the removal of the capacitor will not affect the zero-
state response of the system. Thus, the system has a superftuous component as far
as the input and output are concemed. These types of systems are not built in practice,
except by mistake.
Example 2.5.2
Consider the system described by
(p 2 + 2p - 3)y(t) = (p - 2)u(t) (2.45)
considering the response due to nonzero initial conditions. If a system is not com-
pletely characterized by its transfer function, care must be exercised in using the
transfer function to study the system. This point is discussed further in Chapter 4.
Exercise 2.5. 1
Which of the following systems are completely characterized by their transfer func-
tions? If not, find the missing poles.
a. (p 2 + 2p + 1)y(t) (p + 1)p u(t)
b. (p 2
- 3p + 2)y(t) (p 1) u(t)
c. (p 2 - 3p + 2)y(t) = u(t)
IOQ
lO V lO V
(a) (b)
no matter what device is connected to it, the supp1ied vo1tage to the device is a1ways
1O vo1ts. In this case, the connection is said to have no loading problem.
Rough1y speaking, if its transfer function changes after a system is connected
to another system, the connection is said to have a loading effect. For examp1e, the
transfer function from u to y in Figure 2.19(a) is 1 before the system is connected
to any device.lt becomes 5/10 = 0.5 when the system is connected a 10-fl resistor.
Thus, the connection has a 1oading effect. If the tanctem connection of two systems
has a 1oading effect, then the transfer function of the tandem connection does not
equa1 the product of the transfer functions of the two subsystems as deve1oped in
(2.38). This is illustrated by an examp1e.
Example 2.5.3
Consider the networks shown in Figure 2.20(a). The transfer function from u 1 to y 1
of network M 1 is G 1(s) = s j (s + 1). The transferfunction from u 2 to y2 of network
M 2 is G2 (s) = 2/(2 + 3s). Now we connect them together or set y 1 = u2 as shown
in Figure 2.20(b) and compute the transfer function from u 1 to y2 . The impedance
of the paralle1 connection of the impedance s and the impedance (3s + 2) is
s(3s + 2)/(s + 3s + 2) = s(3s + 2)/(4s + 2). Thus the current / 1 shown in
(a) (b)
o----...._--'-0+ !
-1
J"'
L __________ _j
(e)
The loading of the two networks in Figure 2.20 can be easily explained. The
current / 2 in Figure 2.20(a) is zero before the connection; it becomes nonzero after
the connection. Thus, the loading occurs. In electrical networks, the loading often
can be eliminated by inserting an isolating amplifier, as shown in Figure 2.20(c).
The input impedance Z;0 of an ideal isolating amplifier is infinity and the output
impedance Zout is zero. Under these assumptions, / 2 in Figure 2.20(c) remains zero
and the transfer function of the connection is G 2 (s)kG 1(s) with k = l.
The loading problem must be considered in developing a block diagram for a
control system. This problem is considered in detail in the next chapter.
The transfer function describes only the relationship between the input and output
of LTIL systems and is therefore called the input-output description or externa!
description. In this section we shall develop a different description, caBed the state-
variable description or interna! description. Strictly speaking, this description is the
same as the differential equations discussed in Section 2.2. The only difference is
that high-order differential equations are now written as sets of first-order differ-
ential equations. In this way, the study can be simplified.
46 CHAPTER 2 MATHEMATICAL PRELIMINARY
(2.52a)
y(t) (2.52b)
where u and y are the input and output; X¡, i = 1, 2, 3, are called the state variables;
aij• b;, e¡, and d are constants; and i;(t): = dx;(t)/dt. These ,equations are more often
written in matrix form as
x(t) Ax(t) + bu(t) (State equation) (2.53a)
with
["" ""j
a¡z
X
l;:J A az¡
a31
azz
a32
az3
a33
b
[::J (2.54a)
and
e = [c 1 Cz c3] (2.54b)
The vector x is called the state vector or simply the state. If x has n state variables
or is an n X 1 vector, then A is an n X n square matrix, b is an n X 1 column
vector, e is a 1 X n row vector, and d is a 1 X 1 scalar. A is sometimes called the
system matrix and d the direct transmission part. Equation (2.53a) describes the
relationship between the input and state, and is called the state equation. The state
equation in (2.53a) consists of three first-order differential equations and is said to
have dimension 3. The equation in (2.53b) relates the input, state, and output, and
is called the output equation. It is an algebraic equation; it does not involve differ-
entiation of x. Thus if x(t) and u(t) are known, the output y(t) can be obtained simply
by multiplication and addition.
Before proceeding, we remark on the notation. Vectors are denoted by boldface
lowercase letters; matrices by boldface capitalletters. Scalars are denoted by regular-
face lowercase letters. In (2.53), A, b, e, and x are boldface because they are either
vectors or matrices; u, y, and d are regular face because they are scalars.
This text studies mainly single-variable systems, that is, systems with single
input and single output. For multivariable systems, we have two or more inputs
and/ or two and more outputs. In this case, u(t) and y(t) will be vectors and the orders
of b ande and d must be modified accordingly. For example, if a system has three
inputs and two outputs, then u will be 3 X 1; y, 2 X 1; b, n X 3; e; 2 X n; and
d, 2 X 3. Otherwise, the form of state-variable equations remains the same.
The transfer function describes only the zero-state response of systems. Thus,
when we use the transfer function, the initial state or initial conditions of the system
2.6 STATE-VARIABLE EQUATIONS 47
where u(t) is the applied force (input), y(t) is the displacement (output), y(t) : =
dy(t)/ dt, and y(t) : = d 2y(t)/ dt 2.
The potential energy and kinetic energy of a mass are stored in its position and
velocity; therefore the position and velocity will be chosen as state variables. Define
X¡ (t) : = y(t) (2.56a)
and
(2.56b)
Then we have
.i1(t) = y(t) = x 2 (t)
This relation follows from the definition of x 1(t) and x 2 (t) and is independent of the
system. Taking the derivative of x 2 (t) yields
i2(t) = y(t)
1
- [- k 2 y(t) - k 1y(t) + u(t)]
m
k2 k¡ 1
- - x 1(t) - - xit) +- u(t)
m m m
These equations can be arranged in matrix forro as
(2.57a)
y(t) = [1 (2.57b)
Consider the RLC network shown in Figure 2.21. It consists of one resistor, one
capacitor, and one inductor. The input u(t) is a voltage source and the voltage across
the 3-H inductor is chosen as the output.
Step 1: The capacitor voltage x 1(t) and the inductor currents x 2(t) are chosen as
state variables. The capacitor current is 2i 1(t), and the inductor voltage is
3iit).
Step 2: The current passing through the 4-0 resistor clearly equals x 2 (t). Thus, the
voltage across the resistor is 4x2 (t). The polarity of the voltage must be
specified, otherwise confusion may occur.
2.6 STATE-VAR1ABLE EQUATIONS 49
Step 3: From Figure 2.21, we see that the capacitor current 2ẋ1(t) equals x2(t), which implies
ẋ1(t) = 0.5 x2(t)    (2.58)
The voltage across the inductor is, using Kirchhoff's voltage law,
3ẋ2(t) = u(t) − x1(t) − 4x2(t)
or
ẋ2(t) = −(1/3) x1(t) − (4/3) x2(t) + (1/3) u(t)    (2.59)
These equations and the output y(t), the voltage across the 3-H inductor, can be arranged in matrix form as
ẋ(t) = [0  0.5; −1/3  −4/3] x(t) + [0; 1/3] u(t)    (2.60a)
y(t) = [−1  −4] x(t) + u(t)    (2.60b)
Exercise 2.6. 1
Find state-variable descriptions of the networks shown in Figure 2.22 with the state
variables and outputs chosen as shown.
Figure 2.22 Networks.
where I is a unit matrix of the same order as A. Note that without introducing I, (s − A) is not defined, for s is a scalar and A is an n × n matrix. The premultiplication of (sI − A)⁻¹ to (2.62) yields
X(s) = (sI − A)⁻¹x(0) + (sI − A)⁻¹bU(s)    (2.63)
(zero-input response)    (zero-state response)
This response consists of the zero-input response (the response due to nonzero x(0)) and the zero-state response (the response due to nonzero u(t)). If we substitute X(s) into the Laplace transform of (2.61b), then we will obtain
Y(s) = c(sI − A)⁻¹x(0) + [c(sI − A)⁻¹b + d]U(s)    (2.64)
(zero-input response)    (zero-state response)
This is the output in the Laplace transform domain. Its inverse Laplace transform yields the time response. Thus, the response of state-variable equations can be easily obtained using the Laplace transform.
Example 2.7.1
Consider the state-variable equation
ẋ(t) = [−6  −3.5; 6  4] x(t) + [−1; 1] u(t)    (2.65a)
y = [4  5]x    (2.65b)
Find the output due to a unit-step input and the initial state x(0) = [−2  1]′, where the prime denotes the transpose of a matrix or a vector. First we compute
sI − A = [s  0; 0  s] − [−6  −3.5; 6  4] = [s + 6  3.5; −6  s − 4]
Its inverse is
(sI − A)⁻¹ = (1/((s + 6)(s − 4) + 6 × 3.5)) [s − 4  −3.5; 6  s + 6] = (1/(s² + 2s − 3)) [s − 4  −3.5; 6  s + 6]
Thus we have
c(sI − A)⁻¹x(0) = (1/(s² + 2s − 3)) [4s + 14  5s + 16] [−2; 1] = (−3s − 12)/(s² + 2s − 3)
and
G(s) := c(sI − A)⁻¹b = [4  5] (1/(s² + 2s − 3)) [s − 4  −3.5; 6  s + 6] [−1; 1] = (s + 2)/(s² + 2s − 3)    (2.66)
Thus, using (2.64) and U(s) = 1/s, the output is given by
Y(s) = (−3s − 12)/(s² + 2s − 3) + (s + 2)/(s² + 2s − 3) · (1/s) = (−3s² − 11s + 2)/((s − 1)(s + 3)s)
which can be expanded by partial fraction expansion as
Y(s) = −3/(s − 1) + (2/3)/(s + 3) − (2/3)/s
Thus the output is
y(t) = −3e^{t} + (2/3)e^{−3t} − 2/3,   for t ≥ 0
This example shows that the response of state-variable equations can indeed be readily obtained by using the Laplace transform.
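As a numerical cross-check of this example, the sketch below integrates (2.65) directly and compares the result at t = 1 with the closed-form output just obtained. It is only an illustration; the step size and the comparison time are arbitrary choices.

```python
import numpy as np

# Data of Example 2.7.1, equation (2.65): unit-step input and x(0) = [-2 1]'
A = np.array([[-6.0, -3.5],
              [ 6.0,  4.0]])
b = np.array([-1.0, 1.0])
c = np.array([ 4.0, 5.0])
x = np.array([-2.0, 1.0])

dt, t_end = 1e-5, 1.0
for _ in range(int(t_end / dt)):    # forward-Euler integration of (2.65a) with u(t) = 1
    x = x + dt * (A @ x + b * 1.0)

y_numeric = c @ x
y_exact = -3*np.exp(t_end) + (2/3)*np.exp(-3*t_end) - 2/3   # inverse Laplace of Y(s)
print(y_numeric, y_exact)           # the two values closely agree
```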
which becomes, if T = t,
1 (2.70)
(2.71)
Now using (2.67) through (2.71) we can show that the solutions of (2.61) due to u(t) and x(0) are given by
x(t) = e^{At} x(0) + ∫₀ᵗ e^{A(t−τ)} b u(τ) dτ    (2.72)
(zero-input response)   (zero-state response)
y(t) = c e^{At} x(0) + c ∫₀ᵗ e^{A(t−τ)} b u(τ) dτ + d u(t)    (2.73)
To show that (2.72) is the solution of (2.61a), we must show that it meets the initial condition and the state equation. Indeed, at t = 0, (2.72) reduces to x(0) = Ix(0) + 0 = x(0). Thus it meets the initial condition. The differentiation of (2.72) yields
ẋ(t) = Ae^{At}x(0) + bu(t) + ∫₀ᵗ Ae^{A(t−τ)} b u(τ) dτ = A[e^{At}x(0) + ∫₀ᵗ e^{A(t−τ)} b u(τ) dτ] + bu(t) = Ax(t) + bu(t)
This shows that (2.72) is the solution of (2.61a). The substitution of (2.72) into (2.61b) yields immediately (2.73). If u = 0, (2.61a) reduces to
ẋ(t) = Ax(t)
This is called the homogeneous equation. Its solution due to the initial state x(0) is, from (2.72),
x(t) = e^{At} x(0)
This formula was used in (2.66) to compute the transfer function of (2.65). Now we
discuss its general property. Let det stand for the determinant and adj for the adjoint
of a matrix. See Appendix B. Then (2.75) can be written as
G(s) = (1/det(sI − A)) c [adj (sI − A)] b + d
We call det(sI − A) the characteristic polynomial of A. For example, the characteristic polynomial of the A in (2.65) is
Δ(s) = det(sI − A) = s² + 2s − 3    (2.76)
The matrices
[−a1  −a2  −a3  −a4; 1  0  0  0; 0  1  0  0; 0  0  1  0],   [0  0  0  −a4; 1  0  0  −a3; 0  1  0  −a2; 0  0  1  −a1]
and their transposes all have the characteristic polynomial    (2.77)
Δ(s) = s⁴ + a1 s³ + a2 s² + a3 s + a4    (2.78)
This characteristic polynomial can be read out directly from the entries of the matrices in (2.77). Thus, these matrices are called companion forms of the polynomial in (2.78).
Now we use an exarnple to discuss the relationship between the eigenvalues of
a state-variable equation and the poles of its transfer function.
Example 2.8. 1
Consider the state-variable equation
ẋ(t) = [−6  −3.5; 6  4] x(t) + [−1; 1] u(t)    (2.79a)
y = [4  5]x    (2.79b)
Its characteristic polynomial is det(sI − A) = s² + 2s − 3 = (s − 1)(s + 3). Thus its eigenvalues are 1 and −3. The transfer function of (2.79) was computed in (2.66) as
G(s) = (s + 2)/(s² + 2s − 3) = (s + 2)/((s + 3)(s − 1))
Its poles are 1 and −3. The number of the eigenvalues of (2.79) equals the number of the poles of its transfer function.
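This relationship is easy to check numerically. The sketch below computes the eigenvalues of A and the roots of the denominator of G(s); the numerator and denominator polynomials are simply copied from (2.66).

```python
import numpy as np

A = np.array([[-6.0, -3.5],
              [ 6.0,  4.0]])

# Eigenvalues of A = roots of det(sI - A)
print(np.linalg.eigvals(A))          # approximately [1, -3]

# Poles of G(s) = (s + 2)/(s^2 + 2s - 3), taken from (2.66)
den = np.array([1.0, 2.0, -3.0])
print(np.roots(den))                 # also [1, -3]
```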
Example 2.8.2
(2.80a)
y = [ -2 -1]x (2.80b)
where T is called the sampling period. This type of signal is completely specified
by the sequence of numbers u(k), k = O, 1, 2, ... , as shown in Figure 2.23(b). The
signal in Figure 2.23(a) is called a continuous-time or analog signal because it is defined for all time; the signal in Figure 2.23(b) is called a discrete-time signal because it is defined only at discrete instants of time. Note that a continuous-time signal may not be a continuous function of time as shown in the figure.
If we apply a stepwise input to (2.82), the output y(t) is generally not stepwise.
However, if we are interested in only the output y(t) at t = kT, k = O, 1, 2, ... , or
y(k) : = y(kT), then it is possible to develop an equation simpler than (2.82) to
describe y(k). We develop such an equation in the following.
The solution of (2.82) was developed in (2.72) as
x(t) = e^{At} x(0) + ∫₀ᵗ e^{A(t−τ)} b u(τ) dτ
This equation holds for any t, in particular, for t at kT. Let x(k) := x(kT) and x(k + 1) := x((k + 1)T). Then we have
x(k) = e^{AkT} x(0) + ∫₀^{kT} e^{A(kT−τ)} b u(τ) dτ    (2.85)
x(k + 1) = e^{A(k+1)T} x(0) + ∫₀^{(k+1)T} e^{A((k+1)T−τ)} b u(τ) dτ    (2.86)
We rewrite (2.86) as
x(k + 1) = e^{AT} [e^{AkT} x(0) + ∫₀^{kT} e^{A(kT−τ)} b u(τ) dτ] + ∫_{kT}^{(k+1)T} e^{A((k+1)T−τ)} b u(τ) dτ    (2.87)
The term inside the brackets equals x(k). If the input u(t) is stepwise as in (2.83), then u(τ) in the integrand of the last term of (2.87) equals u(k) and can be moved outside the integration. Define α = (k + 1)T − τ. Then dα = −dτ and (2.87) becomes
x(k + 1) = Ā x(k) + b̄ u(k)
with
Ā := e^{AT},   b̄ := (∫₀ᵀ e^{Aα} dα) b
Example 2.9.1
Consider the continuous-time state-variable equation
(2.90a)
y = [2 1]x (2.90b)
First we compute
sI − A = [s + 1  −1; 0  s + 1]
and
L[e^{At}] = (sI − A)⁻¹ = (1/(s + 1)²) [s + 1  1; 0  s + 1] = [1/(s + 1)   1/(s + 1)²; 0   1/(s + 1)]
Its inverse Laplace transform is, using Table A.1,
e^{At} = [e^{−t}   t e^{−t}; 0   e^{−t}]    (2.91)
If T = 0.1, then
y(k) = [2  1]x(k)    (2.92b)
This completes the discretization. This discretized equation is used on digital com-
puters, as will be discussed in Chapter 5, to compute the response of the continuous-
time equation in (2.90).
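Chapter 5 uses software for this discretization; a minimal sketch of the computation is shown below. Because the input vector of (2.90a) is not reproduced here, an assumed b = [0 1]' is used purely for illustration; A, c, and T = 0.1 are as in the example.

```python
import numpy as np
from scipy.linalg import expm

T = 0.1
A = np.array([[-1.0, 1.0],
              [ 0.0,-1.0]])
b = np.array([[0.0], [1.0]])     # assumed input vector, for illustration only
c = np.array([[2.0, 1.0]])

Abar = expm(A * T)                                   # e^{AT}, cf. (2.91)
bbar = np.linalg.solve(A, (Abar - np.eye(2)) @ b)    # A^{-1}(e^{AT} - I) b = (integral of e^{A a} da) b
print(np.round(Abar, 4))     # [[0.9048, 0.0905], [0, 0.9048]]
print(np.round(bbar, 4))
```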
We have used the Laplace transform to compute e^{AT} in (2.91). If T is small, the infinite series in (2.67) may converge rapidly. For example, for e^{AT} in (2.91) with T = 0.1, we have
I + TA + (1/2!)(TA)² = [0.9050  0.0900; 0  0.9050]
and
I + TA + (1/2!)(TA)² + (1/3!)(TA)³ = [0.9048  0.0905; 0  0.9048]
The infinite series converges to the actual values in four terms. Therefore for small
T, the infinite series in (2.67) is a convenient way of computing eAT. In addition to
the Laplace transform and infinite series, there are many ways (at least 17) of com-
puting eAT. See Reference [47]. In Chapter 5, we will introduce computer software
to solve and to discretize state-variable equations.
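The convergence claim is easy to reproduce. The following sketch sums the first four terms of the series in (2.67) for the A and T of Example 2.9.1 and compares the result with a matrix-exponential routine; scipy is used here only as a convenient way to obtain a reference value.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 1.0],
              [ 0.0,-1.0]])
T = 0.1
TA = T * A

series = np.eye(2)
term = np.eye(2)
for k in range(1, 4):            # I + TA + (TA)^2/2! + (TA)^3/3!, four terms of (2.67)
    term = term @ TA / k
    series = series + term

print(np.round(series, 4))       # [[0.9048, 0.0905], [0, 0.9048]]
print(np.round(expm(TA), 4))     # matches to four decimal places
```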
PROBLEMS
2.1. Consider the pendulum system shown in Figure P2.1, where the applied force u is the input and the angular displacement θ is the output. It is assumed that there is no friction in the hinge and no air resistance of the mass. Is the system linear? Is it time-invariant? Find a differential equation to describe it. Is the differential equation LTIL?
Figure P2.1
2.2. Consider the pendulum system in Figure P2.1. It is assumed that sin θ and cos θ can be approximated by θ and 1, respectively, if |θ| < π/4 radians. Find an LTIL differential equation to describe the system.
2.3. a. Consider the system shown in Figure P2.3(a). The two blocks with mass m1 and m2 are connected by a rigid shaft. It is assumed that there is no friction between the floor and wheels. Find a differential equation to describe the system.
b. If the shaft is long and flexible, then it must be modeled by a spring with spring constant k and a dashpot. A dashpot is a device that consists of a piston moving in an oil-filled cylinder; it generates a force proportional to the relative velocity of its two ends.
Figure P2.3
2.4. If a robot arm is long, such as the one on a space shuttle, then it must be modeled as a flexible system, as shown in Figure P2.4. Define
T: Applied torque
k1: Viscous-friction coefficient
k2: Torsional spring constant
Ji: Moment of inertia
θi: Angular displacement
Find an LTIL differential equation to describe the system.
Figure P2.4
2.5. Find LTIL differential equations to describe the networks in Figure P2.5.
(a) (b)
Figure P2.5
2.6. Consider the network shown in Figure P2.6(a), in which T is a tunnel diode with characteristics as shown. Show that if v lies between a and b, then the circuit can be modeled as shown in Figure P2.6(b). Show also that if v lies between c and d, then the circuit can be modeled as shown in Figure P2.6(c),
Figure P2.6
2.9. Consider the differential equation in Problem 2.7. Find the response due to y(0−) = 1, ẏ(0−) = 2, and u(t) = 2, for t ≥ 0.
2.10. Find the transfer function from u to θ of the system in Problem 2.2.
2. 11. Find the transfer functions for the networks in Figure P2.5. Compute them
from differential equations. Also compute them using the concept of Laplacian
impedances.
2.12. Find the transfer functions from u to y, u to y 1 , and u to Y2 of the systems in
Figure P2.3.
2.13. Consider a system. If its zero-state response due to a unit-step input is measured as y(t) = 1 − e^{−2t} + sin t, what is the transfer function of the system?
2.14. Consider a system. If its zero-state response due to u(t) = sin 2t is measured as y(t) = e^{−t} − 2e^{−3t} + sin 2t + cos 2t, what is the transfer function of the system?
2. 15. Consider the LTIL system described by
a. Find the zero-input response due to y(0−) = 1 and ẏ(0−) = 2. What are the modes of the system?
b. Find the zero-state response due to u(t) = 1, for t ≥ 0.
c. Can you detect all modes of the system from the zero-state response?
d. Is the system completely characterized by its transfer function? Will a seri-
ous problem arise if we use the transfer function to study the system?
2.16. Repeat Problem 2.15 for the differential equation
d²y(t)/dt² + 2 dy(t)/dt − 3y(t) = du(t)/dt + 3u(t)
2.17. Considera transfer function G(s) with poles -1 and -2 ± j1, and zero 3.
Can you determine G(s) uniquely? If it is also known that G(2) = - 0.1, can
you now determine G(s) uniquely?
2.18. Find the unit-step response of the following systems and plot roughly the
responses:
a. G1(s) = 2/((s + 1)(s + 2))
b. G2(s) = 20(s + 0.1)/((s + 1)(s + 2))
c. G3(s) = 0.2(s + 10)/((s + 1)(s + 2))
d. G4(s) = −20(s − 0.1)/((s + 1)(s + 2))
e. G5(s) = −0.2(s − 10)/((s + 1)(s + 2))
Which type of zeros, closer to or farther away from the origin, has a larger
effect on responses?
2.19. Find the poles and zeros of the following transfer functions:
a. G(s) = 2(s² − 9)(s + 1)/((s + 3)²(s + 2)(s − 1)²)
b. G(s) = 10(s² − s + 1)/(s⁴ + 2s³ + s + 2)
c. G(s) = (2s² + 8s + 8)/((s + 1)(s² + 2s + 2))
2.20. a. Consider a system with transfer function G(s) = (s − 2)/(s(s + 1)). Show that if u(t) = e^{t}, t ≥ 0, then the response of the system can be decomposed as
Total response = Response due to the poles of G(s) + Response due to the poles of U(s)
b. Does the decomposition hold if u(t) = 1, for t ≥ 0?
c. Does the decomposition hold if u(t) = e^{2t}, for t ≥ 0?
d. Show that the decomposition is valid if U(s) and G(s) have no common
pole and if there are no pole-zero cancellations between U(s) and G(s).
2.21. Show that the network in Figure 2.18 is described by the differential equation
in (2.42).
2.22. Consider the simplified model of an aircraft shown in Figure P2.22. It is assumed that the aircraft is dynamically equivalent at the pitched angle θ0, elevator angle u0, altitude h0, and cruising speed v0. It is assumed that small deviations of θ and u from θ0 and u0 generate forces f1 = k1θ and f2 = k2u, as shown in the figure. Let m be the mass of the aircraft; I, the moment of
Figure P2.22
inertia about the center of gravity P; bθ̇, the aerodynamic damping; and h, the deviation of the altitude from h0. Show that the transfer function from u to h is, by neglecting the effect of I,
G(s) = (k1 k2 l2 − k2 b s)/(m s²(b s + k1 l1))
2.23. Consider a cart with a stick hinged on top of it, as shown in Figure P2.23. This could be a model of a space booster on takeoff. If the angular displacement θ of the stick is small, then the system can be described by
θ̈ = θ + u
y = βθ − u
where β is a constant, and u and y are expressed in appropriate units. Find the transfer functions from u to θ and from u to y. Is the system completely characterized by the transfer function from u to y? By the one from u to θ? Which transfer function can be used to study the system?
Figure P2.23
2.24. Show that the two tanks shown in Figure P2.24(a) can be represented by the
block diagram shown in Figure P2.24(b). Is there any loading problem in the
block diagram? The transfer function of the two tanks shown in Figure 2.12
is computed in Exercise 2.4.2. Is it possible to represent the two tanks in Figure
2.12 by two blocks as in Figure P2.24(b) on page 66? Give your reasons.
2.25. Find state-variable equations to describe the systems in Problems 2.1 and 2.2.
2.28. The soft landing phase of a lunar module descending on the moon can be modeled as shown in Figure P2.28. It is assumed that the thrust generated is proportional to ṁ, where m is the mass of the module. The system can be described by mÿ = −kṁ − mg, where g is the gravity constant on the lunar surface. Define the state variables of the system as x1 = y, x2 = ẏ, x3 = m, and u = ṁ. Find a state-variable equation to describe the system. Is it a time-invariant equation?
Figure P2.24
Figure P2.28
x [~ -~J x + [- :] u
y = [4 5]x
A = [−1  0; 0  −2]
Verify your results by using (2.74).
b. X = [ ~ _ ~J X + [ ~ Ju
y = [2 -2]x
c. x= [ =~ ~ J X + [ ~Ju
y = [1 O O]x
Do they have the same transfer functions? Which are minimal equations?
2.33. Find the outputs of the equations in Problem 2.31 due to a unit -step input and
zero initial state.
y(k) = [1  0]x(k)
due to x(0) = [1  −2]′ and u(k) = 1 for all k ≥ 0.
2.36. Discretize the continuous-time equation with sampling period 0.1 second
y = [1 O]x
2.37. a. The characteristic polynomial of
A = [−6  −3.5; 6  4]
is computed in (2.76) as
Δ(s) = s² + 2s − 3
Verify
Δ(A) = A² + 2A − 3I = 0
This is called the Cayley-Hamilton Theorem. It states that every square matrix meets its own characteristic polynomial.
b. Show that A², A³, ..., can be expressed as linear combinations of I and A. In general, for a square matrix A of order n, A^k for k ≥ n can be expressed as a linear combination of I, A, A², ..., A^{n−1}.
2.38. Show that if (2.67) is used, b̄ in (2.89c) can be expressed as
b̄ = (TI + (T²/2!)A + (T³/3!)A² + ···) b
Development of Block Diagrams for Control Systems
3.1 INTRODUCTION
The design of control systems can be divided into two distinct parts. One is concerned with the design of individual components, the other with the design of overall systems by utilizing existing components. The former belongs to the domain of instrumentation engineers; the latter, the domain of control engineers. This is a control text, so we are mainly concerned with utilization of existing components. Consequently, our discussion of control components stresses their functions rather than their structures.
Control components can be mechanical, electrical, hydraulic, pneumatic, or optical devices. Depending on whether signals are modulated or not, electrical devices again are divided into ac (alternating current) or dc (direct current) devices. Even a cursory introduction of these devices can easily take up a whole text, so this will not be attempted. Instead, we select a number of commonly used control compo-
nents, discuss their functions, and develop their transfer functions. The loading prob-
lem will be considered in this development. We then show how these components
are connected to form control systems. Block diagrams of these control systems are
developed. Finally, we discuss manipulation of block diagrams. Mason's formula is
introduced to compute overall transfer functions of block diagrams.
3.2 MOTORS
Motors are indispensable in many control systems. They are used to turn antennas, telescopes, and ship rudders; to close and open valves; to drive tapes in recorders, and rollers in steel mills; and to feed paper in printers. There are many types of motors: dc, ac, stepper, and hydraulic. The magnetic field of a dc motor can be excited by a circuit connected in series with the armature circuit; it can also be excited by a field circuit that is independent of the armature circuit or by a permanent magnet. We discuss in this text only separately excited dc motors. DC motors used in control systems, also called servomotors, are characterized by large torque to rotor-inertia ratios, small sizes, and better linear characteristics. Certainly, they are more expensive than ordinary dc motors.
where k is a constant. The generated torque is used to drive a load through a shaft.
The shaft is assumed to be rigid. To simplify analysis, we consider only the viscous
friction between the shaft and bearing. Let J be the total moment of inertia of the
load, the shaft, and the rotor of the motor; (}, the angular displacement of the load;
and f, the viscous friction coefficient of the bearing. Then we have, as developed
in (2.5),
T(t) = J d²θ(t)/dt² + f dθ(t)/dt    (3.2)
This describes the relationship between the motor torque and load's angular
displacement.
If the armature current ia is kept constant and the input voltage u(t) is applied
Figure 3.1 DC motor.
to the field circuit, the motor is called a field-controlled dc motor. We now develop its block diagram.
In the field-controlled dc motor, the armature current ia(t) is constant. Therefore, (3.1) can be reduced to
T(t) = k ia(t) if(t) =: kf if(t)    (3.3)
The application of the Laplace transform to (3.3) and (3.4) yields, 1 assuming zero
initial conditions,
T(s) = kf If(s)
Lf s If(s) + Rf If(s) = U(s)
which imply
If(s) = U(s)/(Lf s + Rf)
and
T(s) = kf U(s)/(Lf s + Rf)
Thus, if the generated torque is considered as the output of the motor, then the transfer function of the field-controlled dc motor is
Gm(s) := T(s)/U(s) = kf/(Lf s + Rf)    (3.5)
This transfer function remains the same no matter what load the motor drives. Now
we compute the transfer function of the load. The application of the Laplace trans-
form to (3.2) yields, assuming zero initial conditions,
T(s) = Js²Θ(s) + fsΘ(s)
which can be written as
Gl(s) := Θ(s)/T(s) = 1/(s(Js + f))
This is the transfer function of the load if we consider the motor torque the input,
and the load's angular displacement the output. Thus, the motor and load in
1 Capital letters are used to denote the Laplace transforms of the corresponding lowercase letters. In the case of T(t) and T(s), we use the arguments to differentiate them.
Figure 3.2 Block diagram of field-controlled dc motor.
Figure 3.1 can be modeled as shown in Figure 3.2. It consists of two blocks: One represents the motor; the other, the load. Note that any change of the load will not affect the motor transfer function, so there is no loading problem in the connection.
The transfer function of the field-controlled dc motor driving a load is the product of Gm(s) and Gl(s), or
G(s) = Gm(s)Gl(s) = kf/(s(Lf s + Rf)(Js + f))    (3.6)
     = km/((τf s + 1) s (τm s + 1))
where τf := Lf/Rf, τm := J/f, and km := kf/(Rf f). The constant τf depends only on the electric circuit and is therefore called the motor electrical time constant. The constant τm depends only on the load and is called the motor mechanical time constant. The physical meaning of the time constant will be discussed in the next chapter. If the electrical time constant τf is much smaller than the mechanical time constant τm, as is often the case in practice, then G(s) is often approximated by
G(s) = km/(s(τm s + 1))    (3.7)
For a discussion of this type of approximation, see Reference [18]. Note that Tm
depends only on the load.
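This kind of approximation is easy to examine numerically. The sketch below compares the step responses of (3.6) and (3.7) for illustrative parameter values, not taken from the text, in which the electrical time constant is much smaller than the mechanical one.

```python
import numpy as np
from scipy import signal

km, tau_f, tau_m = 1.0, 0.01, 1.0          # illustrative values with tau_f << tau_m

# Full model (3.6) and simplified model (3.7)
G_full = signal.TransferFunction([km], np.polymul([tau_f, 1.0], [tau_m, 1.0, 0.0]))
G_simp = signal.TransferFunction([km], [tau_m, 1.0, 0.0])

t = np.linspace(0.0, 5.0, 500)
_, y_full = signal.step(G_full, T=t)
_, y_simp = signal.step(G_simp, T=t)
print(np.max(np.abs(y_full - y_simp)))     # small when the electrical time constant is negligible
```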
The field-controlled dc motor is used to drive a load, yet there is no loading problem in Figure 3.2. How could this be possible? Different loads will induce different ia; even for the same load, ia will not be constant. Therefore, the loading problem is eliminated by keeping ia constant. In practice, it is difficult to keep ia constant, so the field-controlled dc motor is rarely used.
where kt := k if(t) is a constant. When the motor is driving a load, a back electromotive force (back emf) voltage vb will develop in the armature circuit to resist the
applied voltage. The voltage vb(t) is linearly proportional to the angular velocity of the motor shaft:
vb(t) = kb dθ(t)/dt    (3.9)
or
Using (3.2), (3.8), and (3.11), we now develop a transfer function for the motor. The substitution of (3.8) into (3.2) and the application of the Laplace transform to (3.2) and (3.11) yield, assuming zero initial conditions,
kt Ia(s) = Js²Θ(s) + fsΘ(s)    (3.12)
Figure 3.3 Block diagram of armature-controlled dc motor.
Exercise 3.2. 1
The input and output of the transfer function in (3.15) are the applied electrical voltage and the angular position of the motor shaft as shown in Figure 3.4(a). If motors are used in velocity control systems to drive tapes or conveyers, we are no longer interested in their angular positions θ. Instead we are mainly concerned with
2 In engineering, it is often said that the electrical time constant is much smaller than the mechanical time constant, thus we can set La = 0. If we write (3.14) as G(s) = km/(s(as + 1)(bs + 1)), then a and b are called time constants. Unlike (3.6), a and b depend on both electrical and mechanical parts of the motor. Therefore, electrical and mechanical time constants in armature-controlled dc motors are no longer as well defined as in field-controlled dc motors.
Figure 3.4 Transfer functions of armature-controlled dc motor.
their angular velocities. Let ω be the angular velocity of the motor shaft. Then we have ω(t) = dθ(t)/dt and W(s) = sΘ(s). The transfer function of the motor from applied voltage u to angular velocity ω is
W(s)/U(s) = km/(τm s + 1)    (3.18)
as shown in Figure 3.4(b). Equation (3.18) differs from (3.15) by the absence of one pole at s = 0. Thus the transfer function of motors can be (3.15) or (3.18), depending on what is considered as the output. Therefore it is important to specify the input and output in using a transfer function. Different specifications lead to different transfer functions for the same system.
Figure 3.5
This is called the final speed or steady-state speed. If we apply a voltage of known magnitude a and measure the final speed ω(∞), then the motor gain constant can be easily obtained as
km = ω(∞)/a    (3.20)
To find the motor time constant, we rewrite (3.19) as
R(t) := −ln(1 − ω(t)/ω(∞)) = t/τm    (3.21)
where ln stands for the natural logarithm. Now if we measure the speed at any t, say t = t0, then from (3.21) we have
τm = −t0 / ln(1 − ω(t0)/ω(∞))    (3.22)
Thus the motor time constant can be obtained from the final speed and one additional
measurement at any t.
Example 3.2. 1
Consideran armature-controlled de motor driving a load. We apply 5 V to the motor.
The angular speed of the load at t0 = 2 s is measured as 30 rad/s and the final speed
is measured as 70 rad/s. What are the transfer functions from the applied voltage to
the speed and from the applied voltage to the displacement?
From (3.20) and (3.22~we have
k = w(oo) = 70 =
14
m a 5
3.2 MOTORS 77
and
-2 -2 -2
Tm 3.57
In 0.57 -0.5596
In ( 1 - ~~)
Thus the transfer functions are
W(s) 14 E>(s) 14
U(s) 3.57s + 1 U(s) s(3.57s + 1)
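The arithmetic of this example is easy to script. The following sketch simply evaluates (3.20) and (3.22) with the measured values given above.

```python
import numpy as np

a = 5.0                  # applied voltage (V)
w_final = 70.0           # measured final speed (rad/s)
t0, w_t0 = 2.0, 30.0     # one additional measurement

km = w_final / a                            # (3.20)
tau_m = -t0 / np.log(1.0 - w_t0 / w_final)  # (3.22)
print(km, tau_m)         # 14.0 and about 3.57
```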
Exercise 3.2.2
In (3.22), in addition to the final speed, we used only one datum to compute τm. A more accurate τm can be obtained by using more data. We plot R(t) in (3.21) for a number of t as shown in Figure 3.5(b), and draw from the origin a straight line passing through these points. Then the slope of the straight line equals 1/τm. This is a more accurate way of obtaining the motor time constant. This method can also check whether or not the simplified transfer functions in (3.18) and (3.15) can be used. If all points in the plot are very close to the straight line, then (3.18) and (3.15) are adequate. If not, more accurate transfer functions such as the one in (3.14) must be used to describe the motor.
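A least-squares version of this straight-line fit is sketched below. The speed data are synthetic, generated only to illustrate the procedure; the definition of R(t) follows (3.21) as reconstructed above.

```python
import numpy as np

# Synthetic speed measurements, for illustration only (true tau_m = 3.57, final speed = 70)
tau_true = 3.57
t = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])
w = 70.0 * (1.0 - np.exp(-t / tau_true)) + np.random.normal(0.0, 0.3, t.size)

R = -np.log(1.0 - w / 70.0)           # R(t) as in (3.21); ideally a straight line of slope 1/tau_m
slope = np.sum(t * R) / np.sum(t**2)  # least-squares straight line through the origin
print(1.0 / slope)                    # estimate of tau_m using all data points
```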
The problem of determining Tm and km in (3.18) from measurements is called
parameter estimation. In this problem, the form of the transfer function is assumed
to be known, but its parameters are not known. We then determine the parameters
from measurements. This is a special case of the identification problem where neither
the form nor the parameters are assumed to be known. If no noise exists in measurements, then the identification problem is not difficult. See Reference [15]. However, in practice, noise often arises in measurements. This makes the problem dif-
ficult. For a different method of identification, see Section 8.3.2.
Figure 3.6
3.3 GEARS
Gears are used to convert high speed and small torque into low speed and high
torque or the converse. The most popular type of gear is the spur gear shown in
Figure 3.6(a). Let ri be the radius; Ni, the number of teeth; θi, the angular displacement; and Ti, the torque for gear i, i = 1, 2. Clearly, the number of teeth on a gear is linearly proportional to its radius; hence N1/r1 = N2/r2. The linear distances traveled along the surfaces of both gears are the same, thus θ1 r1 = θ2 r2. The linear forces developed at the contact point of both gears are equal, thus T1/r1 = T2/r2. These equalities can be combined to yield
θ2/θ1 = r1/r2 = N1/N2 = T1/T2    (3.23)
This is a linear equation as shown in Figure 3.6(b) and is obtained under idealized conditions. In reality, backlash exists between coupled gears. So long as the gears are rotating in one direction, the teeth will remain in contact. However, reversal of the direction of the driving gear will disengage the teeth and the driven gear will remain stationary until reengagement. Therefore, the relationship between θ1 and θ2 should be as shown in Figure 3.6(c), rather than in Figure 3.6(b). Keeping the backlash small will increase the friction between the teeth and wear out the teeth faster. On the other hand, an excessive amount of backlash will cause what is called the chattering, hunting, or limited-cycle problem in control systems. To simplify analysis and design, we use the linear equation in (3.23).
Consider an armature-controlled dc motor driving a load through a gear train as shown in Figure 3.7(a). The numbers of teeth are N1 and N2. Let the total moment
3 Here we assume that the mass of the gear is zero, or that the gear has no inertia.
Figure 3.7
of inertia (including rotor, motor shaft, and gear 1) and viscous friction coefficient on the motor shaft be, respectively, J1 and f1, and those on the load shaft be J2 and f2. The torque generated by the motor must drive J1, overcome f1, and generate a torque T1 at gear 1 to drive the second gear. Thus we have
Tmotor(t) = J1 d²θ1(t)/dt² + f1 dθ1(t)/dt + T1(t)    (3.24)
Because θ2 = (N1/N2)θ1, the torque required at the load shaft is
T2 = J2 (N1/N2) d²θ1(t)/dt² + f2 (N1/N2) dθ1(t)/dt    (3.25)
The substitution of this T2 into T1 = (N1/N2)T2 and then into (3.24) yields
Tmotor = J1 d²θ1(t)/dt² + f1 dθ1(t)/dt + J2 (N1/N2)² d²θ1(t)/dt² + f2 (N1/N2)² dθ1(t)/dt    (3.26)
       = J1eq d²θ1(t)/dt² + f1eq dθ1(t)/dt
where
J1eq := J1 + J2 (N1/N2)²   and   f1eq := f1 + f2 (N1/N2)²    (3.27)
This process transfers the load and friction on the load shaft into the motor shaft as
shown in Figure 3.7(b). Using this equivalent diagram, we can now compute the
transfer function and develop a block diagram for the motor and load.
Once the load and friction on the load shaft are transferred to the motor shaft, we can simply disregard the gear train and the load shaft. If the armature inductance La is assumed to be zero, then the transfer function of the motor and load from u to θ1 is, as in (3.15),
G(s) = Θ1(s)/U(s) = km/(s(τm s + 1))    (3.28)
with
km := kt/(Ra f1eq + kt kb)    (3.29a)
and
τm := Ra J1eq/(Ra f1eq + kt kb)    (3.29b)
Figure 3.8
The transfer function from θ1 to θ2 is N1/N2. Thus the block diagram of the motor driving a load through a gear train is as shown in Figure 3.8(a). We remark that because of the loading, it is not possible to draw a block diagram as shown in Figure 3.8(b).
3.4 TRANSDUCERS
Transducers are devices that convert signals from one form to another-for example,
from a mechanical shaft position, temperature, or pressure to an electrical voltage.
They are also called sensing devices. There are all types of transducers such as
thermocouples, strain gauges, pressure gauges, and others. We discuss in the follow-
ing only potentiometers and tachometers.
Potentiometers
The potentiometer is a device that can be used to convert a linear or angular dis-
placement into a voltage. Figure 2.14 shows a wire-wound potentiometer with its
characteristic. The potentiometer converts the angular displacement θ(t) into an electric voltage v(t) described by
v(t) = kθ(t)    (3.30)
where k is a constant and depends on the applied voltage and the type of potentiometer used. The application of the Laplace transform to (3.30) yields
V(s) = kΘ(s)    (3.31)
Thus the transfer function of the potentiometer is a constant. Figure 3.9 shows three
commercially available potentiometers.
Tachometers
The tachometer is a device that can convert a velocity into a voltage. It is actually a generator with its rotor connected to the shaft whose velocity is to be measured. Therefore a tachometer is also called a tachogenerator. The output v(t) of the tachometer is proportional to the shaft's angular velocity; that is,
v(t) = k dθ(t)/dt    (3.32)
where θ(t) is the angular displacement and k is the sensitivity of the tachometer, in volts per radian per second. The application of the Laplace transform to (3.32) yields
G(s) = V(s)/Θ(s) = ks    (3.33)
Thus the transfer function from θ(t) to v(t) of the tachometer is ks.
As discussed in Section 2.4.1, improper transfer functions will amplify high-
frequency noise and are not used in practice. The transfer function of tachometers
is improper, therefore its employment must be justified. A tachometer is usually
attached to a shaft-for example, the shaft of a motor as shown in Figure 3.10(a).
Although the transfer function of the tachometer is improper, the transfer function
from u to y 1 is
Figure 3.10 Motor and tachometer.
Figure 3.11 Potentiometer and differentiator.
as shown in Figure 3.10(b). It is strictly proper. Thus electrical noise entered at the armature circuit will not be amplified. The transfer function from motor torque T(t) to y1 is, from Figure 3.2,
(1/(s(Js + f))) · ks = k/(Js + f)
which is again strictly proper. Thus, mechanical noise, such as torque generated by
gusts, is smoothed by the moment of inertia of the motor. In conclusion, tachometers
will not amplify electrical and mechanical high-frequency noises and are widely
used in practice. See also Problem 3.17. Note that the block diagram in Figure 3.10(b)
can also be plotted as shown in Figure 3.10(c). This arrangement is useful in com-
puter computation and operational amplifier circuit realization, as will be discussed
in Chapter 5. We mention that the arrangement shown in Figure 3.11 cannot be used
to measure the motor angular velocity. Although the potentiometer generates the signal kθ(t), it also generates high-frequency noise n(t) due to brush jumps, wire irregularities, and variations of contact resistance. The noise is greatly amplified by the differentiator and overwhelms the desired signal k dθ/dt. Thus the arrangement cannot be used in practice.
The transfer function of a tachometer is ks only if its input is displacement. If its input is velocity ω(t) = dθ(t)/dt, then its transfer function is simply k. In velocity control systems, the transfer function of motors is km/(τm s + 1) as shown in Figure 3.4(b). In this case, the block diagram of a motor and a tachometer is as shown in Figure 3.10(d). Therefore, it is important to specify what are the input and output of each block.
Error Detectors
Every error detector has two input terminals and one output terminal. The output signal is proportional to the difference of the two input signals. An error detector can be built by connecting two potentiometers as shown in Figure 3.12(a). The two potentiometers may be located far apart. For example, one may be located inside a room and the other, attached to an antenna on the rooftop. The input signals θi and θo are mechanical positions, either linear or rotational; the output v(t) is a voltage signal. They are related by
v(t) = k[θi(t) − θo(t)]    (3.34)
Figure 3.12
or
The operational amplifier is one of the most important circuit elements. It is built in integrated-circuit form. It is small, inexpensive, and versatile. It can be used to build buffers, amplifiers, error detectors, and compensating networks, and is therefore widely used in control systems.
The operational amplifier is usually represented as shown in Figure 3.13(a) and is modeled as shown in Figure 3.13(b). It has two input terminals. The one with a "−" sign is called the inverting terminal and the one with a "+" sign the noninverting terminal. The output voltage vo equals A(vi2 − vi1), and A is called the open-loop gain. The resistor Ri in Figure 3.13(b) is called the input resistance and
(a) (b)
Figure 3.13 Operational amplifier.
Ro, the output resistance. Ri is generally very large, greater than 10⁴ Ω, and Ro is very small, less than 50 Ω. The open-loop gain A is very large, usually over 10⁵, in low frequencies. Signals in op-amps are limited by supply voltages, commonly ±15 V. Because of this limitation, if A is very large or infinity, then we have
vi1 = vi2    (3.35)
This equation implies that the two input terminals are virtually short-circuited. Because Ri is very large, we have
ii = 0    (3.36)
This equation implies that the two input terminals are virtually open-circuited. Thus the two input terminals have the two conflicting properties: open-circuit and short-circuit. Using these two properties, operational amplifier circuits can easily be analyzed.
Consider the operational amplifier circuit shown in Figure 3.14(a). Because of the direct connection of the output terminal and inverting terminal, we have vi1 = vo. Thus we have, using the short-circuit property, vo = vi2. It means that the output voltage is identical to the input voltage vi2. One may wonder why we do not connect vo directly to vi2, rather than through an op-amp. There is an important reason for doing this. The input resistance of op-amps is very large and the output resistance is very small, so op-amps can isolate the circuits before and after them, and thus eliminate the loading problem. Therefore, the circuit is called a voltage follower, buffer, or isolating amplifier and is widely used in practice.
Consider the circuit shown in Figure 3.14(b) where Z1 and Zf are two impedances. The open-circuit property ii = 0 implies i1 = −if. Thus we have⁴
(Vi − Vi1)/Z1 = (Vi1 − Vo)/Zf    (3.37)
4 Because impedances are defined in the Laplace transform domain (see Section 2.4), all variables must be in the same domain. We use capital letters to denote the Laplace transforms of the corresponding lowercase variables.
Figure 3.14
Because the noninverting terminal is grounded and because of the short-circuit property, we have Vi1 = Vi2 = 0. Thus (3.37) becomes
Vo = −(Zf/Z1) Vi    (3.38)
Exercise 3.5.1
Show that the transfer function of the circuit in Figure 3.14(d) equals −1/RCs. Thus, the circuit can act as an integrator. Show that the transfer function of the circuit in Figure 3.14(e) equals −RCs. Thus, the circuit can act as a pure differentiator. Integrators and differentiators can be easily built by using operational amplifier circuits. However, differentiators so built may not be stable and cannot be used in practice. See Reference [18]. Integrators so built are stable and are widely used in practice.
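A quick symbolic check of these two results uses the inverting-amplifier relation (3.38). The sketch below assumes that circuit (d) has the resistor at the input with the capacitor in the feedback path, and that circuit (e) is the reverse, as the exercise statement implies.

```python
import sympy as sp

s, R, C = sp.symbols('s R C', positive=True)

Z_R = R              # impedance of a resistor
Z_C = 1 / (C * s)    # impedance of a capacitor

# Inverting configuration of Figure 3.14(b): Vo/Vi = -Zf/Z1, from (3.38)
integrator     = sp.simplify(-Z_C / Z_R)   # R at the input, C in the feedback path
differentiator = sp.simplify(-Z_R / Z_C)   # C at the input, R in the feedback path
print(integrator)        # -1/(C*R*s)
print(differentiator)    # -C*R*s
```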
Consider the op-amp circuit shown in Figure 3.15. The noninverting terminal is grounded, so vi1 = vi2 = 0. Because of ii = 0, we have
if = −(i1 + i2 + i3)
Vo/R = −(V1/(R/a) + V2/(R/b) + V3/(R/c))
Thus we have
vo = −(a v1 + b v2 + c v3)    (3.39)
where vw is the voltage generated by a tachometer. Note the polarity of the tach-
ometer output. The output of the second op-amp, an inverting amplifier, is u = -e,
thus we have u = r - vw. This is an error detector. In conclusion, op-amp circuits
are versatile and widely used. Because of their high input resistances and low output
resistances, their connection will not cause the loading problem.
Figure 3.16
3.6 BLOCK DIAGRAMS OF CONTROL SYSTEMS
In this section, we show how block diagrams are developed for control systems.
Example 3.6. 1
Consider the control system shown in Figure 3.17(a). The load could be a telescope or an antenna and is driven by an armature-controlled dc motor. The system is designed so that the actual angular position of the load will follow the reference signal. The error e between the reference signal r and the controlled signal y is detected by a pair of potentiometers with sensitivity k1. The dotted line denotes mechanical coupling, therefore their signals are identical. The error e is amplified by a dc amplifier with gain k2 and then drives the motor. The block diagram of this system is shown in Figure 3.17(b). The diagram is self-explanatory.
Figure 3.17
Example 3.6.2
In a steel or paper mill, the products are moved by rollers, as shown in Figure 3.18(a). In order to maintain a prescribed tension, the roller speeds are kept constant and equal to each other. This can be achieved by using the control system shown in Figure 3.18(b). Each roller is driven by an armature-controlled dc motor. The desired
Figure 3.18
Example 3.6.3
where c depends on the insulation and the temperature difference between inside and outside the chamber. To simplify analysis and design, c is assumed to be a positive constant. This means that if no steam is pumped into the chamber, the temperature will decrease at the rate of e^{−ct}. The application of the Laplace transform
Figure 3.19
Exercise 3.6. 1
5
May be skipped without loss of continuity.
Example 3.6.4
In order to point the antenna toward the earth or the solar panels toward the sun, the attitude or orientation of a satellite or space vehicle must be properly controlled. This can be achieved using gas jets or reaction wheels. Because of the unlimited supply of electricity through solar panels, reaction wheels are used if the vehicle will travel on long journeys. Three sets of reaction wheels are needed to control the orientation in three-dimensional space, but they are all identical.
A reaction wheel is actually a flywheel; it may be simply the rotor of a motor. It is assumed to be driven by an armature-controlled dc motor as shown in Figure 3.20(a) and (b). The case of the motor is rigidly attached to the vehicle. Because of the conservation of momentum, if the reaction wheel turns in one direction, the satellite will rotate in the opposite direction. The orientation and its rate of change can be measured using a gyro and a rate gyro. The block diagram of the control system is shown in Figure 3.20(c).
We derive in the following the transfer function G(s) of the space vehicle and
Figure 3.20
motor. Let the angular displacements, with respect to the inertial coordinate, of the vehicle and the reaction wheel be respectively y and θ, as shown in Figure 3.20(a). They are chosen, for convenience, to be in opposite directions. Clearly, the relative angular displacement of the reaction wheel (or the rotor of the motor) with respect to the vehicle (or the stator or case of the motor) is y + θ. Thus the armature circuit is governed by, as in (3.10),
Ra ia(t) + La dia(t)/dt + vb(t) = u(t)    (3.42)
Let J and f be the moment of inertia and the viscous friction coefficient on the motor shaft. Because the friction is generated between the motor shaft and the bearing that is attached to the satellite, the friction equals f(dθ/dt + dy/dt). Thus the torque equation is
T(t) = kt ia = J d²θ(t)/dt² + f(dθ/dt + dy/dt)    (3.44)
Let the moment of inertia of the vehicle be Jv. Then the conservation of angular momentum implies
Jv dy/dt = J dθ/dt    (3.45)
Using (3.42) through (3.45), we can show that the transfer function from u to y equals
G(s) = Y(s)/U(s) = kt J / (s[(La s + Ra)(J Jv s + (J + Jv)f) + kt kb (J + Jv)])    (3.46)
Example 3.6.5
Consider the industrial robot shown in Figure 3.21(a). The robot has a number of joints. It is assumed that each joint is driven by an armature-controlled dc motor through gears with gear ratio n = N1/N2. Figure 3.21(b) shows a block diagram of the control of a joint, where the block diagram in Figure 3.3 is used for the motor. The J and f in the diagram are the total moment of inertia and viscous friction coefficient reflected to the motor shaft by using (3.27). The compensator is a
Figure 3.21
3.7 MANIPULATION OF BLOCK DIAGRAMS
Once a block diagram for a control system is obtained, the next step in analysis is to simplify the block diagram to a single block or, equivalently, to find the overall transfer function. It is useless to analyze an individual block, because the behavior of the control system is affected only indirectly by the individual transfer functions. The behavior is dictated completely by its overall transfer function.
Two methods are available to compute overall transfer functions: block diagram manipulation and employment of Mason's formula. We discuss first the former and then the latter. The block diagram manipulation is based on the equivalent diagrams shown in Table 3.1. The first pair is concerned with summers. A summer must have two or more inputs and one and only one output. If a summer has three or more
Table 3.1 Equivalent Block Diagrams
inputs, it can be separated into two summers as shown. A terminal can be branched out, at branching points, into several signals as shown in the second pair of Table 3.1, with all signals equal to each other. A summer or a branching point can be moved around a block as shown from the third to the sixth pair of the table. Their equivalences can be easily verified. For example, for the fourth pair, we have Y = GU1 ± U2 for the left-hand-side diagram and Y = G(U1 ± U2/G) = GU1 ± U2 for the right-hand-side diagram. They are indeed equivalent.
The last three pairs of Table 3.1 are called, respectively, the tandem, parallel,
and feedback connections of two blocks. The reduction of the tandem and parallel
connections to single blocks is very simple. We now reduce the feedback connection
toa single block. Note that if W is fed into the summer with a positive sign (that is,
E = U + W), it is called positive feedback. If W is fed into the summer with a
negative sign (that is, E = U - W), it is called negative feedback. We derive only
the negative feedback part. Let the input of G 1 be denoted by E and the output of
G2 by W as shown in the left-hand-side diagram of the last pair of Table 3.1. Then
we have
E = U − W = U − G2Y    (3.47)
and
Y = G1E = G1(U − G2Y)    (3.48)
Thus we have
Y/U = G1/(1 + G1G2)    (3.49)
This is the transfer function from U to Y for the negative feedback part, as shown in the right-hand-side diagram of the last pair of Table 3.1. In conclusion, the transfer function of the feedback connection is G1/(1 + G1G2) for negative feedback and G1/(1 − G1G2) for positive feedback. They are important formulas, and should be remembered.
Now we use an example to illustrate the use of Table 3.1 to compute overall
transfer functions of block diagrams.
Example 3.7.1
Consider the block diagram shown in Figure 3.22(a). We first use entry 9 in Table
3.1 to simplify the inner positive feedback loop as shown in Figure 3.22(b). Note
that the direct feedback in Figure 3.22(a) is the same as feedback with transfer
function 1 shown in Figure 3.22(b). Using entries 7 and 9, we have
Figure 3.22
Y/R = (G1 · G2/(1 − G2G3)) / (1 + G1 · G2/(1 − G2G3)) = G1G2/(1 − G2G3 + G1G2)    (3.50)
This is the overall transfer function from R to Y in Figure 3.22(a).
A block diagram can be manipulated in many ways. For example, we can move the second summer in Figure 3.22(a) to the front of G1, using entry 4 of Table 3.1, to yield the block diagram in Figure 3.22(c), which can be redrawn, using entry 1 of Table 3.1, as shown in Figure 3.22(d). Note that the positive feedback has been changed to a negative feedback, but we also have introduced a negative sign into the feedback block. The two feedback paths are in parallel and can be combined as shown in Figure 3.22(e). Thus we have
Y/R = G1G2 / (1 + G1G2(1 − G3/G1))
which is the same as (3.50), as expected.
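The reduction can also be checked symbolically by writing down the signal equations of Figure 3.22(a), as interpreted from the reduction steps of this example, and solving them. The sketch below is only an illustration of that check.

```python
import sympy as sp

G1, G2, G3, r, y, e, v = sp.symbols('G1 G2 G3 r y e v')

# Signal equations of Figure 3.22(a) as interpreted above
eqs = [sp.Eq(e, r - y),            # outer summer, unity negative feedback
       sp.Eq(v, G1*e + G3*y),      # inner summer, positive feedback through G3
       sp.Eq(y, G2*v)]             # forward block G2

sol = sp.solve(eqs, [e, v, y], dict=True)[0]
print(sp.simplify(sol[y] / r))     # G1*G2/(G1*G2 - G2*G3 + 1), i.e. (3.50)
```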
Exercise 3. 7. 1
Use block manipulation to find the transfer functions from r to y of the block dia-
grams in Figure 3.23.
Figure 3.23
cation. For example, if a block diagram has no loop, as in the tandem and parallel connections in Table 3.1, then Δ = 1. If a block diagram has only nontouching loops, then the formula terminates at the first parentheses. If a block diagram does not have three or more mutually nontouching loops, then the formula terminates at the second parentheses. For example, the block diagram in Figure 3.24 does not have three or more mutually nontouching loops, and we have
Δ = 1 − (−G1G6 − G4G5 + G3G4G7 − G1G3G4 + G2G4) + ((−G1G6)(−G4G5) + (−G1G6)(G2G4))    (3.53)
For easy reference, we call Δ the characteristic function of the block diagram. It is an inherent property of a block diagram and is independent of input and output terminals.
Now we introduce the concept of forward paths. It is defined for specific input/
output pairs. A forward path from input r to output y is any connection of unidirec-
tional branches and blocks from r to y along which no point is encountered more
than once. A forward path gain is the product of all transfer functions along the path
including signs at summers. For the block diagram in Figure 3.24, the r-y in-
put/output pair has two forward paths, with gains P 1 = G 1G 3 G4 and P 2 = - G2G4 •
The p-y input/output pair has only one forward path, with gain - G4 • A loop touches
a forward path if they have at least one point in common.
With these concepts, we are ready to introduce Mason's formula. The formula states that the overall transfer function from input v to output w of a block diagram is given by
Gwv(s) := W(s)/V(s) = (Σi Pi Δi)/Δ    (3.54)
where Δ is defined as in (3.52), Δi is Δ with the gains of the loops that touch the ith forward path set to zero, and the summation is to be carried out for all forward paths from v to w. If there is no forward path from v to w, then Gwv = 0. Now we use the formula to compute Gyr and Gyp for the block diagram in Figure 3.24. The Δ for the diagram was computed in (3.53). There are two forward paths from r to y, with P1 = G1G3G4 and P2 = −G2G4. Because all loops touch the first forward path, we set all loop gains to zero in (3.53) to yield
Δ1 = 1
All loops except the one with loop gain −G1G6 touch the second forward path, therefore we have
Δ2 = 1 − (−G1G6) = 1 + G1G6
Thus we have
Gyr = Y(s)/R(s) = (P1Δ1 + P2Δ2)/Δ = (G1G3G4 − G2G4(1 + G1G6))/Δ    (3.55)
Now we find the transfer function from p to y in Figure 3.24. The Δ for the block diagram was computed in (3.53). There is only one forward path from p to y, with gain P1 = −G4. Because all loops except the loop with loop gain −G1G6 touch the path, we have
Δ1 = 1 − (−G1G6) = 1 + G1G6
Thus the transfer function from p to y in Figure 3.24 is, using Mason's formula,
Gyp = Y(s)/P(s) = −G4(1 + G1G6)/Δ    (3.56)
Exercise 3.7.2
Exercise 3. 7.3
Exercise 3. 7.4
Repeat Exercise 3.7.1 by using Mason's formula. Are the results the same?
Figure 3.26
These transfer functions are often referred to as closed-loop transfer functions. Now
we open the loop at e as shown, and the resulting diagram becomes the one in Figure
3.26(b). The transfer functions from r to y, from p 1 to y, and from p 2 to y now
become
They are called open-loop transfer functions. We see that they are quite different
from the closed-loop transfer functions in (3.57). This is not surprising, because the
block diagrams in Figure 3.26 represent two distinctly different systems. Unless
stated otherwise, all transfer functions in this text refer to closed-loop or overall
transfer functions.
PROBLEMS
3.1. Consider a generator modeled as shown in Figure P3.1(a). The generator is driven by a diesel engine with a constant speed (not shown). The field circuit is assumed to have resistance Rf and inductance Lf; the generator has internal resistance Rg. The generated voltage vg(t) is assumed to be linearly proportional to the field current if(t), that is, vg(t) = kg if(t). Strictly speaking, vg(t) should appear at the generator terminals A and B. However, this would cause a loading problem in developing its block diagram. In order to eliminate this problem,
Figure P3.1
we assume that vg(t) appears as shown in Figure P3.1(a) and combine Rg with the load resistance RL. Show that the generator can now be represented by the block diagram in Figure P3.1(b) and there is no loading problem in the block diagram. Where does the power of the generator come from? Does the flow of power appear in the block diagram?
3.2. Consider an armature-controlled dc motor driving a load as shown in Figure P3.2. Suppose the motor shaft is not rigid and must be modeled as a torsional spring with constant ks. What is the transfer function from u to θ? What is the transfer function from u to ω = dθ/dt?
Figure P3.2
3.3. The combination of the generator and motor shown in Figure P3.3 is widely used in industry. It is called the Ward-Leonard system. Develop a block diagram for the system. Find the transfer functions from u to vg and from vg to θ.
Figure P3.3 Ward-Leonard system.
3.4. a. The system in Figure P3.4(a) can be used to control the voltage vo(t) of a generator that is connected to a load with resistance RL. A reference signal r(t), which can be adjusted by using a potentiometer, is applied through an amplifier with gain k1 to the field circuit of the generator. Draw a block diagram for the system. The system is called an open-loop voltage regulator.
b. The system in Figure P3.4(b) can also be used to control the voltage vo(t). It differs from Figure P3.4(a) in that part of the output voltage, or v1 = kp vo, is subtracted from r(t) to generate an error voltage e(t). The error signal e(t) is applied, through an amplifier with gain k1, to the field circuit of the
generator. Develop a block diagram for the system. The system is called a
closed-loop or feedback voltage regulator.
Figure P3.4
kt = 1 N·m·A⁻¹
N1/N2 = 1/2,   J2 = 12 N·m·rad⁻¹·s²,   f2 = 0.2 N·m·rad⁻¹·s
J1 = 0.1 N·m·rad⁻¹·s²,   f1 = 0.01 N·m·rad⁻¹·s
Draw a block diagram for the system. Also compute their transfer functions.
3.6. Consider the gear train shown in Figure P3.6(a). Show that it is equivalent to
the one in Figure P3.6(b) with
J1eq = J1 + J2 (N1/N2)² + J3 (N1N3/(N2N4))²
3.7. Find the transfer functions from V¡ to V 0 ofthe op-amp circuits shown in Figure
P3.7.
(a) (b)
Figure P3.6
Figure P3.7
3.8. Consider the operational amplifier circuit shown in Figure P3.8. Show that the
output vo equals
vo = (Rf/R)(vB − vA)
Figure P3.8
3.9. Develop a block diagram for the system shown in Figure P3.9. Compute also
the transfer function of each block.
Amplifier
Figure P3.9
3.10. Consider the temperature control system shown in Figure P3.10. It is assumed that the heat q pumped into the chamber is proportional to u and the temperature y inside the chamber is related to q by the differential equation dy(t)/dt = −0.3y(t) + 0.1q(t). Develop a block diagram for the system and compute the transfer function of each block.
Figure P3.10
Figure P3.11
Figure P3.12
3.13. In industry, a robot can be designed to replace a human operator to carry out
repeated movements. Schematic diagrams for such a robot are shown in Figure
P3.13. The desired movement is first applied by an operator to the joystick
shown. The joystick activates the hydraulic motor and the mechanical arm.
The movement of the arm is recorded on a tape, as shown in Figure P3.13(a).
The tape can then be used to control the mechanical arm, as shown in Figure
P3.13(b). It is assumed that the signal x is proportional to u, and the transfer
function from x to y is km/s(Tms + 1). Draw a block diagram for the system
in Figure P3.13(b). Indicate also the type of transfer function for each block.
Figure P3.13
3.14. Use Table 3.1 to reduce the block diagrams shown in Figure P3.14 to single
blocks.
(a)
(b)
(e)
Figure P3.14
3.16. Use Mason's formula to compute the transfer functions from r 1 to y and r 2 to
y of the block diagram shown in Figure P3.16.
Figure P3. 16
3.17. Consider the block diagram shown in Figure P3.17. It is the connection of the block diagram of an armature-controlled dc motor and that of a tachometer. Suppose noises may enter the system as shown. Compute the transfer functions from n1 to w and from n2 to w. Are they proper transfer functions?
Figure P3.17
Quantitative and Qualitative Analyses of Control Systems
4.1 INTRODUCTION
Once block diagrams of control systems are developed, the next step is to carry out
analysis. There are two types of analysis: quantitative and qualitative. In quantitative
analysis, we are interested in the exact response of control systems due to a specific
excitation. In qualitative analysis, we are interested in general properties of control
systems. We discuss first the former and then the latter.
1 The radius of the reel changes from a full reel to an empty reel, therefore a constant angular velocity will not generate a constant linear tape speed. For this reason, the angular velocity of motor shafts that drive expensive compact disc (CD) players is not constant. The angular velocity increases gradually as the pick-up reads from the outer rim toward the inner rim so that the linear speed is constant.
4.2 FIRST-ORDER SYSTEMS-THE TIME CONSTANT
Figure 4.1
km/(τm s + 1), as is shown in Figure 3.4(b). Thus the transfer function from r to w is
W(s)/R(s) = k1 km/(τm s + 1)    (4.1)
If r(t) = a, for t ≥ 0, then W(s) = k1 km a/(s(τm s + 1)), which implies
w(t) = a k1 km (1 − e^{−t/τm})    (4.2)
for t ≥ 0. This response, shown in Figure 4.2(a), is called the step response; it is called the unit-step response if a = 1. Because e^{−t/τm} approaches zero as t → ∞, we have
ws := lim (t → ∞) w(t) = a k1 km
This is called the steady-state or final speed. If the desired speed is wr, by choosing a as a = wr/(k1 km), the motor will eventually reach the desired speed.
In controlling the tape, we are interested in not only the final speed but also the speed of response; that is, how fast the tape will reach the final speed. For the first-order transfer function in (4.1), the speed of response is dictated by τm, the time constant of the motor. We compute
t = τm:    e^{−t/τm} = (0.37)¹ = 0.37
t = 2τm:   e^{−t/τm} = (0.37)² = 0.14
t = 3τm:   e^{−t/τm} = (0.37)³ = 0.05
t = 4τm:   e^{−t/τm} = (0.37)⁴ = 0.02
t = 5τm:   e^{−t/τm} = (0.37)⁵ = 0.007
4.2 FIRST-ORDER SYSTEM5-THE TIME CONSTANT 113
w(t)
o
(a) (b)
and plot e-t/Tm in Figure 4.2(b). We see that if t :2:: 5Tm, the value of e-t/Tm is less
. than 1% of its original va1ue. Therefore, the speed of the motor will reach and stay
within 1% of its final speed in 5 time constants. In engineering, the system is often
considered to have reached the final speed in 5 time constants.
The system in Figure 4.1 is an open-loop system because the actuating signal
u(t) is predetermined by the reference signal r(t) and is independent of the actual
motor speed. The motor time constant Tm of this system depends on the motor and
load (see (3.17)). For a given load, once a motor is chosen, the time constant is
fixed. If the time constant is very large, for example, Tm = 60 s, then it will take
300 seconds or 5 minutes for the tape to reach the final speed. This speed of response
+ Amplifier +
e u
B - k¡
f +
(a)
Desire~~ w
~
f
(b)
Figure 4.3 Feedback control system.
114 CHAPTER 4 QUANTITATIVE ANO QUALITATIVE ANALYSES OF CONTROL SYSTEMS
is much too slow. In this case, the only way to change the time constant is to choose
a larger motor. lf a motor is properly chosen, a system with good accuracy and fast
response can, be designed. This type of open-loop system, however, is sensitive to
plant perturbation and disturbance, as is discussed in Chapter 6, so it is used only
in inexpensive or low-quality speed control system·s.
We now discuss a different type of speed control system. Consider the system
shown in Figure 4.3(a). A tachometer is connected to the motor shaft and its output
is combined with the reference input to generate an error signal. From the wiring
shown, we have e = r - f. The block diagram is shown in Figure 4.3(b). Because
the actuating signal u depends not only on the reference input but also the actual
plant output, this is a closed-loop or feedback control system. Note that in developing
the transfer function of the motor, if the moment of inertia of the tachometer is
negligible compared to that of the load, it may be simply disregarded. Otherwise, it
must be included in computing the transfer function of the motor and load.
The transfer function from r to w of the feedback control system in Figure 4.3
is
k¡km
G 0 (S)
W(s) TmS + k¡km
R(s) k¡kmk2 TmS + k 1k2km + 1
+
TmS + 1 (4.3)
where
(4.4a)
t4.4b)
This transfer function has the same formas (4.1). lf r(t) = a, then we have, as in
(4.2),
(4.5)
and the steady-state speed is ak0 k 1• With a properly chosen, the tape can reach a
desired speed. Furthermore, it will reach the desired speed in 5 X T0 seconds.
The time constant T0 of the feedback system in Figure 4.3 is Tm/(k 1k 2km + 1).
It now can be controlled by adjusting k 1 or k2. For example, if Tm = 60 and km =
1, by choosing k 1 = 10 and k2 = 4, we have T0 = 60/(40 + 1) = 1.46, and the
tape will reach the final'Speed in 5 X 1.46 = 7.3 seconds. Thus, unlike the open-
loop system in Figure 4.1, the time constant and, more generally, the speed of re-
sponse of the feedback system in Figure 4.3 can be easily controlled.
4.2 FIRST-ORDER SYSTEMS--THE TIME CONSTANT 115
which implies
w(t) 2.5aé' - 2.5a
w
¡.-...,.-y
We see that if a o¡f O, the term 2.5aé1 approaches infinity as t ~ oo. In other
l-VOrds, the motor shaft speed will increase without bounds and the motor will
bum out or disintegrate. For the simple system in Figure 4.3, this phenomenon
will not happen for negative feedback. However, for more complex systems,
the same phenomenon may happen even for negative feedback. Therefore, care
must be exercised in using feedback. This is the stability problem and will be
discussed later.
Exercise 4.2. 1
Consider (4.6).1f k 1k2 km = 1 and r(t) = 10- 2 , for t;::: O, what is the speed w(t) of
the motor shaft? What is its final speed?
[Answers: k 1kmt/l00rm, infinity.]
Y(s)
R(s)
(4.7)
This is called a quadratic transfer function with a constant numerator. In this section
we study its unit-step response.
The transfer function G (s) has two poles and no zero. Its poles are
0
(4.9)
Ims
s-=o
s< l
s= cose
s-plane
Figure 4.5 Poles of quadratic system.
factor; and wd, the damped or actual frequency. Clearly if the damping ratio ~ is O,
the two poles ±jw11 are pure imaginary. If O < ~ < 1, the two poles are complex
conjugate and so forth as listed in the following:
In order to see the physical significance of ~, 0', wd, and W11 , we first compute the
unit-step response of (4.8) forO ::5 ~ ::5 l. If r(t) 1, for t :=:::O, then R(s) = 1/s
and
w2n w2n
Y(s)
sz + 2~W11 S + wzn S (s + O' + jwd)(s + O' - jwd)s
k, kz k*2
+ +
S S + O' + jwd S + O' - jwd
with
118 CHAPTER 4 QUANTITATIVE ANO QUALITATIVE ANALYSES OF CONTROL SYSTEMS
and
w2
k~ = n
• (- 2jwd)(a - jwd)
where k'i is the complex conjugate of k 2• Thus the unit-step response of (4.8) is
y(t) = 1 + k2e-(<r+Jwd)t + k'ie-(<r-Jwd)t
(4.10)
where
V1 - ~2
(} = cos- 1 ~ = tan- 1 = sin- 1 Y1 - ~2 (4.11)
~
The angle (} is shown in Figure 4.5. We plot sin (wdt + 8) and e-<Tt in Figure 4.6(a)
and (b). The point-by-point product of (a) and (b) yields e-m sin (wdt + 8). We
see that the frequency of oscillation is determined by wd, the imaginary part of the
poles in (4.9). Thus, wd is called the actual frequency; it is the distance of the poles
from the real axis. Note that wn is called the natural frequency; it is the distance of
the poles from the origin. The envelope of the oscillation is determined by the
damping factor a, the real part of the poles. Thus, the poles of G 0 (s) dictate the
response of e-m sin (wdt + 8), and consequently of y(t) in (4.10). The unit-step
response y(t) approaches 1 as t ~ oo. Thus the final value of y(t) is l.
We now compute the maximum value or the peak of y(t). The differentiation
of y(t) yields
dy(t) ú) •
- - = a---.!! e-<rt sm (wdt + 8) - wne-<rt cos (wdt + 8)
dt ú)d
(4.12)
By comparing (4.11) and (4.12), we conclude that the solutions of (4.12) are
k= O, 1, 2, ...
Thus the stationary points of y(t) occur at t = k'TT/ wd, k = O, 1, .... We plot y(t)
in Figure 4. 7 for various damping ratios ~· From the plot, we see that the peak occurs
atk=1or
1T 1T
4.3 SECOND-ORDER SYSTEMS 119
1/0'
(b)
(e)
This is the peak of y(t). It depends only on the damping ratio (. If Ymax is larger than
the final value y( oo) = 1, the response is said to have an overshoot. From Figure
4.7, we see that the response has an overshoot if ( < l. If ( ~ 1, then there is no
overshoot.
We consider again the unit-step response y(t) in (4.10). If the damping ratio (
is zero, or o- = (wn = O, then e- m sin (wdt + 8) reduces to a pure sinusoidal
function and y(t) will remain oscillatory for all t. Thus the system in (4.8) is said to
120 CHAPTER 4 QUANTITATIVE ANO QUALITATIVE ANALYSES OF CONTROL SYSTEMS
2
// --...... o
1.8 ,/ s=ol \. s=
1.6
1.4
/¡::¿;;':\, ·--,
1.2 //1
!/ 1
s= o.5 '\
\,_
~
;-..
0.8
: ¡'/
------
0.6
0.4 ------
-----
0.2
v' \. ',, __
o
o 2 3 4 5 6 7 8 9 10
be undamped. If O < ( < 1, the response y(t) contains oscillation whose envelope
decreases with time as shown in Figure 4.7. In this case, the system in (4.8) is said
to be underdamped. If ( > 1, the two poles of (4.8) are real and distinct and the
unit -step response of (4. 8) will contain neither oscillation nor overshoot. In this case,
the system is said to be overdamped. The system is said to be critically damped if
( = l. In this case, the two poles are real and repeated, and the response is on the
verge of having overshoot or oscillation.
The step response of (4.8) is dictated by the natural frequency wn and the damp-
ing ratio (. Because the horizontal coordinate of Figure 4. 7 is wn t, the larger wn, the
faster the response. The damping ratio govems the overshoot; the smaller (, the
larger the overshoot. We see from Figure 4. 7 that, if ( is in the neighborhood of O. 7,
then the step response has no appreciable overshoot and has a fairly fast response.
Therefore, we often design a system to have a damping ratio of 0.7. Because the
response also depends on wn, we like to control both wn and (in the design.
Example 4.3. 1
The transfer function of the automobile suspension system shown in Figure 2.7 is
1/m
G(s) = ------==-----
ms2 + k 1s + k2 2 k¡ kz
S +-S+
m m
4.3 SECOND-ORDER SYSTEMS 121
If the shock absorber is dead (that is, it does not generate any friction), or k 1 = O,
then the damping ratio ? is zero. In this case, the car will remain oscillatory after
hitting a pothole and the car will be difficult to steer. By comparing the transfer
function of the suspension system and (4.8), we have
k2 = w~ k¡
- = 2?w
m m n
If we choose ? = 0.7 and wn = 2, then from Figure 4.7, we can see that the
automobile will take about 2 seconds to retum to the original horizontal position
after hitting a pothole and will hardly oscillate. To have these values, k 1 and k2 must
be
k 1 = 2 · 0.7 · 2m = 2.8m
Thus, the suspension system of an automobile can be controlled by using suitable
k1 and k2 •
Exercise 4.3. 1
Find the damping ratio, damping factor, natural frequency, and actual frequency of
the following systems. Also classify them in terms of dampedness.
9
a. G(s) = ---=2- -
2s + 9
9
b. G(s) = .....,2- - - -
s + 3s + 9
9
c. G(s) = - - - - -
s2 + 12s + 9
[Answers: (a) O, O, v4.5, v4.5, undamped; (b) 0.5, 1.5, 3, 2.6, underdamped;
(e) ? = 2, wn = 3, the other two not defined, overdamped.]
With the preceding discussion, we are now ready to study the position control
system in (4.7). Its block diagram was developed in Figure 3.17(b) and is repeated
in Figure 4.8(a). In the block diagram, km and Tm are fixed by the motor and load.
The amplifier gain k2 clearly can be adjusted; so can the sensitivity of the error
detector (by changing the power supply E). Although both k 1 and k2 can be changed,
because
and ?=
only one of wn and ? can be arbitrarily assigned. For example, if k1 and k2 are chosen
so that wn = 10, we may end up with ? = 0.05. If k 1 and k2 are chosen so that
122 CHAPTER 4 QUANTITATIVE AND QUALITATIVE ANALYSES OF CONTROl SYSTEMS
-r + y
(a)
- r + u
s(Tms+ 1)
y
(b)
? = 2, we may end up with wn = 0.2. Their step responses are shown in Figure
4.9. The former has too much oscillation; the latter has no oscillation but is much
too sloW. Thus both responses are not satisfactory. How to choose k1 and k2 to yield
a satisfactory response is a design problem and will be discussed in later chapters.
Exercise 4.3.2
(a) Consider the position control system in (4.7). Suppose Tm = 4 and km = 0.25;
find k1 and k2 so that wn = 0.25. What is ?? Use Figure 4.7 to sketch roughly its
unit-step response. (b) Can you find k 1 and k2 so that wn = 0.25 and ? = 0.7?
[Answers: k1k2 = 1, ? = 0.5, no.]
y(t)
..------~----------------------------------------··············(b)·------------··
o 2 3 4
Figure 4.9 Step responses.
4.4 TIME RESPONSES OF POLES 123
Exercise 4.3.3
Suppose the position control system in Figure 4.8(a) cannot achieve the design ob-
jective. We then introduce an additional tachometer feedback with sensitivity k3 as
shown in Figure 4.8(b). Show that its transfer function from r to y is
k 1k2 km
Y(s) Tm wzn
G 0 (s) -
R(s)
sz + (1 + kzk3km) S+ k 1k2km
Tm Tm
sz + 2?wns + wzn
From the preceding two sections, we see that poles of overall transfer functions
essentially determine the speed of response of control systems. In this section, we
shall discuss further the time response of poles.
Poles can be real or complex, simple or repeated. It is often convenient to plot
them on the complex plane or s-planeas shown in Figure 4.10. Their corresponding
responses are also plotted. The s-plane can be divided into three parts: the right half
plane (RHP), the left half plane (LHP) and the pure imaginary axis or jw-axis. To
avoid possible confusion whether the RHP includes the jw-axis or not, we shall use
the following convention: The open RHP is the RHP excluding the jw-axis and the
closed RHP is the RHP including the jw-axis. If a pole lies inside the open LHP,
then the pole has a negative real part; its imaginary part can be positive or negative.
If a pole lies inside the closed RHP, then the pole has a positive or zero real part.
Poles and zeros are usually plotted on the s-plane using crosses and circles. Note
that no zeros are plotted in Figure 4.10. Consider 1/(s + at or the pole at -a with
multiplicity n. The pole is a simple pole if n = 1, a repeated pole if n > l. To
simplify the discussion, we assume a to be real. Its time response, using Table A.1,
is
(4.14)
If the pole -a is in the open RHP, ora < O, then its response increases exponentially
to infinity for n = 1, 2. . . . . If the pole is at the origin, or a = O, and is simple,
then its response is a step function. If it is repeated, with multiplicity n ;::: 2, then
its response is tn- 1/n!, which approaches infinity as t ~ oo. If the real pole is
in the open LHP, ora> O, and is simple, then its response is e-at, which decreases
124 CHAPTER 4 QUANTITATIVE AND QUALITATIVE ANALYSES OF CONTROL SYSTEMS
Ims
~~.
o
-a
'L=_,
o
(a)
lms
1
)¡(
/
= eut COSOJ.t
¡
(b)
the product of t, which goes to oo, and e-ar, which goes toO as t ~ oo. Therefore,
it requires sorne computation to find its value at t ~ oo. We use l'Hópital's rule to
compute
1
lim - - = O
{-'HXJ aeaf
Thus, as plotted in Figure 4.10(a), the time response te-at approaches zero as
t ~ oo. Similarly, we can show
as t ~ oo
for a > O, and n = 1, 2, 3, .... This is dueto the fact that the exponential e-ar,
with a > O, approaches zero with a rate much faster than the rate at which tn
approaches infinity. Thus, we conclude that the time response of any simple or
repeated real pole that lies inside the open LHP approaches O as t ~ oo.
The situation for complex conjugate poles is similar to the case of real poles
with the exception that the responses go to O or oo oscillatorily. Therefore we will
not rep~at the discussion. Instead we summarize the preceding discussion in the
following table:
Open LHP o o
Open RHP ±oo
Origin (sn) A constan!
jw-axis((s 2 + a 2n A sustained oscillation
This table implies the following facts, which will be used later.
l. The time response of a pole, simple or repeated, approaches zero as t ~ oo if
and only if the pole lies inside the open LHP or has a negative real part.
2. The time response of a pole approaches a nonzero constant as t ~ oo if and only
if the pole is simple and located at s = O.
4.5 STABILITY
stability. In this text, we study only BIBO stability. Therefore, the adjective BIBO
will be dropped.
A function u(t) defined for t 2: O is said to be bounded if its magnitude does
not approach infinity or, equivalently, there exists a constant M such that
lu(t)l :::; M < oo
for all t 2: O.
o Definition 4. 1
A system is stable if every bounded input excites a bounded output. Otherwise
the system is said to be unstable. •
Example 4.5. 1
Consider the network shown in Figure 4.11(a). The input u is a current source; the
output y is the voltage across the capacitar. Using the equiva1ent Laplace transform
circuit in Figure 4.11 (b ), we can readily obtain
s·-
s S
Y(s) - - U(s) - --
2
U(s) (4.15)
1 S + 1
S+-
S
which implies
y(t) = sin t
lt is bounded. If we apply the bounded input u(t) = sin at, for t 2: O, where a is a
positive real constant and a #- 1, then the output is
S a as[(s 2 + a 2) - (s 2 + 1)]
Y(s) = 2
s + 1 · s2 + a2 (a 2 - 1)(s 2 + 1)(s 2 + a2)
a S a S
a2 - -~ a2 - 1 s2 + a2
+T 1
+T
u 1F lH y U(s) - Y(s)
S
-1 -1
(a) (b)
which implies
a
y(t) = a 2 _ [cos t - cos at]
1
lt is bounded for any a =F- l. Thus the outputs due to the bounded inputs u(t) = 1
and sin at with a =F- 1 are all bounded. Even so, we cannot conclude the stability of
the network because we have not yet checked every possible bounded input. In fact,
the network is not stable, because the application of u(t) = sin t yields
S 1 S
Y(s) = s2 + 1 . s2 + 1
y(t) = 21 t sm. t
This output y(t) approaches positive or negative infinity as t ____,. oo. Thus the bounded
input u(t) = sin t excites an unbounded output, and the network is not stable.
Exercise 4.5. 1
THEOREM 4.1
A system with proper rational transfer function G(s) is stable if and only if every
pole of G(s) has a negative real part or, equivalently, lies inside the open left
half s-plane. •
By open left half s-plane, we mean the left half s-plane excluding the jw-axis.
This theorem implies that a system is unstable if its transfer function has one or
more poles with zero or positive real parts. This theorem can be argued intuitively
by using Table 4.1. If a transfer function has one or more open right half plane poles,
then most bounded inputs will excite these poles and their responses will approach
128 CHAPTER 4 QUANTITATIVE ANO QUALITATIVE ANALYSES OF CONTROL SYSTEMS
infinity. If the transfer function has a simple pole on the imaginary axis, we may
apply a bounded input whose Laplace transform has the same pole. Then its response
will approach infinity. Thus a stable system cannot have any pole in the closed right
half s-plane. For a proof of the theorem, see Reference [15] or [18].
We remark that the stability of a system depends only on the poles of its transfer
function G(s) and does not depend on the zeros of G(s). If all poles of G(s) lie inside
the open LHP, the system is stable no matter where the zeros of G(s) are. For
convenience, a pole is called a stable po/e if it lies inside the open LHP or has a
negative real part. A pole is called an unstable po/e if it lies inside the closed RHP
or has a zero or positive real part. A zero that lies inside the open LHP (closed RHP)
will be called a minimum-phase (nonminimum-phase) zero. The reason for using
such names will be given in Chapter 8.
Now we shall employ Theorem 4.1 to study the stability of the network in Figure
4.11. The transfer function, as developed in (4.15), is
S
G(s) = s2 + 1
Its poles are ±j; they have zero real part and are unstable poles. Thus, the network
is not stable.
Most control systems are built by interconnecting a number of subsystems. In
studying the stability of a control system, there is no need to study the stability of
its subsystems. All we have to do is to compute the overall transfer function and
then apply Theorem 4.1. We remark that a system can be stable with unstable sub-
systems and vice versa. For example, consider the system in Figure 4.12(a). It con-
sists of two subsystems with transfer functions - 2 and 1/ (s + 1). Both subsystems
are stable. However, the transfer function of the overall feedback system is
-2
S + 1 -2 -2
G 0 (S)
-2 S+ 1 - 2 S - 1
+--
S + 1
which is unstable. The overall system shown in Figure 4.12(b) is stable because its
transfer function is
2
S - 1 2 2
2 S - 1 + 2 S + 1
+ S - 1
~ ~
-~¡ ~
(a) (b)
Its subsystem has transfer function 2/(s - 1) and is unstable. Thus the stability of
a system is independent of the stabilities of its subsystems. Note that a system with
transfer function
s 2 - 2s - 3
G(s) = - - - - - - - - (4.16)
(s + 2)(s - 3)(s + 10)
is stable, because 3 is nota pole of G(s). Recall that whenever we encounter rational
functions, we reduce them to irreducible ones. Only then are the roots of the denom-
inator poles. Thus, the poles of G(s) = (s - 3)(s + 1)/(s + 2)(s - 3)(s + 10)
= (s + 1)/(s + 2)(s + 10) are -2 and -10. They are both stable poles, and the
transfer function in (4.16) is stable.
Exercise 4.5.2
2H IH
(a) (b)
Considera system with transfer function G(s) = N(s)/D(s). It is assumed that N(s)
and D(s) have no common factor. To determine the stability of G(s) by using Theo-
130 CHAPTER 4 QUANTITATIVE ANO QUALITATIVE ANALYSES OF CONTROL SYSTEMS
rem 4.1, we must first compute the poles of G(s) or, equivalently, the roots of D(s).
If the degree of D(s) is three or higher, hand computation of the roots is complicated.
Therefore, it is desirable to have a method of determining stability without solving
for the roots. We now introduce such a method, called the Routh test or the Routh-
Hurwitz test.
o Definition 4.2
A polynomial with real coefficients is called a Hurwitz polynomial if all its roots
have negative real parts. •
where a¡, i = O, 1, ... , n, are real constants. If the leading coefficient an is negative,
we may simply multiply D(s) by - 1 to yield a positive an. Note that D(s) and
- D(s) have the same set of roots; therefore, an > O does not impose any restriction
on D(s).
are all positive, then it is Hurwitz (see Exercise 4.6.2). In conclusion, for a poly-
nomia1 of degree 1 or 2 with a positive leading coefficient, the condition that all
coefficients are positive is necessary and sufficient for the polynomial to be Hurwitz.
However, for a polynomial of degree 3 or higher, the condition is necessary but not
4.6 THE ROUTH TEST 131
Note that b40 is always zero. The result is placed at the right hand side of the second
row. We then discard the first element, which is zero, and place the remainder in the
third row as shown in Table 4.2. The fourth row is obtained in the same manner
from its two previous rows. That is, we compute k4 = b51 /b41 , the ratio of the first
elements of the second and third rows, and then subtract the product of the third row
and k4 from the second row:
We drop the first element, which is zero, and place the remainder in the fourth row
as shown in Table 4.2. We repeat the process until the row corresponding to s0 =
1 is obtained. If the degree of D(s) is n, there should be a total of (n + 1) rows. The
table is called the Routh table.
We remark on the size of the table. If n = deg D(s) is even, the first row has
one more entry than the second row. If n is odd, the first two rows have the same
number of entries. In either case, the number of entries decreases by one at odd
powers of s. For example, the number of entries in the rows of s 5 , s 3 , and s is one
less than that of their preceding rows. We also remark that the rightmost entries of
the rows corresponding to even powers of s are the same. For example, in Table
4.2, we have b64 = b43 = b22 = b01 = a0 •
2
The presentation is slightly different from the cross-product method; it requires less computation and is
easier to program on a digital computer. See Problem 4.16.
132 CHAPTER 4 QUANTITATIVE ANO QUALITATIVE ANALYSES OF CONTROL SYSTEMS
It is clear that if all the entries of the table are positive, so are all the entries in
the first column. It is rather surprising that the converse is also true. In employing
the theorem, either condition can be used. A proof of this theorem is beyond the
scope of this text and can be found in Reference [18]. This theorem implies that if
a zero ora negative number appears in the table, then the polynomial is not Hurwitz.
In this case, it is unnecessary to complete the table.
Example 4.6. 1
Consider 2s 4 + s3 + 5s 2 + 3s + 4. We form
s4 2 5 4:
2 r---1
k3 s3 1 3 1
1 [O -1 4] (1st row) - k3 (2nd row)
1
1
s2 -1 4 1
1
Clearly we have k3 = 2/1, the ratio of the first entries of the first two rows. The
result of subtracting the product of the second row and k3 from the first row is placed
on the right hand side of the s 3-row. We drop the first element, which is zero, and
4.6 THE ROUTH TEST 133
put the rest in the s 2-row. A negative number appears in the table, therefore the
polynomial is not Hurwitz.
Example 4.6.2
k4 - s4 3 2: 1
[O O] (1st row) - k4 (2nd row)
1
s3 o 1
1
A zero appears in the tab1e, thus the polynomial is not Hurwitz. The reader is advised
to complete the table and verify that a negative number appears in the first column.
Example 4.6.3
2
k4 s4 3 1.5 [O 1] = (1st row) - k4(2nd row)
1
1
k3 - s3 [O 2 1.5] = (2nd row) - ki3rd row)
1
1 1
k2 s2 2 1.5 1
1 [O 0.25] = (3rd row) - k2(4th row)
2 1
1
-----1
2
k¡ S! 0.25 [O 1.5] (4th row) - k1(5th row)
0.25
so 1.5
Every entry in the table is positive, therefore the polynomial is Hurwitz.
Exercise 4.6. 1
c. 2s 4 + 2s 3 +~< + 2 }
d. 2s + 5s
4 3
+ ~s 2 ~
e. s 5 + 3s 4 + 10s 3 + 12s 2 + 7s + 3
[Answers: No, no, no, yes, yes.]
Exercise 4.6.2
Show that a polynomial of degree 2 is Hurwitz if and only if the three coefficients
of a 2 s 2 + a 1s + a0 are of the same sign.
Exercise 4.6.3
The most important application of the Routh test is to check the stability of
control systems. This is illustrated by an example.
Example 4.6.4
Consider the system shown in Figure 4.14. The transfer function from r to y is, using
Mason' s formula,
Y(s)
R(s)
2s + 1
s + 2 s(s 2 + 2s + 2)
- 2(s- 1) 2s + ]
1 [ (s
- + 1)(s 2 + 2s + 2) (s + 2)s(s 2 + 2s + 2)
(2s + 1)(s + 1)
(s + 1)(s + 2)s(s 2 + 2s + 2) + 2(s - 1)s(s + 2) + (2s + l)(s + 1)
4.6 THE ROUTH TEST 135
y
s(s 2 + 2s + 2)
To conclude this section, we mention that the Routh test can be used to deter-
mine the number of roots of D(s) lying in the open right half s-plane. To be more
specific, if none of the entries in the first column of the Routh table is zero, then the
number of changes of signs in the first column equals the number of open RHP roots
of D(s). This property will not be used in this text and will not be discussed further.
Example 4.6.5
kz = -
1
3
s2 3 2 + 8k [o 4 - 2 ~ 8k] [o 10
3
8k]
9 10 - 8k
k¡ SI [O 2 + 8k]
10 8k 3
so 2 + 8k
The conditions for G 0 (s) to be stable are
10 8k
--->0 and 2 + 8k >o
3
These two inequalities imply
10 -2
1.25 =->k and k>-=
8 -025 (4.21)
8 o
They are plotted in Figure 4.16(a). From the plot we see that if 1.25 >k> -0.25,
then k meets both inequalities, and the system is stable.
Example 4.6.6
Consider again Figure 4.15. If
(s - 1 + j2)(s - 1 - j2) s2 - 2s + 5
G(s) = --------"-----"-'--- (4.22)
(s - 1)(s + 3 + j3)(s + 3 - j3) s 3
+ 5s 2
+ l2s - 18
then the overall transfer function is
s 2 - 2s + 5
k·~--~~--------
kG(s) s3 + 5s 2 + 12s - 18
G 0 (s)
+ kG(s) s - 2s + 5
2
+ k . --::----------:::----------
s3 + 5s 2 + 12s - 18
k(s 2 - 2s + 5)
3
s + (5 + k)s 2 + (12 - 2k)s + 5k - 18
We form the Routh tab1e for its denominator:
s3 12 - 2k
5 + k
s2 5 + k 5k- 18 [o (12 - 2k) - 5k- 18] =: [O
5 + k
x]
5 + k
s' X [O 5k - 18]
X
SO 5k- 18
The x in the tab1e requires sorne manipulation:
(12 - 2k)(5 + k) - (5k - 18) -2k2 - 3k + 78
X=
5 + k 5 + k
-2(k + 7.04)(k - 5.54)
5 + k
Thus the conditions for G 0 (s) to be stable are
5 +k> o 5k - 18 >O
and
- 2(k + 7.04)(k - 5.54)
x= >O
5 + k
These three inequalities imply
18
k> -5 k>-= 3.6 (4.23a)
5
and
(k + 7.04)(k - 5.54) <o (4.23b}
138 CHAPTER 4 QUANTITATIVE AND QUALITATIVE ANALYSES OF CONTROL SYSTEMS
Exercise 4.6.4
y y
)'
(a) (b)
Generally speaking, every control system is designed so that its output y(t) will track
a reference signal r(t). For sorne problems, the reference signal is simply a step
function, a polynomial of degree O. For others, the reference signal may be more
complex. For example, the desired altitude of the landing trajectory of a space shuttle
may be as shown in Figure 4.18. Such a reference signal can be approximated by
r(t) = r0 + r 1t + r2 t 2 + ··· + rmtm
a polynomial of t of degree m. Clearly, the larger m, the more complex the reference
signal that the system can track. However, the system will also be more complex.
that is, y(t) will track r(t) as t approaches infinity. This is called asymptotic tracking,
and the response
Ys(t) : = lim y(t)
/--'>00
is called the steady-state response. We will now compute the steady-state response
of stable systems due to polynomial inputs.
Consider a system with transfer function
(4.24)
with n ~ m. lt is assumed that GJs) is stable, or that all the poles of G 0 (s) have
negative real parts. If we apply the reference input r(t) = a, for t ~ O, then the
output is given by
f3o + {3¡ S + · · · + {3 sm a
Y(s) = G0 (s)R(s) = m n ·-
a0 + a 1s + · · · + ans s
k
= - + (terms dueto the poles of GJs))
S
k= G(s)~·sl s~O
o S
If the system is stable, then the response dueto every pole of G0 (s) will approach
zero as t ~ oo. Thus the steady-state response of the system dueto r(t) = a is
f3o
Ys(t) = lim y(t) G 0 (0)a = - · a (4.25)
ao
140 CHAPTER 4 QUANTITATIVE ANO QUALITATIVE ANALYSES OF CONTROL SYSTEMS
R(s)
and
Y(s)
k¡
= - + -k22 + (terms dueto the poles of G (s)) 0
s s
with, using (A.8c) and (A.8d),
k2 = Go(s);. s21s=O
and
k1 = ~G0 (S)als=O
(ao + a¡S + + ansn)(/3 1 + · · · + mf3msm-l)
=a [
(a0 + a 1s + · · · + ans n2)
(f3o + {3¡s + · · · + f3msm)(a 1 + · · · +
(ao + a¡S + ... + ansn)2
or
(4.26b)
This steady-state response depends only on the coefficients of G0 (s) associated with
s0 and s.
We discuss now the implications of (4.25) and (4.26). lf G 0 (0) = 1 or a0 = {30
and if r(t) = a, t ;:::: O, then
y,(t) = a = r(t)
Thus the output y(t) will track asymptotiéally any step reference input. lf G 0 (0) =
1 and G~(O) = O or a 0 = {3 0 and a 1 = {3 1, and if r(t) = at, t ;:::: O, then (4.26)
reduces to
Ys(t) = at
,
4.7 STEADY-STATE RESPONSE OF STABLE SYSTEMs-POLYNOMIAL INPUTS 141
that is, y(t) will track asymptotically any ramp reference input. Proceeding forward,
if
and (4.27)
then the output of G 0 (s) will track asymptotically any acceleration reference input
at 2 . Note that in the preceding discussion, the stability of G0 (s) is essential. lf G0 (s)
is not stable, the output of G 0 (s) will not track any r(t).
Exercise 4.7 .1
2
b. G0 (s) = - - dueto r(t) = a
S + 1
2 + 3s
e G (s) -
• o - 2 + 3s + s 2 due to r(t) = 2 + t
2
d. G 0 (s) = -- dueto r(t) = 3t
S + 1
68 + 9s+ 9s 2
e. Go(s) = 68 + 9
s + 9s 2 + s 3 dueto r(t) = a
[Answers: (a) oo; (b) 2a; (e) y.(t) = 2 + t; (d) 6t - 6; (e) oo.]
Hence, we have
aw0 aw0
Y(s) = G 0 (s)R(s) = G 0 (S) · = G 0 (S) · - - - - " - - -
S
2
+ 2
W0 (s + jw0 )(s - jw0 )
Because G 0 (s) is stable, s = ± jw0 are simple poles of Y(s). Thus Y(s) can be
expanded as, using partial fraction expansion,
k¡ ki
Y(s) ---'-- + . + terms dueto the poles of G (s)
S - jw0 S + JW 0
0
142 CHAPTER 4 QUANTITATIVE ANO QUAUTATIVE ANALYSES OF CONTROL SYSTEMS
with
and
Since all the poles of G (s) have negative real parts, their time responses will ap-
0
proach zero as t---¿ oo. Hence, the steady-state response of the system dueto r(t) =
a sin wot is given by
(t
Ys )
= :;g-1 [ aGo(jwo)
2J·eS · )
_ aGo(- jwo)
2J·eS + JW
· 0)
J (4.29)
- JW 0
All coefficients of G0 (s) are implicitly assumed to be real. Even so, the function
G 0 (jW0 ) is generally complex. We express it in polar formas
(4.30)
where
1
and
\
!'
where Im and Re denote, respectively, the imaginary and real parts. A(w0 ) is called
the amplitude and 8(w0 ), the phase of G 0 (s). If all coefficients of G 0 (s) are real,
then A(w0 ) is an even function of wm and 8(w0 ) is an odd function of W 0 ; that is,
A(- W 0 ) = A(w0 ) and 8(- W 0 ) = - 8(w0 ). Consequently we have
y,(t)
ej[wJ+O(w 0 )] e-j[w,t+O(w
_ 0 )]
This shows that if r(t) = a sin W 0 t, then the output will approach a sinusoidal
function of the same frequency. lts amplitude equals aiG 0 (jw0 )1; its phase differs
from the phase of the input by tan - 1 [Im G0 (jW0 )/Re G0 (jW0 )]. We stress again that
(4.32) holds only if G 0 (s) is stable.
4.7 STEADY-STATE RESPONSE OF STABLE SYSTEM5-POLYNOMIAL INPUTS 143
Example 4.7.1
Consider G (s) = 3/(s + 0.4). lt is stable. In order to compute its steady-state
0
Note that the phase - 1.37 is in radians, not in degrees. This computation is very
simple, but it does not reveal how fast the system will approach the steady state.
This problem is discussed in the next subsection.
Exercise 4. 7.2
Find the steady-state response of 2/(s + 1) due to (a) sin 2t (b) 1 + sin 2t
(e) 2 + 3 sin 2t - sin 3t.
[Answers: (a) 0.89 sin (2t - 1.1). (b) 2 + 0.89 sin (2t - 1.1). (e) 4 + 2.67
sin (2t - 1.1) - 0.63 sin (3t - 1.25).]
mined by the value of G 0 (s) at s = jw0 • Thus G(jw) is called thefrequency response
of the system. lts amplitude A( w) is called the amplitude characteristic, and its phase
O(w), the phase characteristic. For example, if G 0 (s) = 2/(s + 1), then G 0 (0) =
2, G 0 (j1) = 2/(}1 + 1) = 2/(1.4ej45 •) = l.4e-j45 ·, G 0 (jl0) = 2/(}10 + 1) =
0.2e-j 84·, and so forth. The amplitude and phase characteristics of Gjs) =
2/ (s + 1) can be plotted as shown in Figure 4.19. From the plot, the steady-state
response due to sin W 0 t, for any W 0 , can be read out.
Exercise 4. 7.3
Plot the amplitude and phase characteristics of G 0 (s) = 2/(s - 1). What is the
steady-state response of the system due to sin 2t?
[Answers: Same as Figure 4.19 except the sign of the phase is reversed, infinity.
The amplitude and phase characteristics of unstable transfer functions
do not have any physical meaning and, strictly speaking, are not
defined.]
144 CHAPTER 4 QUANTITATIVE AND QUALITATIVE ANALYSES OF CONTROL SYSTEMS
1 G0 (júJ) 1
(b)
apply r(t) = sin W 0 t and rneasure the steady-state response. Frorn the arnplitude and
phase of the response, we can obtain A(w0 ) and O(w0 ). By varying or sweeping W 0 ,
G 0 (jw) over a frequency range can be obtained. Special devices called frequency
analyzers, such as the HP 3562A Dynarnic Systern Analyzer, are available to carry
out this rneasurernent. Sorne devices will also generate a transfer function frorn the
rneasured frequency response.
We introduce the concept of bandwidth to conclude this section. The bandwidth
of a stable G0 (s) is defined as the frequency range in which 3
(4.35)
For exarnple, the bandwidth of G0 (s) = 2/(s + 1) can be read frorn Figure 4.19 as
1 radian per second. Thus, the arnplitude of G 0 (jw) at every frequency within the
bandwidth is at least 70.7% of that at w = O. Because the power is proportional to
the square of the arnplitude, the power of G 0 (jw) in the bandwidth is at least
(0.707) 2 = 0.5 = 50% of that at w = O. Thus, the bandwidth is also called the
half-power bandwidth. (lt is also called the - 3-dB bandwidth as is discussed in
Chapter 8.) Note that if G0 (s) is not stab1e, its bandwidth has no physical rneaning
and is not defined.
3
This definition applies only to G0 (s) with lowpass characteristic as shown in Figure 4.19. More general! y,
the bandwidth of stable G0 (s) is defined as the frequency range in which the amplitude of G0 (jw) is at
leas! 70.7% of the largest amplitude of Gjjw).
4.7 STEADY-STATE RESPONSE OF STABLE SYSTEM5-POLYNOMIALINPUTS 145
Consider G0 (s) = 3/(s + 0.4). If we apply r(t) = sin 2t, then Ys(t) = 1.47
sin (2t - 1.37) (see [4.34]). One may wonder how fast y(t) will approach y,(t). In
order to find this out, we shall compute the total response of G0 (s) dueto sin 2t. The
Laplace transform of r(t) = sin 2t is 2/(s 2 + 4). Thus we have
3 2 6
Y(s) =
S + 0.4 . s 2 + 4 (s + 0.4)(s + 2j)(s - 2j) (4.36)
1.44 1.47e-j1.3? 1.47ejl.37
S + 0.4 + 2j(s - 2j) 2j(s + 2j)
which implies
y(t) 1.44e-o.4r + 1.47 sin (2t - 1.37) (4.37)
'-----v-----'
Transient Steady-State
Response Response
The second term on the right hand side of (4.37) is the same as (4.34) and is the
steady-state response. The first term is called the transient response, because it ap-
pears right after the application of r(t) and will eventually die out. Clearly the faster
the transient response approaches zero, the faster y(t) approaches y,(t). The transient
response in (4.37) is govemed by the real pole at - 0.4 whose time constant is
defined as 1/0.4 = 2.5. As shown in Figure 4.2(b), the time response of
1/(s + 0.4) decreases to less than 1% of its original value in five time constants or
5 X 2.5 = 12.5 seconds. Thus the response in (4.37) may be considered to have
reached the steady state in five time constants or 12.5 seconds.
Now we shall define the time constant for general proper transfer functions. The
time constant can be used to indicate the speed at which a response reaches its steady
state. Consider
N(s) N(s)
G(s) = - = ----------'----'--------- (4.38)
D(s) (s + a 1)(s + a 2 )(s + u 1 + jwd 1)(s + u 1 - jwd 1) · · ·
If G(s) is not stable, the response due to its poles will not die out and the time
constant is not defined. If G(s) is stable, then a; > O and u 1 > O. For each real pole
(s + a¡), the time constant is defined as 1/a;. For the pair of complex conjugate
poles (s + u 1 ± jwd 1), the time constant is defined as 1/ u 1 ; this definition is
reasonable, because u 1 govems the envelope of its time response as shown in Figure
4.6. The time constant of G(s) is then defined as the largest time constant of all poles
of G(s). Equivalently, it is defined as the inverse of the smallest di-;tance of all poles
of G(s) from the imaginary axis. For example, suppose G(s) has poles -1, -3,
-0.1 ± j2. The time constants of the poles are 1, 1/3 = 0.33 and 1/0.1 = 10.
Thus, the time constant of G(s) is 10 seconds. In engineering, the response of G(s)
due to a step or sinusoid input will be considered to have reached the steady state
in five time constants. Thus the smaller the time constant or, equivalently, the farther
away the closest pole from the imaginary axis, the faster the response reaches the
steady-state response.
146 CHAPTER 4 QUANTITATIVE AND QUAUTATIVE ANALYSES OF CONTROL SYSTEMS
Exercise 4.7.4
Ims
X 2
o
-*--1---+-o--+--t---+-+-- Res
-3 -2 -1 o 1 2 3
o -1
X -2 X Po le
-3 o Zero
The time constant of a stable transfer function G(s) as defined is open to argu-
ment. It is possible to find a transfer function whose step response will not reach the
steady state in five time constants. This is illustrated by an example.
Example 4.7.2
Consider G(s) = 1/(s + 1?. It has three poles at s - l. The time constant of
G(s) is 1 second. The unit-step response of G(s) is
1 1 1 -1 -1 -1
Y(s) = · - = - + + + ---
(s + 1) 3
s s (s + 1? (s + 1? (s + 1)
or
y(t) 1 - 0.5Pe-' - te-' - e-'
PROBLEMS 147
2 3 4 5 10
This example shows that if a transfer function has repeated poles or, more
generally, a cluster of poles in a small region elose to the imaginary axis, then the
rule of five time constants is not applicable. The situation is actually much more
complicated. The zeros of a transfer function also affect the transient response. See
Example 2.4.5 and Figure 2.16. However, the zeros are not considered in defining
the time constant, so it is extremely difficult to state precisely how many time con-
stants it will take for a response to reach the steady state. The rule of five time
constants is useful in pointing out that infinity in engineering does not necessarily
mean mathematical infinity.
PROBLEMS
4.1. Consider the open-loop voltage regulator in Figure P3.4(a). lts block diagram
is repeated in Figure P4.1 with numerical values.
a. If RL = 100 n and if r(t) is a unit-step function, what is the response vo(t)?
What is its steady-state response? How many seconds will V0 (t) take to reach
and stay within 1% of its steady state?
b. What is the required reference input if the desired output voltage is 20 V?
Figure P4.1
[
~
,\
148 CHAPTER 4 QUANTITATIVE AND QUALITATIVE ANAlYSES OF CONTROl SYSTEMS
c. Are the power levels at the reference input and plant output necessarily the
same? If they are the same, is the system necessary?
d. If we use the reference signal computed in (b ), and decrease RL from 100 O
to 50 O, what is the steady-state output voltage?
4.2. Consider the closed-loop voltage regulator shown in Figure P3.4(b). Its block
diagram is repeated in Figure P4.2 with numerical values.
o. If RL = 100 o and if r(t) is a unit-step function, what is the response vo(t)?
What is its steady-state response? How many seconds will V 0 (t) take to reach
the steady state?
b. What is the required r(t) if the desired output voltage is 20 V?
c. If we use the r(t) in (b) and decrease RL from 100 oto 50 O, what is the
steady-state output voltage?
Figure P4.2
4.3. Compare the two systems in Problems 4.1 and 4.2 in terms of (a) the time
constants or speeds of response, (b) the magnitudes of the reference signals,
and (e) the deviations of the output voltages from 20 V as RL decreases from
100 O to 50 O. Which system is better?
4.4. The transfer function of a motor and load can be obtained by measurement.
Let the transfer function from the applied voltage to the angular displacement
be ofthe form km/s(rms + 1). Ifwe apply an input of 100 V, the speed (not
displacement) is measured as 2 rad/s at 1.5 seconds. The speed eventually
reaches 3 rad/s. What is the transfer function of the system?
4.5. Maintaining a liquid level at a fixed height is important in many process-control
systems. Such a system and its block diagram are shown in Figure P4.5, with
Desired
signa! +
4.8. Consider the position control system shown in Figure 4.8(b). Let the transfer
function of the motor and load be 1/s(s + 2). The error detector is a pair of
potentiometers with sensitivity k1 = 3. The reference input is to be applied by
tuming a knob. A tachometer with sensitivity k3 is introduced as shown.
o. If k2 = 1 and k3 = 1, compute the response due to a unit-step reference
input. Plot the response. Roughly how many seconds will y take to reach
and stay within 1% of its final position?
b. If it is required to tum y 30 degrees, how many degrees should you tum the
control knob?
c. If k3 = 1, find a k2 so that the damping ratio equals 0.7. If k3 is adjustable,
can you find a k 2 anda k 3 so that? = 0.7 and ?wn = 3?
r
•
'
1
150 CHAPTER 4 QUANTITATIVE AND QUALITATIVE ANALYSES OF CONTROl SYSTEMS
d. Compare the system with the one in Problem 4. 7 in terms of the speed of
response.
4.9. Considera de motor. It is assumed that its transfer function from the input to
the angular position is 1/s(s + 2). Is the motor stable? If the angular velocity
of the motor shaft, rather than the displacement, is considered as the output,
what is its transfer function? With respect to this input and output, is the system
stable?
4.10. A system may consist of a number of subsystems. The stability of a system
depends only on the transfer function of the overall system. Study the stability
of the three unity-feedback systems shown in Figure P4.10. Is it true that a
system is stable if and only if its subsystems are all stable? Is it true that
negative feedback will always stabilize a system?
Figure P4. 1O
s3 -
b. G(s) = s4 + 14s 3 + 71s 2 + 154s + 120
c. The system shown in Figure P4.12.
Figure P4. 12
PROBLEMS 151
4. 13. Find the ranges of k in which the systems in Figure P4.13 are stable.
(a)
(b)
Motor
y
Compensating
network
Tachometer
4.14. Considera system with transfer function G(s). Show that if we apply a unit-
step input, the output approaches a constant if and only if G(s) is stable. This
fact can be used to check the stability of a system by measurement.
4. 15. In a modem rapid transit system, a train can be controlled manually or auto-
matically. The block diagram in Figure P4.15 shows a possible way to control
Tachometer
Figure P4. 15
152 CHAPTER 4 QUANTITATIVE AND QUALITATIVE ANALYSES OF CONTROL SYSTEMS
the train automatically. If the tachometer is not used in the feedback (that is,
if b = 0), is it possible for the system to be stable for sorne k? If b = 0.2,
find the range of k so that the system is stable.
4.16. LetD(s) = ansn + an_ 1sn-l + · · · + a 1s + a 0 , withan >O. Let [J/2] be
the integer part of j/2. In other words, if j is an even integer, then [j/2]
j/2. If j is an odd integer, then [j/2] = (j - 1)/2. Define
i = 1, 2, ... , [j/2] + 1
Verify that the preceding algorithm computes all entries of the Routh table.
4.17. What are the time constants of the following transfer functions?
S- 2
a. s 2 + 2s + 1
S - 1
b. 2
(s + 1)(s + 2s + 2)
s 2 + 2s - 2
c. (s 2 + 2s + 4)(s 2 + 2s + 10)
d.---
S + 10
s + 1
S - 10
e. s 2 + 2s + 2
Do they all have the same time constant?
4.18. What are the steady-state responses of the system with transfer function
1/(s 2 + 2s + 1) dueto the following inputs:
a. u 1(t) = a unit-step function.
b. u2 (t) = a ramp function.
c. u3(t) = u 1(t) + u2 (t).
d. uit) = 2 sin 2m, for t ;:::::. O.
4.19. Consider the system with input r(t), output y(t), and transfer function
S + 8
G(s)- - - - - -
(s + 2)(s + 4)
PROBLEMS 153
Computer
Simulation
and Realization
5. 1 INTRODUCTION
In recent years computers have become indispensable in the analysis and design of
control systems. They can be used to collect data, to carry out complicated com-
putations, and to simulare mathematical equations. They can also be used as control
components, and as such are used in space vehicles, industrial processes, autopilots
in airplanes, numerical controls in machine tools, and so on. Thus, a study of the
use of computers is important in control engineering.
Computers can be divided into two classes: analog and digital. An intercon-
nection of a digital and an analog computer is called a hybrid computer. Signals on
analog computers are defined at every instant of time, whereas signals on digital
computers are defined only at discrete instants of time. Thus, a digital computer can
accept only sequences of numbers, and its outputs again consist only _i)f sequences
of numbers. Because digital computers yield more accurate results and are more
flexible and versatile than analog computers, the use of general-purpose analog
computers has been very limited in recent years. Therefore, general-purpose analog
computers are not discussed in this text; instead, we discuss simulations using op-
erational amplifier (op-amp) circuits, which are essentially special-purpose or cus-
tom-built analog computers.
We first discuss digital computer computation of state-variable equations. We
use the Euler forward algorithm and show its simplicities in programming and com-
putation. We then introduce sorne commercially available programs. Op-amp circuit
implementations of state-variable equations are then discussed. We discuss the rea-
154
5.2 COMPUTER COMPUTATION OF STATE-VARIABLE EQUATIONS 155
sons for not computing transfer functions directly on digital computers and then
introduce the realization problem. After discussing,the problem, we show how trans-
fer functions can be simulated on digital computers or built using op-amp circuits
through state-variable equations.
where u(t) is the input; y(t) the output, and x(t) the state. If x(t) has n components
or n state variables, then A is an n X n matrix, b is an n X 1 vector, e is a 1 X n
vector, and d is a 1 X 1 sca1ar. Equation (5.2) is an algebraic equation. Once x(t)
and u(t) are available, y(t) can easily be obtained by multiplication and addition.
Therefore we discuss on1y the computation of (5.1) by using digital computers.
Equation (5.1) is a continuous-time equation; it is defined at every instant of
time. Because every time interval has infinitely many points and because no digital
computer can compute them all, we must discretize the equation before computation.
By definition, we have
. 0 x(t ) .0 x(t 0 + a) - x(t )
x(t0 ) : = -- := hm --'-"------""-
dt a->0 a
The substitution of this into (5.1) at t = t0 yields
x(t0 + a) - x(t0 ) = [Ax(t0 ) + bu(t0 )]a
or
x(t0 + a) + aAx(t0 ) + abu(t0 )
x(t0 )
lx(t0 ) + aAx(t0 ) + abu(t0 )
where 1 is a unit matrix with the same order as A. Note that x + aAx =
(1 + aA)x is not well defined (why?). After introducing the unit matrix, the equation
becomes
x(t0 + a) = (1 + aA)x(t0 ) + bu(t0 )a
,. (5.3)
This is a discrete-time equation, and a is caBed the integration step size. Now, if
x(t0 ) and u(t0 ) are known, then x(t0 + a) can be computed a1gebraically from (5.3).
Using this equation repeated1y or recursively, the solution of (5.1) dueto any x(O)
and any u(t), t 2: O, can be computed. For example, from the given x(O) and u(O),
we can compute
x(a) = (1 + aA)x(O) + bu(O)a
Example 5.2. 1
y(t) = [1 - l]x(t)
due to the initial condition x(O) = [2 -1]' and the input u(t) 1, for t 2:: O,
where the prime denotes the transpose.
For this equation, (5.3) becomes
a
1.5a
J[xx (t(t
1 0 )]
2 0
) +
[o]
1 .
1
. a
5.2 COMPUTER COMPUTATION OF STATE-VARIABLE EQUATIONS 157
which implies
x 1((k + l)a) = x 1(ka) + ax2 (ka) (5.5a)
y(t)
3.0
2.5
2.0
1.5
1.0
0.5
0.0~--~----~----~--~----~----L---~----~----L----J __L-~
0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0
Figure 5.1 Results of computer simulation.
a = 0.25 and then for a = 0.125. The last two results are very close, therefore we
stop the computation.
The exact solution of (5.4) can be computed, using the procedure in Section
2.7, as
y(t) = 2 + 4e-t - 3e-0.5t
The computed result is very close to the exact solution.
Exercise 5.2. 1
Discretize and then compute the solution of i(t) = - 0.5x(t) + u(t) due to x(O) =
1 and u(t) = 1, for t 2:: O. Compare your result with the exact solution.
Before the age of personal computers, control systems were mostly simulated on
mainframe computers. Many computer programs are available at computing centers.
Two of the most widely available are
IBM Scientific Routine Package
LSMA (Library of Statistics and Mathematics Association)
In these packages, there are subroutines for solving differential equations, linear
algebraic equations, and roots of polynomials. These subroutines have been thor-
oughly tested and can be employed with confidence. Most computing centers also
have LINPACK and Eispack, which were developed under the sponsorship of the
National Science Foundation and are considered to be the best for solving linear
algebraic problems. These programs are often used as a basis for developing other
computer programs.
Many specialized digital computer programs have been available for simulating
continuous-time systems in mainframe computers: CSMP (Continuous System Mod-
eling Program), MIDAS (Modified lntegration Digital Analog Simulator), MIMIC
(an improved version ofMIDAS), and others. These programs were major simulation
tools for control systems only a few years ago. Now they are probably completely
replaced by the programs to be introduced in the following.
Personal comyuters are now wideJy availabJe. A Jar_ge amount of com_puter
software has been developed in universities and industries. About 90 programs
are listed in Reference [30]. We list in the following sorne commercially available
packages:
CTRL-C (Systems Control Technology)
EASY5 (Boeing Computer Services Company)
MATLAB (Math Works Inc.)
MATRIXx (lntegrated Systems Inc.)
Program CC (Systems Technology)
Simnon (SSPA Systems)
For illustration, we discuss only the use of MATLAB. * The author is familiar with
version 3.1 and the Student Edition of MATLAB, which is a simplified edition of
version 3.5. Where no version is mentioned, the discussion is applicable to either
*The author has experience only with PC-MATLAB and has no knowledge of the relative strength or
weakness of the prograrns listed. MA TLAB is now available in many universities. The Student Edition
of MATLAB TM is now available from Prentice Hall.
160 CHAPTER 5 COMPUTER SIMULATION AND REALIZATION
A [- i ~ TJ ~ m
5
- b e - [15 o 05] d ~o (5.6(
y= ex+ du (5.7b)
is represented as (a, b, e, d, iu), where iu denotes the ith input. In our case, we have
only one input, thus we have iu = l.
Suppose we wish to compute the unit-step response of (5.7). In using version
3.1 of MATLAB, we must specify the initial time l 0 , the final time l¡, and the time
interval a at which the output will be printed or plotted. For example, if we choose
l 0 = O, l¡ = 20, and a = 1, then we type
t=0:1 :20;
The three numbers are separated by colons. The first number denotes the initial time,
the last number denotes the final time, and the middle number denotes the time
interval at which the output will appear. Now the following commands
y= step(a,b,e,d, 1,t);
plot(t,y)
willproduce the solid line shown in Figure 5.2. It plots the output at l = O, 1, 2,
... , 1O; they are connected by straight lines. The following commands
t = 0:0.05:20;
plot(t,step(a,b,e,d, 1,t))
will generate the dotted line shown in Figure 5.2, where the output is plotted every
0.05 second. In using version 3.5 or the Student Edition of MATLAB, if we type
step(a,b,e,d, 1)
then the response will appear on the screen. There is no need to specify l 0 , l¡, and
a. They are chosen automatically by the computer.
5.3 EXISTING COMPUTER PROGRAMS 161
0.9
0.8
0.7
0.6
0.5
0.4
0.3 \\~~~~-
ÜL---~-----L----~----~--~----~----~----~--~----~
o 2 4 6 8 10 12 14 16 18 20
We discuss how MATLAB computes the step response of (5.7). It first trans-
forms the continuous-time state-variable equation into a discrete-time equation as in
(2.89). This can be achieved by using the command c2d which stands for continuous
to discrete. Note that the discretization involves only A and b; e and d remain
unchanged. If the sampling period is chosen as 1, then the command
[da,db] = c2d(a,b, 1)
will yield
-0.1375 -0.8502 -0.1783] 0.3566]
da = 0.3566 0.3974 -0.1371 db 0.2742
[ [
0.2742 0.7678 0.9457 0.1086
Thus, the discretized equation of (5.6) is
-0.1375 -0.8502
- 0.1783] [0.3566]
x(k + 1) 0.3566 0.3974 -0.1371 x(k) + 0.2742 u(k) (5.8a)
[
0.2742 0.7678 0.9457 0.1086
y(k) = [1.5 O 0.5]x(k) (5.8b)
that several methods are listed in MA TLAB for computing eAT. One of them is to
use the infinite series in (2.67). In using the series, there is no need to compute the
eigenvalues of A.
A basic block diagram is any diagram that is obtained by interconnecting the three
types of elements shown in Figure 5.3. The three types of elements are multipliers,
adders, and integrators. The gain k of a multiplier can be positive or negative, larger
or smaller than l. An adder or a summer must ha ve two or more inputs and one and
only one output. The output is simply the sum of all inputs. If the input of an
integrator ÍS X(t), then itS OUtput equals fh X( T)dT. This choice of variable is not as
convenient as assigning the output of the integrator as x(t). Then the input of the
integrator is x(t): = dxj dt as shown in Figure 5.3. These three elements can be easily
built using operational amplifier (op-amp) circuits. For example, a multiplier with
gain k can be built as shown in Figure 5.4(a) or (b) deperiding on whether k is
positive or negative. The adder can be built as shown in Figure 5.4(c). Figure 5.4(d)
shows an implementation of the integrator with R = 1 kO = 1000 O, e = 1O- 3 F
or R = 1 Mil = 106 0, e = 1 ¡..tF = 10- 6 F. For simplicity, the grounded inverting
terminals are not plotted in Figure 5.4(b through f).
Now we show that every state-variable equation can be represented by a basic
block diagram. The procedure is simple and straightforward. If an equation has n
state variables or, equivalently, has dimension n, we need n integrators. The output
of each integrator is assigned as a state variable, say, x;; then its input is X¡. If it is
assigned as -X¡, then its input is -X¡. Finally, we use multipliers and adders to
build up the state-variable equation. This is illustrated by an example. Consider
X¡(t)]
[ x (t)
2
[~ -0.3][x 1(t)]
-8 x 2 (t)
+ [-2]u(t)
O
(5.9a)
y(t) = [- 2 3] [
X1 (t)] + 5u(t) (5.9b)
x 2 (t)
lt has dimension 2 and needs two integrators. The outputs of the two integrators are
assigned as x 1 and x 2 as shown in Figure 5.5. Their inputs are x1 and x2 • The first
equation of (5.9a) is x1 = 2x 1 - 0.3x2 - 2u. lt is generated in Figure 5.5 using
the solid line. The second equation of (5.9a) is x2 = x 1 - 8x2 and is generated using
the dashed line. The output equation in (5.9b) is generated using the dashed-and-
R R R
(a) (b)
R R
XI e R
.X
(e) (d)
Rla R Rla e
XI VI
R!b Re= 1
vz
X
v3
-(ax 1 + bx2 + cx3) -x=av 1 +bv 2 +cv 3
(e) (f)
dotted line. It is indeed simple and straightforward to develop a basic block diagram
for any state-variable equation. Conversely, given a basic block diagram, after as-
signing the output of each integrator as a state variable, a state-variable equation can
be easily obtained. See Problem 5.5.
Every basic element can be built using an op-amp circuit as shown in Figure
5.4; therefore every state-variable equation can be so built through its basic block
R R R e R
(a)
-2x 1
R e e R/2 R
-x2
u y
R/8
0.3 R
Re= 1
0.3x 2
-Su
(b)
Figure 5.6 Implementations of (5.9).
5.5 REALIZATION PROBLEM 165
- (ax 1 + bxz + cx3 ). The circuit in Figure 5.4(f) can actas an integrator, an adder,
and multipliers. If we assign its output as x, then we have - i = au 1 + buz + cu 3 •
If we assign the output as - x, then we have i = au 1 + buz + cu 3 • It is important
to mention that we can assign the output either as x or - x, but we cannot alter its
relationship with the inputs.
Figure 5.6(b) shows an op-amp circuit implementation of (5.9) by using the
elements in Figure 5.4(e) and (f). It has two integrators. The output of one integrator
is assigned as x 1 ; therefore, its input equals - .i1 = - 2x 1 + 0.3Xz + 2u as shown.
The output of the second integrator is assigned as - Xz; therefore its input should
equal .iz = x 1 - 8xz as shown. The rest is se1f-explanatory. A1though the numbers
of capacitors used in Figure 5.6(a) and (b) are the same, the numbers of operational
amplifiers and resistors in Figure 5.6(b) are considerably smal1er.
In actual operational amplifier circuits, the range of signals is limited by the
supplied voltages, for examp1e ± 15 volts. Therefore, a state-variable equation may
have to be scaled before implementation. Otherwise, saturation may occur. This and
other technical details are outside the scope of this text, and the interested reader is
referred to References [24, 50].
Considera transfer function G(s). To compute the response of G(s) dueto an input,
we may find the Laplace transform of the input and then expand G(s)U(s) into partial
fraction expansion. From the expansion, we can obtain the response. This procedure
requires the computation of al1 po1es of G(s)U(s), or all roots of a polynomial. This
can be easily done by using MATLAB. A polynomial in MA TLAB is represented
by a row vector with coefficients ordered in descending powers. For example, the
polynomial (s + 1?(s + 1.001) = s 4 + 4.001s 3 + 6.003sz + 4.003s + 1.001
is represented as
p = [1 ,4.001 ,6.003,4.003, 1.001]
or
p = [1 4.001 6.003 4.003 1.001]
The entries are separated by commas or spaces. The command
roots(p)
will generate
1.001
1.0001
1 +0.0001i
1-0.0001i
We see that the results differ slightly from the exact roots - 1, - 1, - 1, and - 1.00 l.
If we perturb the polynomial to s 4 + 4.002s 3 + 0.6002sz + 4.002s + 1, then the
166 CHAPTER 5 COMPUTER SIMULATION AND REALIZATION
command
roots([1 4.002 6.002 4.002 1])
will generate
-1.2379
-9.9781 +0.208i
-9.9781-0.208i
-0.8078
The roots are quite different from those of the original polynomial, even though the
two polynomials differ by less than 0.03%. Thus, roots of polynomials are very
sensitive to their coefficients. One may argue that the roots of the two polynomials
are repeated or clustered together; therefore, the roots are sensitive to coefficients.
In fact, even if the roots are well spread, as in the polynomial
p(s) = (s + 1)(s + 2)(s + 3) · · · (s + 19)(s + 20)
the roots are still very sensitive to the coefficients. See Reference [15, p. 219].
Furthermore, to develop a computer program to carry out partial fraction expansion
is not simple. On the other hand, the response of state-variable equations is easy to
program, as is shown in Section 5.2. lts computation does not require the compu-
tation of roots or eigenvalues, therefore it is less sensitive to parameter variations.
For these reasons, it is desirable to compute the response of G(s) through state-
variable equations.
Considera transfer function G(s). If we can find a state-variab1e equation
x(t) Ax(t) + bu(t) (S. lOa)
such that the transfer function from u to y in (5.10) equals G(s) or, from (2.75),
G(s) = c(sl- A)- 1b + d
then G(s) is said to be realizable and (5.10) is called a realization of G(s). The term
realization is well justified, for G(s) can then be built or implemented using op-amp
circuits through the state-variable equation. lt tums out that G(s) is realizable if and
only if G(s) is a proper rational function. If G(s) is an improper rational function,
its realization will assume the form
x(t) Ax(t) + bu(t) (5.lla)
Example 5.5. 1
Consider
s 4 + 2s 3 s 2 + 4s + 12
(5.13)
G(s) = 2s 4 + l0s 3 + 20s 2 + 20s + 8
Clearly we have G(oo) = 1/2 = 0.5; it is the ratio of the coefficients associated
with s 4 . We compute
(s 4 + 2s 3 - s 2 + 4s + 12) -
4
0.5(2s + l0s 3 + 20s 2 + 20s + 8)
Gs(s) : = G(s) - G(oo)
2s 4 + l0s 3 + 20s 2 + 20s + 8 (5.14)
- 3s 3 - lls 2 - 6s + 8
2s 4 + 10s 3 + 20s 2 + 20s + 8
lt is strictly proper. Next we divide its numerator and denominator by 2 to yield
- l.5s 3 - 5.5s 2 - 3s + 4 N(s)
G,(s) = s 4 + 5s 3 + l0s 2 + lOs + 4 =: D(s)
This step normalizes the leading coefficient of D(s) to l. Thus (5.13) can be written
as
s 4 + 2s 3 - s 2 + 4s + 12
G(s)
2s 4 + 10s 3 + 20s 2 + 20s + 8 (5.15)
- l.5s 3 - 5.5s 2 3s + 4
05
· + s + 5s 3 + 10s 2 +
4 lOs + 4
This completes the preliminary steps of realization. We mention that (5.14) can also
be obtained by direct division as
0.5
2s 4 + 10s 3 + 20s 2 + 20s + 8)s 4 + 2s 3 s 2 + 4s + 12
s + 5s + 1Os 2 + 1Os + 4
4 3
11s 2 6s + 8
168 CHAPTER 5 COMPUTER SIMULATION ANO REALIZATION
x(t)
[T o
1
o
o
o Tl m x(l) + u(l)
(5.17a)
the output of the second integrator to the input of the third integrator. A similar
remark applies to the fourth equation of (5.17a). From (5.17b), we can readily draw
the upper part of Figure 5. 7. The basic block diagram has four loops with loop gains
-a3/s, -a2/s 2, -a 1/s 3 and -a0/s 4. They touch each other. Thus, we have
Ll = 1 - ( - a3 + - az + -a 1 + - ao) = 1 + a3 + a2 + a1 + ao
s s2 s3 s4 s s2 s3 s4
There are five forward paths from u to y with path gains:
bo bl bz b3
PI = s4 Pz = s3 p3 =
s2
p4
S
and
p5 = d
Note that the first four paths touch all the loops; whereas P 5 , the direct transmission
path, does not touch any loop. Thus we have
.::1 1 = .::1 2 = .::1 3 = Ll 4 = 1
and
This is the same as (5.16). Thus (5.17) is a realization of (5.12) or (5.16). This can
also be shown by computing c(sl - A)~ 1b + d in (5.17). This is more tedious and
is skipped.
Because G(s) is scalar-that is, G(s) = G'(s)-we have
G(s) = c(sl- A)~ 1 b + d = b'(sl- A')~ 1 c' + d
Thus, the following state-variable equation
~ ~]
-a 3
x(t) =
[
-a_ 2 o
-a 1 O o 1
x(t) + ¡;:]
bl
u(t) (5.18a)
-a0 o O O b0
170 CHAPTER 5 COMPUTER SIMULATION AND REALIZATION
Example 5.5.2
The denominator of G(s) has degree 4, therefore the realization of G(s) has dimension
4. Its controllable-form realization is
x [-~ -10
o
1
o
-10
o
o -~]x+m· (5.200)
Example 5.5.3
Consider
1 0.5 O · s 2 + O · s + 0.5
G(s) = - = - 3 = --:::-----==------
3
2s3 s s + O · s2 + O · s + O
The denominator of G(s) has degree 3, therefore the realization of G(s) has dimension
3. Because G(s) is strictly proper or G(oo) = O, the realization has no direct trans-
5.5 REALIZATION PROBLEM 171
y [O O 0.5]x
Exercise 5.5. 1
Exercise 5.5.2
Find realizations of
5
a. 2s 2 + 4s + 3
2
b. -:3:-----
s +
S - 1
3
c. -2s-:5:--+-1
will generate the controllable-form realization in (5.20). To find the step response
of G(s), we type
step(num,den)
if we use version 3.5 or the Student Edition of MA TLAB. Then the response will
appear on the monitor. If we use version 3.1 of MA TLAB to find the step response
of G(s) from Oto 20 seconds and with print-out interval 0.1, we type
t=0:0.1 :20;
y= step(num,den,t);
plot(t,y)
Then the response will appear on the monitor. Inside MATLAB, the transfer function
is first realized as a state-variable equation by calling tf2ss, and then discretized by
calling c2d. The response is then computed and plotted.
Example 5.5.4
Consider the transfer function
20 2 10
G(s) - ----:::---- --.--.--- (5.21)
- (s + 2) 2 (2s + 3) s+2 s+2 2s+3
with grouping shown; the grouping is quite arbitrary. lt is plotted in Figure 5.8(a).
Now we assign the output of each block as a state variable as shown. From the first
(a)
(b)
or
which becomes
or
Similar!y, the second block implies x2 = - 2x2 + x 1• These equations and y = x3
can be arranged as
y = [O O l]x (5.22b)
This is caBed a tandem or cascade realization because it is obtained from the tandem
connection of blocks.
Example 5.5.5
Consider
This equation contains the derivative of u, thus we cannot assign the output of the
first block as a state variable. Now we assign the outputs of the first and second
blocks as w 1 and w 2 as shown in Figure 5.8(b). Then we have
W1(s) = 4s + U(s) =
S + 3
2 ( + --10)
4
+
-
S 3
U(s)
and
2
s + 4s + 8 ( 2s + 6 )
Wz(s) = s2 + 2s + 2 W¡(s) = 1 + s2 + 2s + 2 W¡(s)
[ ~:] [- ~ - ~] [ ::] + [ ~] w 1
(5.25)
W2 = [2 6{::] + W¡
If we assign the output of the third block in Figure 5.8(b) as x 4 , then we have, using
(5.26b),
i 4 = -x4 + w2 = -x4 + 2x2 + 6x3 + x 1 + 4u (5.27)
y = [O O O 1]x (5.28b)
izations. The outputs of blocks with transfer function b/(s + a) can be assigned as
state variables (see also Problem 5.9). The outputs of blocks with transfer function
(s + b)/(s + a) or of degree 2 cannot be so assigned.
In the following, we discuss a different type of realization, called parallel
realization.
Example 5.5.6
Consider the transfer function in (5.21). We use partial fraction expansion to expand
itas
20 -40 -20 40
G(s) + + ------ (5.29)
(s + 2?(2s + 3) S + 2 (s + 2) 2 S + 1.5
(a)
(b)
Figure 5.9 shows two different plots of (5.29). If we assign the output of each block
as a state variable as shown, then from Figure 5.9(a), we can readily obtain
.i1 = -2x1 + x2 .i2 = -2x2 +u
and
-1.5x3 + 40u y
176 CHAPTER 5 COMPUTER SIMULATION AND REALIZATION
[~~]
i 3
[ - ~
O
- 2
O
~ [:~] + [ ~]
- 1.5
]
x3 40
u (5.30a)
y = [ - 20 - 40 1] X (5.30b)
Exercise 5.5.3
Show that the block diagram in Figure 5.9(b) with state variables chosen as shown
can be described by
(5.3la)
y [O 40]x (5.3lb)
Example 5.5.7
Consider the transfer function in (5.23). Using partial fraction expansion, we expand
itas
(4s + 2)(s 2 + 4s + 8)
G(s)
(s + 3)(s 2 + 2s + 2)(s + 1) (5.32)
-5 5 4s + 12
--+--+
s +1 s + 3 s2 + 2s + 2
and
Tandem and parallel realizations are less sensitive to parameter variations than
the controllable- and observable-form realizations are. For a comparison, see Ref-
erence [ 13].
(5.34)
G(s) = s 4 + 5s 3 + 10s 2 + lis + 3
1t is strictly proper and has 1 as the leading coefficient of its denominator. Therefore,
its realization can be read from its coefficients as
-10 -11
o o
(5.35a)
o
o
y = [1 4 7 6]x (5.35b)
[ -~~ ~ ~]x+[~]u
0
(5.36a)
-11 o o 1 7
-3 o o o 6
178 CHAPTER 5 COMPUTER SIMULATION ANO REALIZATION
y = [1 O O O]x (5.36b)
G(s) - --=-----
S + 2
(5.37)
- s 2 + 3s + 1
(5.38a)
y [1 2]x (5.38b)
or
x [ =~ ~] x + [~] u (5.39a)
y = [1 O]x (5.39b)
These realizations have dimension 2. They are caBed mínima! realízatíons of G(s)
in (5.34) or (5.37), because they have the smallest number of state variables among
all possible realizations of G(s). The realizations in (5.35) and (5.36) are called
nonmínímal realizations. We mention that minimal realizations are minimal equa-
tions discussed in Section 2.8. Nonminimal realizations are not minimal equations.
If D(s) and N(s) have no common factor, then the controllable-form and
observable-form realizations of G(s) = N(s)/D(s) are minimal realizations. On the
other hand, if we introduce a common factor into N(s) and D(s) such as
S+ 2 (s + 2)P(s)
G(s) = 2 3
S + S + (s 2 + 3s + 1)P(s)
then depending on the polynomial P(s), we can find many nonminimal realizations.
For example, if the degree of P(s) is 5, then we can find a 7 -dimensional realization.
Such a nonminimal realization uses unnecessarily large numbers of components and
is not desirable in practice.
Exercise 5.6. 1
G(s) = s3 + s2 + s - 3
5.6 MINIMAL REALIZATIONS 179
Exercise 5.6.2
Find three different nonminima1 rea1izations of dimension 3 for the transfer function
G(s) = 1/(s + 1).
where G 1(s) and G2 (s) are proper rationa1 functions. We assume that both G 1(s) and
G2 (s) are irreducib1e-that is, the numerator and denominator of each transfer func-
tion have no common factor. Certain1y, we can find a minima1 realization for G 1 (s)
and that for Gz(s). By connecting these two realizations, we can obtain a rea1ization
for the two-input, one-output system. This realization, however, may not be a min-
ima1 rea1ization.
We now discuss a method of finding a minima1 realization for the two-input,
one-output system in Figure 5 .11. First, the system is considered to have the follow-
ing 1 X 2 vector transfer function
(5.41)
We expand G(s) as
where d; = G;(oo) and G 5 ;(s) are strictly proper, that is, deg N;(s) < deg D;(s). Let
D(s) be the 1east common mu1tip1ier of D 1(s) and D 2 (s) and have 1 as its 1eading
1
The material in this section is used only in Chapter 10 and its study may be postponed.
180 CHAPTER 5 COMPUTER SIMULATION AND REALIZATION
Note that deg N¡(s) < deg D(s), for i = 1, 2. For convenience of discussion, we
as sume
4 3
D(s) = s + a3s + GzS 2 + a1s + ao
(5.43)
N 1 (s) = b31 s 3 + b21 s 2 + b11 s + b01
and
N 2(s) b32 s 3 + b22 s 2 + b 12 s + b02
Then a minimal realization of (5.42) is
•
x(t) ~ ~ ~]
o o 1
x(t) + - ¡~:: ~::l[r(t)J
b¡1
-
bl2 y(t)
(5.44a)
O O O b01 b02
r(t)]
e(t) = [1 O O O]x(t) + [d 1 d 2 ] [ y(t) (5.44b)
Example 5.6. 1
e 1(t) = [1 3
][X1(t)J
x 2 (t)
To realize G 2 (s) with input y and output e2 , we expand it as
E 2 (s) ( -s + 2) 1.5
Gz(s) = Y(s) = 2(s + 1) = -0. 5 +~
5.6 MINIMAL REALIZATIONS 181
[X,(t)] [' o]
[-~
-2
i 2 (t) o ]x,(t]
O x + O O [r(:) J
2 (t)
i3(t) o - 1 x 3 (t) O 1 y( )
2~;: 1~]
S + 3
G(s) = [ -(s_+_1-)(-s_+_2_)
(5.45)
= [O -0.5] + [ S + 3 3 J
(s + 1)(s + 2) 2(s + 1)
Its basic block diagram is shown in Figure 5.13. It has two integrators, one less than
the block diagram in Figure 5.12. Thus, the realization in Figure 5.12 is nota minimal
realization. Note that the summer in Figure 5.11 is easily identifiable with the right-
most summer in Figure 5.12. This is not possible in Figure 5.13. The summer in
Figure 5.11 is imbedded and cannot be identified with any summer in Figure 5.13.
Exercise 5.6.3
a. L:s ~ \) s : 1 J
PROBLEMS 183
PROBLEMS
5.1. Write a program to compute the output y of the following state-variable equa-
tions due to a unit-step input. The initial conditions are assumed to be zero.
What integration step sizes will you choose?
a. x =
[-1
81 -2
-0.9
J X + [ 1.1
1.5]
U
y= [0.7 2.1]x
b. x
[:9
-0.1
o 1] + [LI]
:·5 X ~ U
2
y = [2.5 1.2]x
5.2. Repeat Problem 5.1 using a commercially available software. Compare the
results with those obtained in Problem 5 .l.
5.3. Draw basic block diagrams for the equations in Problem 5.1.
5.4. Draw operational amplifier circuits for the two state-variable equations in Prob-
lem 5.1. Use the elements in Figure 5.4(e) and (f).
5.5. Develop a state-variable equation for the basic block diagram shown in Figure
P5.5.
Figure P5.5
184 CHAPTER 5 COMPUTER SIMULATION AND REAUZATION
5.6. Develop a state-variable equation for the op-amp circuit shown in Figure P5.6.
Figure P5.6
5.7. Draw a basic block diagram for the observable-form equation in (5.18) and
then use Mason's formula to compute its transfer function.
5.8. Find realizations for the following transfer functions and draw their basic block
diagrams.
s2 + 2
a. G 1(s)
4s 3
3s 4 + 1
b. G 2 (s)
2s 4
+ 3s + 4s 2 +
3
S + 5
(s + 3) 2
e G3 (s) = -----'----=-----'---
. (s + lf(s + 2)
5.9. Show the equivalence of the blocks in Figure P5.9.
,-----:-----------,
1
u-~x u
~
1 1
L ____________ _j
Figure P5.9
PROBLEMS 185
5.10. Use Problem 5.9 to change every block of Figure P5.10 into a basic block
diagram and then develop state-variable equations to describe the two systems.
(a)
(b)
Figure P5.1 O
5.11. Find tandem and parallel realizations for the following transfer functions:
(s + 3)2
a. (s + + 2)
1)2 (s
(s + 3?
b. (s + 2?(s 2 + 4s + 6)
5. 12. a. Find a minimum realization of
G(s) =
(s 2)(2s 2 + 3s + 4)
b. Find realizations of dimension 3 and 4 for the G(s) in (a).
5.13. a. Consider the armature-controlled de motor in Figure 3.1. Show that if the
state variables are chosen as x 1 = 8, x 2 = é and x 3 = ia, then its state-
variable description is given by
{} = [1 O O]x
186 CHAPTER 5 COMPUTER SIMULATION ANO REALIZATION
0(s) 1 k
G(s) = - = ------- '-----
U(s) s[(Js + f)(Ra + Lé) + k,kh]
Find a realization of G(s). [Although the realization is different from the
state-variable equation in (a), they are equivalent. See Section 11.3.]
c. Every state variable in (a) is associated with a physical quantity. Can you
associate every state variable in (b) with a physical quantity?
5.14. Consider the block diagram shown in Figure P5.14. (a) Compute its overall
transfer function, realize it, and then draw a basic block diagram. (b) Draw a
basic block diagram for each block and then connect them to yield the overall
system. (e) If k is to be varied over a range, which basic block diagram, (a) or
(b), is more convenient?
2s 2 + s- 1 y
s 3 + s 2 + 3s + 2
Figure P5. 14
5.15. Consider the block diagram shown in Figure P5.15. lt has a tachometer feed-
back with transfer function 2s. Differentiators are generally not built using
operational amplifier circuits. Therefore, the diagram cannot be directly simu-
lated using operational amplifier circuits. Can you modify the diagram so that
it can be so simulated?
Figure P5. 15
5.16. Find minimal realizations for the following 1 X 2 vector transfer functions:
O. [ S + 2 S + 1]
(s + 1)2 s + 2
b.
2
[(s +s 1;s + 2) ~]
S+ 2
c. [ s
2
+ s + 1 s - 1
3
J
s3 + 2s 2 + 1 s 3 + 2s 2 + 1
5.17. Consider the block diagram shown in Figure 5.11. Suppose G 1(s) and G 2 (s)
are given as in Problem 5.16(c). Draw a basic block diagram. Can you identify
the summer in Figure 5.11 with a summer in your diagram?
PROBLEMS 187
5.18. a. Consider the block diagram in Figure 5.7. Show that if the state variables
x 1 , x 2 , x 3 , and x4 in Figure 5.7 are renamed as x4 , x 3 , x 2 , and x 1, then the
block diagram can be described by
1 o
x(t)
[L o
o
-a¡
o
1
-a2
_u m X(l) + u(t)
x(t)
[~ o
o o
o o -a 1
1 o
_ _
a2
-a3
-a,]
x(t) + -
b1
b2
b3
n u(t)
6. 1 INTRODUCTION
With the background introduced in the preceding chapters, we are now ready to
study the design of control systems. Before introducing specific design techniques,
it is important to obtain a total picture of the design problem. In this chapter, we
first discuss the choice of plants and design criteria and then discuss noise and
disturbance problems encountered in practice. These problems impose constraints
on control systems, which lead to the concepts of well-posedness and total stability.
We also discuss the reason for imposing constraints on actuating signals. Feedback
systems are then shown to be less sensitive to plant perturbations and externa! dis-
turbances then are open loop systems. Finally, we discuss two general approaches
in the design of control systems.
In this computation, the moments of inertia of motor and gear trains, which are not
yet determined, are not included. Also, we consider neither the power required to
overcome static, Coulomb, and viscous frictions nor disturbances due to gusting.
Therefore, the horsepower of the motor should be larger than the one computed in
(6.1). After the size of a motor is determined, we must select the type of motor: de,
ac, or hydraulic. The choice may depend on availability at the time of design, cost,
reliability, and other considerations. Past experience may also be used in this choice.
For convenience of discussion, we choose an armature-controlled de motor to drive
the antenna. A de generator is also chosen as a power amplifier, as shown in Figure
6.1. This collection of devices, including the load, is called the plant of the control
system. We see from the foregoing discussion that the choice of a plant is not unique.
(a)
,-------------------------,
V
g
u--++- G 1 (s)
1
1 1
1 G(s) 1
L-------------------------~
(b)
Different designers often choose different plants; even the same designer may choose
different plants at different times.
Once a plant is chosen, the design problem is concemed with how to make the
best use of the plant. Generally, the plant alone cannot meet the design objective.
We must introduce compensators and transducers and hope that the resulting overall
system will meet the design objective. If after trying all available design methods,
we are still unable to design a good control system, then either the design objective
is too stringent or the plant is not properly chosen. If the design objective can be
relaxed, then we can complete the design. If not, then we must choose a new plant
and repeat the design. Thus a change of the plant occurs only as a last resort. Other-
wise, the plant remains fixed throughout the design.
Once a plant with transfer function G(s) is chosen, the design problem is to design
an overall system, as shown in Figure 6.2, to meet design specifications. Different
applications require different specifications. For example, in designing a control
system to aim an astronomical telescope at a distant star, perfect aiming or accuracy
is most important; how fast the system achieves the aiming is not critical. On the
other hand, for a control system that drives guided missiles to aim at incoming enemy
missiles, speed of response is as critical as accuracy. In general, the performance of
control systems is divided into two parts: steady-state performance, which specifies
accuracy, and transient performance, which specifies the speed of response. The
steady-state performance may be defined for a step, ramp, or acceleration reference
input. The transient response, however, is defined only for a step reference input.
Before proceeding, it is important to point out that the behavior of a control
system depends only on its overall transfer function G (s) from the reference input
0
to the plant output y. lt does not depend explicitly on the plant transfer function
G(s). Thus, the design problem is essentially the search for a G0 (s) to meet design
specifications. Let G (s) be of the form
0
any reference input and the system will break down or bum out. Thus, every G (s) 0
r-------------------,
1
rl ~
----r ~~
1 1
1 ~w 1
L-------------------~
to be designed must be stable. The design problem then becomes the search for a
stable G,(s) to meet design specifications.
1a o a o f3o 1
a
Because step functions correspond to positions, this error is called the position error.
Clearly if G (0) = 1 or {30 = a 0 then the position error is zero. If we require the
0
or
(1 - y)ao < f3o < (1 + y)ao (6.4)
Thus, the specification on the position error can easily be translated into the constant
coefficients a 0 and {3 0 of G0 (s). Note that the position error is independent of a¡ and
/3;, for i :::::: l.
192 CHAPTER 6 DESIGN CRITERIA, CONSTRAINTS, AND FEEDBACK
Exercise 6.3. 1
Find the range of {30 so that the position error of the following transfer function is
smaller than 5%.
f3o
G(s)
s2 + 2s + 2
[Answer: 1.9 < {30 < 2.1.]
~oo a a
1(1 - Ga(O))t - G~(O)I (6.6)
0 6
This error will be called the velocity error, because ramp functions correspond to
velocities. We see that if G 0 (0) = {3 0 / a 0 "/'= 1, then r(t) and y 5 (t) have different
slopes, as shown in Figure 6.3(a), and their difference approaches infinity as t ~ oo.
Thus the velocity error is infinity. Therefore, in order to have a finite velocity error,
we must have G 0 (0) = 1 or {30 = a 0 . In this case, r(t) and Ys(t) have the same
r(t) /i
/ / y (t)
/
o o
(a) (b)
slope, as is shown in Figure 6.3(b), and the velocity error becomes finite and equals
Thus the conditions for having a zero velocity error are a 0 = {30 and a 1 = {3 1, or
G 0 (0) = 1 and G~(O) = O. They are independent of a¡ and {3¡, for i ;::: 2.
The preceding analysis can be extended to acceleration reference inputs or any
inputs that are polynomials of t. We will not do so here. We mention that, in addition
to the steady-state performances defined for step, ramp, and acceleration functions,
there is another type of steady-state performance, defined for sinusoidal functions.
This specification is used in frequency-domain design and will be discussed in
Chapter 8.
The plant output y(t) is said to track asymptotically the reference input r(t) if
If the position error is zero, then the plant output will track asymptotically any step
reference input. For easy reference, we recapitulate the preceding discussion in the
following. Consider the design problem in Figure 6.2. No matter how the system is
designed or what configuration is used, if its overall transfer function G,is) in (6.2)
is stable, then the overall system has the following properties:
l. If G 0 (0) = 1 or a 0 = {30 , then the position error is zero, and the plant output
will track asymptotically any step reference input.
2. If G 0 (0) = 1 and G~(O) = O, or a 0 = {30 and a 1 = {3 1, then the velocity error
is zero, and the plant output will track asymptotically any ramp reference input.
3. If G 0 (0) = l, G~(O) = O, and G~(O) = O, or a 0 = {30 , a 1 = {3 1, and a 2 = {32 ,
then the acceleration error is zero, and the plant output will track asymptotically
any acceleration input.
G s = N¡(s)
1( ) s¡D (s)
1
with N 1(0) #- O and D 1(0) #- O. Now we claim that if G1(s) is of type 1, and if the
unity-feedback system is stable, then the position error of the unity-feedback system
is O. Indeed, if G¡(s) is of type 1, then the overall transfer function is
N1(s)
sD 1(s) N¡(s)
1 + Nt(s) sD 1(s) + N1(s)
sD 1(s)
Therefore we have
N1(0) = N1(0) = l
O X D 1(0) + N 1(0) N1(0)
which implies that the position error is zero. Thus, the plant output will track asymp-
totically any step reference input. Furthermore, even if there are variations of the
parameters of N1(s) and D1(s), the plant output will still track any step reference input
so long as the overall system remains stable. Therefore, the tracking property is said
to be robust. U sing the same argument, we can show that if the loop transfer function
is of type 2 and if the unity-feedback system is stable, then the plant output will
track asymptotically and robustly any ramp reference input (Problem 6.7).
L--------~
yr~~
L.:_j- e 1
-r+®Y
-S
Figure 6.4 (a) Unity-feedback system. (b) Unity-feedback system with a forward gain.
(e) Nonunity-feedback system.
6.3 PERFORMANCE CRITERIA 195
If G 1(s) is of type O, that is, G 1(s) = N1(s)/D 1(s) with N 1(0) "" O and D 1(0) ""O,
then we have
N1(s)
and G (O) = N¡(O) "" 1
D 1(s) + N1(s) o D 1(0) + N 1(0)
and the position error is different from zero. Thus, if the loop transfer function in
Figure 6.4(a) is of type O, then the p1ant output will not track asymptotically any
step reference input. This problem can be resolved, however, by introducing the
forward gain
D 1(0) + N1(0)
k
N1(0)
as show~in Figure 6.4(b). Then the transfer function G0 (s) from r to y has the
property G 0 (0) = 1, and the plant output will track asymptotically any step reference
input r. In practice, there is no need to implement gain k. By a proper calibration or
setting of r, it is possible for the plant output to approach asymptotically any desired
va1ue.
There is one problem with this design, however. If the parameters of G1(s) =
G(s)C(s) change, then we must recalibrate or reset the reference input r. Therefore,
this design is not robust. On the other hand, if G1(s) is of type 1, then the tracking
property of Figure 6.4(a) is robust, and there is no need to reset the reference input.
Therefore, in the design, it is often desirable to have type 1 loop transfer functions.
We mention that the preceding discussion holds only for unity-feedback sys-
tems. If a configuration is not unity feedback, such as the one shown in Figure 6.4(c),
even if the plant is of type 1 and the feedback system is stable, the position error is
not necessarily zero. For example, the transfer function of Figure 6.4(c) is
2 S + 2
1 +-
s·
ep __ ,_2
-2 11 = 0.5 = 50%
The position error is not zero even though the plant is of type l. Therefore, system
types are useful in determining position or velocity errors in the unity-feedback
configuration, but not necessarily useful in other configurations.
y y
Overshoot
Y, _ _ _ _¡___
Ys
0.9 Y,
0.9 Ys
--T-
0.04y,
o tS
(a) (b)
only for step reference inputs. Consider the outputs dueto a unit-step reference input
shown in Figure 6.5, in which Ys denotes the steady state of the output. The transient
response is generally specified in terms of the rise time, settling time, and overshoot.
The rise time can be defined in many ways. We define it as the time required for
the response to rise from Oto 90% of its steady-state value, as shown in Figure 6.5.
In other words,. it is the smallest tr such that
y(tr) = 0.9ys
The time denoted by t5 in Figure 6.5 is called the settling time. lt is the time for the
response to reach and remain inside ± 2% of its steady-state value, or it is the
smallest t5 such that
ly(t) - Y.,l :S 0.02ys for all t :2':: ts
Let Ymax be the maximum value of ly(t)l, for t :2':: O, or
Ymax : = max ly(t)l
Then the overshoot is defined as
y(t)ly,,
1.2
1.0
physicallimitations of the pilot and the maneuverability of the aircraft. In the design
of an elevator, any appreciable overshoot is undesirable. Different applications have
different specifications.
A system is said to be sluggish if its rise time and settling time are large. If a
system is designed for a fast response, or to have a small rise time and a small
settling time, then the system may exhibit a large overshoot, as can be seen from
Figure 4.7. Thus, the requirements on the rise time and overshoot are often confiict-
ing and must be reached by compromise.
The steady-state response of G (s) depends only on a number of coefficients of
0
G (s); thus the steady-state performance can easily be incorporated into the design.
0
The transient response of G0 (s) depends on both its poles and zeros. Except for sorne
special cases, no simple relationship exists between the specifications and pole-zero
locations. Therefore, designing a control system to meet transient specifications is
not as simple as designing one to meet steady-state specifications.
Noise and disturbances often arise in control systems. For example, if a potentiom-
eter is used as a transducer, noise will be generated (because of brush jumps, wire
irregularity, or variations of contact resistance). Motors and generators also generate
noise because of irregularity of contact between carbon brushes and commutators.
Shot noise and thermal noise are always present in electronic circuits. Therefore,
noise, usually high-frequency noise, exists everywhere in control systems.
Most control systems will also encounter extemal disturbances. A cruising air-
craft may encounter air turbulence or air pockets. A huge antenna may encounter
strong or gusting winds. Fluctuations in power supply, mechanical vibrations, and
hydraulic or pneumatic pressure will also disturb control systems.
Variation of load is also common in control systems. For example, consider a
motor driving an audio or video tape. At the beginning and end, the amounts of tape
on the reel are quite different; consequently, the moments of inertia of the load are
not the same. As a result, the transfer function of the plant, as can be seen from
(3.17), is not the same at all times. One way to deal with this problem is to choose
198 CHAPTER 6 DESIGN CRITERIA, CONSTRAINTS, AND FEEDBACK
(a) (b)
the average moment of inertia or the largest moment of inertia (the worst case),
compute the transfer function, and use it in the design. This transfer function is
called the nominal transfer function. The actual transfer function may differ from
the nominal one. This is called plant perturbation. Aging may also change plant
transfer functions. Plant perturbations are indeed inevitable in practice.
One way to deal with plant perturbation is to use the nominal transfer function
in the design. The difference between actual transfer function and nominal transfer
function is then considered as an extemal disturbance. Thus, disturbances may arise
from extemal sources or intemalload variations. To simplify discussion, we assume
that noise and/ or disturbance will enter at the input and output terminals of every
block, as shown in Figure 6.7. These inputs also generate sorne responses at the
plant output. These outputs are undesirable and should be suppressed or, if possible,
eliminated. Therefore, a good control system should be able to track reference inputs
and to reject the effects of noise and disturbances.
In this and the following sections we discuss sorne physical constraints in the design
of control systems. Without these constraints, design would become purely a math-
ematical exercise and would have no relation to reality. The first constraint is that
compensators used in the design must have proper transfer functions. As discussed
in the preceding chapter, every proper transfer function can be realized as a state-
variable equation and then built using operational amplifier circuits. If the transfer
function of a compensator is improper, then its construction requires the use of pure
differentiators. Pure differentiators built by using operational amplifiers may be un-
stable. See Reference [18]. Thus compensators with improper transfer functions
cannot easily be built in practice. For this reason, all compensators used in the design
will be required to have proper transfer functions.
In industry, proportional-integral-derivative (PID) controllers or compensators
are widely used. The transfer functions of proportional and integral controllers are
kP and kJ s; they are proper transfer functions. The transfer function of derivative
controllers is ké, which is improper. However, in practice, derivative controllers
6.5 PROPER COMPENSATORS AND WELL-POSEDNESS 199
are realized as
for sorne constant N. This is a proper transfer function, and therefore does not vio late
the requirement that all compensators have proper transfer functions. In the remain-
der of this chapter, we assume that every component of a control system has a proper
transfer function. If we encounter a tachometer with improper transfer function ks,
we shall remodel it as shown in Figure 3.10(e). Therefore, the assumption remains
valid.
E ven though all components have proper transfer functions, a control system so
built may not have a proper transfer function. This is illustrated by an example.
Example 6.5. 1
Consider the system shown in Figure 6.7(a). The transfer functions of the plant and
the compensator are all proper. Now we compute the transfer function Gy/s) from
r to y. Because the system is linear, in computing Gy/s), all other inputs shown
(n¡, i = 1, 2, and 3) can be assumed zero or disregarded. Clearly we have
+ 1) . -S -
- (s -s
S + 2 S + 1 S +2 -s
Gyr(s) -0.5s
- (s + 1) S S S +2 - S
+ S + 2 .S -+-1 ---
S+ 2
It is improper! Thus the propemess of all component transfer functions does not
guarantee the propemess of an overall transfer function.
output is completely dominated by the noise and the system cannot be used in
practice.
In conclusion, if a control system has an improper closed-loop transfer function,
then high-frequency noise will be greatly amplified and the system cannot be used.
Thus, a workable control system should not contain any improper closed-loop trans-
fer function. This motivates the following definition.
o Definition 6. 1
A system is said to be well-posed or closed-loop proper if the closed-loop
transfer function of every possible input/output pair of the system is proper. •
We have assumed that noise and disturbance may entera control system at the
input and output terminals of each block. Therefore, we shall consider not only the
transfer function from r to y, but also transfer functions from those inputs to all
variables. Let Gpq denote the transfer function from input q to output p. Then the
system in Figure 6.7(a) or (b) is well posed if the transfer functions Ger, Gvr Gur
Gyr' Gen¡' Gun 1 , Gyn¡' Genz' Gunz' Gynz' Gen 3 , Gun 3 , Gyn 3 , are all proper. These transfer
functions are clearly all closed-loop transfer functions and, strictly speaking, the
adjective closed-loop is redundant.lt is kept in Definition 6.1 to stress their difference
from open-loop transfer functions.
The number of possible input/output pairs is quite large even for the simple
systems in Figure 6.7. Therefore, it appears to be difficult to check the well-posed-
ness of systems. Fortunately, this is not the case. In fact, the condition for a feedback
system to be well posed is very simple. A system that is built with blocks with proper
transfer functions is well posed if and only if
li(oo) =F O (6.8)
where li is the characteristic function defined in (3.37). For the feedback systems in
Figure 6. 7, the condition becomes
li(oo) = 1 + C(oo)G(oo) =F O (6.9)
For the system in Figure 6.7(a), we have C(s) = - (s + 1)/(s + 2) and G(s) =
s/(s + 1) which imply C(oo) = -1, G(oo) = 1 and 1 + C(oo)G(oo) = O. Thus the
system is not well posed. For the system in Figure 6.7(b), we have
Thus the system is well posed. As a check we compute the closed-loop transfer
functions from n2 to u, y, e, and u in Figure 6.7(b). In this computation, all other
inputs are assumed zero. The application of Mason's formula yields
1 S+ 2
2s - (s + 1) 2s s+2-2s -s + 2
\ 1 +--. 1 -
S + 1 (s + 2) S +2 S +2
6.5 PROPER COMPENSATORS AND WELL-POSEDNESS 201
2s
(s + 1) 2s(s + 2)
2s - (s + 1) (s + 1)(- s + 2)
1 +
S + S + 2
-2s(s + 2)
(s + 1)(- s + 2)
and
2s - (s + 1) 2s
---.
S + 1 S + 2 S+ 2 2s
Gvn2(s) (6.10)
2s - (s + 1) s+2-2s -s + 2
+ ---.
S + 1 S + 2 S + 2
They are indeed all proper. Because the condition is (6.8) can easily be met, a control
system can easily be designed to be well posed. We remark that if a plant transfer
function G(s) is strictly prope.r and if C(s) is proper, then the condition in (6.9) is
automatically satisfied. Note that the conditions in (6.8) and (6.9) hold only if the
transfer function of every block is proper. If any one of them is improper, then the
conditions cannot be used.
To conclude this section, we discuss the relationship between well-posedness
and propemess of compensators. Propemess of compensators is concemed with
open-loop propemess, whereas well-posedness is concemed with closed-loop prop-
emess. Open-loop propemess does not imply closed-loop propemess, as is demon-
strated in the system in Figure 6.7(a). lt can be verified, by computing all possible
closed-loop transfer functions, that the system in Figure 6.8 is well posed. However,
the system contains one improper compensator. Thus, well-posedness does not imply
propemess of compensators. In conclusion, open-loop propemess and closed-loop
propemess are two independent concepts. They are also introduced for different
reasons. The former is needed to avoid the use of differentiators in realizing com-
pensators; the latter is needed to avoid amplification of high-frequency noise in
overall systems.
-
r +
+
In the design of control systems, the first requirement is always the stability of
transfer functions, G0 (s), from the reference input r to the plant output y. However,
this may not guarantee that systems will work properly. This is illustrated by an
example.
Example 6.6. 1
Consider the system shown in Figure 6.9. The transfer function from r to y is
S - 1 1
--.---
S + 1 S - 1 S + 1 2
G 0 (s) 2 X 2 X (6.11)
S - 1
+--.--
1 1 S + 2
1 +--
S + 1 S - 1 S + 1
lt is stable. Because G0 (0) = 1, the position error is zero. The time constant of the
system is 1/2 = 0.5. Therefore, the plant output will track any step reference input
in about 5 X 0.5 = 2.5 seconds. Thus the system appears to be a good control
system.
A el ose examination of the system in Figure 6.9 reveals that there is a pole-zero
cancellation between C(s) and G(s). Will this cause any problem? As was discussed
earlier, noise or disturbance may entera control system. We compute the transfer
function from n to y in Figure 6.9:
S - 1 S - 1 S + 1
Gy/s) (6.12)
S - 1 (s - 1)(s + 2)
1 + - - . -S -- -1 1 +
S + 1 S + 1
C(s) G(s)
1t is unstable! Thus any nonzero noise, no matter how small, will excite an un-
bounded plant output and the system will bum out. Therefore the system cannot be
used in practice, even though its transfer function from r to y is stable. This moti vates
the following definition.
--
6.6 TOTAL STABILITY 20~
o Definition 6.2
A systern is said to be total/y stable if the closed-loop transfer function of every
possible input-output pair of the systern is stable. •
1
In addition to pole-zero cancellations, missing poles may also arise from parallel connection and
other situations. See Reference [15, pp. 436-437]. In this text, it suffices to consider only pole-zero
cancellations.
204 CHAPTER 6 DESIGN CRITERIA, CONSTRAINTS, AND FEEDBACK
S 0.9
---.--
S + 1.1 S - 1 2(s - 0.9)
G~(s) 2 X ---------------
s - 0.9 1 s2 + 0.1s - 1.1 + s - 0.9
+ ·--
S + 1.1 S - 1 (6.13)
2(s - 0.9) 2(s - 0.9)
s2 + 1.1s - 2 (s + 2.0674)(s - 0.9674)
It is unstable! Thus, its step response y(t) will approach infinity and is entirely
different from the one in Figure 6.9. In conclusion, unstable pole-zero cancellations
are permitted neither in theory nor in practice.
Stable pole-zero cancellations, however, are an entirely different matter. Con-
sider the system shown in Figure 6.11(a). The plant transfer function is G(s) =
3/(s 2 + 0.1s + 100), and the compensator transfer function is C(s) =
(s 2 + 0.1s + 100)/s(s + 2).NotethatC(s)isoftype 1,thereforethepositionerror
of the unity feedback system is zero. The overall transfer function from r to y is
s2 + 0.1s + 100 3
s(s + 2) s2 + 0.1s + 100
G 0 (S)
s 2 + 0.1s + 100 3
1 +
s(s + 2) s2 + 0.1s + 100 (6.14)
3
s(s + 2) 3
3 s2 + 2s + 3
+
s(s + 2)
The number of the poles of G 0 (s) is 2, which is 2 less than the total number of poles
of G(s) and C(s). Thus, G0 (s) has two missing poles; they are the roots of s 2 + 0.1s
+ 100. Because the poles of G 0 (s) and the two missing poles are stable, the system
is totally stable. The unit-step response of G 0 (s) is computed, using MATLAB, and
plotted in Figure 6.11 (b) with the so lid line.
Now we study the effect of imperfect pole-zero cancellations. Suppose the com-
pensator becomes
6.6 TOTAL STABILITY 205
- s-0.9
--
S+ 1.1
f---. -
s-1
1
-r-
y
s 2 + 0.09s + 99
C'(s) = - - - - - -
s(s + 2)
dueto aging or inexact realization. With this C'(s), the transfer function of Figure
6.1l(a) becomes
s2 + 0.09s + 99 3
2
s(s + 2) s + O.ls + 100
G~(s)
s 2 + 0.09s + 99 3
1 + ------- 2
s(s + 2) s + O.ls + 100
3(s 2 + 0.09s + 99)
(6.15)
s(s + 2)(s 2 + O.ls + 100) + 3(s 2 + 0.09s + 99)
3s 2 + 0.27 s + 297
s + 2.1s + 103.2s 2 + 200.27s + 297
4 3
- -
r + s 2 +0.15+100
s(s + 2)
3
s 2 + 0.15 + 100
y
(a)
0.8
0.6
0.4
0.2
4 5 6 7 8
(b)
lts unit-step response is plotted in Figure 6. l1(b) with the dotted line. We see that
the two responses in Figure 6.11 (b) are hardly distinguishable. Therefore, unlike
unstable pole-zero cancellation, imperfect stable pole-zero cancellations may not
cause any serious problem in control systems.
0.06 . - - - - - - - - - - - - - - - - - - - ,
0.05
0.04
0.03
0.02
0.01
o
-0.01
-0.02
-0.03
-0.04 '--~-~~-~-~~.........,-~-~----'
o 2345678910
Figure 6.12 Effect of step disturbance.
6. 7 SATURATION-CONSTRAINT ON ACTUATING SIGNALS 207
A system can be totally stable with stable pole-zero cancellations. However, if can-
celed stable poles are elose to the imaginary axis or have large imaginary parts, then
disturbance or noise may excite a plant output that is oscillatory and slow decaying.
If poles lie inside the region e shown in Figure 6.13, then their cancellations will
not cause any problems. Therefore, perfect or imperfect cancellation of poles lying
inside the region e is permitted in theory and in practice. The exact boundary of the
region e depends on each control system and performance specifications, and will
be discussed in the next chapter.
There are two reasons for using pole-zero cancellation in design. One is to
simplify design, as will be seen in the next chapter. The other reason is due to
necessity. In model matching, we may have to introduce pole-zero cancellations to
insure that the required compensators are proper. This is discussed in Chapter 10.
Example 6.7.1
Consider the system shown in Figure 6.14. The element A is an amplifier with gain
2. The overall transfer function is
208 CHAPTER 6 DESIGN CRITERIA, CONSTRAINTS, AND FEEDBACK
A
s+2
s(s -1)
(a) (b)
2·---
S + 2
s(s - 1) 2(s + 2) 2(s + 2)
(6.17)
S +2 s 2
- s + 2s + 4
+2·---
s(s - 1)
Because G (0) = 4/4 = 1, the system has zero position error and the plant output
0
will track any step reference input without an error. The plant outputs excited by
r = 0.3, 1.1, and 1.15 are shown in Figure 6.15 with the solid lines.
In reality, the amplifier may have the characteristic shown in Figure 6.14(b ).
For ease of simulation, the saturation is approximated by the dashed lines shown.
The responses of the system due to r = 0.3, 1.1, and 1.15 are shown in Figure 6.15
with the dashed lines. These responses are obtained by computer simulations. If
r = 0.3, the amplifier will not saturate and the response is identical to the one
obtained by using the linear model. If r = 1.1, the amplifier saturates and the re-
sponse differs from the one obtained by using the linear model. If r = 1.15, the
y(t)
3-
1.1
1
0.3
\
o 2 4 7
3 5 6
\
Figure 6.15 Effect of saturation.
6.8 OPEN-LOOP AND CLOSED-LOOP CONFIGURATIONS 209
response approaches infinity oscillatorily and the system is not stable, although the
linear model is always stable for any r. This example shows that linear analysis
cannot be used if signals run outside the linear range.
for all t 2:: O, where u(t) is the actuating signal and M is a constant. This constraint
arises naturally if val ves are used to generate actuating signals. The actuating signals
reach their maximum values when the valves are fully open. In a ship steering
system, the constraint exists because the rudder can tum only a finite number of
degrees. In electric motors, because of saturation of the magnetic field, the constraint
1 also exists. In hydraulic motors, the movement of pistons in the pilot cylinder is
limited. Thus, the constraint in (6.18) exists in most plants. Strictly speaking, similar
constraints should also be imposed upon compensators. If we were to include all
these constraints, the design would become very complicated. Besides, compared
with plants, compensators are rather inexpensive, and hence, if saturated, can be
replaced by ones with larger linear ranges. Therefore, the saturation constraint is
generally imposed only on the plant.
Actuating signals depend on reference input signals. If the amplitude of a ref-
erence signal is doubled, so is that of the actuating signal. Therefore, in checking
whether or not the constraint in (6.18) is met, we shall use the largest reference input
signal. However, for convenience, the constraint M in (6.18) will be normalized to
correspond to unit-step reference inputs. Therefore, in design, we often require the
actuating signal due to a unit-step reference input to have a magnitude less than a
certain value.
To keep a plant from saturating is not a simple problem, because in the process
of design, we don't know what the exact response of the resulting system will be.
Hence, the saturation problem can be checked only after the completion of the de-
sign. If saturation does occur, the system may have to be redesigned to improve its
performance.
(a) (b)
In order to compare the two configurations, we must consider noise and dis-
turbances as discussed in Section 6.4. If there were no noise and disturbances in
control systems, then there would be no difference between open-loop and closed-
loop configurations. In fact, the open-loop configuration may sometimes be prefer-
able because it is simpler and less expensive. Unfortunately, noise and disturbances
are unavoidable in control systems.
The major difference between the open-loop and feedback configurations is that
the actuating signal of the former does not depend on the plant output. It is pre-
determined, and will not change even if the actual plant output is quite different
from the desired value. The actuating signal of a feedback system depends on the
reference signal and the plant output. Therefore, if the plant output deviates from
the desired value due to noise, externa! disturbance, or plant perturbations, the de-
viation will reflect on the actuating signal. Thus, a properly designed feedback sys-
tem should perform better than an open-loop system. This will be substantiated by
examples.
Example 6.8. 1
Consider the two amplifiers shown in Figure 6.17. Figure 6.17(a) is an open-loop
amplifier. The amplifier in Figure 6.17 (b) is built by connecting three identical open-
loop amplifiers and then introducing a feedback from the output to the input as
shown, and is called a feedback amplifier. Their block diagrams are also shown in
Figure 6.17. The gain of the open-loop amplifier is A = -lOR/R = -10. From
Figure 6.17(b), we have
e = - ( lOR r
R
+ lOR
R¡
y) = - 10 (r + !!_ y)
R¡
= A (r + !!_ y)
R¡
Thus the constant f3 in the feedback loop equals RjR1. Note that the feedback is
positive feedback. The transfer function or the gain of the feedback amplifier is
A3
Ao = _ {3A3 (6.19)
1
6.8 OPEN-LOOP ANO CLOSED-LOOP CONFIGURATIONS 211
R¡= 10.101 R
T~
~
~~
~------~CIJr--------~
(a) (b)
In order to make a fair comparison, we shall require both amplifiers to have the same
gain-that is, A 0 = A = - 10. To achieve this, f3 can readily be computed as f3 =
0.099, which implies R1 = 10.101R.
The feedback amplifier needs three times more hardware and still has the same
gain as the open-loop amplifier_ Therefore, there seems no reason to use the former_
Indeed, this is the case if there is no perturbation in gain A.
Now suppose gain A decreases by 10% each year due to aging. In other words,
the gain becomes -9 in the second year, -8.1 in the third year, and so forth. We
compute
-9.96
1 - 0.099(- 9?
(- 8.1) 3
A = ----~--~--~ -9.91
o 1 - 0_099(-8.1) 3
and so forth. The results are listed in the following:
1 2 3 4 5 6 7 8 9 10
-Ao 10 9.96 9.92 9.84 9.75 9.63 9.46 9.25 8.96 8.6
We see that although the open-loop gain A decreases by 10% each year, the closed-
loop gain decreases by from 0.4% in the first year to 4.1% in the tenth year_ Thus
the feedback system is much less sensitive to plant perturbation_
212 CHAPTER 6 DESIGN CRITERIA, CONSTRAINTS, ANO FEEDBACK
If an amplifier is to be taken out of service when its gain falls below 9, then the
open-loop amplifier can serve only one year, whereas the closed-loop amplifier can
last almost nine years. Therefore, even though the feedback amplifier uses three
times more hardware, it is actually three times more economical than the open-loop
amplifier. Furthermore, the labor cost of yearly replacement of open-loop amplifiers
can be saved.
Exomple 6.8.2
Its time constant is 5; therefore it will take 25 seconds (5 X time constant) for the
rollers to reach the final speed. This is very slow, and we decide to design an overall
system with transfer function
(6.21)
lts time constant is 1/2 = 0.5; therefore the speed of response of this system is
much faster. Because G0 (0) = 2/2 = 1, the plant output of G0 (s) will track any
step reference input without an error.
Now we shall implement G0 (s) in the open-loop and closed-loop configurations
shown in Figure 6.18(b) and (e). For the open-loop configuration, we require
co
y
10
5s + 1
~ co (b)
~ ~
10
5s + 1
(a)
(e)
5s + 1 s + 2 (6.24)
5s +
= 1 +
5s 5s
It consists of a proportional compensator with gain 1 and an integral compensator
with transfer function 1/5s; therefore it is called a PI compensator or controller. We
see that it is a type 1 transfer function.
If there are no disturbances and plant perturbation, the open-loop and closed-
loop systems should behave identically, because they ha ve the same overall transfer
function. Because G0 (0) = 1, they both track asymptotically any step reference input
without an error. In practice, the load of the rollers is not constant. From Figure
6.18(a), we see that before and after the engagement of an ingot with the rollers,
the load is quite different. Even after the engagement, the load varies because of
nonuniform thickness of ingots. We study the effect of this load variation in the
following.
Plant Perturbation
The transfer function of the motor and rollers is assumed as G(s) = 10/(Ss + 1)
in (6.20). Because the transfer function depends on the load, if the load changes, so
does the transfer function. Now we as sume that, after the design, the transfer function
changes to
9
G(s) (6.25)
(4.5s + 1)
This is called plant perturbation. We now study its effect on the open-loop and
closed-loop systems.
After plant perturbation, the open-loop overall transfer function becomes
5s + 1 9
G (s) = C 1(s)G(s) = · ---- (6.26)
oo 5(s + 2) (4.5s + 1)
214 CHAPTER 6 DESIGN CRITERIA, CONSTRAINTS, AND FEEDBACK
Because G (0) = 9/10 =F- 1, this perturbed system will not track asymptotically
00
any step reference input. Thus the tracking property of the open-loop system is lost
after plant perturbation.
Now '!!_e compute the overall transfer function of the closed-loop system with
perturbed G(s) in (6.25). Clearly, we have
5s
----·
+ 1 9
C2 (s)G(s) 5s 4.5s + 1
Goc(s)
+ C2 (s)G(s) 5s + 1 9
+ 5s 4.5s + (6.27)
9(5s + 1) 45s + 9
5s(4.5s + 1) + 9(5s + 1) 22.5s 2 + 50s + 9
Because G oc(O) 1, this perturbed overall closed-loop system still track any step
reference input without an error. In fact, because the compensator is of type 1, no
matter how large the plant perturbation is, the system will always track asymptoti-
cally any step reference input, so long as the overall system remains stable. This is
called robust tracking. In conclusion, the tracking property is destroyed by plant
perturbation in the open-loop system but is preserved in the closed-1oop system.
Disturbance Rejection
One way to study the effect of load variations is to introduce plant perturbations as
in the preceding paragraphs. Another way is to introduce a disturbance p(t) into the
plant input as shown in Figure 6.18(b) and (e). Now we study the effect of this
disturbance in the open-1oop and closed-loop systems. From Figure 6.18(b), we see
that the transfer function from p to y is not affected by the open-loop compensator
C 1(s). If the disturbance is modeled as a step function of magnitude a, then it will
excite the following plant output
10 a
(6.28)
5s + S
10 a
yp(oo) = lim yp(t) = lim sYp(s) = lim s · - - - · - = lOa
t--->oo s--->0 s--->0 5s + S
In other words, in the open-loop configuration, the step disturbance will excite a
nonzero plant output; therefore, the speed of the rollers will differ from the desired
speed. For example, if the disturbance is as shown in Figure 6.19(a), then the speed
will be as shown in Figure 6.19(b). This differs from the desired speed and will
cause unevenness in thickness of aluminum sheets. Thus, the open-loop system is
not satisfactory.
..
6.8 OPEN-LOOP AND CLOSED-LOOP CONFIGURATIONS 215
p(t)
1 75 100
o 25 50 o 25 50 75 100 o 25 50 75 100
Now we study the closed-loop system. The transfer function from p to y is,
using Mason's formula,
10/(5s + 1) lOs
(6.29)
10 5s + (5s + 1)(s + 2)
1 + ---
5s + 1 5s
Now ifthe disturbance is P(s) = a/ s, the steady-stateoutputyP dueto the disturbance
is
. a
lim yp(t) = lim sYp(s) hm sGyp(s) · -
t~oo s~o s-->0 S
lOsa
lim - - - - - - = O
s-->0 (5s + 1)(s + 2)
This means that the effect of the disturbance on the plant output eventually vanishes.
Thus, the speed of the rollers is completely controlled by the reference input, and
thus, in the feedback configuration, even if there are disturbances, the speed will
retum, after the transient dies out, to the desired speed, as shown in Figure 6.19(c).
Consequently, evenness in the thickness of aluminum sheets can be better
maintained.
We remark that in the closed-loop system in Figure 6.18(c), there is a pole-zero
cancellation. The canceled pole is - 1/5, which is stable but quite close to the
jw-axis. Although this pole does not appear in G0 (s) in (6.21), it appears in Gyd(s)
in (6.29). Because of this pole (its time constant is 5 seconds), it will take roughly
25 seconds (5 X time constant) for the effect of disturbances to vanish, as is shown
in Figure 6.19(c). It is possible to use different feedback configurations to avoid this
pole-zero cancellation. This is discussed in Chapter 10. See also Problem 6.14.
From the preceding two examples, we conclude that the closed-loop or feedback
configuration is less sensitive to plant perturbation and disturbances than the open-
loop configuration. Therefore, in the remainder of this text, we use only closed-loop
configurations in design.
216 CHAPTER 6 DESIGN CRITERIA, CONSTRAINTS, ANO FEEDBACK
With the preceding discussion, the design of control systems can now be stated as
follows: Given a plant, design an overall system to meet a given set of specifications.
We use only feedback configurations because they are less sensitive to disturbances
and plant perturbation than open-loop configurations are. Because improper com-
pensators cannot easily be built in practice, we use only compensators with proper
transfer functions. The resulting system is required to be well posed so that high-
frequency noise will not be unduly amplified. The design cannot have unstable pole-
zero cancellation, otherwise the resulting system cannot be totally stable. Because
of the limitation of linear models and devices used, a constraint must generally be
imposed on the magnitude of actuating signals. The following two approaches are
available to carry out this design:
l. We first choose a feedback configuration anda compensator with open param-
eters. We then adjust the parameters so that the resulting feedback system will
hopefully meet the specifications.
2. We first search for an overall transfer function G0 (s) to meet the specifications.
We then choose a feedback configuration and compute the required
compensator.
These two approaches are quite different in philosophy. The first approach starts
from internal compensators and works toward external overall transfer functions.
Thus, it is called the outward approach. This approach is basically a trial-and-error
method. The root-locus and frequency-domain methods discussed in Chapters 7 and
8 take this approach. The second approach starts from external overall transfer func-
tions and then computes internal compensators, and is called the inward approach.
This approach is studied in Chapters 9 and 10. These two approaches are independent
and can be studied in either order. In other words, we may study Chapters 7 and 8,
and then 9 and 10, or study first Chapters 9 and 10, and then Chapters 7 and 8.
To conclude this chapter, we mention a very important fact of feedback. Consider
a plant with transfer function G(s) = N(s)/D(s) and consider the feedback
configuration shown in Figure 6.20. Suppose the transfer function of the compensator
is C(s) = B(s)/A(s). Then the overall transfer function is given by

G_o(s) = \frac{C(s)G(s)}{1 + C(s)G(s)} = \frac{\dfrac{B(s)}{A(s)}\cdot\dfrac{N(s)}{D(s)}}{1 + \dfrac{B(s)}{A(s)}\cdot\dfrac{N(s)}{D(s)}} = \frac{B(s)N(s)}{A(s)D(s) + B(s)N(s)}
Figure 6.20 Feedback system.
The zeros of G(s) and C(s) are the roots of N(s) and B(s); they remain the zeros
of G_o(s). In other words, feedback does not affect the zeros of G(s) and C(s). The
poles of G(s) and C(s) are the roots of D(s) and A(s); after feedback, the poles of
G_o(s) become the roots of A(s)D(s) + B(s)N(s). The total numbers of poles before
feedback and after are the same, but their positions have been shifted from those of D(s)
and A(s) to those of A(s)D(s) + B(s)N(s). Therefore, feedback affects the poles but not the
zeros of the plant transfer function. The given plant can be stable or unstable, but
we can always introduce feedback and compensators to shift the poles of G(s) to
desired positions. Therefore, feedback can make a good overall system out of a bad
plant. In the outward approach, we choose a C(s) and hope that G0 (s) will be a good
overall transfer function. In the inward approach, we choose a good G 0 (s) and then
compute C(s).
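As a quick numerical illustration of this fact, the following base-MATLAB sketch forms A(s)D(s) + B(s)N(s) for an arbitrarily chosen unstable plant and gain compensator (both illustrative choices, not taken from the text) and compares poles and zeros before and after feedback.

    % Illustrative unstable plant G(s) = (s+4)/((s+2)(s-1)); compensator C(s) = 6
    N = [1 4];   D = conv([1 2], [1 -1]);   % N(s), D(s)
    B = 6;       A = 1;                      % C(s) = B(s)/A(s)

    num_cl = conv(B, N);                     % closed-loop numerator B(s)N(s)
    den_cl = conv(A, D) + [0 conv(B, N)];    % A(s)D(s) + B(s)N(s), padded to equal length

    zeros_after = roots(num_cl)              % still -4: feedback does not move the zero
    poles_before = roots(D)                  % -2 and 1 (unstable)
    poles_after  = roots(den_cl)             % shifted into the open left half plane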
PROBLEMS
6.1. Find the ranges of β_i so that the following transfer functions have position
errors smaller than 10%.
a. (β_1 s + β_0)/(s^2 + 2s + 2)
b. (β_2 s^2 + β_1 s + β_0)/(s^3 + 3s^2 + 2s + 3)
c. (β_2 s^2 + β_1 s + β_0)/(s^3 + 2s^2 + 9s + 68)
6.2. Find the ranges of β_i so that the transfer functions in Problem 6.1 have velocity
errors smaller than 10%.
6.3. Consider the three systems shown in Figure P6.3. Find the ranges of k so that
the systems are stable and have position errors smaller than 10%.
6.4. Repeat Problem 6.3 so that the systems have velocity errors smaller than 10%.
6.5. a. Find the range of k0 such that the system in Figure P6.5 is stable. Find the
value of k0 such that the system has a zero position error or, equivalently,
such that y will track asymptotically a step reference input.
b. If the plant transfer function in Figure P6.5 becomes 5.1/(s - 0.9) due to
aging, will the output still track asymptotically any step reference input? If
not, such a tracking is said to be not robust.
6.6. Consider the unity feedback system shown in Figure 6.4(a). We showed there
that if the loop transfer function G1(s) = C(s)G(s) is of type 1 or, equivalently,
can be expressed as
G_1(s) = \frac{N_1(s)}{sD_1(s)}

where N_1(0) ≠ 0 and D_1(0) ≠ 0, and if the feedback system is stable, then the
Figure P6.3
Figure P6.5 (unity-feedback system with plant 5/(s - 1))
plant output will track asymptotically any step reference input. Now show that
the tracking is robust in the sense that, even if there are perturbations in N(s)
and D(s), the position error is still zero as long as the system remains stable.
6.7. a. Consider the unity feedback system shown in Figure 6.4(a). Show that if
G 1(s) is of type 2 or, equivalently, can be expressed as
G_1(s) = \frac{N_1(s)}{s^2 D_1(s)}

with N_1(0) ≠ 0 and D_1(0) ≠ 0, and if the unity feedback system is stable,
then its velocity error is zero. In other words, the plant output will track
asymptotically any ramp reference input.
b. Show that the tracking of a ramp reference input is robust even if there are
perturbations in N_1(s) and D_1(s) as long as the system remains stable. Note
that G_1(s) contains 1/s^2, which is the Laplace transform of the ramp refer-
ence input. This is a special case of the internal model principle, which
states that if G_1(s) contains R(s), then y(t) will track r(t) asymptotically and
the tracking is robust. See Reference [15].
6.8. Consider the system shown in Figure P6.8. Show that the system is stable. The
plant transfer function G(s) is of type 1. Is the position error of the feedback
system zero? In the unity feedback system, the position and velocity errors can
be determined by system type. Is this true in nonunity feedback or other con-
figurations?
G(s)
Figure P6.8
6.9. Show that if a system is designed to track t^2, then the system will track any
reference input of the form r_0 + r_1 t + r_2 t^2.
6.10. The movement of a recorder's pen can be controlled as shown in Figure
P6.10(a). Its block diagram is shown in Figure P6.10(b). Find the range of k
such that the position error is smaller than 1%.
Figure P6.10 (a) Recorder pen driven by an ac amplifier and ac motor (pulley radius r = 0.02 m). (b) Block diagram.
6.11. Consider the systems shown in Figure P6.11. Which of the systems are not
well posed? If not, find the input-output pair that has an improper closed-loop
transfer function.
Figure P6.11 Four feedback systems, (a)-(d).
6.12. Discuss the total stability of the systems shown in Figure P6.11.
6.13. Consider the speed control of rollers discussed in Figure 6.18. We now model
the plant transfer function as 10/(τs + 1), with τ ranging between 4 and 6.
Use the compensators computed in Figure 6.18 to compute the steady-state
outputs of the open-loop and feedback systems due to a unit-step reference
input for the following three cases: (a) τ equals the nominal value 5, (b) τ =
4, and (c) τ = 6. Which system, open-loop or feedback, is less sensitive
to parameter variations?
6.14. a. Consider the plant transfer function shown in Figure 6.18. Find a k in Figure
P6.14, if it exists, such that the overall transfer function in Figure P6.14
equals 2/(s + 2).
b. If the plant has a disturbance as shown in Figure 6.18, find the steady-state
output of the overall system in (a) due to a unit-step disturbance input.
c. Which feedback system, Figure 6.18(c) or Figure P6.14, is less sensitive to
plant perturbations? The loop transfer function in Figure 6.18(c) is of type
1. Is the loop transfer function in Figure P6.14 of type 1?
Figure P6.14
6.15. Consider the system shown in Figure P6.15. The noise generated by the am-
plifier is represented by n. If r = sin t and n = 0.1 sin 10t, what are the steady-
state outputs due to r and n? What is the ratio of the amplitudes of the outputs
excited by r and n?
Figure P6.15
6.16. Consider the systems shown in Figure P6.16. (a) If the plant, denoted by P,
has the following nominal transfer function

P(s) = \frac{3}{s(s^2 + 2s + 3)}

show that the two systems have the same steady-state output due to r(t) = sin
0.1t. (b) If, due to aging, the plant transfer function becomes

\frac{3s(s^2 + 2s + 3)}{s^3 + 3.5s^2 + 5s + 3} \qquad \frac{3(s^2 + 2s + 3)}{s^2 + 3.5s + 5}
Figure P6.16 (a) Open-loop system. (b) Closed-loop system.
6.17. The comparison in Problem 6.16 between the open-loop and closed-loop sys-
tems does not consider the noise due to the transducer (which is used to intro-
duce feedback). Now the noise is modeled as shown in Figure P6.17.
a. Compute the steady-state y_c due to n(t) = 0.1 sin 10t.
b. What is the steady-state y_c due to r(t) = sin 0.1t and n(t) = 0.1 sin 10t?
c. Compare the steady-state error in the open-loop system in Figure P6.16(a)
with the one in the closed-loop system in Figure P6.17. Is the reduction in
the steady-state error due to the feedback large enough to offset the increase
of the steady-state error due to the noise of the transducer?
Figure P6.17
6.18. a. Consider the feedback system shown in Figure P6.18. The nominal values
of all k_i are assumed to be 1. What is its position error?
b. Compute the position error if k_1 = 2 and k_2 = k_3 = 1. Compute the position
error if k_2 = 2 and k_1 = k_3 = 1. Compute the position error if k_3 = 2 and
k_1 = k_2 = 1.
Figure P6.18
The Root-Locus Method
7.1 INTRODUCTION
As was discussed in the preceding chapter, inward and outward approaches are
available for designing control systems. There are two methods in the outward ap-
proach: the root-locus method and the frequency-domain method. In this chapter,
we study the root-locus method.
In the root-locus method, we first choose a configuration, usually the unity-
feedback configuration, and a gain, that is, a compensator of degree 0. We then search the
gain and hope that a good control system can be obtained. If not, we then choose a
different configuration and/or a more complex compensator and repeat the design.
Because the method can handle only one parameter at a time, the form of compen-
sators must be restricted. This is basically a trial-and-error method. We first use an
example to illustrate the basic idea.
Consider a plant with transfer function

G(s) = \frac{1}{s(s + 2)}     (7.1)

It could be the transfer function of a motor driving a load. The problem is to design
an overall system to meet the following specifications:
1. Position error = 0
2. Overshoot ≤ 5%
3. Settling time ≤ 9 seconds
4. Rise time as small as possible.
Before carrying out the design, we must first choose a configuration and a
compensator with one open parameter. The simplest possible feedback configuration
and compensator are shown in Figure 7.1. They can be implemented using a pair of
potentiometers and an amplifier. The overall transfer function is

G_o(s) = \frac{k\cdot\dfrac{1}{s(s + 2)}}{1 + k\cdot\dfrac{1}{s(s + 2)}} = \frac{k}{s^2 + 2s + k}     (7.2)
The first requirement in the design is the stability of G_o(s). Clearly, G_o(s) is stable
if and only if k > 0. Because G_o(0) = k/k = 1, the system has zero position error
for every k > 0. Thus the design reduces to the search for a positive k to meet
requirements (2) through (4). Arbitrarily, we choose k = 0.36. Then G_o(s) becomes

G_o(s) = \frac{0.36}{s^2 + 2s + 0.36} = \frac{0.36}{(s + 0.2)(s + 1.8)}     (7.3)
One way to find out whether or not G0 (s) will meet (2) and (3) is to compute
analytically the unit-step response of (7.3). A simpler method is to carry out computer
simulation. If the system does not meet (2) or (3), then k = 0.36 is not acceptable.
If the system meets (2) and (3), then k = 0.36 is a possible candidate. We then
choose a different k and repeat the process. Finally, we choose from those k meeting
(2) and (3) the one that has the smallest rise time. This completes the design.
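For instance, the check for k = 0.36 can be carried out with a few MATLAB commands (a sketch assuming the Control System Toolbox functions tf and step are available); the overshoot and 2% settling time are read off the simulated unit-step response.

    k = 0.36;
    Go = tf(k, [1 2 k]);                    % overall transfer function (7.2)
    [y, t] = step(Go, 0:0.01:40);           % unit-step response

    overshoot = (max(y) - y(end))/y(end)*100                     % percent overshoot
    ts = t(find(abs(y - y(end)) > 0.02*y(end), 1, 'last'))       % 2% settling time, seconds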
From the preceding discussion, we see that the design procedure is very tedious
and must rely heavily on computer simulation. The major difficulty arises from the
fact that the specifications are given in the time domain, whereas the design is carried
out using transfer functions, or in the s-plane. Therefore, if we can translate the time-
domain specifications into the s-domain, the design can be considerably simplified.
This is possible for a special class of transfer functions and will be discussed in the
next subsection.
7.2 QUADRATIC TRANSFER FUNCTIONS WITH A CONSTANT NUMERATOR

Consider the quadratic transfer function with a constant numerator

G_o(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}     (7.4)

where ζ is the damping ratio and ω_n the natural frequency. For 0 < ζ < 1, its unit-step response is

y(t) = 1 - \frac{\omega_n}{\omega_d} e^{-at}\sin(\omega_d t + \theta)     (7.5)

where ω_d = ω_n(1 - ζ^2)^{1/2}, a = ζω_n, and θ = cos^{-1} ζ. The steady-state response
of y(t) is y_s = 1, and the maximum value, as computed in (4.13), is

y_{max} = \max_t |y(t)| = 1 + e^{-\pi\zeta(1 - \zeta^2)^{-1/2}}

Thus the overshoot, as defined in Section 6.3.3, is

Overshoot = \frac{y_{max} - 1}{1} = e^{-\pi\zeta(1 - \zeta^2)^{-1/2}}
Figure 7.2 (a) Pole location in the s-plane (ζ = cos θ). (b) Percent overshoot versus damping ratio ζ.
We see that the overshoot depends only on the damping ratio ζ. The relationship is
plotted in Figure 7.2(b). From the plot, the range of ζ for a given overshoot can be
obtained. For example, if the overshoot is required to be less than 20%, then the
damping ratio must be larger than 0.45, as can be read from Figure 7.2(b). Now we
translate this relationship into a pole region. Because

ζ = cos θ

where θ is the angle shown in Figure 7.2(a) between the negative real axis and the
vector from the origin to the pole, a lower bound on ζ becomes an upper bound on θ.
For example, an overshoot of less than 5% requires ζ ≥ 0.7 ≈ cos 45°. In other words,
for the system in (7.4), if the overshoot is required to be less than
5%, then the poles of G_o(s) must lie inside the sector bounded by 45°, as is shown
in Figure 7.3. This translates the specification on overshoots into a desired pole
region.
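A quick numerical check of this translation (base MATLAB only): solve the overshoot formula for ζ at a 5% overshoot and convert the result to the sector angle.

    overshoot = 0.05;
    % Overshoot = exp(-pi*zeta/sqrt(1-zeta^2))  =>  solve for zeta
    zeta  = -log(overshoot)/sqrt(pi^2 + log(overshoot)^2)   % about 0.69
    theta = acos(zeta)*180/pi                                % about 46 degrees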
Next we consider the settling time. As defined in Section 6.3.3, the settling time
is the time needed for the step response to reach and stay within 2% of its steady-
state value. The difference between y(t) in (7.5) and its steady state y_s = 1 is

|y(t) - 1| = \frac{\omega_n}{\omega_d} e^{-\sigma t}|\sin(\omega_d t + \theta)| \le \frac{1}{\sqrt{1 - \zeta^2}}\, e^{-\sigma t}     (7.6)

for ζ < 1. Note that σ = ζω_n is the magnitude of the real part of the poles (see
Figure 7.2); it is the distance of the complex-conjugate poles from the imaginary
Clearly, for given ζ and ω_n, the settling time can be computed from (7.7). It is,
however, desirable to develop a formula that is easier to employ. If ζ < 0.8, then
(7.7) becomes

|y(t) - 1| \le \frac{1}{\sqrt{1 - 0.8^2}}\, e^{-\sigma t} \approx 1.7\, e^{-\sigma t}

This is smaller than 0.02 if t ≥ 4.5/σ. Hence, given a settling time t_s, if σ ≥ 4.5/t_s
or, equivalently,

-(Real parts of the poles) \ge \frac{4.5}{t_s}     (7.8)
then the specification on settling time can be met. Although (7.8) is developed for
complex poles with damping ratio smaller than 0.8, it also holds for real poles with
ζ > 1.05. Thus, in general, if both poles of G_o(s) in (7.4) lie on the left-hand side
of the vertical line shown in Figure 7.4, then the specification on settling time can
be met. The condition in (7.8) is consistent with the statement that the step response
reaches and remains within 1% of its steady-state value in five time constants. The
settling time is defined for 2% and equals 4.5 × time constant.
Now we can combine the specifications on overshoot and settling time. The
poles of (7.4) must lie inside the sector shown in Figure 7.3 to meet the overshoot
specification and must lie on the left-hand side of the vertical line in Figure 7.4 to
meet the settling time specification. Therefore, to meet both specifications, the poles
must lie inside the region denoted by C in Figure 7.4. The exact boundary of C can
be obtained from the specifications on overshoot and settling time.
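The desired pole region amounts to two simple inequalities on each pole p: |Im p| ≤ |Re p| (for the 45° sector used with a 5% overshoot) and Re p ≤ -4.5/t_s. A small base-MATLAB check of this kind is sketched below; the example poles and the settling-time value are arbitrary.

    poles = roots([1 2 2]);        % example poles: -1 +/- j1
    ts    = 4.5;                   % required settling time, seconds

    in_sector   = all(abs(imag(poles)) <= -real(poles));   % inside the 45-degree sector
    fast_enough = all(real(poles) <= -4.5/ts);              % to the left of -4.5/ts
    meets_specs = in_sector & fast_enough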
Figure 7.4 Desired pole region bounded by the overshoot sector and the settling-time vertical line.
Exercise 7.2.1
By definition, the rise time is the time required for the step response of G_o(s) to
rise from 0 to 90 percent of its steady-state value. The translation of the rise time
into a pole region cannot be done quantitatively, as in the case of overshoot and
settling time. All we can say is that, generally, the farther away the closest pole^1
from the origin of the s-plane, the smaller the rise time. Strictly speaking, this state-
ment is not correct, as can be seen from Figure 4.7. The rise times of the responses
in Figure 4.7 are all different, even though the distances of the corresponding com-
plex poles from the origin all equal ω_n. On the other hand, because the time scale
of Figure 4.7 is ω_n t, as the distance ω_n increases, the rise time decreases. Thus, the
assertion holds for a fixed ζ. Because there is no better guideline, the assertion
that the farther away the closest pole from the origin, the smaller the rise time will
be used in the design.
We recapitulate the preceding discussion as follows:
These simple rules, although not necessarily exact, are very convenient to use in
design.
1
The system has two poles. If they are complex, the distances of the two complex-conjugate poles from
the origin are the same. If they are real and distinct, then one pole is closer to the origin than the other.
We consider only the distance to the closer pole. If a system has three or more poles, then we consider
the pole closest to the origin.
Figure 7.5 Poles of G_o(s) in (7.2) for k = 0, 0.36, 0.75, 1, 2, and 5, together with the desired pole region.
The specification on overshoot requires all poles to lie inside the sector bounded by
45°; the specification on settling time requires them to lie on the left-
hand side of the vertical line passing through -4.5/t_s = -0.5. Hence, if all poles
of G_o(s) lie inside the shaded region in Figure 7.5, the overall system will meet the
specifications on overshoot and settling time.
The poles of G_o(s) in (7.2) for k = 0.36, 0.75, 1, 2, and 5 are computed in the
following:

k = 0.36:   -0.2, -1.8
k = 0.75:   -0.5, -1.5      (meets both (2) and (3))
k = 1:      -1, -1          (meets both (2) and (3))
k = 2:      -1 ± j1         (meets both (2) and (3))
k = 5:      -1 ± j2
They are plotted in Figure 7.5. Note that there are two poles for each k. For k =
0.36, although one pole lies inside the region, the other is on the right-hand side of
the vertical line. Hence if we choose k = 0.36, the system will meet the specification
on overshoot but not that on settling time. If we choose k = 5, then the system will
meet the specification on settling time but not that on overshoot. However, for k =
0.75, 1, and 2, all the poles are within the allowable region, and the system meets
the specifications on overshoot and settling time. Now we discuss how to choose a
k from 0.75, 1, and 2, so that the rise time will be the smallest. The poles corre-
sponding to k = 0.75 are -0.5 and -1.5; therefore, the distance of the closer pole
from the origin is 0.5. The poles corresponding to k = 1 are -1 and -1. Their
distance from the origin is 1 and is larger than 0.5. Therefore, the system with
k = 1 has a smaller rise time than the one with k = 0.75. The poles corresponding
to k = 2 are -1 ± j1. Their distance from the origin is √(1 + 1) = 1.4, which is
the largest among k = 0.75, 1, and 2. Therefore the system with k = 2 has the
smallest rise time or, equivalently, responds fastest. The unit-step responses of the
system are shown in Figure 7.6. They bear out the preceding discussion.
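The pole locations and distances used in this comparison are easy to reproduce (base MATLAB only):

    for k = [0.36 0.75 1 2 5]
        p = roots([1 2 k]);                 % poles of Go(s) = k/(s^2 + 2s + k)
        fprintf('k = %4.2f   closest pole distance = %.2f\n', k, min(abs(p)));
        disp(p.');                          % display the two poles
    end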
For this example we are able to find a gain k to meet all the specifications. If
some of the specifications are more stringent, then no k may exist. For example, if
the settling time is required to be less than 2 seconds, then all poles of G_o(s) must
lie on the left-hand side of the vertical line passing through -4.5/2 = -2.25. From
Figure 7.5, we see that no poles meet the requirement. Therefore, no k in Figure 7.1
can yield a system with settling time less than 2 seconds. In this case, we must
choose a different configuration and/or a more complicated compensator and repeat
the design.
Exercise 7.2.2
Consider a plant with transfer function 2/(s(s + 4)). (a) Find the range of k in Figure
7.1 such that the resulting system has overshoot less than 5%. (b) Find the range of
k such that the system has settling time smaller than 4.5 seconds. (c) Find the range
of k to meet both (a) and (b). (d) Find a value of k from (c) such that the system has
the smallest rise time.

[Answers: (a) 0 < k < 4. (b) 1.5 < k < ∞. (c) 1.5 < k < 4. (d) k = 4.]
Figure 7.6 Unit-step responses of (7.2) for k = 0.75, 1, and 2.
The example in the preceding sections illustrates the essential idea of the design
method to be introduced in this chapter. The method consists of two major
components:
l. The translation of the transient performance into a desired pole region. We then
try to place the poles of the overall system inside the region by choosing a
parameter.
2. In order to facilitate the choice of the parameter, the poles of the overall system
as a function of the parameter will be plotted graphically. The method of plotting
is called the root-locus method.
In this section we discuss further the desired pole region. The root-locus method is
discussed in the next section.
The desired pole region in Figure 7.4 is developed from a quadratic transfer
function with a constant numerator. We shall check whether it is applicable to other
types of transfer functions. Consider

G_o(s) = \frac{1 + s/a}{s^2 + 1.2s + 1}     (7.10)
Figure 7.8(a) shows the unit-step responses of G0 (s) for a = 4, 1, 0.6, and 0.2, and
Figure 7.8(b) shows the unit-step responses of G0 (s) for a = -4, -1, -0.6, and
- 0.2. The responses for a = 4 and -4 are quite similar to the one for a = oo. In
other words, if the zero is far away (either in the right half plane or in the left half
plane) from the complex conjugate poles, the concept of dominant poles is still
applicable. As the left-half-plane zero moves closer to the origin, the overshoot and
settling time become larger. However, the rise time becomes smaller. If the zero is
in the right half plane, or a < 0, the unit-step response will become negative and
then positive. This is called undershoot.2 For a = -4, the undershoot is hardly
detectable. However, as the right-half-plane zero moves closer to the origin, the
undershoot becomes larger. The overshoot, settling time, and rise time also become
larger. Thus, the quantitative specifications developed in Figure 7.4 are no longer
applicable. This is not surprising, because the response of a system depends on its
poles and zeros, whereas zeros are not considered in the development of the desired
pole region.
Even for the simple systems in (7.9) and (7.10), the relationships between the
specifications for the transient performance and the pole region are no longer as
precise as for quadratic transfer functions with a constant numerator. However, be-
cause there is no other simple design guideline, the desired pole region developed
in Figure 7.4 will be used for all overall transfer functions. Therefore, if an overall
transfer function is not quadratic as in (7.4) and cannot be approximated by (7.4),
then there is no guarantee that the resulting system will meet the transient specifi-
cations by placing all poles inside the desired pole region. It is therefore important
to simulate the resulting system on a computer, to check whether or not it really
meets the specifications. If it does not, the system must be redesigned.
2
It was shown by Norimatsu and Ito [49] that if G0 (s) has an odd number of open right-half-plane real
zeros, then undershoots always occur in step responses of G0 (s).
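The effect of the zero location in (7.10) is easy to reproduce (a sketch assuming the Control System Toolbox; the values of a are those discussed above plus two right-half-plane cases):

    figure; hold on
    for a = [4 1 0.6 -0.6 -4]                 % zero at s = -a; negative a puts it in the RHP
        Go = tf([1/a 1], [1 1.2 1]);          % (1 + s/a)/(s^2 + 1.2s + 1)
        step(Go, 10)
    end
    legend('a = 4','a = 1','a = 0.6','a = -0.6','a = -4')   % RHP zeros show undershoot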
Figure 7.8 Unit-step responses of (7.10): (a) for a = 4, 1, 0.6, and 0.2; (b) for a = -4, -1, -0.6, and -0.2.

7.4 PLOT OF ROOT LOCI
From the example in Section 7 .2, we see that the design requires the computation
of the poles of G0 (s) as a function of k. In this section, we shall discuss this problem.
Consider the unity-feedback system shown in Figure 7.9, where G(s) is a proper
rational function and k is a real constant. Let G(s) = N(s)/D(s). Then the overall
Figure 7.9 Unity-feedback system.
transfer function is

G_o(s) = \frac{kG(s)}{1 + kG(s)} = \frac{k\dfrac{N(s)}{D(s)}}{1 + k\dfrac{N(s)}{D(s)}} = \frac{kN(s)}{D(s) + kN(s)}

The roots of D(s) + kN(s) or, equivalently, the poles of G_o(s), as a function of a real k are called
the root loci. Many software programs are available for computing root loci. For
example, for G(s) = 1/(s(s + 2)) = 1/(s^2 + 2s + 0), the following commands in
version 3.1 of MATLAB

num = [1]; den = [1 2 0];
k = 0:0.5:10;
r = rlocus(num,den,k);
plot(r,'x')

will plot 21 sets of the poles of kG(s)/(1 + kG(s)) for k = 0, 0.5, 1, 1.5, ..., 9.5,
and 10. If we use version 3.5 or the Student Edition of MATLAB, the command
rlocus(num,den)
will plot the complete root loci on the screen. Therefore, to use an existing computer
program to compute root loci is very simple. Even so, it is useful to understand the
general properties of root loci. From the properties, we can often obtain a rough plot
of root loci even without any computation or measurement. This can then be used
to check the correctness of computer printout.
To simplify discussion, we assume

G(s) = \frac{q(s + z_1)(s + z_2)}{(s + p_1)(s + p_2)(s + p_3)}     (7.14)
where -z_i and -p_i denote, respectively, zeros and poles, and q is a real constant,
positive or negative. Because G(s) is assumed to have real coefficients, complex-
conjugate poles and zeros must appear in pairs. Now we shall write 1 + kG(s) = 0 as

G(s) = \frac{q(s + z_1)(s + z_2)}{(s + p_1)(s + p_2)(s + p_3)} = -\frac{1}{k}     (7.15)
Then the roots of D(s) + kN(s) are those s, real or complex, which satisfy (7.15)
for some real k. Note that for each s, say s_1, each factor on the left-hand side of
(7.15) is a vector emitting from a pole or zero to s_1, as shown in Figure 7.10. The
magnitude |·| is the length of the vector. The phase ∠ is the angle measured from
the direction of the positive real axis; it is positive if measured counterclockwise, nega-
tive if measured clockwise. The substitution of

s_1 + z_i = |s_1 + z_i| e^{jθ_i}   and   s_1 + p_i = |s_1 + p_i| e^{jφ_i}

into (7.15) yields

\frac{q\,|s_1 + z_1|\,|s_1 + z_2|\, e^{j(θ_1 + θ_2)}}{|s_1 + p_1|\,|s_1 + p_2|\,|s_1 + p_3|\, e^{j(φ_1 + φ_2 + φ_3)}} = -\frac{1}{k}     (7.16)
This equation actually consists of two parts: the magnitude condition

\frac{|q|\,|s_1 + z_1|\,|s_1 + z_2|}{|s_1 + p_1|\,|s_1 + p_2|\,|s_1 + p_3|} = \frac{1}{|k|}     (7.17)

and the phase condition

∠q + θ_1 + θ_2 - (φ_1 + φ_2 + φ_3) = ∠(-1/k)     (7.18)

Figure 7.10 Vectors in the s-plane.
Note that ∠q equals 0 if q > 0, and π or -π if q < 0; θ_i and φ_i can be positive (if
measured counterclockwise) or negative (if measured clockwise). In the remainder
of this and the next sections, we discuss only the phase condition. The magnitude
condition will not arise until Section 7.4.3.
Because k is real, we have

∠(-1/k) = { ±π, ±3π, ±5π, ...     if k > 0
          { 0, ±2π, ±4π, ...      if k < 0

Two angles will be considered the same if they differ by ±2π radians or ±360° or
their multiples. Using this convention, the phase condition in (7.18) becomes

Total phase := ∠q + θ_1 + θ_2 - (φ_1 + φ_2 + φ_3) = { π     if k > 0
                                                     { 0     if k < 0     (7.19)
We see that the constant k does not appear explicitly in (7.19). Thus the search
for the root loci becomes the search for all s_1 at which the total phase of G(s_1) equals
0 or π. If s_1 satisfies (7.19), then there exists a real k_1 such that D(s_1) + k_1 N(s_1) =
0. This k_1 can be computed from (7.17).
We recapitulate the preceding discussion in the following. The poles of G_o(s)
or, equivalently, the roots of D(s) + kN(s) for some real k are those s_1 such that the
total phase of G(s_1) equals 0 or π. The way to search for those s_1 is as follows. First
we choose an arbitrary s_1 and draw vectors from the poles and zeros of G(s) to s_1
as shown in Figure 7.10. We then use a protractor to measure the phase of each
vector. If the total phase is 0 or π, then s_1 is a point on the root loci. If the total
phase is neither 0 nor π, then s_1 is not on the root loci. We then try a different point
and repeat the process. This is a trial-and-error method and appears to be hopelessly
complicated. However, using the properties to be discussed in the next subsection,
we can often obtain a rough sketch of root loci without any measurement.
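The phase test itself is easy to automate. The base-MATLAB sketch below evaluates the total phase of a transfer function with poles at 1 and -2 and a zero at -4 (the G_1(s) of (7.23)) at a trial point and reports whether that point lies on the root loci for k > 0; the trial point and the 1-degree tolerance are arbitrary choices.

    G  = @(s) (s + 4) ./ ((s - 1).*(s + 2));     % G1(s)
    s1 = -4 + 3.2i;                               % trial point

    total_phase = angle(G(s1))*180/pi             % total phase in degrees
    on_loci = abs(mod(total_phase, 360) - 180) < 1    % true if the phase is about 180 degrees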
complex s-plane. We first plot all poles and zeros on the s-plane and then measure
the angle from every pole and zero to a chosen s. Note that the constant q does not
appear on the plot, but it still contributes a phase to (7.19). To simplify discussion,
we assume in this section that q > 0. Then the phase of q is zero and the total phase
of G(s) will be contributed by the poles and zeros only. We discuss now the general
properties of the roots of the polynomial

D(s) + kN(s)     (7.21)

or, equivalently, of the zeros of

1 + k\frac{N(s)}{D(s)} = 1 + kG(s)
PROPERTY 1
The root loci consist of n continuous trajectories as k varies continuously from 0
to ∞. The trajectories are symmetric with respect to the real axis. •
The polynomial in (7.21) has degree n. Thus for each real k, there are n roots.
Because the roots of a polynomial are continuous functions of its coefficients, the n
roots form n continuous trajectories as k varies from 0 to ∞. Because the coefficients
of G(s) are real by assumption, complex-conjugate roots must appear in pairs. There-
fore the trajectories are symmetric with respect to the real axis.
PROPERTY 2
Every section of the real axis with an odd number of real poles and zeros
(counting together) on its right side is a part of the root loci for k ≥ 0.^3 •
If k ≥ 0, the root loci consist of those s with total phases equal to 180°. Recall
that we have assumed q > 0, thus the total phase of G(s) is contributed by poles
and zeros only. We use examples to establish this property. Consider

G_1(s) = \frac{s + 4}{(s - 1)(s + 2)}     (7.23)
3
More generally, if q > 0 and k > 0 or q < 0 and k < 0, then every section of the real axis whose right-
hand side has an odd number of real poles and real zeros is part of the root loci. If q < 0 and k > 0 or
q > 0 and k < 0, then every section of the real axis whose right-hand side has an even number of real
poles and real zeros is part of the root loci.
Their poles and zeros are plotted in Figure 7.11(a). If we choose s_1 = 2.5 in Figure
7.11(a) and draw vectors from poles 1 and -2 to s_1 and from zero -4 to s_1, then
the phase of every vector is zero. Therefore, the total phase of G_1(s_1) is zero. Thus
s_1 = 2.5 is not a solution of 1 + kG_1(s) = 0 for any positive real k. If we choose
s_2 = 0 and draw vectors as shown in Figure 7.11(a), then the total phase is

0 - 0 - π = -π

which equals π after the addition of 2π. Thus s_2 = 0 is on the root loci. In fact,
every point in [-2, 1] has a total phase of π, thus the entire section [-2, 1]
is part of the root loci. The total phase of every point in
[-4, -2] can be shown to be 2π, therefore the section is not on the root loci. The
total phase of every point in (-∞, -4] is π, thus it is part of the root loci. The two
sections (-∞, -4] and [-2, 1] have odd numbers of real poles and zeros on their
right-hand sides.
The transfer function in (7.23) has only real poles and zeros. Now we consider

G_2(s) = \frac{2(s + 2)}{(s + 3)^2(s + 1 + j4)(s + 1 - j4)}     (7.24)

which has a pair of complex-conjugate poles. The net phase due to the pair at any
point on the real axis equals 0 or 2π, as shown in Figure 7.11(b). Therefore, in
applying Property 2, complex-conjugate poles and zeros can be disregarded. Thus
for k > 0, the sections (-∞, -3] and [-3, -2] are part of the root loci.
Exercise 7.4.1
Exercise 7.4.2
PROPERTY 3
The n trajectories migrate from the poles of G(s) to the zeros of G(s) as k
increases from 0 to ∞. •
Figure 7.11 (a) Poles and zeros of G_1(s) in (7.23) and the test points s_1 = 2.5 and s_2 = 0. (b) Poles and zeros of G_2(s) in (7.24) and its asymptote.
The roots of (7.21) are simply the roots of D(s) if k = 0. The roots of D(s)
+ kN(s) = 0 are the same as the roots of

\frac{1}{k}D(s) + N(s) = 0

Thus its roots approach those of N(s) as k → ∞. Therefore, as k increases from 0 to
∞, the root loci exit from the poles of G(s) and enter the zeros of G(s). There is one
problem, however. The number of poles and the number of zeros may not be the
same. If n (the number of poles) > m (the number of zeros), then m trajectories will
enter the m zeros. The remaining (n - m) trajectories will approach (n - m) asymp-
totes, as will be discussed in the next property.
PROPERTY 4
For large s, the root loci will approach (n - m) straight lines, called
asymptotes, emitting from the centroid^4

\left( \frac{\sum \text{Poles} - \sum \text{Zeros}}{\text{No. of poles} - \text{No. of zeros}},\ 0 \right)     (7.27a)

with angles

\pm \frac{(2i + 1)\times 180°}{n - m}, \qquad i = 0, 1, 2, \ldots     (7.27b)

4
If G(s) has no zeros, then the centroid equals the center of gravity of all poles.
These formulas will give only (n - m) distinct angles. We list some of the
angles in the following table.

n - m     Angles
1         180°
2         ±90°
3         ±60°, 180°
4         ±45°, ±135°
•
We justify the property by using the pole-zero pattern shown in Figure 7.12(a).
For s_1 very large, the poles and zeros can be considered to cluster at the same point,
say a, as shown in Figure 7.12(b). Note the units of the scales in Figure 7.12(a)
and (b). Consequently the transfer function in (7.22) can be approximated by

\frac{q(s + z_1)\cdots(s + z_m)}{(s + p_1)\cdots(s + p_n)} \approx \frac{q}{(s - a)^{n-m}}     for s very large     (7.28)

In other words, all m zeros are canceled by poles, and only (n - m) poles are left
at a. Now we compute the relationship among z_i, p_i, and a. After canceling q, we
turn (7.28) upside down and then expand it as

\frac{(s + p_1)\cdots(s + p_n)}{(s + z_1)\cdots(s + z_m)} = s^{n-m} + \Big(\sum p_i - \sum z_i\Big)s^{n-m-1} + \cdots = (s - a)^{n-m} = s^{n-m} - (n - m)a\,s^{n-m-1} + \cdots
Figure 7.12 Asymptotes. (a) Pole-zero pattern. (b) Poles and zeros clustered at a for large s.
or, equating the coefficients of s^{n-m-1},

a = \frac{\sum \text{Poles} - \sum \text{Zeros}}{n - m} = \frac{\sum \text{Poles} - \sum \text{Zeros}}{\text{No. of poles} - \text{No. of zeros}}

This establishes (7.27a).
With all (n - m) poles located at a, it becomes simple to find
all s_1 with a total phase of π or, more generally, ±π, ±3π, ±5π, .... Thus
each pole must contribute ±π/(n - m), ±3π/(n - m), ±5π/(n - m), .... This
establishes (7.27b). We mention that the (n - m) asymptotes divide 360° equally and
are symmetric with respect to the real axis.
Now we shall use this property to find the asymptotes for G_1(s) in (7.23) and
G_2(s) in (7.24). The difference between the numbers of poles and zeros of G_1(s) is
1; therefore, there is only one asymptote in the root loci of G_1(s). Its angle is
π/1 = 180°; it coincides with the negative real axis. In this case, it is unnecessary
to compute the centroid. For the transfer function G_2(s) in (7.24), the difference
between the numbers of poles and zeros is 3; therefore, there are three asymptotes
in the root loci of G_2(s). Using (7.27a), the centroid is

\frac{-3 - 3 - 1 - j4 - 1 + j4 - (-2)}{3} = \frac{-6}{3} = -2

Thus the three asymptotes emit from (-2, 0). Their angles are ±60° and 180°. Note
that the asymptotes are developed for large s, thus the root loci will approach them
for large s or large k.
Now we shall combine Properties 3 and 4 as follows: If G(s) has n poles
and m zeros, as k increases from 0 to ∞, n trajectories will emit from the n poles.
Among the n trajectories, m of them will approach the m zeros; the remaining
(n - m) trajectories will approach the (n - m) asymptotes.^5
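The centroid and asymptote angles for G_2(s) in (7.24) can also be obtained with a few base-MATLAB lines (a sketch; the variable names are arbitrary):

    ps = [-3 -3 -1+4i -1-4i];     % poles of G2(s)
    zs = -2;                       % zero of G2(s)
    n = numel(ps);  m = numel(zs);

    centroid = real((sum(ps) - sum(zs))/(n - m))        % -2
    angles   = (2*(0:n-m-1) + 1)*180/(n - m)             % 60, 180, 300 degrees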
Exercise 7.4.3
Find the centroids and asymptotes for G_3(s) in (7.25) and G_4(s) in (7.26).
[Answers: (1.5, 0), ±90°; no need to compute centroid, 180°.]
5
The G(s) in (7.20) can be, for s very large, approximated by q/s^{n-m}. Because it equals zero at s = ∞,
G(s) can be considered to have n - m zeros at s = ∞. These zeros are located at the end of
the (n - m) asymptotes. If these infinite zeros are included, then the number of zeros equals the number
of poles, and the n trajectories will emit from the n poles and approach the n finite and infinite zeros.
PROPERTY 5
Breakaway points: solutions of D(s)N'(s) - D'(s)N(s) = 0. •
A breakaway point s_o is a point where trajectories of the root loci meet; there the
polynomial D(s) + kN(s) has repeated roots, so that, for some real k,

D(s_o) + kN(s_o) = 0     and     D'(s_o) + kN'(s_o) = 0     (7.29)

where the prime denotes differentiation with respect to s. The elimination of k from
(7.29) yields

\frac{D(s_o)}{N(s_o)} = \frac{D'(s_o)}{N'(s_o)}

which implies

D(s_o)N'(s_o) - D'(s_o)N(s_o) = 0     (7.30)

Thus a breakaway point s_o must satisfy (7.30) and can be obtained by solving the
equation. For example, if G(s) = (s + 4)/((s - 1)(s + 2)), then

D(s) = s^2 + s - 2        D'(s) = 2s + 1
N(s) = s + 4              N'(s) = 1

and

D(s)N'(s) - D'(s)N(s) = s^2 + s - 2 - (2s^2 + 9s + 4) = 0     (7.31)

or

s^2 + 8s + 6 = 0
Its roots are -0.8 and -7.2. Thus the root loci have two breakaway points, at A =
-0.8 and B = -7.2, as shown in Figure 7.13. For this example, the two solutions
yield two breakaway points. In general, not every solution of (7.30) is necessarily a
breakaway point for k ≥ 0. Although breakaway points occur mostly on the real
axis, they may appear elsewhere, as shown in Figure 7.14(a). If two loci break away
from a breakaway point as shown in Figure 7.13 and Figure 7.14(a), then their
tangents will be 180° apart. If four loci break away from a breakaway point (it has
four repeated roots) as shown in Figure 7.14(b), then their tangents will equally
divide 360°.
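Because (7.30) is a polynomial equation, the candidate breakaway points can also be computed directly (base MATLAB, using the G(s) of this example):

    N  = [1 4];            D  = conv([1 -1], [1 2]);    % N(s), D(s) = s^2 + s - 2
    Np = polyder(N);       Dp = polyder(D);             % N'(s), D'(s)

    candidates = roots(conv(D, Np) - conv(Dp, N))        % about -0.84 and -7.16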
With the preceding properties, we are ready to complete the root loci in Figure
7.13 or, equivalently, the solutions of

\frac{s + 4}{(s - 1)(s + 2)} = -\frac{1}{k}
Figure 7.13 Root loci of (7.23) with breakaway points A and B.
Figure 7.14 Breakaway points (a) off the real axis and (b) with four loci.
for k ≥ 0. As discussed earlier, the sections (-∞, -4] and [-2, 1] are parts of the
root loci. There is one asymptote, which coincides with the negative real axis. There
are two breakaway points as shown. Because the root loci are continuous, the root
loci must assume the form indicated by the dotted line shown in Figure 7.13. The
exact loci, however, must be obtained by measurement. Arbitrarily we choose an s_1
and draw vectors from zero -4 and poles -2 and 1 to s_1 as shown in Figure 7.13.
The phase of each vector is measured using a protractor. The total phase turns out to be
different from ±180°. Thus s_1 is not on the root loci. We then try s_2, and
the total phase is measured as -190°. It is not on the root loci. We then try s_3, and
the total phase roughly equals -180°. Thus s_3 is on the root loci. From the fact that
they break away at point A, pass through s_3, and come in at point B, we can obtain
the root loci as shown. Clearly, the more points we find on the root loci, the more
accurate the plot. The root loci in Figure 7.13 happen to form a circle with radius 3.2
centered at -4. This completes the plot of the root loci of G_1(s) in (7.23).
Exercise 7.4.4
Find the breakaway points for G_3(s) in (7.25) and G_4(s) in (7.26). Also complete the
root loci of G_3(s).
PROPERTY 6
Angle of departure or arrival. •
Every trajectory will depart from a pole. If the pole is real and distinct, the
direction of the departure is usually 0° or 180°. If the pole is complex, then the
direction of the departure may assume any angle between 0° and 360°. Fortunately
this angle can be measured in one step. Similarly the angle for a trajectory to arrive
at a zero can also be measured in one step. We now discuss their measurement.
Consider the transfer function G_2(s) in (7.24). Its partial root loci are obtained
in Figure 7.11(b) and repeated in Figure 7.15(a). There are four poles, so there are
four trajectories. One departs from the pole at -3 and enters the zero at -2. One
departs from another pole at -3 and moves along the asymptote on the negative
real axis. The last two trajectories will depart from the complex-conjugate poles and
move toward the asymptotes with angles ±60°. To find the angle of departure, we
draw a small circle around the pole -1 + j4 as shown in Figure 7.15(b). We then find
a point s_1 on the circle with a total phase equal to π. Let s_1 be an arbitrary point on
the circle and let the phase from the pole -1 + j4 to s_1 be denoted by θ_1. If the radius
of the circle is very small, then the vectors drawn from the zero and all other poles
to s_1 are the same as those drawn to the pole at -1 + j4. Their angles can be
measured, using a protractor, as 76°, 63°, and 90°. Therefore, the total phase of G_2(s)
Figure 7.15 (a) Root loci of G_2(s). (b) Measurement of the angle of departure.
at s_1 is

76° - (63° + 63° + 90° + θ_1) = -140° - θ_1

Note that there are two poles at -3; therefore there are two 63° terms in the phase equation.
In order for s_1 to be on the root loci, the total phase must be ±180°. Thus we have
θ_1 = 40°. This is the angle of departure.
Once we have the asymptote and the angle of departure, we can draw a rough
trajectory as shown in Figure 7.15. Certainly, if we find a point, say A = j5 shown
in the figure, with total phase 180°, then the plot will be more accurate. In conclusion,
using the properties discussed in this section, we can often obtain a rough sketch of
root loci with a minimum amount of measurement.
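The protractor measurement can be replaced by an exact computation: the departure angle from a complex pole equals 180° plus the sum of the angles from the zeros minus the sum of the angles from the other poles, as used in the equation above. A base-MATLAB sketch for the pole -1 + j4 of G_2(s):

    p  = -1 + 4i;                              % pole whose departure angle is sought
    other_poles = [-3 -3 -1-4i];               % remaining poles of G2(s)
    zs = -2;                                    % zero of G2(s)

    dep = 180 + sum(angle(p - zs))*180/pi - sum(angle(p - other_poles))*180/pi;
    dep = mod(dep, 360)                         % about 40 degrees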
Exercise 7.4.5
Compute the angle of arrival for G_4(s) in (7.26) and then complete its root loci.
Consider a transfer function with a real pole at -a and a pair of complex-conjugate
poles at -α ± jβ.^6
It has three poles. If a = α + √3 β or, equivalently, if the three poles form an
equilateral triangle, then the root loci are all straight lines as shown in Figure 7.16(a).
If a < α + √3 β, then the root loci are as shown in Figure 7.16(b). If a >
α + √3 β, then the root loci have two breakaway points as shown in Figure 7.16(c).
Although the relative positions of the three poles are the same for the three cases,
their root loci have entirely different patterns.
As another example, consider

G(s) = \frac{s + 1}{s^2(s + a)}     (7.33)

Its approximate root loci for a = 3, 7, 9, and 11 are shown in Figure 7.17. It has
two asymptotes with angles ±90°, emitting respectively from

\frac{0 + 0 + (-a) - (-1)}{3 - 1} = \frac{1 - a}{2} = -1, -3, -4, -5
As a moves away from the origin, the pattern of root loci changes drastically. There-
fore, to obtain exact root loci from the properties is not necessarily simple. On the
other hand, none of the properties is violated in these plots. Therefore the properties
can be used to check the correctness of root loci obtained by a computer.
6
This example was provided by Dr. Byunghak Seo.
Figure 7.19 Root loci of (7.35) with breakaway points at -3.7 and -0.27 and the gains k_1 and k_2 indicated.
-143°. We compute the solutions of D(s)N'(s) - D'(s)N(s) = 0.
Its roots are computed, using MATLAB, as 3.987 ± j2.789, -3.7, and -0.274.
Clearly, -3.7 and -0.274 are breakaway points, but not the complex-conjugate
roots. Using the breakaway points and departure angles, we can readily obtain a
rough sketch of the root loci as shown in Figure 7.19 with heavy lines. As k increases
from 0 to ∞, the three roots of D(s) + kN(s) or, equivalently, the three poles of the
unity-feedback system in Figure 7.18 will move along the trajectories as indicated
by arrows.
The rates of migration of the three closed-loop poles are not necessarily the
same. To see this, we list the poles for k = 0, 1, and 2:

k = 0:   1,   -3 ± j3
k = 1:   0.83,   -3.4 ± j2
k = 2:   0.62,   -5.1,   -2.9

We see that, at k = 1, the complex-conjugate poles have moved quite far away from
-3 ± j3, but the real pole is still very close to 1. As k continues to increase, the
complex-conjugate poles collide at s = -3.7, and then one moves to the left and
the other to the right on the real axis. The pole moving to the left approaches -∞
as k approaches infinity. The one moving to the right collides with the real pole
emitting from 1 at s = -0.274. They split and then enter the complex-conjugate
zeros of G(s) with angles ±143°. They cross the imaginary axis roughly at s =
±j1.
From the preceding discussion, we can now determine the stability range of k.
At k = 0, the unity-feedback system has one unstable pole at s = 1 and one pair
of stable complex-conjugate poles at -3 ± j3. As k increases, the unstable closed-
loop pole moves from 1 into the left half plane. It is on the imaginary axis at
k = k_1, and then becomes stable for k > k_1. Note that the complex-conjugate closed-
loop poles remain inside the open left half plane and are stable as k increases from
0 to k_1. Therefore, the unity-feedback system is stable if k > k_1. The three closed-
loop poles remain inside the open left half plane until k = k_2, where the root loci
intersect with the imaginary axis. Then the closed-loop complex-conjugate poles
move into the right half plane and become unstable. Therefore, the unity-feedback
system is stable if k_1 < k < k_2.
To compute k_1 and k_2, we must use the magnitude equation. The magnitude of
(7.35) is

\frac{|s - 1 + j2|\,|s - 1 - j2|}{|s - 1|\,|s + 3 + j3|\,|s + 3 - j3|} = \left|-\frac{1}{k}\right| = \frac{1}{k}     (7.36)

where we have used the fact that k > 0. Note that k_1 is the gain of the root loci at
s = 0 and k_2 the gain at s = j1. To compute k_1, we set s = 0 in (7.36) and compute

\frac{1}{k_1} = \frac{|-1 + j2|\,|-1 - j2|}{|-1|\,|3 + j3|\,|3 - j3|} = \frac{\sqrt{5}\,\sqrt{5}}{\sqrt{18}\,\sqrt{18}} = \frac{1}{3.6}     (7.37)
which implies k_1 = 3.6. This step can also be carried out by measurement. We draw
vectors from all the poles and zeros to s = 0 and then measure their magnitudes.
Certainly, excluding possible measurement errors, the result should be the same as
(7.37). To compute k_2, we draw vectors from all the poles and zeros to s = j1 and
measure their magnitudes to yield

\frac{1.4 \times 3.2}{1.4 \times 3.6 \times 5} = \frac{1}{k_2}

which implies

k_2 = 5.6

Thus we conclude that the overall system is stable in the range

3.6 = k_1 < k < k_2 = 5.6

This result is the same as the one obtained by using the Routh test.
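Both gains follow directly from the magnitude condition k = 1/|G(s)| evaluated at the crossing points (base MATLAB; the pole-zero pattern is the one read off (7.36)):

    G = @(s) ((s-1+2i).*(s-1-2i)) ./ ((s-1).*(s+3+3i).*(s+3-3i));   % zeros at 1 +/- j2, poles at 1, -3 +/- j3

    k1 = 1/abs(G(0))     % about 3.6, gain where the real pole crosses the imaginary axis
    k2 = 1/abs(G(1i))    % about 5.7, gain near s = j1 (5.6 from the graphical measurement)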
Exercise 7.4.6
Consider the G_2(s) in (7.24) with its root loci plotted in Figure 7.15. Find the range
of positive k in which the system is stable.
[Answer: 0 ≤ k < 38.]
7.5 DESIGN USING THE ROOT-LOCUS METHOD
In this section we discuss the design using the root-locus method. We use the ex-
ample in (7.1) to develop a design procedure. The procedure, however, is applicable
to the general case. It consists of the following steps:
Step 1: Choose a configuration and a compensator with one open parameter k such
as the one in Figure 7.1.
Step 2: Compute the overall transfer function and then find the range of k for the
system to be stable and to meet steady-state specifications. If no such k
exists, go back to Step 1.
Step 3: Plot root loci that yield the poles of the overall system as a function of the
parameter.
Step 4: Find the desired pole region from the specifications on overshoot and set-
tling time as shown in Figure 7.4.
Step 5: Find the range of k in which the root loci lie inside the desired pole region.
If no such k exists, go to Step 1 and choose a more complicated compen-
sator or a different configuration.
Step 6: Find the range of k that meets Steps 2 and 5. If no such k exists, go to Step 1.
Step 7: From the range of k in Step 6, find a k to meet the remaining specifications,
such as the rise time or the constraint on the actuating signal. This step
may require computer simulation of the system.
We remark that in Step 2, the check of stability may be skipped because the stability
of the system is automatically met in Step 5 when all poles lie inside the desired
pole region. Therefore, in Step 2, we may simply find the range of k to meet the
specifications on steady-state performance.
Example 7.5.1
We use an example to illustrate the design procedure. Consider a plant with transfer
function

G(s) = \frac{s + 4}{(s + 2)(s - 1)}     (7.38)
This plant has two poles and one zero. Design an overall system to meet the following
specifications:
1. Position error ≤ 10%
2. Overshoot ≤ 5%
3. Settling time ≤ 4.5 seconds
4. Rise time as small as possible.
Step 1: We try the unity-feedback configuration shown in Figure 7.20.
Step 2: The overall transfer function is

G_o(s) = \frac{k\cdot\dfrac{s + 4}{(s + 2)(s - 1)}}{1 + k\cdot\dfrac{s + 4}{(s + 2)(s - 1)}} = \frac{k(s + 4)}{s^2 + (k + 1)s + 4k - 2}     (7.39)
Thus the system is stable for k > 0.5. Next we find the range of k to have
position error less than 10%. The specification requires, using (6.3),

\left|1 - \frac{4k}{4k - 2}\right| = \frac{2}{4k - 2} = \frac{1}{2k - 1} \le 0.1     (7.40)

where we have used the fact that k > 0.5, otherwise the absolute value
sign cannot be removed. The inequality in (7.40) implies

10 \le 2k - 1

or

k \ge \frac{11}{2} = 5.5     (7.41)

Thus, if k ≥ 5.5, then the system in Figure 7.20 is stable and meets spec-
ification (1). The larger k is, the smaller the position error.
Steps 3 and 4: Using the procedure in Section 7.4.1, we plot the root loci of
1 + kG(s) = 0 in Figure 7.21. For convenience of discussion, the poles
corresponding to k = 0.5, 0.7, 1, 5, ... are also indicated. They are actually
obtained by using MATLAB. Note that for each k, there are two poles, but
only one is indicated. The specification on overshoot requires all poles to
lie inside the sector bounded by 45°. The specification on settling time
requires all poles to lie on the left-hand side of the vertical line passing
through -4.5/t_s = -1. The sector and the vertical line are also plotted
in Figure 7.21.
Figure 7.21 Root loci of 1 + kG(s) = 0 for the plant in (7.38), with the poles for several values of k and the desired pole region indicated.

Step 5: Now we shall find the ranges of k to meet the specifications on overshoot
and settling time. From Figure 7.21, we see that if 0.5 < k < 1, the two
poles lie inside the sector bounded by 45°. If 1 < k < 5, the two poles
move outside the sector. They again move inside the sector for k > 5. Thus
if 0.5 < k < 1 or 5 < k, the overall system meets the specification on
overshoot. If k < 1, although one pole of G_o(s) is on the left-hand side of
the vertical line passing through -1, one pole is on the right-hand side.
If k > 1, then both poles are on the left-hand side. Thus if k > 1, the
system meets the specification on settling time.
Step 6: The preceding discussion is summarized in the following:

k > 0.5: stable
k > 5.5: meets specification (1); the larger k is, the smaller the position error
k > 5 or 0.5 < k < 1: meets specification (2)
k > 1: meets specification (3)

Clearly, in order to meet (1), (2), and (3), k must be larger than 5.5.
Step 7: The last step of the design is to find a k in k > 5.5 such that the system
has the smallest rise time. To achieve this, we choose a k such that the
closest pole is farthest away from the origin. From the plot we see that as
k increases, the two complex-conjugate poles of G 0 (s) move away from
the origin. At k = 13.3, the two complex poles become repeated poles at
s = -7.2. At k = 15, the poles are -10.4 and -6.4; one pole moves
away from the origin, but the other moves closer to the origin. Thus, at
k = 13.3, the poles of G0 (s) are farthest away from the origin and the
system has the smallest rise time. This completes the design.
It is important to stress once again that the desired pole region in Figure 7.4 is
developed for quadratic transfer functions with a constant numerator. The G0 (s) in
(7.39) is not such a transfer function. Therefore, it is advisable to simulate the re-
sulting system. Figure 7.22 shows the unit-step responses of the system in (7.39) for
k = 13.3 (dashed line) and k = 5.5 (solid line). The system with k = 13.3 is better
than the one with k = 5.5. Its position error, settling time, and overshoot are roughly
4%, 1.5 seconds, and 10%. The system meets the specifications on position error
and settling time, but not on overshoot. This system will be acceptable if the re-
quirement on overshoot can be relaxed. Otherwise, we must redesign the system.
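Such a simulation takes only a few lines of MATLAB (a sketch assuming the Control System Toolbox):

    for k = [5.5 13.3]
        Go = tf([k 4*k], [1 k+1 4*k-2]);      % overall transfer function (7.39)
        step(Go, 5); hold on                   % unit-step response over 5 seconds
    end
    legend('k = 5.5', 'k = 13.3')
    % Position error = 1/(2k-1); overshoot and settling time are read off the plot.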
The root loci in Figure 7.21 are obtained by using a personal computer; there-
fore, the gain k is also available on the plot. If the root loci are obtained by hand,
then the value of k is not available on the plot. In this case, we must use the magnitude
equation
\left|\frac{s + 4}{(s + 2)(s - 1)}\right| = \left|-\frac{1}{k}\right| = \frac{1}{k}

to compute k. For example, to find the value of k_1 shown in Figure 7.21, we draw
vectors from all poles and zeros to s_1 and then measure their magnitudes to yield

\left|\frac{s + 4}{(s + 2)(s - 1)}\right|_{s = s_1} = \frac{3.2}{3.2 \times 5} = \frac{1}{k_1}

which implies k_1 = 5. To compute k_2, we draw vectors from all poles and zeros to
s_2 and measure their magnitudes to yield

\left|\frac{s + 4}{(s + 2)(s - 1)}\right|_{s = s_2} = \frac{3.2}{5.2 \times 8.2} = \frac{1}{k_2}
Figure 7.22 Unit-step responses of (7.39) for k = 13.3 (dashed line) and k = 5.5 (solid line).
which implies k_2 = 13.3. Thus, the gain can be obtained from the magnitude
equation.
7.5.1 Discussion
1. Although we studied only the unity-feedback configuration in the preceding
section, the root-locus method is actually applicable to any configuration as long
as its overall transfer function can be expressed with a denominator of the form

p(s) + kq(s)     (7.42)

where p(s) and q(s) are polynomials, independent of k, and k is a real parameter
to be adjusted. Since the root-locus method is concerned only with the poles of
G_o(s), we plot the roots of

p(s) + kq(s) = 0     (7.43a)

or the solutions of

\frac{q(s)}{p(s)} = -\frac{1}{k}     (7.43b)
and (7.13), thus all discussion in the preceding sections is directly applicable to
(7.42). For example, consider the system shown in Figure 7.23. Its overall trans-
fer function is

G_o(s) = \frac{k_1\cdot\dfrac{s + k_2}{s + 2}\cdot\dfrac{10}{s(s^2 + 2s + 2)}}{1 + k_1\cdot\dfrac{s + k_2}{s + 2}\cdot\dfrac{10}{s(s^2 + 2s + 2)}} = \frac{10k_1(s + k_2)}{s(s + 2)(s^2 + 2s + 2) + 10k_1(s + k_2)}
It has two parameters, k_1 and k_2. If we use a digital computer to plot the root
loci, it makes no difference whether the equation has one, two, or more param-
eters. Once the root loci are obtained, the design procedure is identical to the
one discussed in the preceding sections. If the root loci are to be plotted by
hand, we are able to handle only one parameter at a time. Arbitrarily, we choose
This is in the form of (7.42). Thus the root-locus method is applicable. In this
case, the root loci are a function of k2 .
2. The root-locus method considers only the poles. The zeros are not considered,
as can be seen from (7.42). Thus the method is essentially a pole-placement
problem. The poles, however, cannot be arbitrarily assigned; they can be as-
signed only along the root loci.
3. The desired pole region in Figure 7.4 is developed for quadratic transfer func-
tions with a constant numerator. When it is used to design other types of transfer
functions, it is advisable to simulate resulting systems to check whether they
really meet the given specifications.
7.6 PROPORTIONAL-DERIVATIVE (PD) CONTROLLER

Consider the unity-feedback system shown in Figure 7.24, whose plant has transfer
function 2/(s(s + 1)(s + 5)). Its root loci are the solutions of

\frac{2}{s(s + 1)(s + 5)} = -\frac{1}{k}

Figure 7.24 Unity-feedback system with plant 2/(s(s + 1)(s + 5)).
Figure 7.25 Root loci of 2/(s(s + 1)(s + 5)) and the vertical line through -0.9.
for k > 0. The root loci are shown in Figure 7.25. There are three asymptotes with
centroid at

\frac{0 - 1 - 5}{3} = -2

and with angles ±60° and 180°. The breakaway points can also be computed ana-
lytically by solving

D(s)N'(s) - D'(s)N(s) = -(3s^2 + 12s + 5) = -3(s + 0.47)(s + 3.5) = 0

Its solutions are -0.47 and -3.5. Clearly -0.47 is a breakaway point, but -3.5
is not.^7
In order for the resulting system to have settling time less than 5 seconds, all
the poles of G_o(s) must lie on the left-hand side of the vertical line passing through
the point -4.5/t_s = -0.9. From the root loci in Figure 7.25 we see that this is not
possible for any k > 0. Therefore, the configuration in Figure 7.24 cannot meet the
specifications.
As a next try, we introduce an additional tachometer feedback as shown in
Figure 7.26. Now the compensator consists of a proportional compensator with gain

7
It is a breakaway point of the root loci for k < 0.
k and a tachometer feedback k_1 s.^8 The overall transfer function is

G_o(s) = \frac{\dfrac{2k}{s(s + 1)(s + 5)}}{1 + \dfrac{2k}{s(s + 1)(s + 5)} + \dfrac{2k_1 s}{s(s + 1)(s + 5)}} = \frac{2k}{s(s + 1)(s + 5) + 2k + 2k_1 s}     (7.45)

which, with k = 5, becomes

G_o(s) = \frac{10}{s^3 + 6s^2 + 5s + 2k_1 s + 10}

The root loci of (s^3 + 6s^2 + 5s + 10) + k_1(2s) = 0 or of

\frac{2s}{(s + 5.42)(s + 0.29 + j1.33)(s + 0.29 - j1.33)} = -\frac{1}{k_1}     (7.46)
are plotted in Figure 7.27. There are three trajectories. One moves from the pole at -5.4
to the zero at s = 0 along the negative real axis; the other two are complex conjugates
and approach the two asymptotes with centroid at

\frac{(-5.42 - 0.29 - 0.29) - 0}{3 - 1} = -3
A different arrangement of PD controllers is U(s) = (k + k 1s)E(s). See Chapter 11. The arrangement
in Figure 7.26, that is, U(s) = kE(s) + k1sY(s), is preferable, because it differentiates y(t) rather than
e(t), which often contains discontinuity at t = O. Therefore, the chance for the actuating signa! in Figure
7.26 to become saturated is less.
Figure 7.27 Root loci of (7.46), with the poles corresponding to k_1 = 1, 3, 4, and 5 indicated.
Because G_o(0) = 1, the system in (7.45) has zero position error, and its velocity
error is

e_v = \frac{|5 + 2k_1 - 0|}{10} = \frac{2k_1 + 5}{10}
Thus the smaller k_1, the smaller the error. To meet the specification on overshoot,
all poles must lie in the sector bounded by 45°, as shown in Figure 7.27. The real
pole lies inside the sector for all k_1 > 0. The complex poles move into the sector
at about k_1 = 3 and move out at about k_1 = 6.5. Therefore, if 3 < k_1 < 6.5, then
all three closed-loop poles lie inside the sector and the system meets the specification
on overshoot. To meet the specification on settling time, all poles must lie on the
left-hand side of the vertical line passing through -4.5/5 = -0.9. The real pole
moves into the right-hand side at about k_1 = 5; the complex poles move into the
left-hand side at about k_1 = 2.5. Therefore, if 2.5 < k_1 < 5, then all poles lie on the
left-hand side of the vertical line and the system meets the specification on settling
time. Combining the preceding two conditions, we conclude that if 3 < k_1 < 5, then
the system meets the specifications on overshoot and settling time.
The condition for the system to have the smallest rise time is that the closest
pole be as far away as possible from the origin. Note that for each k_1, G_o(s) in (7.45)
has one real pole and one pair of complex-conjugate poles. We list in the following
the poles and their shortest distance from the origin for k_1 = 3, 4, and 5:

k_1 = 3:   -3.8,   -1.1 ± j1.2        1.6
k_1 = 4:   -2,     -2 ± j1            2
k_1 = 5:   -1,     -2.5 ± j1.94       1
Because the system corresponding to k1 = 4 has the largest shortest distance, it has
the smallest rise time among k1 = 3, 4, and 5. Recall that the velocity error is smaller
if k1 is smaller. Therefore, if the requirement on velocity error is more important,
then we choose k1 = 3. If the requirement on rise time is more important, than we
choose k1 = 4. This completes the design.
The overall transfer function in (7.45) is not quadratic; therefore, the preceding
design may not meet the design specifications. Figure 7.28 shows the unit-step re-
sponses of (7.45) for k1 = 4 (solid line) and 3 (dashed line). The overshoot, settling, ·
and rise times of the system with k1 = 4 are, respectively, O, 3.1 and 2.2 seconds.
The system meets all design specifications. The overshoot, settling, and rise times
of the system with k1 = 3 are, respectively, 4.8%, 6.1, and 1.9 seconds. The system
does not meet the specification on settling time but meets the specification on over-
shoot. Note that the system with k1 = 3 has a smaller rise time than the system with
k1 = 4, although the distance of its closest poles from the origin for k1 = 3 is
smaller than that for k1 = 4. Therefore the rule that the farther away the closest pole
from the origin, the smaller the rise time, is not applicable for this system. In con-
clusion, the system in Figure 7.26 with k = 5 and k1 = 4 meets all design require-
ments and the design is completed.
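As a numerical cross-check of the pole locations and responses quoted above, the closed-loop transfer function (7.45) can be evaluated directly. The following Python/scipy sketch is an illustration only (the text itself uses MATLAB); it assumes k = 5 as in the design and prints the closed-loop poles for k1 = 3, 4, 5 and the overshoot of the unit-step response for the chosen k1 = 4.

```python
import numpy as np
from scipy import signal

def closed_loop(k1, k=5.0):
    # Go(s) = 2k / (s^3 + 6s^2 + (5 + 2*k1)*s + 2k), from (7.45)
    return signal.TransferFunction([2 * k], [1, 6, 5 + 2 * k1, 2 * k])

for k1 in (3, 4, 5):
    sys = closed_loop(k1)
    print(f"k1 = {k1}: poles =", np.round(np.roots(sys.den), 2))

# Unit-step response for the design value k1 = 4
t, y = signal.step(closed_loop(4), T=np.linspace(0, 10, 2000))
print("overshoot (%):", round(max(0.0, (y.max() - y[-1]) / y[-1] * 100), 1))
```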
Consider again the design problem studied in Section 7.6. As shown there, the design
cannot be achieved by using the configuration in Figure 7.24. However, if we intro-
duce an additional tachometer feedback or, equivalently, if we use a PD controller,
then the design is possible. The use of tachometer feedback, however, is not the only
way to achieve the design. In this section, we discuss a different design by using a
compensating network as shown in Figure 7.29. The transfer function of the com-
pensating network is chosen as
C(s) := (s + a)/(s + αa)          (7.47)

It is called a phase-lead network if α > 1, and a phase-lag network if α < 1. The
reason for calling it phase-lead or phase-lag will be given in the next chapter. See
also Problem 7.9.
The transfer function of the system in Figure 7.29 is
Go(s) = [k·(s + a)/(s + αa) · 2/(s(s + 1)(s + 5))] / [1 + k·(s + a)/(s + αa) · 2/(s(s + 1)(s + 5))]
      = 2k(s + a) / [s(s + 1)(s + 5)(s + αa) + 2k(s + a)]          (7.48)
Its denominator has degree 4 and the design using (7.48) will be comparatively complex. To simplify the design, we shall introduce a stable pole-zero cancellation. Because both −1 and −5 lie inside the desired pole region, either one can be canceled. Arbitrarily, we choose to cancel the pole at −1. Thus we choose a = 1 in (7.47) and the overall transfer function in (7.48) reduces to
Go(s) = 2k / [s(s + 5)(s + α) + 2k]          (7.49)
[Figure 7.29 Unity-feedback system with compensating network C(s) and plant 2/(s(s + 1)(s + 5)).]

With k = 5, (7.49) becomes

Go(s) = 10 / (s³ + (5 + α)s² + 5αs + 10)
      = 10 / [(s³ + 5s² + 10) + αs(s + 5)]          (7.50)

The root loci of (s³ + 5s² + 10) + αs(s + 5), or of

s(s + 5) / [(s + 5.35)(s − 0.18 + j1.36)(s − 0.18 − j1.36)] = −1/α          (7.51)
[Figure: root loci of (7.51), plotted as α varies from 0 to ∞.]

The velocity error of (7.50) is, from (6.7),

ev(t) = |5α − 0|/10 × 100%
Hence, in order to have the smallest possible velocity error, we choose α to be α1, the smallest value of α for which all closed-loop poles lie inside the desired pole region; the corresponding point on the locus of the complex poles is denoted s1. The parameter α1 can be obtained from (7.51) by measurement as
1/α1 = |s(s + 5)| / |(s + 5.35)(s − 0.18 + j1.36)(s − 0.18 − j1.36)| evaluated at s = s1
     = (1.4 × 4.1)/(4.45 × 1.2 × 2.7) = 1/2.52
which implies α1 = 2.52. Hence, by choosing k = 5, a = 1, and α = 2.52, the system in Figure 7.29 may meet all the design specifications, and the design is completed. The total compensator is
k(s + a)/(s + αa) = 5(s + 1)/(s + 2.52)          (7.52)
It is a phase-lead network.
The unit-step response of the system in Figure 7.29 with (7.52) as its compen-
sator is plotted in Figure 7.28 with the dotted line. Its overshoot is about 3.6%; its
settling time is 4.5 seconds. It also responds very fast. Thus the design is satisfactory.
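The phase-lead design can be checked in the same way. The sketch below (Python/scipy, again only an illustrative alternative to the MATLAB commands used in this text) forms the loop transfer function C(s)G(s) with C(s) = 5(s + 1)/(s + 2.52) from (7.52) and the plant 2/[s(s + 1)(s + 5)], closes the unity-feedback loop, and prints the closed-loop poles and overshoot; the results should be consistent with the dotted response in Figure 7.28.

```python
import numpy as np
from scipy import signal

# Plant G(s) = 2/[s(s+1)(s+5)] and compensator C(s) = 5(s+1)/(s+2.52), from (7.52)
numG, denG = [2.0], np.polymul([1, 0], np.polymul([1, 1], [1, 5]))
numC, denC = [5.0, 5.0], [1, 2.52]

num_l, den_l = np.polymul(numC, numG), np.polymul(denC, denG)   # loop C(s)G(s)
num_cl, den_cl = num_l, np.polyadd(den_l, num_l)                # unity-feedback closed loop

print("closed-loop poles:", np.round(np.roots(den_cl), 2))
t, y = signal.step(signal.TransferFunction(num_cl, den_cl), T=np.linspace(0, 10, 2000))
print("overshoot (%):", round(max(0.0, (y.max() - 1) * 100), 1))
```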
PROBLEMS
7.1. Sketch the root loci for the unity-feedback system shown in Figure 7.1 with
a. G(s) = (s + 4)/[s²(s + 1)]

b. G(s) = (s + 4)(s + 6)/[(s − 1)(s + 1)]

c. G(s) = (s² + 2s + 2)/[(s + 1)²(s² + 4s + 6)]
7.2. Sketch the root loci of the polynomials
a. s³ + 2s² + 3s + ks + 2k

b. s²(1 + 0.001s)(1 + 0.002s) + k(1 + 0.1s)(1 + 0.25s)
7.3. Use the root-locus method to show that
a. The polynomial s³ + s² + s + 2 has one real root in (−2, −1) and a pair of complex-conjugate roots with real part in (0, 1). [Hint: Write the polynomial as s²(s + 1) + k(s + 2) with k = 1.]
b. The polynomial
has three real roots and a pair of complex-conjugate roots. Also show that the three real roots lie in (5, 3), (0, −3), and (−5, −∞).
7.4. The root loci of the system shown in Figure P7.4(a) are given in Figure P7.4(b). Find the following directly from measurement on the graph.
a. The stability range of k.
b. The real pole that has the same value of k as the pair of pure imaginary poles.
c. The k that meets (i) overshoot ≤ 20%, (ii) settling time ≤ 10 seconds, and (iii) smallest possible position error.
7.5. Consider the feedback system shown in Figure P7.5. Sketch root loci, as a
function of positive real k, for the following:
a. G(s) = 1/[s(s + 1)],  H(s) = 4(s + 2)/(s + 4)

b. G(s) = (s² + 4s + 4)/[s(s − 1)],  H(s) = (s + 5)/(s² + 2s + 2)
Figure P7.4
Figure P7.5
7.6. Consider the unity-feedback system shown in Figure 7.1. Let the plant transfer
function be
G(s) = (s + 1)/[(s − 0.2 + j2)(s − 0.2 − j2)]
Find the ranges of k to meet the following
a. Position error < 10%
b. a. and overshoot < 15%
c. a., b., and settling time < 4.5 seconds
d. a., b., c., and the smallest possible rise time.
[Figure P7.7: (a) machine tool controlled through digital measurement and an amplifier; (b) block diagram containing the plant 1/[s(0.3s + 1)].]
7.8. The depth below sea level of a submarine can be maintained by the control
system shown in Figure P7.8. The transfer function from the stem plane angle
θ to the actual depth y of the submarine can be modeled as

G(s) = 10(s + 2)² / [(s + 10)(s² + 0.1)]
[Figure P7.8: submarine depth control; actuator and submarine dynamics 10(s + 2)²/((s + 10)(s² + 0.1)); the depth y is measured by a pressure transducer and fed back.]
position error is less than 5%, the settling time is less than 10 seconds, and the
overshoot is less than 2%.
7.9. a. Consider C(s) = (s + 2)/(s + 1). Compute its phase at s = j1. Is it positive or negative?
b. Consider C(s) = (s + a)/(s + b). Show that the phase of C(jω) for every ω > 0 is positive for 0 < a < b and negative for 0 < b < a. (Thus, the transfer function is called a phase-lead network if b > a and a phase-lag network if a > b.)
7.10. Consider the unity-feedback system shown in Figure P7.10. Use the Routh test
to find the range of real a for the system to be stable. Verify the result by using
the root-locus method. Find the a such that the system has the smallest settling
time and overshoot. Is it a phase-lead or phase-lag network?
Figure P7.10
7.11. The speed of a motor shaft can be controlled accurately using a phase-locked
loop [39]. The schematic diagram of such a system and its block diagram are
shown in Figure P7 .11. The desired speed is transformed into a pulse sequence
with a fixed frequency. The encoder at the motor shaft generates a pulse stream
whose frequency is proportional to the motor speed. The phase comparator
generates a voltage proportional to the difference in phase and frequency.
Sketch the root loci of the system. Does there exist a k such that the settling
time of the system is smaller than 1 second and the overshoot is smaller than
10 percent?
[Figure P7.11: (a) phase-locked-loop motor-speed control with compensation network; (b) block diagram.]
7.12. The transfer function from the thrust deflection angle u to the pitch angle θ of a guided missile is found to be
[Figure P7.12: feedback configuration with actuator and missile.]
7.13. Consider the control system shown in Figure P7.13. Such a system may be
used to drive potentiometers, dials, and other devices. Find k 1 and k2 such that
the position error is zero, the settling time is less than 1 second, and the over-
shoot is less than 5%. Can you achieve the design without plotting root loci?
Figure P7.13
7.14. One way to stabilize an ocean liner, for passengers' comfort, is to use a pair
of fins as shown in Figure P7.14(a). The fins are controlled by an actuator,
which is itself a feedback system consisting of a hydraulic motor. The transfer function of the actuator, compared with the dynamics of the liner, may be simplified as a constant k. The equation governing the roll motion of the liner is

J d²θ(t)/dt² + η dθ(t)/dt + a θ(t) = k u(t)

where θ is the roll angle and ku(t) is the roll moment generated by the fins. The block diagram of the liner and actuator is shown in Figure P7.14(b). It is assumed that a/J = 0.3, η/(2√(aJ)) = 0.1, and k/a = 0.05. A possible configuration is shown in Figure P7.14(c). If k1 = 5, find a k2, if it exists, such that (1) position error ≤ 15%, (2) overshoot ≤ 5%, and (3) settling time ≤ 30 seconds. If no such k2 exists, choose a different k1 and repeat the design.
[Figure P7.14: (a) ocean liner with stabilizing fins; (b) block diagram of the liner and actuator; (c) a possible compensator configuration.]
7.15. A highly simplified model for controlling the yaw of an aircraft is shown in Figure P7.15(a), where θ is the yaw error and φ is the rudder deflection. The rudder is controlled by an actuator whose transfer function can be approximated as a constant k. Let J be the moment of inertia of the aircraft with respect to the yaw axis. For simplicity, it is assumed that the restoring torque is proportional to the rudder deflection φ(t); that is,

J d²θ(t)/dt² = −kφ(t)

The configurations of compensators are chosen as shown in Figure P7.15(b), (c), and (d), where G(s) = −k/(Js²) = −2/s². We are required to design an overall system such that (1) velocity error ≤ 10%, (2) overshoot ≤ 10%, (3) settling time ≤ 5 seconds, and (4) rise time is as small as possible. Is it possible to achieve the design using configuration (b)? How about (c) and (d)? In using (c) and (d), do you have to plot the root loci? In this problem, we assume that saturation of the actuating signal will not occur.
7.16. Consider the plant discussed in Section 6.2 and shown in Figure 6.1. Its transfer function is computed as

G(s) = 300 / [s(s³ + 184s² + 760.5s + 162)]
[Figure P7.15: (a) aircraft yaw control with desired direction, yaw error θ, and rudder deflection φ; (b)-(d) compensator configurations.]
Design an overall system such that (1) position error ≤ 10%, (2) settling time ≤ 5 seconds, and (3) overshoot is as small as possible.
7.17. Consider the system shown in Figure 7.26. Let k = 10. Use the root-locus
method to find a k1 so that the system meets the specifications listed in
Section 7.6.
7.18. Consider the system shown in Figure 7.26. Let k1 = 4. Use the root-locus
method to find a k so that the system meets the specifications listed in Section
7.6.
7.19. In Figure 7.29, if we choose a = 5, then the system involves a stable pole-zero cancellation at s = −5. Is it possible to find k and α in Figure 7.29 so that the system meets the specifications listed in Section 7.6? Compare your design with the one in Section 7.7, which has a stable pole-zero cancellation at s = −1.
Frequency-Domain
Techniques
8.1 INTRODUCTION
In this chapter we introduce a design method that, like the root-locus method, takes
the outward approach. In this approach, we first choose a configuration, then search
a compensator and hope that the resulting overall system will meet design specifi-
cations. The method is mainly limited to the unity-feedback configuration shown in
Figure 8.1, however. Because of this, it is possible to translate the design specifi-
cations for the overall system into specifications for the plant transfer function G(s).
If G(s) does not meet the specifications, we then search for a compensator C(s) so
that C(s)G(s) will meet the specifications and hope that the resulting unity-feedback
configuration in Figure 8.1 will perform satisfactorily. Thus, in this method we work
directly on G(s) and C(s). However, the objective is still the overall system
Go(s) = G(s)C(s)/(1 + G(s)C(s)). This feature is not shared by any other design method.
The method has another important feature; it uses only the information of G(s) along the positive imaginary axis, that is, G(jω) for all ω ≥ 0. Thus the method is called the frequency-domain method. As discussed in Chapter 4, G(jω) can be obtained by direct measurement. Once G(jω) is measured, we may proceed directly to the design without computing the transfer function G(s). On the other hand, if we are given a transfer function G(s), we must first compute G(jω) before carrying out the design. Thus we discuss first the plotting of G(jω).
8.2 FREQUENCY-DOMAIN PLOTS
For the transfer function G(s) = 1/(s + 0.5) in (8.1), we have

G(jω) = 1/(jω + 0.5)
We discuss the plot of G(jω) as a function of real ω ≥ 0. Although ω is real, G(jω) is, in general, complex. If ω = 0, then G(0) = 2. If ω = 0.2, then

G(j0.2) = 1/(0.5 + j0.2) = 1/(√0.29 e^{j·tan⁻¹(0.2/0.5)}) = 1/(0.54e^{j21.8°}) = 1.86e^{−j21.8°}
Figure 8.2 Frequency plots of G(s): (a) polar plot; (b) log magnitude-phase plot; (c) Bode plot.
and
20 log 0.1 = -20 dB
Note that the decibel gain is positive if |G(jω)| > 1, and negative if |G(jω)| < 1. The
phase, on the horizontal coordinate, is in linear angles. The log magnitude-phase
plot of G(s) in (8.1) is shown in Figure 8.2(b). For example, point A, corresponding
to w = O, has magnitude 6 dB and phase 0°; point B, corresponding to w = 0.5,
has magnitude 2.9 dB and phase -45° and so forth. The plot is quite different from
the one in Figure 8.2(a).
The plot in Figure 8.2(c) is called the Bode plot. It actually consists of two plots:
gain versus frequency, and phase versus frequency. The gain is expressed in decibels,
the phase in degrees. The frequency on the horizontal coordinate is expressed in
logarithmic units as shown. Thus ω = 1 corresponds to 0; ω = 10 corresponds to 1; ω = 100 corresponds to 2, and so forth. Note that ω = 0 appears at −∞. Thus point A should appear at −∞ with 6 dB and zero degrees. The complete Bode
plot of G(s) in (8.1) is plotted in Figure 8.2(c). We remark that w appears as a
variable on the plots in Figure 8.2(a) and (b) whereas it appears as coordinates in
Figure 8.2(c). Although the three plots in Figure 8.2 look entirely different, they are
plots of the same G(jω). It is clear that if any plot is available, the other two plots
can be obtained by change of coordinates.
With digital computers, the computation and plotting of G(jw) become very
simple. Even so, it is useful to be able to estimate a rough sketch of G(jω). This is
illustrated by an example.
Example 8.2.1

Consider G(s) = 2/[s(s + 1)(s + 2)]. Before computing G(jω) for any ω, we shall first estimate the values of G(jω) as ω → 0 and ω → ∞. Clearly we have

s → 0 or ω → 0:  G(s) ≈ 2/(2s)  or  G(jω) ≈ 1/(jω)  ⇒  |G(jω)| → ∞, ∠G(jω) = −90°

and

s → ∞ or ω → ∞:  G(s) ≈ 2/s³  ⇒  |G(jω)| → 0, ∠G(jω) = −270°
They imply that for w very small, the phase is - 90° and the amplitude is very large.
Thus the plot will start somewhere in the region denoted by A shown in Figure
8.3(a). As ω increases to infinity, the plot will approach zero or the origin with phase
[Figure 8.3: rough sketches of the polar plot of G(jω), with regions A and B indicated.]
−270° or +90° as shown in Figure 8.3(a). Recall that a phase is positive if measured counterclockwise, negative if measured clockwise. Now we compute G(jω) at ω = 1:

G(j1) = 2/[j1(j1 + 1)(j1 + 2)] = 2/(e^{j90°} · 1.4e^{j45°} · 2.2e^{j27°}) = 0.6e^{−j162°}
•
Exercise 8.2. 1
Plot the polar, log magnitude-phase, and Bode plots of G(s) = 1/(s + 2).
To conclude this section, we discuss the plot of G(s) = 1/(s + 0.5) using
MATLAB. We first list the commands for version 3.1 of MATLAB:
n=[1];d=[1 0.5];
w=logspace(-1,2) or w=logspace(-1,2,200);
[re,im]=nyquist(n,d,w);
plot(re,im),title('Polar plot')
[mag,pha]=bode(n,d,w);
db=20*log10(mag);
plot(pha,db),title('Log magnitude-phase plot')
semilogx(w,db),title('Bode gain plot')
semilogx(w,pha),title('Bode phase plot')
The numerator and denominator of G(s) are represented by the row vectors n and d, with coefficients arranged in descending powers of s, separated by spaces or commas. Command logspace(-1,2,200) generates 200 equally spaced frequencies in logarithmic scale between 10⁻¹ = 0.1 and 10² = 100 radians per second. If 200 is not typed, the default is 50. Thus logspace(-1,2) generates 50 equally spaced frequencies between 0.1 and 100. Command nyquist(n,d,w)¹ computes the real part and imaginary part of G(jω) at w. Thus, plot(re,im) generates a polar plot. Command bode(n,d,w) computes the magnitude and phase of G(jω) at w. The magnitude is converted into decibels by db=20*log10(mag).

¹ The name Nyquist will be introduced in a later section. The Nyquist plot of G(s) is defined as G(jω) for ω ≥ 0 and ω < 0, whereas the polar plot is defined for only ω ≥ 0. Because command nyquist(n,d,w) in version 3.1 of MATLAB computes only positive ω, a better name would be polar(n,d,w).
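For readers without MATLAB, the same three kinds of plots of G(s) = 1/(s + 0.5) can be generated with Python's scipy and matplotlib. This is only a sketch of an equivalent workflow, not part of the text's listing; scipy.signal.bode plays the role of bode(n,d,w) and scipy.signal.freqresp that of nyquist(n,d,w).

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

G = signal.TransferFunction([1], [1, 0.5])
w = np.logspace(-1, 2, 200)            # 0.1 to 100 rad/s, like logspace(-1,2,200)

w, H = signal.freqresp(G, w)           # complex G(jw) for the polar plot
w, mag_db, phase_deg = signal.bode(G, w)

plt.figure(); plt.plot(H.real, H.imag); plt.title('Polar plot')
plt.figure(); plt.plot(phase_deg, mag_db); plt.title('Log magnitude-phase plot')
plt.figure(); plt.semilogx(w, mag_db); plt.title('Bode gain plot')
plt.figure(); plt.semilogx(w, phase_deg); plt.title('Bode phase plot')
plt.show()
```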
8.3 PLOTTING BODE PLOTS
In this section, we discuss the plot of Bode plots by hand. One may wonder why we
bother to study this when the plot can be easily obtained on a personal computer.
Indeed, one can argue strongly for not studying this section. But in the study, we can learn the following: the reason for using logarithmic scales for frequency and
magnitude, the mechanism for identifying a system from its Bode plot, and the reason
for using the Bode plot, rather than the polar or log magnitude-phase plot, in the
design. Besides, the plot of Bode plots by hand is quite simple; it does not require
much computation.
We use an example to discuss the basic procedure of plotting Bode plots.
Consider

G(s) = (5s + 50)/(s² + 99.8s − 20) = 5(s + 10)/[(s − 0.2)(s + 100)]          (8.3)

First we write it as

G(s) = [5 × 10 (1 + s/10)] / [−0.2 × 100 (1 − s/0.2)(1 + s/100)]
     = −2.5(1 + s/10) / [(1 − s/0.2)(1 + s/100)]          (8.4)
It is important to express every term in the form 1 + τs. The gain of G(s) in decibels is 20 log |G(jω)| or

20 log |G(s)| = 20 log |−2.5| + 20 log |1 + s/10| − 20 log |1 − s/0.2| − 20 log |1 + s/100|          (8.5)

and the phase of G(s) is

∠G(s) = ∠(−2.5) + ∠(1 + s/10) − ∠(1 − s/0.2) − ∠(1 + s/100)          (8.6)
We see that the decibel gain of G(s) is simply the algebraic sum of the decibel gain
of each term of G(s). Adding all gains of the terms in the numerator and then
subtracting those in the denominator yields the gain of G(s). Similar remarks apply to the phase of G(s). Other than the constant term, (8.4) consists only of linear factors of the form (1 + τs). Therefore we discuss these linear factors first.
[Figure 8.4 Bode gain plots of (1 ± jτω) as zeros (asymptote rising at 20 dB/decade) and as poles (asymptote falling at −20 dB/decade), with corner frequency 1/τ.]
If we pick the 3-dB point and draw a smooth curve as shown in Figure 8.4, then the curve will be a very good approximation of the Bode gain plot of (1 ± τs). Note that the Bode gain plot of (1 + τs) is identical to that of (1 − τs). If they appear in the numerator, their asymptotes go up with slope 20 dB/decade. If they appear in the denominator, their asymptotes go down with slope −20 dB/decade. In other words, the asymptotes of 20 log |1 ± jτω| go up with slope 20 dB/decade; the asymptotes of −20 log |1 ± jτω| go down with slope −20 dB/decade.
Now we plot the Bode gain plot of (8.4) or (8.5). The corner frequency of the zero (1 + s/10) is 10, and its asymptote goes up with slope 20 dB/decade as shown in Figure 8.5 with the dashed lines. The corner frequency of the pole (1 − s/0.2) is 0.2, and its asymptote goes down with slope −20 dB/decade as shown with the dotted
[Figure 8.5 Bode gain plot of (8.4): the individual asymptotes, their sum, and the smoothed curve.]
lines. Note that the negative sign makes no difference in the gain plot. The corner frequency of the pole (1 + s/100) is 100, and its asymptote goes down with slope −20 dB/decade as shown with the dashed-and-dotted lines. Thus, there are three pairs of asymptotes. Now we consider the gain −2.5 in (8.4). Its decibel gain is 20 log |−2.5| = 8 dB; it is a horizontal line, independent of ω, as shown. The sum of the horizontal line and the three pairs of asymptotes is shown by the heavy solid line. It is obtained by adding them point by point. Because the plot consists of only straight lines, we need to compute the sums only at ω = 0.2, 10, 100, and at a point larger than ω = 100. The sum can also be obtained as follows. From ω = 0 to ω = 0.2, the sum of an 8-dB line and three 0-dB lines clearly equals 8 dB. Between [0.2, 10], there is only one asymptote with slope −20 dB/decade. Thus we draw from ω = 0.2 a line with slope −20 dB/decade up to ω = 10 as shown. Between [10, 100], there is one asymptote with slope 20 dB/decade and one with slope −20 dB/decade, thus the net is 0 dB/decade. Therefore, we draw a horizontal line from ω = 10 to ω = 100 as shown. For ω ≥ 100, two asymptotes have slope −20 dB/decade and one has slope 20 dB/decade. Thus, the net is a straight line with slope −20 dB/decade. There are three corner frequencies, at ω = 0.2, 10, and 100. Because they are far apart, the effects of their Bode gain plots on each other are small. Thus the difference between the Bode plot and the asymptotes at every corner frequency roughly equals 3 dB. Using this fact, the Bode gain plot can then be obtained by drawing a smooth curve as shown. This completes the plotting of the Bode gain plot of (8.4).
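The asymptotic construction can be verified numerically. The short Python/scipy sketch below is an assumed alternative to the MATLAB commands listed earlier (not part of the text); it evaluates the exact gain of (8.3) at the three corner frequencies, where the exact curve should lie roughly 3 dB away from the straight-line sum, as stated above.

```python
import numpy as np
from scipy import signal

# G(s) = (5s + 50) / (s^2 + 99.8s - 20), equation (8.3)
G = signal.TransferFunction([5, 50], [1, 99.8, -20])

for w in (0.2, 10.0, 100.0):       # the three corner frequencies
    _, H = signal.freqresp(G, [w])
    print(f"w = {w:6.1f} rad/s   gain = {20 * np.log10(abs(H[0])):7.2f} dB")
```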
In other words, for ω in (−∞, 0.1/τ), the phase of (1 ± jωτ) can be approximated by 0°; for ω in (10/τ, ∞), the phase of (1 + jωτ) can be approximated by 90° and the phase of (1 − jωτ) by −90°, as shown in Figure 8.6. We then connect the end points by a dashed straight line as shown. The exact phase of (1 + jωτ) is also plotted in Figure 8.6 using a solid line. We see that at ω = 0.1/τ and ω = 10/τ, the differences are 5.7°. There is no difference at ω = 1/τ. The differences at the midpoints between 0.1/τ and 1/τ and between 1/τ and 10/τ are 3.4°. The differences are fairly small between the dashed straight line and the exact phase plot. Thus, the phase plot of (1 + jωτ) can be approximated by the straight lines. Note that the phase of (1 − jωτ) equals the reflection of the phase of (1 + jωτ) to negative angles. Thus the phase of (1 − jωτ) can also be approximated by the straight lines.
[Figure 8.6 Straight-line approximations of the phases of (1 ± jτω), appearing as zeros and as poles.]
Thus, the Bode gain plots of (1 + τs) and (1 − τs) are the same. If they appear as zeros, then their plots are as shown in the upper part of Figure 8.4; if they appear as poles, then their plots are as shown in the lower part of Figure 8.4. The situation in phase plots is different. If (1 + τs) appears as a zero, or (1 − τs) appears as a pole, then their phases are as shown in the upper part of Figure 8.6. If (1 + τs) appears as a pole, or (1 − τs) appears as a zero, then their phases are as shown in the lower part of Figure 8.6.
With the preceding discussion, we are ready to plot the Bode phase plot of (8.3) or (8.6). The phase of the gain −2.5 is 180°. It is a horizontal line passing through 180° as shown in Figure 8.7. The corner frequency of the zero (1 + s/10) is 10; its phase for ω smaller than one-tenth of 10 is 0°; its phase for ω larger than ten times 10 is +90°. The phase for ω in (1, 100) is approximated by the dashed line shown in Figure 8.7. The phase of the pole (1 − s/0.2) is −∠(1 − jω/0.2). Its phase is 0° for ω < 0.02, and −(−90°) = +90° for ω > 2. The approximated phase is plotted with dotted lines. Similarly the phase of the pole (1 + s/100), or −∠(1 + jω/100), is plotted with dashed-and-dotted lines. Their sum is denoted by the solid line; it is obtained using the procedures discussed for the gain plot. By smoothing the straight lines, the Bode phase plot can then be obtained (not shown).
Once a Bode phase plot is completed, it is always advisable to check the plot for ω → 0 and ω → ∞. If ω → 0, (8.3) reduces to −2.5 and its phase is 180°. If ω → ∞, (8.3) reduces to 5s/s² = 5/s and its phase is −90°, which also equals 270°. Thus the plot in Figure 8.7 checks with the two extreme cases. Note that phases are considered the same if they differ by 360° or its multiples. For example, if the phase
Figure 8.7 Bode phase plot of (8.3).
of -2.5 is plotted as -180°, then the phase plot in Figure 8.7 will be shifted down
by 360°, which is the one generated by calling bode in MATLAB.
Exercise 8.3.1
b. G(s) = 2(s − 5)/[(s + 5)²(s + 10)]
Up to this point, we have considered transfer functions that contain only linear
factors. Now we discuss transfer functions that also contain quadratic factors and
poles or zeros at the origin.
Figure 8.8 Bode plot of 1/sⁱ for i = ±1, ±2.
Quadratic Factors

Consider the complex-conjugate poles²

E(s) := ωn²/(s² + 2ζωn·s + ωn²) = 1/[1 + (2ζ/ωn)s + (s/ωn)²]          (8.7)

with 0 ≤ ζ < 1. The logarithmic magnitude for s = jω is

20 log |E(jω)| = −20 log |[1 − (ω/ωn)²] + j2ζ(ω/ωn)|

² The Bode gain plot of 1/[1 − (2ζ/ωn)s + (s/ωn)²] equals that of (8.7), and its Bode phase plot equals that of (8.7) reflected to positive angles. To simplify discussion, we study only (8.7).
[Figure 8.9 Bode gain and phase plots of the quadratic factor (8.7) for ζ = 0.1, 0.2, 0.3, 0.5, 0.7, and 1.0, together with the asymptotes (0 dB at low frequency, −40 dB/decade at high frequency).]
the low-frequency asymptote is the 0-dB line and the high-frequency asymptote goes down with slope −40 dB/decade. We then compute the damping ratio and use Figure 8.9 to draw an approximate Bode gain plot. Note that if the quadratic factor appears as zeros, then one asymptote will go up with slope +40 dB/decade, and the plots in Figure 8.9 are reversed.
The phase plot of the quadratic factor in (8.7) can be approximated by straight lines as follows:

ω very small, or ω ≤ 0.1ωn:  ∠E(jω) = −∠1 = 0°
ω very large, or ω ≥ 10ωn:  ∠E(jω) = −∠(jω/ωn)² = −180°

The exact phases for ω in (0.1ωn, 10ωn) are plotted in Figure 8.9 for various ζ. For small ζ, they are quite different from the dashed straight line shown. Thus in plotting the Bode phase plot of (8.7), we must compute the corner frequency ωn as well as the damping ratio ζ and then use Figure 8.9.
Example 8.3.1

Consider

G(s) = 50(s + 2)/[s(s² + 4s + 100)]

The quadratic factor has 2ζωn = 4 and ωn² = 100, which imply ωn = 10 and ζ = 0.2. We then express every term of G(s) in the form of 1 + (·) as in (8.4) or (8.7):

G(s) = [50 × 2 (1 + s/2)] / [100 s (1 + (4/100)s + (s/10)²)]
     = (1 + s/2) / [s (1 + 0.04s + (s/10)²)]
The asymptotes of the zero, the pole at the origin, and the quadratic term are plotted
in Figure 8.10 using, respectively, the dashed, dotted, and dashed-and-dotted lines.
The sums of these asymptotes are denoted by the thin solid lines. Because the damp-
ing ratio is 0.2, the Bode plot is quite different from the asymptotes at w = wn =
10. Using the plot in Figure 8.9, we can obtain the Bode gain and phase plots of
G(s) as shown in Figure 8.10 with the heavy solid lines.
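As a check on Example 8.3.1 (a Python/scipy sketch, not part of the text), the exact gain of G(s) near ω = ωn = 10 should rise well above the asymptotes, since for ζ = 0.2 the quadratic factor peaks by about −20 log(2ζ) ≈ 8 dB at its corner frequency.

```python
import numpy as np
from scipy import signal

# G(s) = 50(s + 2) / [s(s^2 + 4s + 100)], Example 8.3.1
G = signal.TransferFunction(np.polymul([50], [1, 2]),
                            np.polymul([1, 0], [1, 4, 100]))

w = np.logspace(-1, 3, 400)
w, mag_db, phase_deg = signal.bode(G, w)
i = np.argmin(abs(w - 10))          # index of the point closest to w = 10 rad/s
print("gain  near w = 10:", round(mag_db[i], 1), "dB")
print("phase near w = 10:", round(phase_deg[i], 1), "deg")
```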
Exercise 8.3.2
a. G(s)
b. G(s)
s(s 2 + 4s + 100)
[Figure 8.11 Pole-zero plots of G1(s) (zero at z > 0, pole at −p) and G2(s) (zero at −z, pole at −p).]
we obtain G2(s). Now we compare the gains and phases of G1(s) and G2(s) on the positive jω-axis. From Figure 8.11, we can see that the vectors from the zero z to any point on the jω-axis and from the zero −z to the same point have the same length; therefore, we have

|G1(jω)| = |G2(jω)|

for all ω ≥ 0. Actually, this fact has been used constantly in developing Bode gain plots. Although G1(s) and G2(s) have the same gain plots, their phases are quite different. From Figure 8.11, we have

∠G1(jω) = θ1 − φ        ∠G2(jω) = θ2 − φ

At ω = 0, θ1 = 180° > θ2 = 0°. At ω = ∞, θ1 = θ2 = 90°. In general, θ1 ≥ θ2 for all ω ≥ 0. Thus we have

∠G1(jω) ≥ ∠G2(jω)

for all ω ≥ 0. Thus, if a transfer function has right-half-plane zeros, reflecting these
zeros into the left half plane gives a transfer function with the same amplitude but
a smaller phase than the original transfer function at every ω ≥ 0. This motivates
the following definition:
Definition 8.1
A proper rational transfer function is called a minimum-phase transfer function if all its zeros lie inside the open left half s-plane. It is called a non-minimum-phase transfer function if it has zeros in the closed right half plane. Zeros in the closed right half plane are called non-minimum-phase zeros. Zeros in the open left half plane are called minimum-phase zeros. •
Exercise 8.3.3
Do they have the same amplitude plots? How about their phase plots? Phases are
considered the same if they differ by ± 360° or their multiples. If their phases are
set equal at ω = ∞, which transfer function has a smaller phase at every ω ≥ 0?
Exercise 8.3.4
8.3.2 Identification

Determination of the transfer function G(s) of a system from measured data is an identification problem. As discussed in Section 4.7.1, if a system is stable, then |G(jω)| and ∠G(jω) can be obtained by measurement. This is also possible if G(s) has only one unstable pole at s = 0 (see Problem 4.21). From |G(jω)| and ∠G(jω), we can readily obtain the Bode plot of G(s). Now we discuss how to obtain G(s) from its Bode plot. This is illustrated by examples.
Example 8.3.2
Find the transfer function of the Bode plot shown in Figure 8.12(a). First we ap-
proximate the gain plot by the three straight dashed lines shown. They intersect at
w = 1 and w = 10. We begin with the leftmost part of the gain plot. There is a
[Figure 8.12 Bode plots to be identified: (a) Example 8.3.2; (b) Example 8.3.3.]
straight line with slope −20 dB/decade, therefore the transfer function has one pole at s = 0. At ω = 1, the slope becomes −40 dB/decade, a decrease of 20 dB/decade, thus there is a pole with corner frequency ω = 1. At ω = 10, the slope becomes −20 dB/decade, an increase of 20 dB/decade, therefore there is a zero with corner frequency ω = 10. Thus, the transfer function must be of the form

G(s) = k(1 ± s/10) / [s(1 ± s)]
This form is very easy to obtain. Wherever there is a decrease of 20 dB/decade in slope, there must be a pole. Wherever there is an increase of 20 dB/decade, there must be a zero. The constant k can be determined from an arbitrary ω. For example, the gain is 37 dB at ω = 1 or s = j1. Thus we have

37 = 20 × log |k(1 ± j1/10)| / |j1 × (1 ± j1)|
   = 20 × log [|k| √(1 + 0.01) / (1 × √(1 + 1))]
   = 20 × log (|k| × 1.005/1.414)

which implies

|k| = (1.414/1.005) × 10^{37/20} = 1.4 × 10^{1.85} = 1.4 × 70.79 = 99.6
Now we use the phase plot to determine the sign of each term. The phase of G(s) for ω very small is determined by k/s. If k is negative, the phase of k/s is +90°; if k is positive, the phase is −90°. From the phase plot in Figure 8.12(a), we conclude that k is positive and equals 99.6. If the sign of the pole 1 ± s is negative, the pole will introduce positive phase into G(s) or, equivalently, the phase of G(s) will increase as ω passes through the corner frequency 1. This is not the case as shown in Figure 8.12(a), therefore we have 1 + s. If the sign of the zero 1 ± 0.1s is positive, the zero will introduce positive phase into G(s) or, equivalently, the phase of G(s) will increase as ω passes through the corner frequency at 10. This is not the case as shown in Figure 8.12(a), thus we have 1 − 0.1s and the transfer function of the Bode plot is

G(s) = 99.6(1 − 0.1s) / [s(1 + s)]
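The identified model can be checked against the data read from the plot. In the Python sketch below (illustrative tooling, not the text's method), the gain at ω = 1 should come out close to the 37 dB used above, confirming k = 99.6 together with the non-minimum-phase zero 1 − 0.1s.

```python
import numpy as np
from scipy import signal

# Identified model G(s) = 99.6(1 - 0.1s) / (s(1 + s)) from Example 8.3.2
num = np.polymul([99.6], [-0.1, 1])        # 99.6*(1 - 0.1s)
den = np.polymul([1, 0], [1, 1])           # s*(1 + s)
G = signal.TransferFunction(num, den)

w = np.logspace(-2, 3, 500)
w, mag_db, phase_deg = signal.bode(G, w)
i = np.argmin(abs(w - 1.0))
print("gain at w = 1:", round(mag_db[i], 1), "dB")     # expected: about 37 dB
print("phase at w = 1:", round(phase_deg[i], 1), "deg")
```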
Example 8.3.3
Find the transfer function of the Bode plot in Figure 8.12(b). The gain plot is first
approximated by the straight lines shown. There are three comer frequencies: 0.2,
5, and 20. At ω = 0.2, the slope becomes −20 dB/decade; therefore, there is one pole at 0.2 or (1 ± s/0.2). At ω = 5, the slope changes from −20 dB/decade to 0; therefore, there is a zero at 5 or (1 ± s/5). At ω = 20, the slope changes from 0 to −40 dB/decade; therefore, there is a repeated pole or a pair of complex-conjugate poles with corner frequency 20. Because of the small bump, it is a quadratic term. The bump is roughly 10 dB high, and we use Figure 8.9 to estimate its damping ratio ζ as 0.15. Therefore, the transfer function of the Bode plot is of the form

G(s) = k(1 ± s/5) / [(1 ± s/0.2)(1 ± (2 × 0.15/20)s + (s/20)²)]
The gain of G(s) at ω → 0 or s = 0 is 40 dB. Thus we have

20 × log |G(0)| = 20 × log |k| = 40

which implies |k| = 100 or k = ±100. Using the identical argument as in the preceding example, we can conclude from the phase plot that we must take the positive sign in all ±. Thus the transfer function of the Bode plot is

G(s) = 100(1 + s/5) / [(1 + s/0.2)(1 + (0.3/20)s + (s/20)²)]
     = 1600(s + 5) / [(s + 0.2)(s² + 6s + 400)]
From these examples, we see that if a Bode plot can be nicely approximated by
straight lines with slope ± 20 dB/decade or its multiples, then its transfer function
can be readily obtained. The Bode gain plot determines the form of the transfer
function. Wherever the slope decreases by 20 dB/decade, there is a pole; wherever
it increases by 20 dB/decade, there is a zero. Signs of poles or zeros are then
determined from the Bode phase plot. If the Bode plot is obtained by measurement,
then, except for a possible pole at s = 0, the system must be stable. Therefore, we
can simply assign positive sign to the poles without checking the phase plot. If the
transfer function is known to be of minimum phase, then we can assign positive sign
to the zeros without checking the phase plot. In fact, if a transfer function is stable
and of minimum phase, then there is a unique relationship between the gain plot and
phase plot, and we can determine the transfer function from the gain plot alone. To
conclude this section, we mention that devices, such as the HP3562A Dynamic
System Analyzer, are available to measure Bode plots and then generate transfer
functions. This facilitates considerably the identification of transfer functions.
8.4 STABILITY TEST IN THE FREQUENCY DOMAIN
Consider the unity-feedback system shown in Figure 8.1. We discuss in this section
a method of checking the stability of the feedback system from its open-loop transfer
function G(s)C(s). The transfer function of the unity-feedback system is

Go(s) = G(s)C(s)/(1 + G(s)C(s)) = G1(s)/(1 + G1(s))          (8.8)

where G1(s) := G(s)C(s) is called the loop transfer function. The stability of Go(s) is determined by the poles of Go(s) or the zeros of the rational function

1 + G1(s)

Recall that we have introduced two methods of checking whether or not all zeros of (1 + G1(s)) have negative real parts. If we write G1(s) = N1(s)/D1(s), then the zeros of (1 + G1(s)) are the roots of the polynomial D1(s) + N1(s) and we may apply the Routh test. Another method is to plot the root loci of G1(s) = −1/k with k = 1, as was discussed in Section 7.4.3. In this section, we shall introduce yet another method of checking whether or not all zeros of (1 + G1(s)) lie inside the open left half plane. The method uses only the frequency plot of G1(s) and is based on the principle of argument in the theory of complex variables.
Figure 8.13 Mapping of C1.
the positive jω-axis of the s-plane by G(s) in (8.1) into the G-plane. A simple closed curve is defined as a curve that starts and ends at the same point without going through any point twice.
Principle of Argument
Let C1 be a simple closed curve in the s-plane. Let F(s) be a rational function of s that has neither pole nor zero on C1. Let Z and P be the numbers of zeros and poles of F(s) (counting the multiplicities) encircled by C1. Let C2 be the mapping of C1 by F(s) into the F-plane. Then C2 will encircle the origin of the F-plane (Z − P) times in the same direction as C1. •
where F(s) = 1 + G1(s). The condition for Go(s) to be stable is that the numerator of F(s) is a Hurwitz polynomial or, equivalently, F(s) has no zero in the closed right half s-plane. In checking stability, the contour C1 will be chosen to enclose the entire closed right half plane as shown in Figure 8.14(a), in which the radius R of the semicircle should be very large or infinity. The direction of C1 is arbitrarily chosen to be clockwise. Now the mapping of C1 by F(s) into the F-plane is called the Nyquist
[Figure 8.14 The contour C1: (a) enclosing the closed right half plane; (b) modified with small indentations around poles on the imaginary axis.]
plot of F(s). Similarly, we may define the Nyquist plot of G1(s) as the mapping of
C 1 by G1(s). We first use an example to illustrate the plotting of the Nyquist plots
of G 1(s) and F(s).
Example 8.4.1

Consider

G1(s) = 8s/[(s − 1)(s − 2)]          (8.10)

We compute G1(j1) = 2.6e^{j161°}, G1(j2) = 2.6e^{j198°}, and G1(j10) = 0.8e^{−j107°}. Using these, we can plot the polar plot of G1(s), or the plot of G1(jω) for ω ≥ 0, as shown in Figure 8.15 with the solid curve. It happens to be a circle. Because all coefficients of G1(s) are real, we have

G1(−jω) = G1*(jω)

where the asterisk denotes the complex conjugate. Thus the mapping of G1(jω), for ω ≤ 0, is simply the reflection, with respect to the real axis, of the mapping of G1(jω), for ω ≥ 0, as shown with the dashed curve. Because G1(s) is strictly proper, every point of the semicircle in Figure 8.14(a) with R → ∞ is mapped by G1(s) into 0. Thus, the complete Nyquist plot of G1(s) consists of the solid and dashed circles in Figure 8.15. As ω travels clockwise in Figure 8.14(a), the Nyquist plot of G1(s) travels counterclockwise as shown.
If G1(s) is proper, then the mapping of the infinite semicircle of C1 in Figure 8.14(a) by G1(s) is simply a point in the G1-plane. Thus the Nyquist plot of G1(s) consists mainly of G1(jω) for all ω. The polar plot of G1(s), however, is defined as G1(jω) for ω ≥ 0. Therefore the Nyquist plot of G1(s) consists of the polar plot of G1(s) and its reflection with respect to the real axis. Therefore, the Nyquist plot can be readily obtained from the polar plot.
[Figure 8.15 Nyquist plot of G1(s) in (8.10); the F-plane coordinates are centered at the point (−1, 0) of the G1-plane.]
Because F(s) = 1 + G1(s), the Nyquist plot of F(s) is simply the Nyquist plot of G1(s) shifted to the right by one unit. This can be more easily achieved by choosing the coordinates of the F-plane as shown in Figure 8.15. In other words, the origin of the F-plane is the point (−1, 0) in the G1-plane. Therefore, once the Nyquist plot of G1(s) is obtained, the Nyquist plot of F(s) is already there.
Exercise 8.4. 1
The transfer function in (8.10) has one zero and no pole on the imaginary axis, and its Nyquist plot can easily be obtained. Now if a transfer function G1(s) contains poles on the imaginary axis, then G1(s) is not defined at every point of C1 in Figure 8.14(a) and its Nyquist plot cannot be completed. For this reason, if G1(s) contains poles on the imaginary axis as shown in Figure 8.14(b), then the contour C1 must be modified as shown. That is, wherever there is a pole on the imaginary axis, the contour is indented by a very small semicircle with radius r. Ideally, the radius r should approach zero. With this modification, the Nyquist plot of G1(s) can then be completed. This is illustrated by an example. Before proceeding, we mention that the command nyquist in MATLAB will yield an incorrect or incomplete Nyquist plot if G1(s) has poles on the imaginary axis.
Example 8.4.2
Consider

G1(s) = (s + 1)/[s²(s − 2)]          (8.11)

Its poles and zero are plotted in Figure 8.16(a). Because G1(s) has poles at the origin, the contour C1 at the origin is replaced by the semicircle re^{jθ}, where θ varies from
[Figure 8.16 (a) Poles and zero of (8.11) and the indented contour; (b) Nyquist plot of (8.11).]
−90° to 90° and r is very small. To compute the mapping of the small semicircle by G1(s), we use, for s very small,

G1(s) = (s + 1)/[s²(s − 2)] ≈ 1/[s²(−2)]          (8.12)

Its phase is 180° and its amplitude is very large because r is very small. Similarly, we have the following:

B: s = re^{j45°}:  G1(B) = e^{j180°}/(2r²e^{j90°}) = (1/2r²)e^{j90°}
C: s = re^{j90°}:  G1(C) = e^{j180°}/(2r²e^{j180°}) = (1/2r²)e^{j0°}
Exercise 8.4.2
G 1(s)
s(s + 1)
and F(s) 1 + G1(s).
To prove this theorem, we first show that Go(s) is stable if and only if the Nyquist plot of F(s) does not pass through the origin of the F-plane and the number of counterclockwise encirclements of the origin equals the number of open right-half-plane poles of G1(s). Clearly Go(s) is stable if and only if F(s) has no closed right-half-plane zeros. If the Nyquist plot of F(s) passes through the origin of the F-plane, then F(s) has zeros on the imaginary axis and Go(s) is not stable. We assume in the following that F(s) has no zeros on the imaginary axis. Let Z and P be, respectively, the numbers of open right-half-plane zeros and poles of F(s) or, equivalently, the numbers of zeros and poles of F(s) encircled by C1. Because F(s) and G1(s) have the same denominator, P also equals the number of open right-half-plane poles of G1(s). Now the principle of argument states that

N = Z − P

Clearly F(s) has no open right-half-plane zeros, or Go(s) is stable, if and only if N = −P, that is, if and only if the Nyquist plot of F(s) encircles the origin P times in the counterclockwise direction.
[Figure 8.17 (a) Unity-feedback system with loop transfer function G1(s); (b) the same system with an additional gain k.]
From the discussion in the preceding subsection, the encirclement of the Nyquist plot of F(s) around the origin of the F-plane is the same as the encirclement of the Nyquist plot of G1(s) around the point (−1, 0) on the G1-plane. This establishes the theorem. We note that if the Nyquist plot of G1(s) passes through (−1, 0), then Go(s) has at least one pole on the imaginary axis and Go(s) is not stable.
We discuss now the application of the theorem.
Example 8.4.3

Consider the unity-feedback system in Figure 8.17(a) with G1(s) given in (8.10). G1(s) has two poles in the open right half plane. The Nyquist plot of G1(s) is shown in Figure 8.15. It encircles (−1, 0) twice in the counterclockwise direction. Thus, the unity-feedback system is stable. Certainly, this can also be checked by computing

Go(s) = G1(s)/(1 + G1(s)) = [8s/((s − 1)(s − 2))] / [1 + 8s/((s − 1)(s − 2))]
      = 8s/[(s − 1)(s − 2) + 8s] = 8s/(s² + 5s + 2)
which is stable.
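The conclusion of Example 8.4.3 can be confirmed numerically. The sketch below (Python/numpy, an illustration rather than part of the text) computes the closed-loop poles of (8.10) and also counts the encirclements of −1 by G1(jω); both checks agree with the Nyquist argument above.

```python
import numpy as np

# Loop transfer function G1(s) = 8s / ((s-1)(s-2)), equation (8.10)
den = np.polymul([1, -1], [1, -2])
num = [8, 0]

# Closed-loop denominator (s-1)(s-2) + 8s = s^2 + 5s + 2
den_cl = np.polyadd(den, num)
print("closed-loop poles:", np.roots(den_cl))     # both in the open left half plane

# Count encirclements of (-1, 0) by G1(jw) for w from -1000 to 1000 rad/s
w = np.concatenate([-np.logspace(3, -3, 4000), np.logspace(-3, 3, 4000)])
G = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)
angles = np.unwrap(np.angle(G - (-1)))
print("encirclements of -1:", round((angles[-1] - angles[0]) / (2 * np.pi)))
```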
Example 8.4.4

Consider the unity-feedback system in Figure 8.17(a) with G1(s) given in (8.11). G1(s) has one open right-half-plane pole. Its Nyquist plot is shown in Figure 8.16; it encircles (−1, 0) once in the clockwise direction. Although the number of encirclements is right, the direction is wrong. Thus the unity-feedback system is not stable. This can also be checked by computing

Go(s) = (s + 1)/(s³ − 2s² + s + 1)

which is clearly not stable.
In application, we may encounter the problem of finding the range of k for the system in Figure 8.17(b), or

Go(s) = kG1(s)/(1 + kG1(s))          (8.14)

to be stable. Although Theorem 8.1 can be directly applied to solve the problem, it is more convenient to modify the Nyquist stability criterion as follows:
THEOREM 8.2
The Go(s) in (8.14) is stable if and only if the Nyquist plot of G1(s) does not pass through the critical point (−1/k, 0) and the number of counterclockwise encirclements of (−1/k, 0) equals the number of open right-half-plane poles of G1(s). •
Example 8.4.5
then the Nyquist plot does not encircle (−1/k, 0) and the feedback system in Figure 8.17(b) is stable. Now (8.16a) implies 0 ≤ k < 5/4 and (8.16b) implies 0 ≥ k > −1/4. Thus the stability range of the system is

−1/4 < k < 5/4          (8.17)
This result is the same as (4.21) which is obtained by using the Routh test. We
mention that the stability range can also be obtained by using the root-locus method.
This will not be discussed.
We give a special case of Theorem 8.1 in which loop transfer functions have
no open right-half-plane poles. Poles on the imaginary axis, however, are permitted.
COROLLARY 8.1
If the loop transfer function G1(s) in Figure 8.17(a) has no open right-half-plane poles, then Go(s) in (8.13) is stable if and only if the Nyquist plot of G1(s) does not encircle the critical point (−1, 0) nor pass through it. •
If we define G1(s) := C1(s)C2(s)G(s), then Go(s) in (8.18) is stable if and only if the Nyquist plot of G1(s) does not pass through (−1, 0) and the number of counterclockwise encirclements of (−1, 0) equals the number of open right-half-plane poles of G1(s).
If the Nyquist plot of G1(s) passes through (−1, 0) at, say, s = jω0, then the numerator of (1 + G1(s)) equals zero at s = jω0. In other words, the numerator has a zero on the imaginary axis and is not Hurwitz. Consequently, Go(s) is not stable. Thus, the distance between the Nyquist plot of G1(s) and the critical point (−1, 0) can be used as a measure of the stability of Go(s). Generally, the larger the distance, the more stable the system. The distance can be found by drawing a circle touching the Nyquist plot as shown in Figure 8.20(a). Such a distance, however, is not easy to measure and is not convenient for design. Therefore, it will be replaced by phase margin and gain margin.
Consider the Nyquist plot for ω ≥ 0 or, equivalently, the polar plot shown in Figure 8.20. Let ωp > 0 be the frequency at which ∠G1(jωp) = 180°. This is called the phase crossover frequency; it is the frequency at which the polar plot of G1(s) passes through the negative real axis. The distance between −1 and G1(jωp) as shown in Figure 8.20(b) is called the gain margin. The distance, however, is not measured on a linear scale; it is measured in decibels, defined as

Gain margin := 20 log |−1| − 20 log |G1(jωp)| = −20 log |G1(jωp)|          (8.19)

For example, if G1(jωp) = −0.5, then the gain margin is +6 dB. If G1(jωp) = 0, then the gain margin is ∞. Note that if G1(jωp) lies between −1 and 0, then |G1(jωp)| < 1 and the gain margin is positive. If |G1(jωp)| > 1, or the polar plot of G1(s) intersects the real axis on the left-hand side of −1 as shown in Figure 8.20(c), then the gain margin is negative.
Let ωg > 0 be the frequency such that |G1(jωg)| = 1. It is the frequency at which the polar plot of G1(s) intersects the unit circle as shown in Figure 8.20(b) and is called the gain crossover frequency. If we draw a straight line from the origin to G1(jωg), then the angle between the straight line and the negative real axis is called the phase margin. To be more precise, the phase margin is defined as

Phase margin := |∠(−1)| − |∠G1(jωg)| = 180° − |∠G1(jωg)|          (8.20)

where the phase of G1(jωg) must be measured in the clockwise direction. Note that if the intersection with the unit circle occurs in the third quadrant as shown in Figure
[Figure 8.20 Polar plots of G1(s): (a) distance to the critical point; (b) positive gain and phase margins; (c) negative gain and phase margins.]
8.20(b), then the phase margin is positive. If the intersection occurs in the second quadrant as shown in Figure 8.20(c), then the phase margin is negative.
The phase and gain margins can be much more easily obtained from the Bode plot. In fact, the definitions in (8.19) and (8.20) are developed from the Bode plot. Recall that the Bode plot and polar plot differ only in the coordinates and either one can be obtained from the other. Suppose the polar plots in Figure 8.20(b) and (c) are translated into the Bode plots shown in Figure 8.21(a) and (b). Then the gain crossover frequency ωg is the frequency at which the gain plot crosses the 20 log 1 = 0-dB line. We then draw a vertical line downward to the phase plot. The distance in degrees between the −180° line and the phase plot is the phase margin. If the phase of G1(jωg) lies above the −180° line, the phase margin is positive, as shown in Figure 8.21(a). If it lies below, the phase margin is negative, as shown in Figure 8.21(b). The phase crossover frequency ωp is the frequency at which the phase plot crosses the −180° line. We then draw a vertical line upward to the gain plot. The distance in decibels between the 0-dB line and the gain plot at ωp is the gain margin. If the gain at ωp lies below the 0-dB line, as shown in Figure 8.21(a), the gain margin is positive. Otherwise it is negative, as shown in Figure 8.21(b). Thus, the phase and gain margins can readily be obtained from the Bode plot.
Although the phase and gain margins can be more easily obtained from the Bode plot, their physical meaning can be more easily visualized on the Nyquist plot. For example, if G1(s) has no pole in the open right half plane and has a polar plot roughly of the form shown in Figure 8.20, then its Nyquist plot (the polar plot plus its reflection) will not encircle (−1, 0) if both the phase and gain margins are positive. Thus, the overall system Go(s) = G1(s)/(1 + G1(s)) is stable. In conclusion,
[Figure 8.21 Gain and phase margins on the Bode plot: (a) both margins positive; (b) both margins negative.]
if G1(s) has no open right-half-plane pole, then generally the overall system Go(s) = G1(s)/(1 + G1(s)) is stable if the phase margin and the gain margin of G1(s) are both positive. Furthermore, the larger the gain and phase margins, the more stable the system. In design we may require, for example, that the gain margin be larger than 6 dB and that the phase margin be larger than 30°. If either the phase margin or gain margin equals 0 or is negative, then the system Go(s) is generally not stable.
If a loop transfer function G1(s) has open right-half-plane poles, in order for Go(s) to be stable, the Nyquist plot must encircle the critical point. In this case, the polar plot may have a number of phase-crossover frequencies and a number of gain-crossover frequencies as shown in Figure 8.22(a), thus phase margins and gain margins are not unique. Furthermore, some phase margins must be positive and some negative in order for the system to be stable. Thus, the use of phase margins and gain margins becomes complicated. For this reason, if loop transfer functions have open right-half-plane poles, the concepts of phase and gain margins are less useful.
Even if loop transfer functions have no pole in the open right half plane, care must still be exercised in using the phase and gain margins. First, the polar plots of such transfer functions may not be of the form shown in Figure 8.20. They could have more than one phase margin and/or more than one gain margin as shown in Figure 8.22(b). Moreover, if a polar plot is as shown in Figure 8.22(c), even though G1(s) has a large phase margin and a large gain margin, the closed-loop system Go(s) = G1(s)/(1 + G1(s)) has a poor degree of stability. Therefore, the relationship between the degree of stability and phase and gain margins is not necessarily exact.
Exercise 8.4.3
Find the gain and phase margins of the following transfer functions:
a. 2/[s(s + 1)]

b. 5/[s(s + 1)(s + 2)]
c.
s 2 (s + 1)
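As a worked instance of part (a), the margins can be computed numerically. This Python/scipy sketch (an assumption of tooling, not the text's method) finds the gain-crossover frequency of G1(s) = 2/[s(s + 1)] and the corresponding phase margin; the phase only reaches −180° as ω → ∞, where the gain is zero, so the gain margin is infinite.

```python
import numpy as np
from scipy import signal, optimize

# Part (a): G1(s) = 2 / (s(s + 1))
G1 = signal.TransferFunction([2], [1, 1, 0])

def gain_at(w):
    _, H = signal.freqresp(G1, [w])
    return abs(H[0])

# Gain-crossover frequency: |G1(jw)| = 1
wg = optimize.brentq(lambda w: gain_at(w) - 1.0, 0.1, 10.0)
_, H = signal.freqresp(G1, [wg])
phase_margin = 180.0 + np.degrees(np.angle(H[0]))
print(f"gain crossover wg = {wg:.2f} rad/s, phase margin = {phase_margin:.1f} deg")
```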
8.5 FREQUENCY-DOMAIN SPECIFICATIONS FOR OVERALL SYSTEMS

With the preceding background, we are ready to discuss the design problem. The problem is: given a plant with transfer function G(s), design a feedback system with transfer function Go(s) to meet a set of specifications. The specifications are generally stated in terms of position error, rise time, settling time, and overshoot. Because they are defined for the time responses of Go(s), they are called time-domain specifications. Now, if the design is to be carried out using frequency plots, we must translate the time-domain specifications into the frequency domain. This will be carried out in this section. Recall that the specifications are stated for the overall transfer function Go(s), not for the plant transfer function G(s).
Steady-State Performance
Let Go(s) be written as

Go(jω) = |Go(jω)| e^{jθ(ω)}

where θ(ω) = tan⁻¹[Im Go(jω)/Re Go(jω)]. The plot of |Go(jω)| with respect to ω is called the amplitude plot and the plot of θ(ω) with respect to ω is called the phase plot of Go(s). Typical amplitude and phase plots of control systems are shown
in Figure 8.23. From the final-value theorem, we know that the steady-state performance (in the time domain) is determined by Go(s) as s → 0, or Go(jω) as ω → 0 (in the frequency domain). Indeed, the position error or the steady-state error due to a step-reference input is, as derived in (6.3),

ep = |1 − Go(0)| × 100%

Thus from Go(0), the position error can immediately be determined. For example, if Go(0) = 1, then ep = 0; if Go(0) = 0.95, then the position error is 5%. The velocity error or the steady-state error due to a ramp-reference input is, as derived in (6.6), given by (8.22a). In order to have a finite velocity error, we require Go(0) = 1. In this case, (8.22a) reduces to (8.22b), which implies that the velocity error depends only on the slope of Go(jω) at ω = 0. If the slope is zero, the velocity error is zero. Thus the steady-state performance can be easily translated into the values of Go(jω) and its derivatives at ω = 0.
Transient Performance

The specifications on the transient performance consist of overshoot, settling time, and rise time. These specifications are closely related to Mp and the bandwidth shown in Figure 8.23. The constant Mp, called the peak resonance, is defined as

Mp := max |Go(jω)|          (8.23)

for ω ≥ 0. It is the largest magnitude of Go(jω) in positive frequencies. The bandwidth is defined as the frequency range in which the magnitude of Go(jω) is equal to or larger than 0.707|Go(0)|. The frequency ωc with the property |Go(jωc)| = 0.707|Go(0)| is called the cutoff frequency. If Go(0) = 1 or 0 dB, the amplitude of Go(s) at ωc is

20 log 0.707 = −3 dB
[Figure 8.23 Typical (a) amplitude and (b) phase plots of Go(jω), showing |Go(0)|, 0.707|Go(0)|, Mp, and the bandwidth.]
and, because the power is proportional to |Go(jω)|², the power of Go(s) at ωc is

(0.707)² = 0.5

thus ωc is also called the −3-dB or half-power point. In conclusion, the cutoff frequency of Go(s) is defined as the frequency at which its amplitude is 70% or 3 dB below the level at ω = 0, or at which the power is half of that at ω = 0.
Now we discuss the relationship between the specifications on transient performance, and the peak resonance and bandwidth. Similar to the development of the desired pole region in Chapter 7, we consider the following quadratic transfer function

Go(s) = ωn²/(s² + 2ζωn·s + ωn²)          (8.24)
Clearly, the larger the bandwidth, the faster the response. This rule of thumb is widely accepted in engineering,
even though the exact relationship is not known and the statement may not be true
for every control system. The speed of response here may mean the rise time or
settling time. We give a plausible argument of the statement by using the quadratic
transfer function in (8.24). From the horizontal coordinate ωn·t in Figure 4.7, we argued in Section 7.2.1 that the rise time is inversely proportional to ωn. The Bode plot of (8.24) is shown in Figure 8.9. The intersection of the −3-dB horizontal line with the gain plot yields the cutoff frequency and the bandwidth. Because the hor-
Figure 8.24 Peak resonance and overshoot.
8.6 FREQUENCY-DOMAIN SPECIFICATIONS FOR LOOP TRANSFER FUNCTIONS

Steady-State Performance
Consider the loop transfer function G1(s). We define

Kp := lim_{s→0} G1(s) = G1(0)          (8.27a)
Kv := lim_{s→0} s·G1(s)          (8.27b)
Ka := lim_{s→0} s²·G1(s)          (8.27c)

The position error due to a step-reference input is then³

ep = |1/(1 + G1(0))| = |1/(1 + Kp)| × 100%          (8.29)

We see that the position error depends only on Kp, thus Kp is called the position-error constant.
If r(t) = t or R(s) = 1/s², then the steady-state error in (8.28) is the velocity error defined in (6.6).⁴ Using the final-value theorem, we have

ev(t) = lim_{t→∞} |e(t)| = lim_{s→0} |sE(s)| = lim_{s→0} |s · (1/(1 + G1(s))) · (1/s²)|          (8.30)

³ If r(t) = a, then the error must be divided by a. Because a = 1, this normalization is not needed.
⁴ If r(t) = at, then the error must be divided by a. Because a = 1, this normalization is not needed.
          Kp    Kv    Ka
Type 0    k     0     0
Type 1    ∞     k     0        ep = |1/(1 + Kp)|
Type 2    ∞     ∞     k        ev = |1/Kv|
Now if G1(s) is of type 1, then Kp = ∞ and ep = 0. Thus the unity-feedback system in Figure 8.1 will track any step-reference input without an error. If G1(s) is of type 2, then Kv = ∞ and ev = 0. Thus the system will track any ramp-reference input without an error. These are consistent with the conclusions in Section 6.3.2. To conclude this part, we mention that (8.29) and (8.30) are established for unity-feedback systems. They are not necessarily applicable to other configurations.
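The entries of this table are easy to evaluate for a specific loop transfer function. For example, for the type 1 function G1(s) = 2/[s(s + 1)] used in Exercise 8.4.3, a short sketch (Python, assumed tooling rather than the text's method) gives Kp = ∞, Kv = 2, and hence a velocity error of 50%:

```python
import numpy as np

# Type 1 loop transfer function G1(s) = 2 / (s(s + 1)) = num / (s * den1)
num = np.array([2.0])        # numerator
den1 = np.array([1.0, 1.0])  # denominator with the factor s removed

Kp = np.inf                                    # type 1: G1(0) is infinite
Kv = np.polyval(num, 0) / np.polyval(den1, 0)  # lim s*G1(s) = num(0)/den1(0) = 2
ev = abs(1 / Kv) * 100                         # velocity error from the table, in percent
print(f"Kp = {Kp}, Kv = {Kv}, velocity error = {ev:.0f}%")
```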
Transient Performance⁵

The transient performance of Go(s) is specified in terms of the peak resonance Mp, bandwidth, and high-frequency gain. Now we shall translate these into a set of specifications for G1(s). To do so, we must first establish the relationship between Go(jω) and G1(jω) = G(jω)C(jω). Let the polar plot of G1(s) be as shown in Figure 8.26(a). Consider the vector G1(jω1). Then the vector drawn from (−1, 0) to G1(jω1) equals 1 + G1(jω1). Their ratio

G1(jω1)/(1 + G1(jω1)) = Go(jω1)

yields Go(jω1). Therefore it is possible to translate G1(jω) graphically into Go(jω). To facilitate the translation, we first compute the loci on the G1-plane that have constant |Go(jω)|.⁶ Let x + jy be a point of G1(jω) on the G1-plane. Clearly we have

|Go(jω)| = |G1(jω)/(1 + G1(jω))| = |x + jy|/|1 + x + jy| = [(x² + y²)/((1 + x)² + y²)]^{1/2}          (8.31)
or
⁵ This subsection establishes the last column in Table 8.1. It may be skipped without loss of continuity.
[Figure 8.26 (a) Polar plot of G1(jω) and the vector construction; (b) constant-M loci for M = 0.7, 1, 1.1, 1.2, 1.4.]
(x − M²/(1 − M²))² + y² = (M/(1 − M²))²
⁶ It is also possible to plot the loci of constant phases of Go(s) on the G1-plane. The plot consists of a family of circles called constant N-loci. The plot of constant M- and N-loci on the log magnitude-phase plot is called the Nichols chart. They are not used in this text and will not be discussed.
Im G(jw)
M=l.I
M=l
M= 1.2
M= 1.3
M=¡
M=0.7
12dB
Phase margin
/
/
45°
-1
1
60°
309
310 CHAPTER 8 FREQUENCY-DOMAIN TECHNIQUES
where wg is the gain-crossover frequency of G1(s). Note that wc is defined for G0 (s),
whereas wg is defined for G1(s). Although (8.32) is developed for the polar plot in
Figure 8.28, it is true in general, and is often used in frequency-domain design. For
example, if an overall system is required to respondas fast as possible or, equiva-
lently, to have a bandwidth as large as possible, then we may search for a compen-
sator C(s) so that the loop transfer function G 1(s) = C(s)G(s) has a gain-crossover
frequency as large as possible.
The high-frequency specification on G0 (s) can also be translated into that
of G 1(s) = C(s)G(s). If G(s) is strictly proper and if C(s) is proper, then
IG1(jw)l << 1, for large w. Thus we have
. )1
IGo ( JW 1 G,(Jw) 1 IG ( . )1
= 1 + G,(Jw) = ' JW
for large w. Hence the specification of G0 (jw) < E, for w 2: wd, can be translated
to IG1(jw)l < E, for w 2: wd.
The preceding discussion is tabulated in the last column of Table 8.1. We see
that there are three sets of specifications on accuracy (steady-state performance) and
speed of response (transient performance). The first set is given in the time domain,
the other two are given in the frequency domain. The first two sets are specified for
overall systems, the last one is specified for loop transfer functions in the unity
feedback configuration. lt is important to mention that even though specifications
are stated for loop transfer functions, the objective is still to design a good overall
system.
and
<t C(jw)G(jw) = <t C(jw) + <t G(jw)
Thus in the design, if the Bode plot of G(s) does not meet the specifications, we
simply add the Bode plot of C(s) to it until the sum meets the specifications. On the
other hand, if we use the polar plot of G(s) to carry out the design, the polar plot of
G(s) is of no use in subsequent design because the polar plot of C(s)G(s) cannot
easily be drawn from the polar plot of G(s). Therefore the polar plot is less often
used. In the log magnitude-phase plot, the frequency appears as a parameter on the
plot, thus the summation of C(jw) and G(jw) involves summations of vectors and
is not as convenient as in the Bode plot. Thus, the Bode plot is most often used in
the frequency-domain design.
dB dB
-20db/decade
201ogiK l=a
a v
-40 db/decade
0.1 10 0.1
-40 db/decade
(a) (b)
which implies KP = lOa/ 20 • Thus the position-error constant can be easily obtained
from the plot.
Every type 1 transfer function can be expressed as
k(l + b 1s)(1 + b 2 s) · · ·
G (s) = ---'------'-'-'-----"'--'----
s(l + a 1s)(l + a 2 s)(1 + a 3s) · · ·
Clearly, its position-error constant is infinity. At very low frequencies, the Bode gain
plot is govemed by
This is a straight line with slope - 20 dB / decade. We extend the straight line to
intersect with the vertical axis at, say, a dB and intersect with the horizontal axis at,
say, w 1 radians per second. The vertical axis passes through w = 1, thus (8.33)
becomes
•
20 log 1 ~v 1 = a
which implies Kv = lOa/ 20 . The gain of (8.33) is O dB at w = w 1, thus we have
O= 20 log ~~:/
which implies Kv = w 1 • Thus, the velocity-error constant of type 1 transfer functions
can al so be easily obtained from the Bode gain plot. In conclusion, from the leftmost
asymptote of Bode gain plots, the constants KP and Kv can be easily obtained. Once
KP' Kv, the phase margin, and the gain margin are read out from the measured data,
we can then proceed to the design.
Before discussing specific design techniques, we review the problem once again.
Given a plant with transfer function G(s), the objective is to design an overall system
to meet a set of specifications in the time domain. Because the design will be carried
out by using frequency plots, the specifications are translated into the frequency
domain for the overall transfer function G0 (s), as shown in Table 8.1. For the unity-
feedback configuration shown in Figure 8.1, the specifications for G0 (jw) can be
further translated into those for the loop transfer function G 1(s) = G(s)C(s) as shown
in Table 8.1. Therefore the design problem now becomes: Given a plant with Bode
plot G(jw), find a compensator C(s) in Figure 8.1 such that the Bode plot of C(s)G(s)
will meet the specifications on position- or velocity-error constant, phase margin,
gain margin, gain-crossover frequency, and high-frequency gain. If this
is successful, all we can hope for is that the resulting overall system G0 (s) =
G 1(s)/(1 + G 1(s)) would be a good control system. Recall that the translations of
8. 7 DESIGN ON BODE PLOTS 313
the specifications in Table 8.1 are developed mainly from quadratic transfer func-
tions; they may not hold in general. Therefore, it is always advisable to simulate the
resulting overall system to check whether it really meets the design specifications.
The search for C(s) is essentially a trial-and-error process. Therefore we always
start from a simple compensator and, if we are not successful, move to a more
complicated one. The compensators used in this design are mainly of the following
four types: (a) gain adjustment (amplification or attenuation), (b) phase-lag compen-
sation, (e) phase-lead compensation, and (d) lag-lead compensation. Before pro-
ceeding, we mention a useful property. Consider the Bode plots shown in Figure
8.30. It is assumed that the plant has no open right-half-plane poles nor open right-
half-plane zeros. Under this assumption, the phase can be estimated from the slopes
of the asymptotes of the gain plot. If a slope is - 20 dB / decade, the phase will
approach - 90°. If a slope is -40 dB / decade, the phase will approach - 180°. If a
slope is -60 dB/decade, the phase will approach -270°. Because of this property,
if the slope of the asymptote at the gain-crossover frequency is - 60 dB / decade, as
shown in Figure 8.30(a), then the phase will approach -270° and the phase margin
will be negative. Consequently the feedback system will be unstable. On the other
hand, if the slope of the asymptote at the gain-crossover frequency is - 20
dB/decade as shown in Figure 8.30(b), then the phase margin is positive. Ifthe slope
of the asymptote at the gain-crossover frequency is - 40 dB / decade, the phase
margin can be positive or negative. For this reason, if it is possible, the asymptote
at the gain-crossover frequency is designed to have slope - 20 dB / decade. This is
the case in almost every design in the remainder of this chapter.
dB dB
-60 db 1decade
e
--------~----~------------•w
o
Phase margin > O
~-~--
-270° -270°
(a) (b)
8. 7. 1 Gain Adjustment
The simplest possible compensator C(s) is a gain k. The Bode gain plot of
C(s)G(s) = kG(s) is 20 log /k/ + 20 log /G(}w)/. Thus, the introduction of gain
k will simply shift up the Bode gain plot of G(s) if /k/ > 1, and shift it down if
/k/ < l. If gain k is positive, its introduction will not affect the phase plot of G(s).
For sorne problems, it is possible to achieve a design by simply shifting a Bode gain
plot up or down.
Example 8. 7. 1
Consider the unity-feedback system shown in Figure 8.31. Let the plant transfer
function be G(s) = 1/s(s + 2) and Jet the compensator C(s) be simply a constant
k. Find a gain k such that the loop transfer function kG(s) will meet the following:
l. Position error ::5 10%.
2. Phase margin ?: 60°, gain margin?: 12 dB.
3. Gain-crossover frequency as large as possible.
The plant is of type 1, thus its position-error constant KP is infinity and its
positionerror/1/(1 + KP)/iszeroforanyk.TheBodeplotofG(s) = 1/s(s + 2)
or
1 0.5
G(s)
is shown in Figure 8.32 with the solid lines. The gain plot crosses the 0-dB line
roughly at 0.5; thus the gain-crossover frequency is wg = 0.5 rad/s. The phase
margin is then measured from the plot as 76°. To find the gain margin, we must first
find the phase-crossover frequency. Because the phase approaches the - 180° line
asymptotically as shown, it intersects the line at w = oo, and the phase-crossover
frequency wP is infinity. The gain plot goes down to - oo dB with slope
-40 dB/decade as w ~ oo. Thus, the gain margin at wP = oo is infinity. Thus, the
Bode plot of G(s) meets the specifications in (1) and (2). If we do not require the
specification in (3), then there is no need to introduce any compensator and the
design is completed. It is important to point out that no compensator means C(s) =
k = 1 and that the unity feedback is still needed as shown in Figure 8.31.
Compensator Plant
dB
-20
2 5 10 20 50 100
w
... ' '
...
... ...' '
... ...." ..,.
... ...
Compensated 1 ---40 db/decade
U ncompensated
e 1
1
0.5 2 10 20 50 100
w
1\w'=l.l5
1 g
First we use the example in Figure 8.31 to show that adjustment of a gain alone
sometimes cannot achieve a design.
316 CHAPTER 8 FREQUENCY-DOMAlN TECHNIQUES
Example 8.8. 1
Considera plant with transfer function G(s) = 1/s(s + 2). Find a compensator C(s)
in Figure 8.1 such that C(s)G(s) will meet the following: (1) velocity error s 10%,
(2) phase margin 2:: 60°, and (3) gain margin 2:: 12 dB.
First we choose C(s) = k and see whether or not the design is possible. The
loop transfer function is
k
G 1(s) = kG(s) = ) (8.34)
s(s + 2
k
Kv = lim sG 1(s) (8.35)
s->0 2
• In order to meet (1), we require
dB
(}
1
0.1 1 1 JO
--~~------------4+--------------H----+-----4----r-------_.ro
--------
ro ~=
~--p
------- --lsoo ------ T---------- T--------
in Figure 8.33. We plot only the asymptotes. We see that the gain-crossover fre-
quency roughly equals w8 = 4.2 rad/ s, and the phase margin roughly equals 26°.
The phase-crossover frequency is wP = oo, and the gain margin is infinity. Although
the gain margin meets the specifications, the phase margin does not. If we restrict
e(s) to be k, the only way to increase the phase margin is to decrease k. This,
however, will violate (1). Thus for this problem, it is not possible to achieve the
design by adjusting k alone.
Rz + es
E 2 (s) 1 + R 2 es 1 + aT 1s
e 1 (s) (8.38)
E 1 (s) 1 + (R 1 + R2 )es 1 + T 1s
R¡ + Rz + -
Cs
where
R2
O< a·=
. R1 + R2
< I (8.39)
and
(8.40)
The pole and zero of C 1(s) are -1/T 1 and -1/aT 1• Because a < 1, the zero is
farther away from the origin than the pole, as shown in Figure 8.34(b). For any
w ::::: O, the phase of e 1(s) equals (} - cfJ. Because cfJ ::::: O, the phase of e 1 (s) is
negative for al! w 2: O. Thus the network is called a phase-lag network.
Rl lms
R2 r A
ez
(1)
Res
Ic ¡_ -1
aT1
-1
TI
o
(a) (b)
dB
/
/
/
1 //
aT1 //
20Ioga
(} ' , -20dB/decade
9~ -----------------------------
Now we plot the Bode plot of C 1(s). The comer frequency of the pole is 1/T1•
We draw two asymptotes from l/T1, one with slope -20 dB/decade. The comer
frequency ofthe zero is 1/aT1• We draw two asymptotes from 1/aT1, one with slope
20 dB/decade. Note that the comer frequency 1/aT1 is on the right-hand side of the
comer frequency 1/T 1, because a < l. The summation of these yields the Bode gain
plot of C 1(s) as shown in Figure 8.35. For w very small, the Bode gain plot is a
horizontalline with gain 1 orO dB. For w very large, C 1(s) can be approximated by
(1 + aT1s)/(1 + T1s) = aT1s/T 1s = a; thus its Bode gain plot is a horizontalline
with gain a or 20 log a dB. Because a < 1, 20 log a is a negative number. Thus the
phase-lag network introduces an attenuation of 20 llog al. The Bode phase plot of
C 1 (s) can be similarly obtained as shown in Figure 8.35. We see that the phase is
negative for all w as expected.
A phase-lag network has two parameters a and T 1• The amount of attenuation
introduced by the network is determined by a. In employing a phase-lag network,
we use mainly its attenuation property. The phase characteristic will not be used
except the phase at 10/aT1, ten times the right-hand-side comer frequency. From
Figure 8.6, we see that the phase at 10/aT1 is at most 5.7°. Now we shall redesign
the problem in Example 8.8.1.
Considera plant with transfer function G(s) = 1/s(s + 2). Find a compensator C(s)
in Figure 8.1 such that C(s)G(s) will meet (1) velocity error :S 10%, (2) phase margin
2::: 60°, and (3) gain margin 2::: 12 dB.
8.8 PHASE-LAG COMPENSATION 319
k
Kv = lim sG1(s) = lim skC 1(s)G(s)
s~o s~o 2
Thus, in order to meet (1), we require, as in (8.36), k::::: 20. The Bode plot of
20·---
s(s + 2)
is plotted in Figure 8.33 with the solid lines. lts gain-crossover frequency is w8
4.5 rad/s and its phase margin is 25°. The phase margin, however, is required to be
at least 60°. This will be achieved by introducing a phase-lag network. First we
search for a new gain-crossover frequency that has a phase margin of 60° plus 6°
(this will be explained later), or 66°. This can be found by drawing a horizontalline
with 66° phase margin. lts intersection with the phase plot yields the new gain-
crossover frequency.lt is read from Figure 8.33 as w;
= 0.9. We then draw a vertical
line upward to the gain plot. We see that if the gain is attenuated by 20 dB at
w << w;, then the new gain plot will pass through w;
= 0.9 at the 0-dB line. A
phase-lag network will introduce an attenuation of 20 /log a/, thus we set
20 log a = -20
w;
aT 1 10
In this case, the phase-lag network has at most a phase lag of 5.7° at w;,
as shown
in Figure 8.35. Thus the phase of G(s) at w;
will be reduced by roughly 6° after
introducing C(s). This is the reason for adding 6° to the required phase margin in
320 CHAPTER 8 FREQUENCY-DOMAIN TECHNIQUES
7
The design on Bode plots is carried out mostly by measurements. It is difficult to differentiate between
s.r and 6° on a plot. Therefore we need not be concerned too much about accuracy in this method.
8. 9 PHASE-LEAD COMPENSATION 321
To conclude this section, we remark that the use of a phase-lag network will
reduce the gain-crossover frequency. Consequently the bandwidth of the unity-feed-
back system may be reduced. For this reason, this type of compensation will make
a system more sluggish.
In this section we introduce a network that has a positive phase for every w > O.
Consider the network shown in Figur€ 8.36(a). lt is built by using two resistors, one
capacitor, andan amplifier. The impedance of the parallel connection of R 1 and the
capacitor with impedance 1/Cs is
1
R.-
1 Cs
1 R 1Cs +
R1 +-
Cs
Thus the transfer function from e 1 to e2 in Figure 8.36(a) is
E (s) R + R2 R2
Cz(s) := -2 - = 1 · ----= ---
E1(s) R2 R1
Rz + ---'--
R1Cs + (8.43)
R1 + R2 R2 + R 1R 2Cs + bT2 s
R2 R1 + R2 + R 1R 2Cs 1 + T2 s
where
(8.44)
and
(8.45)
Ims
O)
Amplifier
with gain
-1
A-1 o
Res
T2 bT2
(a) (b)
The pole and zero of Cis) are plotted in Figure 8.36(b). The phase of C 2 (s) equals
(} - cjJ as shown. Because (} > cjJ for all positive w, the phase of C2 (s) is positive.
Thus it is called a phase-lead network. Note that C 2 (0) = l.
The comer frequency of the zero is 1/bT2; the comer frequency of the pole is
1/T2 • Because b > 1, 1/bT2 < 1/T2 and the comer frequency of the zero is on the
left-hand side of that of the pole. Thus the Bode gain plot of Cis) is as shown
in Figure 8.37. The gain at low frequencies is 1 or O dB. For large w, we have
Cis) = bT2 s/T2s = b, thus the gain is b or 20 log b dB as shown. Unlike the phase-
lag network, the phase of the phase-lead network is essential in the design, therefore
we must compute its phase. The phase of (1 + bT2 s)/(1 + T2s) at s = jw equals
_
1
bT2w - T2w
c/J(w) = tan- 1bT2 w- tan- 1T2 w =tan
1 + bT~w2
Thus we have
bT2w- T2w
tan c/J(w) = (8.46)
1 + bT 22 w2
Since the phase plot is symmetric with respect to the midpoint of 1/bT2 and l/T2 , as
shown in Figure 8.37, the maximum phase occurs at the midpoint. Because of the
logarithmic scale, the midpoint wm is given by
or
(8.47)
dB
201ogb
lOiogb
(}
90° - - - - - - - - - - - - - - - - - - - - - - - _-_-_-_--- -
------------------------~~---
(b - 1)
(b - l)T2 wm Vb b - 1
tan c/>m
1 + bT~w~ 1 + 1 2vb
which implies
b - 1
and b
+ sin c/>m
sin c/>m (8.48)
b + 1 sin c/>m
We see that the larger the constant b, the larger the maximum phase cf>m· However,
the network in Figure 8.36 also requires a larger amplification. In practice, constant
bis seldom chosen to be greater than 15. We mention that the gain equals 10 log b
at w = wm, as shown in Figure 8.37.
The philosophy of using a phase-lead network is entirely different from that of
using a phase-lag network. A phase-lag network is placed far away from the new
gain-crossover frequency so that its phase will not affect seriously the phase margin.
A phase-1ead network, on the other hand, must be p1aced so that its maximum phase
will contribute wholely to the phase margin. Therefore, wm should be placed at the
new gain-crossover frequency. To achieve this, however, is not as simple as in the
design of phase-lag networks. The procedure of designing phase-1ead networks is
explained in the following:
Step 1: Compute the position-error or ve1ocity-error constant from the specifica-
tion on steady-state error.
Step 2: Plot the Bode p1ot of kG(s), the plant with the required position- or veloc-
ity-error constant. Determine the gain-crossover frequency w8 and phase-
crossover frequency wP. Measure the phase margin c/> 1 and gain margin
from the plot.
Step 3: lf we decide to use a phase-lead compensator, calculate 1/J = (required
phase margin) - c/> 1 • The introduction of a phase-lead compensator will
shift the gain-crossover frequency to the right and, consequently, decrease
the phase margin. To compensate for this reduction, we add e, say 5°, to
1/J. Compute c/>m = 1/1 + e.
Step 4: Compute constant b from (8.48), which yields phase cf>m·
Step S: If we place this maximum phase at & 8 or, equivalen tiy, set &m = & 8 ,
because the network has positive gain, the gain-crossover frequency of
C2 (s)G(s) will be shifted to the right and the maximum phase will not
appear at the new gain-crossover frequency. For this reason, we must com-
pute first the new gain-crossover frequency before placing wm. We draw a
horizontalline with gain -10 log b. Its intersection with the Bode gain
plot of kG(s) yields the new gain-crossover frequency, denoted by w~.
Measure the phase margin c/>2 of kG(s) at this frequency. lf c/> 1 - c/>2 > e,
choose a 1arger e in Step 3 and repeat Steps 4 and 5. If c/> 1 - c/>2 < e, go
to the next step.
324 CHAPTER 8 FREQUENCY-DOMAIN TECHNIQUES
Step 6: Set wm = w; and compute T2 from (8.47). If the resulting system satisfies
all other specifications, the design is completed. The network can then be
realized as shown in Figure 8.36(a).
Example 8.9.1
We shall redesign the system discussed in the preceding section by using a phase-
lead network. Considera plant with transfer function G(s) = 1/s(s + 2). Find a
compensator C(s) in Figure 8.1 such that C(s)G(s) will meet (1) velocity error ::5
10%, (2) phase margin ;::: 60°, and (3) gain margin ;::: 12 dB.
Now we shall choose C(s) as kC2 (s), where C2 (s) is given in (8.43). Because
C2 (0) = 1, the velocity-error constant Kv of C(s)G(s) is
k
Kv = lim sG 1(s) = lim skCz(s)G(s)
.. s~o s~o 2
Thus we require k ;::: 20 in order to meet the specification in (1). The Bode plot of
kG(s) = 20/ s(s + 2) is plotted in Figure 8.38 with the solid lines. The phase margin
c/> 1 is 26°. The required phase margin is 60°. Thus we have rf; = 60 - 26 = 34°.
If we introduce a phase-lead network, the gain-crossover frequency will increase
and the corresponding phase margin will decrease. In order to compensate for this
reduction, we choose arbitrarily (} = S Then we have cl>m = 34° + so = 39°. This
0
•
dB
w'g =6.s
20 w~ = 6.3 j -- Phase-lead compensator
0.1 0.2 0.5 1 ----~YI~; i 20 so 100
-20
0+-----+-~~~-r~++----~~~~~~rHr---7+--~~~++~~(!)
1
0.356 -
-26~ ~· =
. = 4.2
(!)g
-6.44 dB
1 11
11
11
11
_;;.:::,. ..... ~
..... .....
- 1-=16.6
0.0605
' .....
''
Bode p1ot of
kC (s)G(s)
-7.7dBI 11 ', 2
1 11 ....
8 11
11
1 1
1 1
1
1 43°
oo+-----+-~~~-r++++----~--+-+,-~~rH~---+--1-~1-++~~(!)
:~
1 1¡ 10 100
260 1 1¡
)-, i- - H. ._ - Compensated
t JTI~
18o
170
which implies
1 1
Tz = - - = 0.06
Vbw~ \15.9 x 6.8 16.5
1
1 -0.009
-0.961
Res Res Res
-2 -1 o -2 -16.67 o
Step 1.6
response /Gain
1.4 A Phaselead
/;~ase 1~":__ _ _j_~~ compensator
iJ~--: - -- - - - --=-C~c=-~---
1.2
0.8
0.6 ¡1 1/
1 1
0.4
¡ 11
1 /1
0.2 ¡ //
!/
o E---~--~----~--~----~--~--~----~--~----
0 4 4 6 8 lO 12 14 16 18 20
Figure 8.39 Various designs.
(shown with the dotted line) has the smallest overshoot among the three desigps; its
rise time and settling time are also smallest. Therefore it is the best design among
the three.
The phase-lead compensation does not always yield a good design. For example,
if the phase in the neighborhood of the gain-crossover frequency decreases rapidly,
then the reduction of the phase margin dueto a phase-lead compensator will offset
the phase introduced by the compensator. In this case, the specification on phase
margin cannot be met. Thus, the phase-lead compensation is not effective if the
phase at the gain-crossover frequency decreases rapidly. To conclude this section,
we compare phase-lag and phase-lead networks in Table 8.3.
8.1 O PROPORTIONAL-INTEGRAL (PI) COMPENSATORS 327
l. The pole is closer to the origin than the The zero is closer to the origin than the
zero. Its phase is negative for every pole. Its phase is positive for every
positive w. positive w.
2. Shifts down the gain-crossover Shifts up the gain-crossover frequency;
frequency; consequently, decreases the consequently, increases the bandwidth
bandwidth and the speed of response. and the speed of response.
3. Placed one decade below the new gain- Placed over the new gain-crossover
crossover frequency to reduce the effect frequency to add the maximum phase
of the network on the phase margin. on the phase margin.
4. No additional gain is needed. Additional gain is needed.
5. Design can be achieved in one step. Design may require trial and error.
In addition to the phase-lag and phase-lead networks, we may also use a network
with transfer function
1 + aT1s 1 + bT2 s
C 3 (s) = · ---'=-
1 + T 1s 1 + T2 s
in the design. The transfer function is the product of the transfer function of a phase-
lag network and that of a phase-lead network. Thus it is called a lag-lead network.
In the design, we use the attenuation property of the phase-lag part and the positive
phase of the phase-lead part. The basic idea is identical to those discussed in the
preceding two sections and will not be repeated.
where e = k/ k;. This is a special case of the phase-lag network shown in Figure
8.34(b) with the pole located at the origin. The phase of C3 (s) is clearly negative for
all positive w. We shall use this compensator to redesign the problem in the preceding
two sections. Although PI controllers are a special case of phase-lag networks, the
procedure for designing phase-lag networks cannot be used here. Instead we will
use the idea in designing phase-lead networks in this problem.
328 CHAPTER 8 FREQUENCY-DOMAIN TECHNIQUES
Example 8. 1O. 1
Considera plant with transfer function G(s) = 1/s(s + 2). Find a PI compensator
C 3 (s) in Figure 8.1 such that C 3 (s)G(s) will meet (1) velocity error :5 10%, (2) phase
margin 2: 60°, and (3) gain margin 2: 12 dB.
The transfer function of C 3 (s)G(s) is
k;(1 + es)
C/s)G(s) = ....:...;__-----'-
k;(l + es)
(8.51)
s s(s + 2) 2s 2 (1 + 0.5s)
which has two poles at s = O. Thus it is of type 2, and the velocity error is zero for
any k; (see Section 6.3.2). The same conclusion can also be reached by using Table
8.2. For a type 2 transfer function, the velocity-error constant Kv is infinity. Thus,
its velocity error 11/K vi is zero for any k¡.
We first assume e = O and plot in Figure 8.40 the Bode plot of
k¡
(8.52)
with k;/2 = l. The gain-crossover frequency is wg = 1 and the phase margin can
be read from the plot as -27°. Because the phase plot approaches the - 180° line
at w = O, the phase-crossover frequency is wP = O and the gain margin is negative
infinity. Changing k; will shift the gain plot up or down and, simultaneously, shift
the gain-crossover frequency to the right or left. However, the phase margin is always
__ wP
----~-----r---r--------r-~r--------r--~----~0
0.1 0.2 2 10 20
------=-:;;;~::_j -180°------------
negative and the gain margin remains negative infinity. Thus, if e = O, the unity-
feedback system is unstable for any k¡ and the design is not possible.
If k; > O ande > O, the Bode phase plot of Cis)G(s) is given by
= -l80o + <):: ( l + es )
1 + 0.5s
If e< 0.5, then (1 + es)/(1 + 0.5s) is a phase-lag network and its phase is negative
for all positive w. In this case, the phase in (8.53) is always smaller than -180°.
Thus, the phase margin is negative, and the design is not possible. If e > 0.5, then
(1 + es)/(1 + 0.5s) is a phase-lead network, and it will introduce positive phases
into (8.53). Thus the design is possible. We write
1 + es + e X 0.5s
(8.54)
1 + 0.5s l + 0.5s
with e> e
1 and compare it with (8.43), then = b and (8.48) can be used to compute
c.Now the phase margin of C3 (s)G(s) is required to be at Jeast 60°, therefore we
have
k¡(1 + 6.95s)
(8.55)
2s 2(1 + 0.5s)
The comer frequencies of the pole and zero of the phase-lead network are, respec-
tively, 1/0.5 2 and 1/6.95 = 0.14. Thus the maximum phase of the network
occurs at
and equals 60°. Figure 8.41 shows the Bode plot of (8.55) with kj2 = l. We plot
only the asymptotes for the gain plot. Because the phase of 1/s 2 is - 180°, the phase
of (8.55) is simply the phase of (1 + 6.95s)/(1 + 0.5s) shifted down by 180°. From
the plot, we see that the gain-crossover frequency is roughly 3 radians per second.
lf we draw a verticalline down to the phase plot, the phase margin can be read out
as 25°. This is less than the required 60°. Thus if kj2 = 1 or k; = 2, C 3 (s)G(s) in
(8.55) is not acceptable.
330 CHAPTER 8 FREQUENCY-DOMAIN TECHNIQUES
If we increase k¡, the gain plot will move up, the gain-crossover frequency will
shift to the right, and the corresponding phase margin will decrease. On the other
hand, decreasing k; will move down the gain plot and shift the gain-crossover fre-
quency to the left. Thus the phase margin will increase. Now e in the phase-lead
network (1 + cs)/(1 + 0.5s) is chosen so that the phase is 60° at wm = 0.53. Thus,
we shall choose k; so that the gain-crossover frequency equals wm. The gain at wm
is roughly 22 dB, so we set
which implies
k
__!
2
= 10- 22 120 = o.08
If we choose k; = 2 X 0.08 = 0.16, then Cis)G(s) in (8.55) will meet the speci-
fication on phase margin.
The phase plot approaches the -180° line at w = O and w = oo, as shown in
Figure 8.41. Thus, there are two phase-crossover frequencies. Their corresponding
gain margins are, respectively, - oo and oo. It is difficult to see the physical meaning
of these gain margins from Figure 8.41, so we shall plot the Nyquist plot of
C 3 (s)G(s), where the phase and gain margins are originally defined. Figure 8.42
0.53 1
----------r-----------,_---+---+--~--------+-------~w
0.1 10
Im Im
/
/
' '
;11
/
' ,..
\
1
\
\
1
1
1
e B 1
[_---
--*-+-------------1~Re --~----------.-.-*-~------------+-~Re
,A G¡(A)
1
1 1
1 1
1 1
t 1
/
1
1 _,JI
1 , /
1 ,
~---- G¡(B)
We see that the Nyquist plot encircles the critica! point ( -1, O) once in the coun-
terclockwise direction and once in the clockwise direction. Therefore, the number
of net encirclements is zero. The loop transfer function in (8.57) has no open right-
half-plane pole, so we conclude from the Nyquist stability criterion that the unity-
feedback system in Figure 8.1 is stable. From the Nyquist plot, we see that the phase
margin is 60°. There are two gain-crossover frequencies with gain margins oo and
- oo. If there are two or more phase-crossover frequencies, we shall consider the one
that is closest to ( -1, 0). In this case, we consider the phase-crossover frequency
at w = oo; its gain margin is oo. Thus C3(s)G(s) a1so meets the requirement on gain
margin and the design is comp1eted. In conclusion, if we introduce the following PI
compensator
0.16(1 + 6.95s)
C 3 (s) = - - - ' - - - - - ' -
s
the unity-feedback system in Figure 8.1 will meet all specifications. The response
of this system is shown in Figure 8.39 with the dashed-and-dotted lines. lts response
is s1ower than the one using the phase-1ag compensator and much slower than the
one using the phase-lead network. Therefore, there is no reason to restrict compen-
sators to PI form for this problem.
332 CHAPTER 8 FREQUENCY-DOMAIN TECHNIQUES
.. 4.
transfer functions are not exact. Therefore, it is advisable to simulate the re-
sulting systems after the design .
If a plant transfer function has open right-half-plane poles, then its Bode plot
may have two or more phase and gain margins. In this case, the use of phase
and gain margins becomes complex. For this reason, the Bode-plot design
method is usually limited to plants without open right-half-plane poles.
5. In this chapter, we often use asymptotes of Bode gain plots to carry out the
design. This is done purposely because the reader can see better the plots and
the design ideas. In actual design, one should use more accurate plots. On the
other hand, because the relationships between phase and gain margins and time
responses are not exact, design results using asymptotes may not differ very
much from those using accurate plots.
6. The method is a trial-and-error method. Therefore, a number of trials may be
needed to design an acceptable system.
7. In the Bode-plot design method, the constraint on actuating signals is not con-
sidered. The constraint can be checked only after the completion of the design.
If the constraint is not met, we may have to redesign the system.
8. The method is essentially developed from the Nyquist plot, which checks
whether or not a polynomial is Hurwitz. In this sense, the design method con-
siders only poles of overall systems. Zeros are not considered.
PROBLEMS
8. 1. Plot the polar plots, log magnitude-phase plots, and Bode plots of
10 20
and
(s - 2) s(s + 1)
S - 5
b. G(s)
s(s + l)(s + 10)
10
c. G(s) = - - - - =2- - - - -
(s + 2)(s + 8s + 25)
8.3. a. Consider the Bode gain plot shown in Figure P8.3. Find all transfer func-
tions that have the gain plot.
dB
20
Figure P8.3
b. If the transfer functions are known to be stable, find all transfer functions
that have the gain plot.
c. If the transfer functions are known to be minimum phase, find all transfer
functions that have the gain plot.
d. If the transfer functions are known to be stable and of minimum phase, find
all transfer functions that have the gain plot. Is this transfer function unique?
8.4. Consider the three Bode plots shown in Figure P8.4. What are their transfer
functions?
dB dB
40 40
20 20
(J) (J)
5 5
Phase Phase
90°
o 10 100 1000
90°
180°
(a) (b)
Figure P8.4
334 CHAPTER 8 FREQUENCY-DOMAIN TECHNIQUES
dB
10 100 1000
(e)
Figure P8.4 (Continued)
8.5. Consider
k(l + 0.5s)
G(s) =
s(bs + l)(s + 10)
lts Bode plot is plotted in Figure P8.5. What are k and b?
dB
(ú
0.1 10 100
-20
e
90°
0.1 0.2
(ú
100
-90°
Figure P8.5
r PROBLEMS 335
-90° - - - -~-----..,
10
-180° L--+---<~-+----+--+---+-- ro
1 10 1 10 Hz
Figure P8.6
8.7. Use the Nyquist criterion to determine the stability of the system shown in
Figure 8.17(a) with
2s + 10
a. G1(s)
(s + 1)(s 1)
2s + 1
b. G 1(s)
s(s - 1)
c. G (s)
1
= __ __.:__+ __:__
100(s 1)
s(s - l)(s + 10)
8.8. Consider the unity-feedback system shown in Figure 8.17(b ). If the polar plot
of G1(s) is of the form shown in Figure P8.8, find the stability range for the
following cases:
a. G1(s) has no open right-half-plane (RHP) pole and zero.
6 5
Figure P8.8
336 CHAPTER 8 FREQUENCY-DOMAIN TECHNIQUES
G1(s) -
S+2
- (s 2 + 3s + 6.25)(s - 1)
using (a) the Routh test, (b) the root-locus method and (e) the Nyquist stability
criterion.
8.10. What are the gain-crossover frequency, phase-crossover frequency, gain mar-
gin, phase margin, position-error constant, and velocity-error constant for each
of the transfer functions in Problem 8.2?
8.11. Repeat Problem 8.10 for the Bode plots shown in Fig. P8.4.
8.12. Consider the unity-feedback system shown in Figure 8.1. The Bode plot of the
plant is shown in Figure P8.12. Let the compensator C(s) be a gain k. (a) Find
the largest k such that the phase margin is 45 degrees. (b) Find a k such that
the gain margin is 20 dB.
dB
40
10 20 100
w
-20
-40
-60
(}
0.1
w
Figure P8. 12
8.13. Consider the system shown in Figure 8.1. The Bode plot of the plant is shown
in Figure P8.13. The compensator C(s) is chosen as a gain k. Find k to meet
(1) phase margin ::2::60°, (2) gain margin ::2::10 dB, and (3) position error :525%.
PROBLEMS 337
dB
40
20
e
0.1 10
Figure P8. 13
8.14. The Bode plot of the plant in Figure 8.1 is shown in Figure P8.14. Find
a phase-1ag network to meet (1) phase margin :::::60° and (2) gain margin
:::::10 dB.
------r-------~------+-------~----~ w
Figure P8.14
8.15. Consider the control of the depth of a submarine discussed in Problem 7.8.
Designan overall system to meet (1) position error :510%, (2) phase margin
. !
~60°, and (3) gain margin ~ 10 dB. Compare the design with the one in Prob-
lem 7.8.
8. 16. Consider the ship stabilization problem discussed in Problem 7.14. Design an
overall system to meet (1) position error ::=;J5%, (2) phase margin ~60°,
(3) gain margin ~10 dB, and (4) gain-crossover frequency ~10 rad/s.
8.17. Consider the problem of controlling the yaw of the airplane discussed in Prob-
lem 7.15. Designan overall system to meet (1) velocity error ::=;JO%, (2) phase
margin ~30%, (3) gain margin ~6 dB, and (4) gain-crossover frequency as
large as possible.
8.18. Consider the plant transfer function given in Problem 7.16, which is factored
as
300
G(s) = - - - - - - - - - - - -
s(s + 0.225)(s + 3.997)(s + 179.8)
·•
Design an overall system to meet (1) position error = O, (2) phase margin
~55°, (3) gain margin ~6 dB, and (4) gain-crossover frequency is not smaller
than that of the uncompensated plant. [Hint: Use a lag-lead network.]
8.19. a. Plot the Bode plot of
S - 1
G (s) = --::----
s2(s + 10)
What are its phase margin and gain margin?
b. Compute G0 (s) = G(s)/(1 + G(s)). Is G (s) stable?
0
c. Is it always true that the unity-feedback system is stable if the phase and
gain margins of its loop transfer function are both positive?
Thelnward
Approach Choice
of Overall Transfer
Functions
9.1 INTRODUCTION
In the design of control systems using the root-locus method or the frequency-domain
method, we first choose a configuration and a compensator with open parameters.
We then search for parameters such that the resulting overall system will meet design
specifications. This approach is essentially a trial-and-error method; therefore,
we usually choose the simplest possible feedback configuration (namely, a unity-
feedback configuration) and start from the simplest possible compensator-namely,
a gain (a compensator of degree 0). If the design objective cannot be met by searching
the gain, we then choose a different configuration or a compensator of degree 1
(phase-lead or phase-lag network) and repeat the search. This approach starts from
interna! compensators and then designs an overall system to meet design specifica-
tions; therefore, it may be called the outward approach.
In this and the following chapters we shall introduce a different approach, called
the inward approach. In this approach, we first search for an overall transfer function
to meet design specifications, and then choose a configuration and compute the
required compensators. Choice of overall transfer functions will be discussed in this
chapter. The implementation problem-namely, choosing a configuration and com-
puting the required compensators-will be discussed in the next chapter.
Considera plant with proper transfer function G(s) = N(s)/D(s) as shown in
Figure 9 .l. In the inward approach, the first step is to choose an overall transfer
function G0 (s) from the reference input r to the plant output y to meet a set of
339
340 CHAPTER 9 THE INWARD APPROACH-CHOICE OF OVERALL TRANSFER FUNCTIONS
r- -------,
~
1
r 1
~
~:·~
1
1 Ga(s) 1
_ _j
L---
THEOREM 9.1
Considera plant with proper transfer function G(s) = N(s)/D(s). Then G0 (s) is
implementable if and only if G0 (s) and
T(s) : = Go(s)
G(s)
are proper and stable. •
We discuss first the necessity of the theorem. Consider, for example, the con-
figuration shown in Figure 9.2. Noise, which may enter into the intput and output
terrninals of each block, is not shown. If the closed-loop transfer function from r to
y is G0 (s) and if there is no plant leakage, then the closed-loop transfer function from
r to u is T(s). Well-posedness requires every closed-loop transfer function to be
proper, thus T(s) and G0 (s) must be proper. Total stability requires every closed-loop
transfer function to be stable, thus G0 (s) and T(s) must be stable. This establishes
the necessity of the theorem. The sufficiency of the theorem will be established
constructively in the next chapter.
G(s) = N(s)
D(s)
We assume that the numerator and denominator of each transfer function have no
common factors. The equality G0 (s) = G(s)T(s) or
N0 (S) = N(s) . N 1(s)
D 0 (s) D(s) D¡(s)
implies
deg D 0 (s) - deg N0 (s) = deg D(s) - deg N(s) + (deg D 1(s) - deg N 1(s))
Thus if T(s) is proper, that is, deg D 1(s) 2:: deg N¡(s), then we have
deg D 0 (s) - deg N0 (s) 2:: deg D(s) - deg N(s) (9.1)
• Conversely, if (9.1) holds, then deg D¡(s) 2:: deg N,(s), and T(s) is proper.
Stability of G0 (s) and T(s) requires both D 0 (s) and D,(s) to be Hurwitz. From
N¡(s) G (s) NJs) D(s)
T(s) = - = -0 - = - - . -
D¡(s) G(s) D 0 (s) N(s)
we see that if N(s) has closed right-half-plane (RHP) roots, and if these roots are not
canceled by N0 (s), then D¡(s) cannot be Hurwitz. Therefore, in order for T(s) to be
stable, all the closed RHP roots of N(s) must be contained in NJs). This establishes
the following corollary.
COROLLARY 9. 1
Considera plant with proper transfer function G(s) = N(s)/D(s). Then G0 (s) =
N 0 (s)/D0 (s) is implementable if and only if
(a) deg D 0 (s) - deg N0 (s) 2:: deg D(s) - deg N(s) (pole-zero excess inequality).
(b) All closed RHP zeros of N(s) are retained in NJs) (retainment of non-
minimum-phase zeros).
(e) D 0 (s) is Hurwitz. •
As was defined in Section 8.3.1, zeros in the closed RHP are called non-mini-
mum-phase zeros. Zeros in the open left half planeare called minimum-phase zeros.
Poles in the closed RHP are called unstable potes. We see that the non-minimum-
phase zeros of G(s) impose constraints on implementable G0 (s) but the unstable
poles of G(s) do not. This can be easily explained from the unity-feedback config-
uration shown in Figure 9.3. Let
N(s)
G(s) =- C(s)
D(s)
be respectively the plant transfer function and compensator transfer function. Let
9.2 IMPLEMENTABLE TRANSFER FUNCTIONS 343
L~'H~G(>)1
y
~
C(s) 1
G0 (s) = N 0 (s)/D0 (s) be the overall transfer function from the reference input r to
the plant output y. Then we have
NJs) C(s)G(s)
G (s) = - - = ----'--'-'-- (9.2)
o D 0 (s) + C(s)G(s) D(s)Dc(s) + N(s)NJs)
We see that N(s) appears directly as a factor of N 0 (s). If a root of N(s) does not
appear in NJs ), the only way to achieve this is to introduce the same root in D(s )D Js)
+ N(s)Nc(s) to cancel it. This cancellation is an unstable pole-zero cancellation if
the root of N(s) is in the closed right half s-plane. In this case, the system cannot be
totally stable and the cancellation is not permitted. Therefore all non-minimum-phase
zeros of G(s) must appear in N 0 (s). The poles of G(s) or the roots of D(s) are shifted
to D(s)DJs) + N(s)Nc(s) by feedback, and it is immaterial whether D(s) is Hurwitz
or not. Therefore, unstable poles of G(s) do not impose any constraint on G0 (s), but
non-minimum-phase zeros of G(s) do. Although the preceding assertion is developed
for the unity-feedback system shown in Figure 9.3, it is generally true that, in any
feedback configuration without plant leakage, feedback will shift the poles of the
plant transfer function to new locations but will not affect its zeros. Therefore the
non-minimum-phase zeros of G(s) impose constraints on GJs) but the unstable poles
of G(s) do not.
Example 9.2. 1
Consider
(s + 2)(s - 1)
G(s)
s(s 2 - 2s + 2)
Then we have
G0 (s) = 1 Not implementable, because it violates (a) and (b) in Corollary 9.1.
S + 2
Go(s) - (s + )(s + ) Not implementable, meets (a) and (e) but violates (b).
3 1
S - 1
Not implementable, meets (a) and (b), violates (e).
s(s + 2)
S - 1
Implementable.
(s + 3)(s + 1)
344 CHAPTER 9 THE INWARD APPROACH-CHO\CE OF OVERALL TRANSFER FUNCT\ONS
S - 1
Implementable.
(s + 3)(s + 1)2
(2s - 3)(s - 1)
Implementable.
(s + 2) 3
(2s - 3)(s - l)(s + 1)
Implementable.
(s + 2) 5
Exercise 9.2.1
Exercise 9.2.2
From the preceding examples, we see that if the pole-zero excess inequality is
met, then all poles and all minimum-phase zeros of G0 (s) can be arbitrarily assigned.
To be precise, all poles of G0 (s) can be assigned anywhere inside the open left half
s-plane (to insure stability). Other than retaining all non-minimum-phase zeros of
G(s), all minimum-phase zeros of G (s) can be assigned anywhere in the entire
0
arbitrarily assign poles as well as minimum-phase zeros so long as they meet the
pole-zero excess inequality.
To conclude this section, we mention that if G0 is implementable, it does not
mean that it can be implemented using any configuration. For example, G0 (s) =
1/(s + 1f is implementable for the plant G(s) = 1/s(s - 1). This G0 (s), however,
cannot be implemented in the unity-feedback configuration shown in Figure 9.3; it
can be implemented using sorne other configurations, as will be discussed in the
next chapter. In conclusion, for any G(s) and any implementable G0 (s), there exists
at least one configuration in which Gs(s) can be implemented under the preceding
four constraints.
with an > O and n ~ m, is said to achieve asymptotic tracking if the plant output
y(t) tracks eventually the reference input r(t) without an error, that is,
Clearly if G0 (s) is not stable, it cannot track any reference signal. Therefore, we
require G0 (s) to be stable, which in tum requires a¡ > O for all i 1 • Thus, the denom-
inator of G0 (s) cannot have any missing term ora term with a negative coefficient.
Now the condition for G0 (s) to achieve asymptotic tracking depends on the type of
r(t) to be tracked. The more complicated r(t), the more complicated G0 (s). From
Section 6.3.1, we conclude that if r(t) is a step function, the conditions for G0 (s) to
achieve tracking are G0 (s) stable and a 0 = {30 • If r(t) is a ramp function, the con-
ditions are G0 (s) stable, a 0 = {30 ; and a 1 = {3 1 . If r(t) = af, an acceleration function,
then the conditions are G0 (s) stable, a 0 = {30 , a 1 = {3 1, and a 2 = {32 . If r(t) = O,
the only condition for y(t) to track r(t) is GJs) stable. In this case, the output may
be excited by nonzero initial conditions, which in tum may be excited by noise or
disturbance. To bring y(t) to zero is called the regulating problem. In conclusion,
the conditions for G0 (s) to achieve asymptotic tracking are simple and can be easily
met in the design.
Asymptotic tracking is a property of G0 (s) as t ~ oo ora steady-state property
of G0 (s). It is not concemed with the manner or the speed at which y(t) approaches
r(t). This is the transient performance of G 0 (s). The transient performance depends
on the location of the poles and zeros of G0 (s). How to choose poles and zeros to
meet the specification on transient performance, however, is not a simple problem.
1
Also, they can aii be negative. For convenience, we consider only tbe positive case.
346 CHAPTER 9 THE INWARD APPROACH---CHOICE OF OVERALL TRANSFER FUNCTIONS
Exercise 9.2.3
What types of reference signals can the following systems track without an error?
S+ 5
a.
+ 2s 2 + 8s
s3 + 5
8s + 5
b. s3
+ 2s 2 + 8s + 5
2s 2 + 9s + 68
c. s3
+ 2s 2 + 9s + 68
[Answers: (a) Step functions. (b) Ramp functions. (e) None, because it is not
stable.]
The performance of a control system is generally specified in terms of the rise time,
settling time, overshoot, and steady-state error. Suppose we have designed two sys-
tems, one with a better transient performance but a poorer steady-state-performance,
the other with a poorer transient performance but a better steady-stage performance.
The question is: Which system should we use? This difficulty arises from the fact
that the criteria consist of more than one factor. In order to make comparisons, the
criteria may be modified as
J : = k 1 X (Rise time) + k 2 X (Settling time)
(9.4)
+ k3 X (Overshoot) + k4 X (Steady-state error)
where the k; are weighting factors and are chosen according to the relative importance
of the rise time, settling time, and so forth. The system that has the smallest J is
called the optimal system with respect to the criterion J. Although the criterion is
9.3 VARIOUS DESIGN CRITERIA 347
reasonable, it is not easy to track analytically. Therefore more trackable criteria are
used in engineering.
We define
e(t) : = r(t) - y(t)
1t is the error between the reference input and the plant output at time t as shown
in Figure 9.4. Because an error exists at every t, we must consider the total error in
[0, oo). One way to define the total error is
This is called the integral of absolute error (IAE). In this case, a small 1 2 will imply
a small e(t). Other possible definitions are
(9.7)
and
(9.8)
The former is called the integral of square error (ISE) or quadratic error, and the
latter the integral of time multiplied by absolute error (ITAE). The ISE penalizes
large errors more heavily than small errors, as is shown in Figure 9.4. Because of
the unavoidable large errors at smalt t due to transient responses, it is reasonable not
to put too much weight on those errors. This is achieved by multiplying t with je(t)j.
Thus the ITAE puts less weight on e(t) for t small and more weight on e(t) for t
large. The total errors defined in 12> 1 3 , and 1 4 are all reasonable and can be used in
design.
Although these criteria are reasonable, they should not be used without consid-
ering physical constraints. To illustrate this point, we consider a plant with transfer
function G(s) = (s + 2)/s(s + 3). Because G(s) has no non-minimum-phase zero
and has a pole-zero excess of 1, G0 (s) = a/(s + a) is implementable for any positive
a. We plot in Figure 9.5(a) the responses of G0 (s) dueto a unit-step reference input
for a = 1 (solid line), a = 10 (dashed line), anda = 100 (dotted line). We see that
the larger a is, the smaller 1 2 , 13 , and 14 are. In fact, as a approaches infinity, 1 2 , 13 ,
and 14 all approach zero. Therefore an optimal implementable G0 (s) is a/(s + a)
with a = oo.
As discussed in Section 6. 7, the actuating signa! of the plant is usually limited
by
ju(t)l :S M for all t ~ O (9.9) ~
This arises from limited operational ranges of linear models or the physical con-
straints of devices such as the opening of val ves or the rotation of rudders. Clearly,
the larger the reference input, the larger the actuating signa!. For convenience, the
u(t) in (9.9) will be assumed to be excited by a unit-step reference input and the
constant Mis proportionally scaled. Now we shall check whether this constraint will 411
be met for all a. No matter how G0 (s) is implemented, if there is no plant leakage,
the closed-loop transfer function from the reference input r to the actuating signa! u
is given by
G (s) 1 a(s + 3)
U(s) = T(s)R(s) = -0 - · - = - - - - - (9.11)
G(s) s (s + 2)(s + a)
This response is plotted in Figure 9.5(b) for a = 1, 10, and 100. This can be obtained
by analysis or by digital computer simulations. For this example, it happens that
ju(t)lmax = u(O) = a. For a = 100, u(O) is outside the range of the plot. We see
9.3 VARIOUS DESIGN CRITERIA 349
1.2 r------,--~--~----.-----.----~-----,----,
a= 100
¡'' / -----.. . .-- - - - - - - - - - - - - - - - - - - - - - - -
i 1
i 1
0.8 ; 1 a = 10
i 1
i 1
0.6' 1
0.4 1
0.2
o
o 0.5 1.5 2 2.5 3 3.5 4
(a)
d(t)
10¡
9;
:
8
i
7 ~
i1
6 ;1
i1
5 ;1
i 1
4' a = 10
\ 1
1
3 1
1
2 1
a= 100
1
',
a
'•
o ------
o 0.5 1.5 2 2.5 3 3.5 4
(b)
that the larger a is, the larger the magnitude of the actuating signa!. Therefore if a
is very large, the constraint in (9.9) will be violated.
In conclusion, in using the performance indices in (9.6) to (9.8), we must include
the constraint in (9.9). Otherwise we can make these indices as small as desired and
the system will always be saturated. Another possible constraint is to limit the band-
width of resulting overall systems. The reason for limiting the bandwidth is to avoid
amplification of high-frequency noise. lt is believed that both constraints willlead
to comparable results. In this chapter we discuss only the constraint on actuating
signals.
1 ,J
"
350 CHAPTER 9 THE INWARD APPROACH--CHOICE OF OVERALL TRANSFER FUNCTIONS
.
9.4 QUADRATIC PERFORMANCE INDICES
In this section we discuss the design of an overall system to minimize the quadratic
performance index
for all t ::::: O, and for sorne constant M. Unfortunately, no simple analytical method
is available to design such a system. Furthermore, the resulting optimal system may
not be linear and time-invariant. If we limit our design to linear time-invariant sys-
tems, then (9.12) must be replaced by the following quadratic performance index
Loo u2(t)dt
and the optimal system that minimizes the criterion is the one with u = O. From
these two extreme cases, we conclude that if q in (9.13) is adequately chosen, then
the constraint in (9 .12b) will be satisfied. Hence, although we are forced to use the
quadratic performance index in (9.13) for mathematical convenience, if q is properly
chosen, (9.13) is an acceptable substitution for (9.12).
where q is a positive constant, r is the reference signa!, y is the output, and u is the
actuating signa!. Before proceeding, we first discuss the spectral factorization.
Consider the polynomial
Q(s) : = D(s)D(- s) + qN(s)N(- s) (9.16)
lt is formed from the denominator and numerator of the plant transfer function and
the weighting factor q. lt is clear that Q(s) = Q( -s). Hence, if s 1 is a root of Q(s),
so is - s 1 • Since all the coefficients of Q(s) are real by assumption, if s 1 is a root of
Q(s), sois its complex conjugate sf. Consequently all the roots of Q(s) are symmetric
with respect to the real axis, the imaginary axis, and the origin of the s-plane, as
shown in Figure 9.6. We now show that Q(s) has no root on the imaginary axis.
Consider
Q(jw) = D(jw)D(- jw) + qN(jw)N(- jw)
(9.17)
= ID(jw)l 2 + qiN(jw)l 2
The assumption that D(s) and N(s) have no common factors implies that there exists
no w0 such that D(jw0 ) = O and N(jw0 ) = O. Otherwise s 2 + w6 would be a
common factor of D(s) and N(s). Thus if q # O, Q(jw) in (9.17) cannot be zero for
any w. Consequently, Q(s) has no root on the imaginary axis. Now we shall divide
the roots of Q(s) into two groups, those in the open left half plane and those in the
open right half plane. If all the open left-half-plane roots are denoted by D 0 (s), then,
because of the symmetry property, all the open right-half-plane roots can be denoted
by D 0 ( - s). Thus, we can always factor Q(s) as
Q(s) = D(s)D(- s) + qN(s)N(- s) = D 0 (s)D0 ( - s) (9.18)
where D0 (s) is a Hurwitz polynomial. The factorization in (9.18) is called the spectral
factorization.
With the spectral factorization, we are ready to discuss the optimal overall trans-
fer function. The optimal overall transfer function depends on the, reference signa!
r(t). The more complicated r(t), the more complicated the optimal overall transfer
function. We discuss in the following only the case where r(t) is a step function.
Ims
X d X
o
Res
-e -b -a a b e
X -d X
where q > O, and r(t) = 1 for t :::::: O, that is, r(t) is a step-reference signal.
Solution First we compute the spectral factorization:
where D 0 (s) is a Hurwitz polynomial. Then the optimal overall transfer function is
given by
qN(O) N(s)
(9.19)
The proof of (9.19) is beyond the scope of this text; its employment, however,
is very simple. This is illustrated by the following example.
Example 9 .4. 1
Consider a plant with transfer function
N(s) 1
G(s) (9.20)
D(s) s(s + 2)
Find G0 (s) to minimize
Clearly we have q = 9,
D(s) = s(s + 2) D(-s) -s( -s + 2)
and
N(s) N( -s)
We compute
Q(s) : = D(s)D(- s) + qN(s)N(- s)
s(s + 2)(- s)(- s + 2) + 9 · 1· 1 (9.22)
= -s 2 (-s 2 + 4) + 9 = s4 - 4s 2 + 9
9.4 QUADRATIC PERFORMANCE INDICES 353
as shown in Figure 9.7. The two roots in the left column are in the open right half
s-plane; the two roots in the right column are in the open left half s-p1ane. Using the
two left-half-plane roots, we form
Do(s) (s + \13eJ24o)(s + \13e-J24o)
s2 + V3(ei240 + e-J24o)s + 3 (9.23)
Ims
Exercise 9.4.1
.. torizations. One way to carry out the factorization is to compute all the roots of Q(s)
and then group all the left-half-plane roots, as we did in (9.23). This method can be
easily carried out if software for solving roots of polynomials is available. For ex-
ample, if we use PC-MATLAB to carry out the spectral factorization of Q(s) in
(9.22), then the commands
q=[1 o -4 o 9];
r= roots(q)
yield the following four roots:
r= -1.5811 +0.7071i
-1.5811-0.7071i
1.5811 +0.7071i
1.5811 -o. 7071 i
The first and second roots are in the open left half plane and will be used to form
DJs). The command
poly([r( 1) r(2)])
yields a polynomial of degree 2 with coefficients
1.0000 3.1623 3.0000
This is D (s). Thus the use of a digital computer to carry out spectral factorizations
0
is very simple.
We now introduce a method of carrying out spectral factorizations without solv-
ing for roots. Consider the Q(s) in (9.22). lt is a polynomial of degree 4. In the
spectral factorization of
(9.26)
the degrees of polynomials D0 (s) and D 0 ( - s) are the same. Therefore, the degree of
D0 (s) is half of that of Q(s), or two for this example. Let
9.4 QUADRATIC PERFORMANCE INDICES 355
D 0 (s) = b0 + b 1s + b2 s 2 (9.27)
where h; are required to be all positive. 2 If any one of them is zero or negative, then
D 0 (s) is not Hurwitz. Clearly, we have
Do( -s) = b0 + b 1( -s) + h 2 ( -s) 2 = b0 - b 1s + b2s 2 (9.28)
and if
(9.30)
then
(9.31)
2
Al so, they can all be negative. For convenience, we consider only the positive case.
- 356 CHAPTER 9 THE INWARD APPROACH--CHOICE OF OVERALL TRANSFER FUNCTIONS
Then
and
D 0 (S)D0 (-s) = b6 + (2b 0 b2
Equating (9.32) and (9.34) yields
(9.35a)
(9.35b)
(9.35c)
and
b~ = - a6 (9.35d)
(9.36b)
b~ 1 ) = Y2b 0 b~0l - a2
We then use this b~ l to compute b2 as
1
3
1f we compute b2 = (a 2 + bf)/2b0 and b 1 = (b~ - a 4 )/2b3 iteratively, the process will diverge.
9.4 QUADRATIC PERFORMANCE INDICES 357
Example 9.4.2
Let
D0 (s) = b0 + b 1s + b 2 s2 + b3s 3
lts constant term and leading coefficient are simply the square roots of the corre-
sponding coefficients of Q(s):
b0 = V2s = 5 b3 = v¡=¡j = V4 = 2
The substitution of these into (9.36) yie1ds
b1 \/10b 2 + 41
b2 \120 + 4b 1
Now we shall solve these equations iteratively. Arbitrarily, we choose b 2 as b~0 l
O and compute
Exercise 9.4.2
lt is also required that the actuating signal due to a unit-step reference input meet
the constraint lu(t)l ::::: 3, for all t;:::: O. Arbitrarily, we choose q = 100 and compute
Q(s) = s(s + 2)(- s)(- s + 2) + 100 · 1 · 1 = s4 - 4s 2 + 100
lts spectral factorization can be computed as, using (9.31),
D 0 (s) = s 2 + v24s + 10 = s2 + 4.9s + 10
Thus the quadratic optimal transfer function is
Y(s) qN(O) N(s) 100 . 1 1
=--=--·--= ---.
R(s) D 0 (0) D 0 (S) 10 s2 + 4.9s + 10
10
s 2
+ 4.9s + 10
The unit-step response of this system is simulated and plotted in Figure 9.8(a). lts
rise time, settling time, and overshoot are 0.92 s, 1.70 s, and 2.13%, respectively.
Although the response is quite good, we must check whether or not its actuating
signal meets the constraint. No matter what configuration is used to implement G0 (s),
if there is no plant leakage, the transfer function from the reference signal r to the
actuating signal u is
o o o
5 5 10 5
u(t) u(t) u(t)
10 2 3
0.8
o 5 o 5 o 5
(a) (b) (e)
q = 100 0.64 9
Rise time =0.92 6.21 2.01
Sett1ing time= 1.70 10.15 2.80
Overshoot =2.13% 0% 0.09%
u(O+) =10 0.8 3
Figure 9.8 Responses of quadratic optimal systems.
The unit-step response of T(s) is simulated and also plotted in Figure 9.8(a). We see that u(0+) = 10 and the constraint |u(t)| ≤ 3 is violated. Because the largest magnitude of u(t) occurs at t = 0+, it can also be computed by using the initial-value theorem (see Appendix A). The response u(t) due to r(t) = 1 is

U(s) = T(s)R(s) = 10s(s + 2)/(s^2 + 4.9s + 10) · 1/s = 10(s + 2)/(s^2 + 4.9s + 10)

and the initial-value theorem gives u(0+) = lim s→∞ sU(s) = 10. Thus the constraint |u(t)| ≤ 3 is not met and the selection of q = 100 is not acceptable.⁴
Next we choose q = 0.64 and repeat the design. The optimal transfer function is found as

G0(s) = (0.64 × 1)/0.8 · 1/(s^2 + √5.6 s + 0.8) = 0.8/(s^2 + 2.4s + 0.8)
⁴ It is shown by B. Seo [51] that if a plant transfer function is of the form (b1s + b0)/s(s + a), with b0 ≠ 0, then the maximum magnitude of the actuating signal of quadratic optimal systems occurs at t = 0+ and |u(t)| ≤ u(0+) = √q.
Its unit-step response and the actuating signal are plotted in Figure 9.8(b). The response is fairly slow. Because |u(t)| ≤ u(0+) = 0.8 is much smaller than 3, the system can be designed to respond faster. Next we try q = 9 and compute

Q(s) = s(s + 2)(-s)(-s + 2) + 9 · 1 · 1 = s^4 - 4s^2 + 9

Its spectral factorization, using (9.31), is

D0(s) = s^2 + √10 s + 3 ≈ s^2 + 3.2s + 3
Thus the optimal transfer function is

G0(s) = qN(0)/D0(0) · N(s)/D0(s) = (9 · 1)/3 · 1/(s^2 + 3.2s + 3) = 3/(s^2 + 3.2s + 3)   (9.38)

and the transfer function from r to u is

T(s) = G0(s)/G(s) = 3s(s + 2)/(s^2 + 3.2s + 3)
Their unit-step responses are plotted in Figure 9.8(c). The rise time of y(t) is 2.01 seconds, the settling time is 2.80 seconds, and the overshoot is 0.09%. We also have |u(t)| ≤ u(0+) = T(∞) = 3 for all t. Thus this overall system has the fastest response under the constraint |u(t)| ≤ 3.

From this example, we see that the weighting factor q is to be chosen by trial and error. We choose an arbitrary q, say q = q0, and carry out the design. After the completion of the design, we then simulate the resulting overall system. If the response is slow or sluggish, we may increase q and repeat the design. In this case, the response will become faster. However, the actuating signal may also become larger and the plant may saturate. Thus the choice of q is generally reached by a compromise between the speed of response and the constraint on the actuating signal.
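The trial-and-error selection of q is easily automated. The following MATLAB sketch (illustrative only; the tf, minreal, and step calls assume the Control System Toolbox, which the text does not rely on) repeats the three designs above for G(s) = 1/s(s + 2).

N = [0 0 1];  D = [1 2 0];                      % G(s) = N(s)/D(s) = 1/(s^2 + 2s)
for q = [0.64 9 100]
  sgn = (-1).^(length(D)-1:-1:0);               % multiply coefficients to form p(-s) from p(s)
  Q   = conv(D, D.*sgn) + q*conv(N, N.*sgn);    % Q(s) = D(s)D(-s) + qN(s)N(-s)
  r   = roots(Q);
  Do  = real(poly(r(real(r) < 0)));             % spectral factor D0(s) from the open left-half-plane roots
  Go  = tf(q*polyval(N,0)/polyval(Do,0)*N, Do); % quadratic optimal G0(s)
  T   = minreal(Go/tf(N, D));                   % transfer function from r to u
  figure; step(Go, T)                           % inspect y(t) and the actuating signal u(t)
end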
Optimality is a fancy word because it means "the best." However, without introducing a performance index, it is meaningless to talk about optimality. Even if a performance index is introduced, if it is not properly chosen, the resulting system may not be satisfactory in practice. For example, the second system in Figure 9.8 is optimal with q = 0.64, but it is very slow. Therefore, the choice of a suitable performance index is not necessarily simple.
Exercise 9.4.3
Given a plant with transfer function G(s) = (s + 2)/s(s - 2), find a quadratic optimal system under the constraint that the magnitude of the actuating signal due to a unit-step reference input is less than 5.

[Answer: G0(s) = 5(s + 2)/(s^2 + 7s + 10).]
9.5 THREE MORE EXAMPLES
In this section we shall discuss three more examples. Every one of them will be redesigned in later sections and compared with the quadratic optimal design.

Example 9.5.1

Consider the plant with transfer function G(s) = 2/s(s^2 + 0.25s + 6.25) given in (9.39). The weighting factor q is to be chosen so that the actuating signal u(t) due to a unit-step reference input meets |u(t)| ≤ 10 for t ≥ 0. First we choose q = 9 and compute
Q(s) = D(s)D(-s) + qN(s)N(-s)
     = s(s^2 + 0.25s + 6.25) · (-s)(s^2 - 0.25s + 6.25) + 9 · 2 · 2   (9.41)
     = -s^6 - 12.4375s^4 - 39.0625s^2 + 36
The spectral factorization of (9.41) can be carried out iteratively as discussed in
Section 9.4.2 or by solving its roots. As a review, we use both methods in this
example. We first use the former method. Let
D0(s) = b0 + b1s + b2s^2 + b3s^3

Its constant term and leading coefficient are simply the square roots of the corresponding coefficients of Q(s):

b0 = √36 = 6   b3 = √(-a6) = √1 = 1

The substitution of these into (9.36) yields

b1 = √(12b2 + 39.0625)
b2 = √(2b1 - 12.4375)

Now we shall solve these equations iteratively. Arbitrarily, we choose b2 as b2^(0) = 0 and compute

b1:      6.25  6.49  6.91  7.30  7.53  7.65  7.70  7.73  7.75  7.75
b2:  0   0.25  0.73  1.18  1.47  1.62  1.69  1.72  1.74  1.74  1.75
We see that they converge to the solution b1 = 7.75 and b2 = 1.75. Thus we have Q(s) = D0(s)D0(-s) with

D0(s) = s^3 + 1.75s^2 + 7.75s + 6   (9.42)

and, from (9.19), the quadratic optimal transfer function is G0(s) = qN(0)/D0(0) · N(s)/D0(s) = 6/(s^3 + 1.75s^2 + 7.75s + 6).
For this overall transfer function, it is found by computer simulation that |u(t)| ≤ 3 for t ≥ 0. Thus we may choose a larger q. We choose q = 100 and compute

Q(s) = D(s)D(-s) + 100N(s)N(-s) = -s^6 - 12.4375s^4 - 39.0625s^2 + 400
Now we use the second method to carry out the spectral factorization. We use
PC-MATLAB to compute its roots. The command
r = roots([-1 0 -12.4375 0 -39.0625 0 400])
Figure 9.9 Responses of various designs of (9.39).
yields
r= -0.9917 + 3.0249i
-0.9917- 3.0249i
0.9917 + 3.0249i
0.9917- 3.0249i
1.9737
-1.9737
The first, second, and last roots are in the open left half plane. The command
poly([r(1) r(2) r(6)])
yields [1.0000 3.9571 14.0480 20.0000]. Thus we have D0(s) = s^3 + 3.957s^2 + 14.048s + 20 and the quadratic optimal overall transfer function is

G0(s) = 20/(s^3 + 3.957s^2 + 14.048s + 20)   (9.43)
For this transfer function, the maximum amplitude of the actuating signal due to a
unit-step reference input is 10. Thus we cannot choose a larger q. The unit-step
response of G0 (s) in (9.43) is plotted in Figure 9.9 with the solid line. The response
appears to be quite satisfactory.
Example 9.5.2

Consider a plant with transfer function

G(s) = (s + 3)/s(s - 1)   (9.44)
Find the optimal transfer function to minimize the quadratic performance index

J = ∫0∞ [q(y(t) - r(t))^2 + u^2(t)] dt   (9.45)

where the weighting factor has been chosen as q = 100. We first compute
Q(s) = D(s)D(-s) + qN(s)N(-s)
     = s(s - 1)(-s)(-s - 1) + 100(s + 3)(-s + 3)
     = s^4 - 101s^2 + 900
Its spectral factorization yields

D0(s) = (s + 9.5459)(s + 3.1427) = s^2 + 12.7s + 30

and the quadratic optimal system is given by

G0(s) = qN(0)/D0(0) · N(s)/D0(s) = 10(s + 3)/(s^2 + 12.7s + 30)   (9.46)
Its response due to a unit-step reference input is shown in Figure 9.10(a) with the solid line. The actuating signal due to a unit-step reference input is shown in Figure 9.10(b) with the solid line; it has the property |u(t)| ≤ 10 for t ≥ 0.
Figure 9.10 (a) Unit-step responses y(t). (b) Actuating signals u(t).
Example 9.5.3

Consider a plant with transfer function

G(s) = (s - 1)/s(s - 2)   (9.47)
It has a non-minimum-phase zero. To find the optimal system to minimize the quadratic performance index in (9.45), we compute

Q(s) = s(s - 2)(-s)(-s - 2) + 100(s - 1)(-s - 1) = s^4 - 104s^2 + 100
     = (s + 10.1503)(s - 10.1503)(s + 0.9852)(s - 0.9852)
Thus we have D0(s) = s^2 + 11.14s + 10 and

G0(s) = -10(s - 1)/(s^2 + 11.14s + 10)   (9.48)
Its unit-step response is shown in Figure 9.11 with the solid line. By computer simulation we also find |u(t)| ≤ 10 for t ≥ 0 if the reference input is a unit-step function.
Figure 9.11 Unit-step responses of the quadratic optimal design, the ITAE optimal design, and the design found by computer simulation for the plant in (9.47).
These equations are similar to (7.11) through (7.13), thus the root-locus method can
be directly applied. The root loci of (9.50) for G(s) = 1/s(s + 2) are plotted in
⁵ This section may be skipped without loss of continuity.
Figure 9.12. The roots for q = 0.64, 4, 9, and 100 are indicated as shown. We see that the root loci are symmetric with respect to the imaginary axis as well as the real axis. Furthermore, the root loci will not cross the imaginary axis for q > 0. Although the root loci reveal the migration of the poles of the quadratic optimal system, they do not tell us how to pick a specific set of poles to meet the constraint on the actuating signal.
We discuss now the poles of G0(s) as q → ∞. It is assumed that G(s) has n poles and m zeros.

Figure 9.13 Poles of G0(s) as q → ∞ for n - m = 3 and n - m = 4.

9.6 ITAE OPTIMAL SYSTEMS [34]
In this section we discuss the design of control systems to minimize the integral of time multiplied by absolute error (ITAE) in (9.8). For the quadratic overall system

G0(s) = 1/(s^2 + 2ζs + 1)

the ITAE, the integral of absolute error (IAE) in (9.6), and the integral of square error (ISE) in (9.7) are plotted in Figure 9.14 as functions of the damping ratio ζ. The ITAE has the largest change as ζ varies, and therefore has the best selectivity. The ITAE also yields a system with a faster response than the other criteria; therefore Graham and Lathrop [33] chose it as their design criterion. The system that has the smallest ITAE is called the optimal system in the sense of ITAE, or the ITAE optimal system.
Consider the overall transfer function
This transfer function contains no zeros. Because G0 (0) = 1, if G0 (s) is stable, then
the position error is zero, or the plant output will track asymptotically any step-
reference input. By analog computer simulation, the denominators of ITAE optimal
systems were found to assume the forms listed in Table 9.1. Their poles and unit-
step responses, for w0 = 1, are plotted in Figures 9.15 and 9.16. We see that the
optimal poles are distributed evenly around the neighborhood of the unit circle. We
also see that the overshoots of the unit-step responses are fairly large for large n.
These systems are called the ITAE zero-position-error optimal systems.
Figure 9.14 Comparison of various design criteria.
Table 9.1 ITAE zero-position-error optimal forms

s + w0
s^2 + 1.4w0 s + w0^2
s^3 + 1.75w0 s^2 + 2.15w0^2 s + w0^3
s^4 + 2.1w0 s^3 + 3.4w0^2 s^2 + 2.7w0^3 s + w0^4
s^5 + 2.8w0 s^4 + 5.0w0^2 s^3 + 5.5w0^3 s^2 + 3.4w0^4 s + w0^5
with respect to the ITAE criterion. The transfer function has one zero; its coefficients, however, are constrained so that G0(s) has zero position error and zero velocity error. This system will track asymptotically any ramp-reference input. By analog computer simulation, the optimal step responses of G0(s) in (9.52) are found as shown in Figure 9.17. The optimal denominators of G0(s) in (9.52) are listed in Table 9.2. The systems are called the ITAE zero-velocity-error optimal systems.
Figure 9.15 Poles of ITAE optimal systems with zero position error.

Figure 9.16 Step responses of ITAE optimal systems with zero position error.
Figure 9.17 Step responses of ITAE optimal systems with zero velocity error.
Table 9.2 ITAE zero-velocity-error optimal forms

s^2 + 3.2w0 s + w0^2
s^3 + 1.75w0 s^2 + 3.25w0^2 s + w0^3
Figure 9.18 Step responses of ITAE optimal systems with zero acceleration error.
Table 9.3 ITAE zero-acceleration-error optimal forms

s^3 + 2.97w0 s^2 + 4.94w0^2 s + w0^3
s^4 + 3.71w0 s^3 + 7.88w0^2 s^2 + 5.93w0^3 s + w0^4
s^5 + 3.81w0 s^4 + 9.94w0^2 s^3 + 13.44w0^3 s^2 + 7.36w0^4 s + w0^5
s^6 + 3.93w0 s^5 + 11.68w0^2 s^4 + 18.56w0^3 s^3 + 19.3w0^4 s^2 + 8.06w0^5 s + w0^6
9.6.1 Applications

In this subsection we discuss how to use Tables 9.1 through 9.3 to design ITAE optimal systems. These tables were developed without considering plant transfer functions. For example, for two different plant transfer functions such as 1/s(s + 2) and 1/s(s - 10), the optimal transfer function G0(s) can be chosen as

G0(s) = w0^2/(s^2 + 1.4w0 s + w0^2)

The actuating signals for the two systems, however, will be different. Therefore w0 in the two systems should be different. We shall use the constraint on the actuating signal as a criterion in choosing w0. This will be illustrated in the following examples.
Example 9.6.1

Consider a plant with transfer function G(s) = 1/s(s + 2). Find an ITAE zero-position-error optimal system under the constraint |u(t)| ≤ 3. From Table 9.1, we choose

G0(s) = w0^2/(s^2 + 1.4w0 s + w0^2)

It is implementable. Clearly, the larger w0 is, the faster the response. However, the actuating signal will also be larger. Now we shall choose w0 to meet |u(t)| ≤ 3. The transfer function from r to u is

T(s) := U(s)/R(s) = G0(s)/G(s) = w0^2 s(s + 2)/(s^2 + 1.4w0 s + w0^2)
Figure 9.19 Step responses of (9.54) (with solid lines) and (9.55) (with dashed lines): (a) y(t); (b) u(t).
Consequently, we have

U(s) = T(s)R(s) = w0^2 s(s + 2)/(s^2 + 1.4w0 s + w0^2) · 1/s = w0^2(s + 2)/(s^2 + 1.4w0 s + w0^2)

and the initial-value theorem yields u(0+) = w0^2.⁶ Thus the constraint |u(t)| ≤ 3 is met by choosing w0^2 = 3, or w0 = 1.73, and the ITAE optimal transfer function is

G0(s) = 3/(s^2 + 2.4s + 3)   (9.54)
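As a check, the design can be verified with a short MATLAB computation (a sketch only; tf and step assume the Control System Toolbox):

w0 = sqrt(3);                               % so that u(0+) = w0^2 = 3
Go = tf(w0^2, [1 1.4*w0 w0^2]);             % ITAE form of degree 2 from Table 9.1
T  = tf(w0^2*[1 2 0], [1 1.4*w0 w0^2]);     % T(s) = G0(s)/G(s) = w0^2*s(s+2)/(s^2 + 1.4*w0*s + w0^2)
step(Go, T)                                 % y(t) and u(t); the peak of |u(t)| is 3 at t = 0+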
Exercise 9.6.1
Consider a plant with transfer function 2/s^2. Find an optimal system to minimize the ITAE criterion under the constraint |u(t)| ≤ 3.

⁶ If the largest magnitude of u(t) does not occur at t = 0, then its analytical computation will be complicated. It is easier to find it by computer simulation.
Example 9.6.2
Consider the problem in Example 9.6.1 with the additional requirement that the velocity error be zero. A possible overall transfer function is, from Table 9.2,

G0(s) = (3.2w0 s + w0^2)/(s^2 + 3.2w0 s + w0^2)

However, this is not implementable because it violates the pole-zero excess inequality. Now we choose from Table 9.2 the transfer function of degree 3:

G0(s) = (3.25w0^2 s + w0^3)/(s^3 + 1.75w0 s^2 + 3.25w0^2 s + w0^3)

This is implementable and has zero velocity error. Now we choose w0 so that the actuating signal due to a unit-step reference input meets |u(t)| ≤ 3. The transfer function from r to u is

T(s) = G0(s)/G(s) = (3.25w0^2 s + w0^3)s(s + 2)/(s^3 + 1.75w0 s^2 + 3.25w0^2 s + w0^3)
Its unit-step response is shown in Figure 9.19(b) with the dashed line. We see that the largest magnitude of u(t) does not occur at t = 0+. Therefore, the procedure in Example 9.6.1 cannot be used to choose w0 for this problem. By computer simulation, we find that if w0 = 0.928, then |u(t)| ≤ 3. For this w0, G0(s) becomes

G0(s) = (2.799s + 0.799)/(s^3 + 1.624s^2 + 2.799s + 0.799)   (9.55)

This is the ITAE zero-velocity-error optimal system. Its unit-step response is plotted in Figure 9.19(a) with the dashed line. It is much more oscillatory than that of the ITAE zero-position-error optimal system. The corresponding actuating signal is plotted in Figure 9.19(b).
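Because the largest magnitude of u(t) no longer occurs at t = 0+, w0 must be found by simulation. A possible MATLAB search (illustrative only; the scan range and step size are arbitrary choices, and tf, minreal, and step assume the Control System Toolbox):

G = tf(1, [1 2 0]);                          % plant 1/(s(s+2))
for w0 = 1.2:-0.001:0.8                      % scan downward for the largest acceptable w0
  Go = tf([3.25*w0^2 w0^3], [1 1.75*w0 3.25*w0^2 w0^3]);   % degree-3 form from Table 9.2
  u  = step(minreal(Go/G));                  % actuating signal due to a unit-step input
  if max(abs(u)) <= 3, break, end            % stops near the reported value w0 = 0.928
end
w0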
Example 9.6.3
Consider the plant transfer function in (9.39), that is,

G(s) = 2/s(s^2 + 0.25s + 6.25)

From Table 9.1, we choose the degree-3 zero-position-error form G0(s) = w0^3/(s^3 + 1.75w0 s^2 + 2.15w0^2 s + w0^3). By computer simulation, we find that if w0 = 2.7144, then |u(t)| ≤ u(0) = 10 for all t ≥ 0. Thus the ITAE optimal system is

G0(s) = 20/(s^3 + 4.75s^2 + 15.84s + 20)   (9.56)
Its unit-step response is plotted in Figure 9.9 with the dashed line. Compared with
the quadratic optimal design, the ITAE design has a faster response and a smaller
overshoot. Thus for this problem, the ITAE optimal system is more desirable.
Example 9.6.4
Consider the plant transfer function in (9.44), or

G(s) = (s + 3)/s(s - 1)

Find an ITAE zero-position-error optimal system. It is also required that the actuating signal u(t) due to a unit-step reference input meet the constraint |u(t)| ≤ 10 for t ≥ 0. The pole-zero excess of G(s) is 1 and G(s) has no non-minimum-phase zero;
therefore, the ITAE optimal transfer function

G0(s) = w0/(s + w0)   (9.57)

is implementable. We find by computer simulation that if w0 = 10, then G0(s) meets the design specifications. Its step response and actuating signal are plotted in Figure 9.10 with the dashed lines. They are almost indistinguishable from those of the quadratic optimal system. Because G0(s) does not contain the plant zero (s + 3), its implementation will involve the pole-zero cancellation of (s + 3). Next we choose from Table 9.1 the transfer function of degree 2:

G0(s) = w0^2/(s^2 + 1.4w0 s + w0^2)   (9.58)
It has a pole-zero excess larger than that of G(s) and is implementable. We find by computer simulation that if w0 = 24.5, then the G0(s) in (9.58), or

G0(s) = 600.25/(s^2 + 34.3s + 600.25)   (9.59)

meets the design specifications. Its step response and actuating signal are plotted in Figure 9.10 with the dotted lines. The step response is much faster than those of (9.57) and the quadratic optimal system. However, it has an overshoot of about 4.6%.
Example 9.6.5

Consider the plant transfer function in (9.47), that is, G(s) = (s - 1)/s(s - 2). Find an ITAE zero-position-error optimal system. It is also required that the actuating signal u(t) due to a unit-step reference input meet the constraint |u(t)| ≤ 10 for t ≥ 0. This plant transfer function has a non-minimum-phase zero, and no ITAE standard form is available to carry out the design. However, we can employ the idea in [34] and use computer simulation to find its ITAE optimal transfer function as [54]

G0(s) = -10(s - 1)/(s^2 + 5.1s + 10)   (9.60)

under the constraint |u(t)| ≤ 10. We mention that the non-minimum-phase zero (s - 1) of G(s) must be retained in G0(s); otherwise G0(s) is not implementable. Its step response is plotted in Figure 9.11 with the dashed line. It has a faster response than that of the quadratic optimal system in (9.48); however, it has a larger undershoot and a larger overshoot. Therefore it is difficult to say which system is better.
9.7 SELECTION BASED ON ENGINEERING JUDGMENT

In the preceding sections, we introduced two criteria for choosing overall transfer
functions. The first criterion is the minimization of the quadratic performance index.
The main reason for choosing this criterion is that it renders a simple and straight-
forward procedure to compute the overall transfer function. The second criterion is
the minimization of the integral of time multiplied by absolute error (ITAE). It was
chosen in [33] because it has the best selectivity. This criterion, however, does not
render an analytical method to find the overall transfer function; it is obtained by
trial and error and by computer simulation. In this section, we forego the concept of
minimization or optimization and select overall transfer functions based on engi-
neering judgment. We require the system to have a zero position error and a good
transient performance. By a good transient performance, we mean that the rise and
settling times are small and the overshoot is also small. Without comparisons, it is
not possible to say what is small. Fortunately, we have quadratic and ITAE optimal
systems for comparisons. Therefore, we shall try to find an overall system that has
a comparable or better transient performance than the quadratic or ITAE optimal
system. Whether the transient performance is comparable or better is based on en-
gineering judgment; no mathematical criterion will be used. Consequently, the se-
lection will be subjective and the procedure of selection is purely trial and error.
Example 9.7.1
Example 9.7.2
Example 9.7.3
We have designed a quadratic optimal system in (9.48) and an ITAE optimal system in (9.60). Their step responses are shown in Figure 9.11. Now we find, by using computer simulation, that the response of

G0(s) = -10(s - 1)/(s + √10)^2   (9.65)

lies somewhere between those of (9.48) and (9.60) under the same constraint on the actuating signal. Therefore, (9.65) can also be chosen as an overall transfer function.
9.8 SUMMARY AND CONCLUDING REMARKS

This chapter introduced the inward approach to design control systems. In this ap-
proach, we first find an overall transfer function to meet design specifications and
then implement it. In this chapter, we discussed only the problem of choosing an
overall transfer function. The implementation problem is discussed in the next
chapter.
The choice of an overall transfer function is not entirely arbitrary; otherwise we may simply choose the overall transfer function as 1. Given a plant transfer function G(s) = N(s)/D(s), an overall transfer function G0(s) = N0(s)/D0(s) is said to be implementable if there exists a configuration with no plant leakage such that G0(s) can be built using only proper compensators. Furthermore, the resulting system is required to be well posed and totally stable; that is, the closed-loop transfer function of every possible input-output pair of the system is proper and stable. The necessary and sufficient conditions for G0(s) to be implementable are that (1) G0(s) is stable, (2) G0(s) contains the non-minimum-phase zeros of G(s), and (3) the pole-zero excess of G0(s) is equal to or larger than that of G(s). These constraints are not stringent; the poles of G0(s) can be arbitrarily assigned so long as they all lie in the open left half s-plane; other than retaining all zeros outside the region C in Figures 6.13 or 7.4, all other zeros of G0(s) can be arbitrarily assigned in the entire s-plane.
In this chapter, we discussed how to choose an implementable overall system to minimize the quadratic and ITAE performance indices. In using these performance indices, a constraint on the actuating signal or on the bandwidth of the resulting system must be imposed; otherwise, it is possible to design an overall system with a performance index as small as desired, but the corresponding actuating signal will approach infinity. The procedure for finding quadratic optimal systems is simple and straightforward; after computing a spectral factorization, the optimal system can be readily obtained from (9.19). Spectral factorizations can be carried out by iteration without computing any roots, or by computing all the roots of (9.16) and then grouping the open left half s-plane roots. ITAE optimal systems are obtainable from Tables 9.1 through 9.3. Because the tables are not exhaustive, for some plant transfer functions (for example, those with non-minimum-phase zeros), no standard forms are available to find ITAE optimal systems. In this case, we may resort to computer simulation to find an ITAE optimal system.
In this chapter, we also showed by examples that overall transfer functions with performance comparable to quadratic or ITAE optimal systems can be obtained by computer simulation without minimizing any mathematical performance index. It is therefore suggested that after obtaining quadratic or ITAE optimal systems, we may change the parameters of the optimal systems to see whether a more desirable system can be obtained. In conclusion, we should make full use of computers to carry out the design.
We give some remarks concerning the quadratic optimal design to conclude this chapter.
1. The quadratic optimal system in (9.19) is reduced from a general formula in Reference [10]. The requirement of implementability is included in (9.19). If no
such requirement is included, the optimal transfer function Ḡ0(s) that minimizes (9.15) with r(t) = 1 is given by (9.66), where N+(s) is N(s) with all its right-half-plane roots reflected into the left half plane. In this case, the resulting overall transfer function may not be implementable. For example, if G(s) = (s - 1)/s(s + 1), then the optimal system that minimizes
J = ∫0∞ [q(y(t) - 1)^2 + u^2(t)] dt   (9.67)

with q = 9 is
Ḡ0(s) = 3(s + 1)/(s^2 + 4s + 3)

which does not retain the non-minimum-phase zero and is not implementable. For this optimal system, J can be computed as J = 3. See Chapter 11 of Reference [12] for a discussion of computing J. The implementable optimal system that minimizes J in (9.67) is

G0(s) = -3(s - 1)/(s^2 + 4s + 3)
For this implementable G0(s), J can be computed as J = 21. It is considerably larger than the J = 3 for Ḡ0(s). Although Ḡ0(s) has a smaller performance index, it cannot be implemented.
2. If r(t) in (9.15) is a ramp function, that is, r(t) = at, t ≥ 0, then the optimal system that minimizes (9.15) is

G0(s) = q(k1 + k2 s) N(s)/D0(s) = (1 + (k2/k1)s) · qN(0)/D0(0) · N(s)/D0(s)   (9.68)

where

k1 = N(0)/D0(0)   and   k2 = d/ds [N(-s)/D0(-s)] evaluated at s = 0
The optimal system in (9.68) is not implementable because it violates the pole-zero excess inequality. However, if we modify (9.68) as

Ḡ0(s) = (1 + (k2/k1)s) · qN(0)/D0(0) · N(s)/(D0(s) + εs^(n+1))   (9.69)

where n := deg D0(s) and ε is a very small positive number, then Ḡ0(s) will be implementable. Furthermore, for a sufficiently small ε, D0(s) + εs^(n+1) is Hurwitz, and the frequency response of Ḡ0(s) is very close to that of G0(s) in (9.68). Thus (9.69) is a simple and reasonable modification of (9.68).⁷
3. The quadratic optimal design can be carried out using transfer functions or using
state-variable equations. In using state-variable equations, the concepts of con-
trollability and observability are needed. The optimal design requires solving an
algebraic Riccati equation and designing a state estimator (see Chapter 11). For
the single-variable systems studied in this text, the transfer function approach
is simpler and intuitively more transparent. The state-variable approach, how-
ever, can be more easily extended to multivariable systems.
PROBLEMS
9.1. Given G(s) = (s + 2)/(s - 1), is G0(s) = 1 implementable? Given G(s) = (s - 1)/(s + 2), is G0(s) = 1 implementable?
9.2. Given G(s) = (s + 3)(s - 2)/s(s + 2)(s - 3), which of the following G0 (s)
are implementable?
(s - 2)/s(s + 2)   (s + 3)/(s + 2)(s - 3)   (s - 2)/(s + 2)^2   (s + 4)(s - 2)/(s^3 + 4s + 2)
9.3. Consider a plant with transfer function G(s) = (s + 3)/s(s - 2).
a. Find an implementable overall transfer function that has all poles at - 2
and has a zero position error.
b. Find an implementable overall transfer function that has all poles at - 2
and has a zero velocity error. Is the choice unique? Do you have to retain
s + 3 in G0 (s)? Find two sets of solutions: One retains s + 3 and the other
does not.
9.4. Consider a plant with transfer function G(s) = (s - 3)/s(s - 2).
a. Find an implementable overall transfer function that has all poles at - 2
and has a zero position error.
b. Find an implementable overall transfer function that has all poles at - 2
and has a zero velocity error. Is the choice unique if we require the degree
of G0 (s) to be as small as possible?
⁷ This modification was suggested by Professor Jong-Lick Lin of Cheng Kung University, Taiwan.
9.5. What types of reference signals will the following G0(s) track without an error?

a. G0(s) = (-5s - 2)/(-s^2 - 5s - 2)

b. G0(s) = (4s^2 + s + 3)/(s^5 + 3s^4 + 4s^2 + s + 3)

c. G0(s) = (2s^2 + 154s + 120)/(s^4 + 14s^3 + 71s^2 + 154s + 120)
9.6. Consider two systems. One has a settling time of 10 seconds and an overshoot of 5%; the other has a settling time of 7 seconds and an overshoot of 10%. Is it possible to state which system is better? Now we introduce a performance index as

J = k1 · (settling time) + k2 · (percentage overshoot)

If k1 = k2 = 0.5, which system is better? If k1 = 0.8 and k2 = 0.2, which system is better?
9.7. Is the function
9.11. Consider the design problem in Problem 7.14 or a plant with transfer function

G(s) = 0.015/(s^2 + 0.11s + 0.3)
Show that

a0 = b0^2
a2 = 2b0b2 - b1^2
a4 = 2b0b4 - 2b1b3 + b2^2
9.15. Consider a plant with transfer function s/(s^2 - 1). Design an overall system to minimize the quadratic performance index in (9.15) with q = 1. Does the optimal system have zero position error? If not, modify the overall system to yield a zero position error.
9.16. Consider a plant with transfer function G(s) = 1/s(s + 1). Find an implementable transfer function to minimize the ITAE criterion and to have zero position error. It is also required that the actuating signal due to a unit-step reference input have a magnitude less than 10.

9.17. Repeat Problem 9.16 with the exception that the overall system is required to have a zero velocity error.

9.18. Repeat Problem 9.16 for G(s) = 1/s(s - 1).

9.19. Repeat Problem 9.17 for G(s) = 1/s(s - 1).

9.20. Find an ITAE zero-position-error optimal system for the plant given in Problem 9.8. The magnitude of the actuating signal is required to be no larger than the one in Problem 9.8.
9.21. Find an ITAE zero-position-error optimal system for the plant in Problem 9.11. The real part of the poles of the optimal system is required to equal that in Problem 9.11.

9.22. Is it possible to obtain an ITAE optimal system for the plant in Problem 9.12 from Table 9.1 or 9.2? If yes, what will happen to the plant zero?

9.23. Repeat Problem 9.22 for the plant in Problem 9.14.
9.24. a. Consider a plant with transfer function G(s) = (s + 4)/s(s + 1). Design an ITAE zero-position-error optimal system of degree 1. It is required that the actuating signal due to a unit-step reference input have a magnitude less than 10.
b. Consider a plant with transfer function G(s) = (s + 4)/s(s + 1). Design an ITAE zero-position-error optimal system of degree 2. It is required that the actuating signal due to a unit-step reference input have a magnitude less than 10.
c. Compare their unit-step responses.
9.25. Consider the generator-motor set in Figure 6.1. Its transfer function is assumed to be

G(s) = 300/(s^4 + 184s^3 + 760s^2 + 162s)

It is a type 1 transfer function. Design a quadratic optimal system with q = 25. Design an ITAE optimal system with u(0+) = 5. Plot their poles. Are there many differences?
9.26. Consider a plant with transfer function 1/s^2. Find an optimal system with zero velocity error to minimize the ITAE criterion under the constraint |u(t)| ≤ 6.

[Answer: (6s + 2.5)/(s^3 + 2.38s^2 + 6s + 2.5).]
9.27. If software for computing step responses is available, adjust the coefficients of
the quadratic optimal system in Problem 9.8, 9.11, 9.12, 9.14, or 9.15 to see
whether a comparable or better transient performance can be obtained.
Implementation: Linear Algebraic Method
10.1 INTRODUCTION
The first step in the design of control systems using the inward approach is to find an overall transfer function to meet design specifications. This step was discussed in Chapter 9. Now we discuss the second step, namely, implementation of the chosen overall transfer function. In other words, given a plant transfer function G(s) and an implementable G0(s), we shall find a feedback configuration without plant leakage and compute compensators so that the transfer function of the resulting system equals G0(s). The compensators used must be proper and the resulting system
feedback configuration can be used to achieve any pole placement but not any model
matching. The other two configurations, however, can be used to achieve any model
matching. In addition to model matching and pole placement, this chapter also stud-
ies robust tracking and disturbance rejection.
The idea used in this chapter is very simple. The design is carried out by match-
ing coefficients of compensators with desired polynomials. If the denominator D(s)
and numerator N(s) of a plant transfer function have common factors, then it is not
possible to achieve any pole placement or any model matching. Therefore, we require
D(s) and N(s) to have no common factors, or to be coprime. Under this assumption,
the conditions of achieving matching depend on the degree of compensators. The
larger the degree, the more parameters we have for matching. If the degree of com-
pensators is large enough, matching is always possible. The design procedures in
this chapter are essentially developed from these concepts and conditions.
10.2 UNITY-FEEDBACK CONFIGURATION: MODEL MATCHING

Consider the unity-feedback configuration shown in Figure 10.1 with overall transfer function

G0(s) = C(s)G(s)/(1 + C(s)G(s))   (10.1)

which implies

G0(s) + C(s)G(s)G0(s) = C(s)G(s)

and

C(s) = G0(s)/[G(s)(1 - G0(s))]   (10.2)

We use examples to illustrate its computation and discuss the issues that arise in its implementation.
Example 10.2.1

Consider a plant with transfer function 1/s(s + 2). The optimal system that minimizes the ITAE criterion and meets the constraint |u(t)| ≤ 3 was computed in (9.54) as 3/(s^2 + 2.4s + 3). If we implement this G0(s) in Figure 10.1, then the compensator is

C(s) = G0(s)/[G(s)(1 - G0(s))] = 3s(s + 2)/(s^2 + 2.4s) = 3(s + 2)/(s + 2.4)

This compensator is proper; the implementation involves a stable pole-zero cancellation of (s + 2) between C(s) and G(s), which is dictated by the given plant transfer function.
Example 10.2.2

Consider a plant with transfer function 2/(s + 1)(s - 1). If G0(s) = 2/(s^2 + 2s + 2), then

C(s) = [2/(s^2 + 2s + 2)] / [(2/((s + 1)(s - 1))) · (1 - 2/(s^2 + 2s + 2))]
     = [2/(s^2 + 2s + 2)] · [(s + 1)(s - 1)/2] · [(s^2 + 2s + 2)/(s^2 + 2s)]
     = (s + 1)(s - 1)/s(s + 2)
Figure 10.2 Unity-feedback implementation with C(s) = (s + 1)(s - 1)/s(s + 2) and G(s) = 2/(s + 1)(s - 1).
The compensator is proper. The implementation is plotted in Figure 10.2. The implementation has a stable pole-zero cancellation and an unstable pole-zero cancellation between C(s) and G(s).
As discussed in Chapter 6, noise and/or disturbance may enter a control system at every terminal. Therefore we require every control system to be totally stable. We compute the closed-loop transfer function from p to y shown in Figure 10.2:

Gyp(s) := Y(s)/P(s) = G(s)/(1 + C(s)G(s))
        = [2/((s + 1)(s - 1))] / [1 + ((s + 1)(s - 1)/s(s + 2)) · (2/((s + 1)(s - 1)))]
        = 2s(s + 2)/((s - 1)(s + 1)(s^2 + 2s + 2))

It is unstable. Thus the output will approach infinity if there is any nonzero disturbance, no matter how small. Consequently, the system is not totally stable, and the implementation is not acceptable.
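The loss of total stability can also be seen numerically. A minimal MATLAB check (tf, minreal, and pole assume the Control System Toolbox):

G   = tf(2, conv([1 1], [1 -1]));           % plant 2/((s+1)(s-1))
C   = tf(conv([1 1], [1 -1]), [1 2 0]);     % compensator (s+1)(s-1)/(s(s+2))
Gyp = minreal(G/(1 + C*G));                 % closed-loop transfer function from p to y
pole(Gyp)                                   % the unstable plant pole at s = 1 remains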
Exercise 10.2.1

Consider a plant with transfer function 1/s^2. Implement G0(s) = 6/(s^2 + 3.4s + 6) in the unity-feedback configuration. Is the implementation acceptable?

[Answer: C(s) = 6s/(s + 3.4), unacceptable.]
10.3 UNITY-FEEDBACK CONFIGURATION: POLE PLACEMENT BY MATCHING COEFFICIENTS

Example 10.3.1
Consider a plant with transfer function

G(s) = 1/s(s + 2)

and consider the unity-feedback configuration shown in Figure 10.1. If the compensator C(s) is a gain of k (a transfer function of degree 0), then the overall transfer function can be computed as

G0(s) = kG(s)/(1 + kG(s)) = k/(s^2 + 2s + k)
This G0(s) has two poles. These two poles cannot be arbitrarily assigned by choosing a value for k. For example, if we assign the two poles at -2 and -3, then the denominator of G0(s) must equal

s^2 + 2s + k = (s + 2)(s + 3) = s^2 + 5s + 6

Clearly, there is no k to meet the equation. Therefore, if the compensator is of degree 0, it is not possible to achieve arbitrary pole placement.¹
Next let the compensator be proper and of degree 1, or

C(s) = (B0 + B1s)/(A0 + A1s)

with A1 ≠ 0. Then the overall transfer function can be computed as

G0(s) = C(s)G(s)/(1 + C(s)G(s)) = (B1s + B0)/(s(s + 2)(A1s + A0) + B1s + B0)
      = (B1s + B0)/(A1s^3 + (2A1 + A0)s^2 + (2A0 + B1)s + B0)
This G0(s) has three poles. We show that all three poles can be arbitrarily assigned by choosing a suitable C(s). Let the denominator of G0(s) be

D0(s) = s^3 + F2s^2 + F1s + F0

¹ The root loci of this problem are plotted in Figure 7.5. If C(s) = k, we can assign the two poles only along the root loci.
where the Fi are entirely arbitrary. Now we equate the denominator of G0(s) with D0(s), or

A1s^3 + (2A1 + A0)s^2 + (2A0 + B1)s + B0 = s^3 + F2s^2 + F1s + F0

Matching the coefficients of like powers of s yields

A1 = 1   2A1 + A0 = F2   2A0 + B1 = F1   B0 = F0

which imply

A1 = 1   A0 = F2 - 2A1   B1 = F1 - 2F2 + 4A1   B0 = F0

For example, if we assign the three poles of G0(s) as -2 and -2 ± 2j, then D0(s) becomes

D0(s) = s^3 + F2s^2 + F1s + F0 = (s + 2)(s + 2 + 2j)(s + 2 - 2j) = s^3 + 6s^2 + 16s + 16

We mention that if a complex pole is assigned in D0(s), its complex conjugate must also be assigned. Otherwise, D0(s) will have complex coefficients. For this set of poles, we have

A1 = 1   A0 = 6 - 2 = 4   B1 = 16 - 2 · 6 + 4 = 8   B0 = 16

and the compensator is

C(s) = (8s + 16)/(s + 4)
This compensator will place the poles of G0(s) at -2 and -2 ± 2j. To verify this, we compute

G0(s) = C(s)G(s)/(1 + C(s)G(s)) = [(8s + 16)/(s + 4) · 1/s(s + 2)] / [1 + (8s + 16)/(s + 4) · 1/s(s + 2)]
      = (8s + 16)/(s(s + 2)(s + 4) + 8s + 16) = (8s + 16)/(s^3 + 6s^2 + 16s + 16)

Indeed G0(s) has poles at -2 and -2 ± 2j. Note that the compensator also introduces the zero (8s + 16) into G0(s). The zero is obtained from solving a set of equations, and we have no control over it. Thus, pole placement is different from pole-and-zero placement or model matching.
This example shows the basic idea of pole placement in the unity-feedback configuration. It is achieved by matching coefficients. In the following, we shall extend the procedure to the general case and also establish the condition for achieving
pole placement. Consider the unity-feedback system shown in Figure 10.1. Let

G(s) = N(s)/D(s)   C(s) = B(s)/A(s)

and deg N(s) ≤ deg D(s) = n. The substitution of these into (10.1) yields

G0(s) = C(s)G(s)/(1 + C(s)G(s)) = [(B(s)/A(s))(N(s)/D(s))] / [1 + (B(s)/A(s))(N(s)/D(s))]

which becomes

G0(s) = N0(s)/D0(s) = B(s)N(s)/(A(s)D(s) + B(s)N(s))   (10.4)
Given G(s), if there exists a proper compensator C(s) = B(s)/A(s) so that all
poles of G0 (s) can be arbitrarily assigned, the design is said to achieve arbitrary pole
placement. In the placement, if a complex number is assigned as a pole, its complex
conjugate must also be assigned. From (10.4), we see that the pole-placement prob-
lem is equivalent to solving
A(s)D(s) + B(s)N(s) = D 0 (s) (10.5)
In this equation, D(s) and N(s) are given, D 0 (s) is to be chosen by the designer. The
questions are: Under what conditions will solutions A(s) and B(s) exist? and will the
compensator C(s) = B(s)/A(s) be proper? First we show that if D(s) and N(s) have common factors, then D0(s) cannot be arbitrarily chosen or, equivalently, arbitrary pole placement is not possible. For example, if D(s) and N(s) both contain the factor (s - 2), or D(s) = (s - 2)D̄(s) and N(s) = (s - 2)N̄(s), then (10.5) becomes

(s - 2)[A(s)D̄(s) + B(s)N̄(s)] = D0(s)

This implies that D0(s) must contain the same common factor (s - 2). Thus, if N(s) and D(s) have common factors, then not every root of D0(s) can be arbitrarily assigned. Therefore we assume from now on that D(s) and N(s) have no common factors.
Because G(s) and C(s) are proper, we have deg N(s) ≤ deg D(s) = n and deg B(s) ≤ deg A(s) = m, where deg stands for "the degree of." Thus, D0(s) in (10.5) has degree n + m or, equivalently, the unity-feedback system in Figure 10.1 has (n + m) poles. We develop in the following the conditions under which all (n + m) poles can be arbitrarily assigned.
If deg C(s) = 0 (that is, C(s) = k, where k is a real number), then from the root-locus method, we see immediately that it is not possible to achieve arbitrary pole placement. We can assign poles only along the root loci. If the degree of C(s) is 1, that is,

C(s) = B(s)/A(s) = (B0 + B1s)/(A0 + A1s)

then we have four adjustable parameters for pole placement. Thus, the larger the degree of the compensator, the more parameters we have for pole placement. Therefore, if the degree of the compensator is sufficiently large, it is possible to achieve arbitrary pole placement.
Conventionally, the Diophantine equation is solved directly by using polynomials and the solution is expressed as a general solution. The general solution, however, is not convenient for our application. See Problem 10.19 and Reference [41]. In our application, we require deg B(s) ≤ deg A(s) to insure properness of compensators. We also require the degree of compensators to be as small as possible. Instead of solving (10.5) directly, we shall transform it into a set of linear algebraic equations.
We write

D(s) = D0 + D1s + ··· + Dnsn   N(s) = N0 + N1s + ··· + Nnsn

and

A(s) = A0 + A1s + ··· + Amsm   B(s) = B0 + B1s + ··· + Bmsm   (10.7b)

where Di, Ni, Ai, Bi are all real numbers, not necessarily nonzero. Because deg D0(s) = n + m, we can express D0(s) as

D0(s) = F0 + F1s + F2s^2 + ··· + Fn+ms^(n+m)   (10.8)

The substitution of these into (10.5) yields

(A0 + A1s + ··· + Amsm)(D0 + D1s + ··· + Dnsn) + (B0 + B1s + ··· + Bmsm)(N0 + N1s + ··· + Nnsn) = F0 + F1s + F2s^2 + ··· + Fn+ms^(n+m)
which becomes, after grouping the coefficients associated with the same powers of s,

(A0D0 + B0N0) + (A0D1 + B0N1 + A1D0 + B1N0)s + ··· + (AmDn + BmNn)s^(n+m) = F0 + F1s + F2s^2 + ··· + Fn+ms^(n+m)

Matching the coefficients of like powers of s yields

A0D0 + B0N0 = F0
A0D1 + B0N1 + A1D0 + B1N0 = F1
  ⋮
AmDn + BmNn = Fn+m

These equations can be arranged in matrix form as

Sm [A0 B0 A1 B1 ··· Am Bm]' = [F0 F1 F2 ··· Fn+m]'   (10.9)

where

Sm := [ D0  N0   0    0    ···   0   0
        D1  N1   D0   N0   ···   0   0
        ⋮   ⋮    ⋮    ⋮          ⋮   ⋮
        Dn  Nn  Dn-1 Nn-1  ···   D0  N0
        0   0    Dn   Nn   ···   D1  N1
        ⋮   ⋮    ⋮    ⋮          ⋮   ⋮
        0   0    0    0    ···   Dn  Nn ]
The matrix Sm has n + m + 1 rows and 2(m + 1) columns; for (10.9) to have a solution for every right-hand side, we need 2(m + 1) ≥ n + m + 1, or m ≥ n - 1. Thus, in order to achieve arbitrary pole placement, the degree of compensators in the unity-feedback configuration must be n - 1 or higher. If the degree is less than n - 1, it may be possible to assign some sets of poles but not every set of poles. Therefore, we assume from now on that m ≥ n - 1.

With m ≥ n - 1, it is shown in Reference [15] that the matrix Sm has a full row rank if and only if D(s) and N(s) are coprime or have no common factors.
which implies

Am = Fn+m/Dn   (10.11)

Thus if Fn+m ≠ 0, then Am ≠ 0 and the compensator C(s) = B(s)/A(s) is proper. Note that if m = n - 1, the solution of (10.9) is unique and, for any desired poles, there is a unique proper compensator to achieve the design. If m ≥ n, then the solution of (10.9) is not unique, and some parameters of the compensator can be used, in addition to arbitrary pole placement, to achieve other design objectives, as will be discussed later.

If G(s) is biproper and if m = n - 1, then Sm in (10.9) is a square matrix and the solution of (10.9) is unique. In this case, there is no guarantee that An-1 ≠ 0, and the compensator may become improper. See Reference [15, p. 463]. If m ≥ n, then solutions of (10.9) are not unique and we can always find a strictly proper compensator to achieve pole placement. The preceding discussion is summarized as theorems.
THEOREM 10.1

Consider the unity-feedback system shown in Figure 10.1 with a strictly proper plant transfer function G(s) = N(s)/D(s) with deg N(s) < deg D(s) = n. It is assumed that N(s) and D(s) are coprime. If m ≥ n - 1, then for any polynomial D0(s) of degree (n + m), a proper compensator C(s) = B(s)/A(s) of degree m exists to achieve the design. If m = n - 1, the compensator is unique. If m ≥ n, the compensators are not unique and some of the coefficients of the compensators can be used to achieve other design objectives. Furthermore, the compensator can be determined from the linear algebraic equation in (10.9).

THEOREM 10.2

Consider the unity-feedback system shown in Figure 10.1 with a biproper plant transfer function G(s) = N(s)/D(s) with deg N(s) = deg D(s) = n. It is assumed that N(s) and D(s) are coprime. If m ≥ n, then for any polynomial D0(s) of degree (n + m), a proper compensator C(s) = B(s)/A(s) of degree m exists to achieve the design. If m = n, and if the compensator is chosen to be strictly proper, then the compensator is unique. If m ≥ n + 1, compensators are not unique and some of the coefficients of the compensators can be used to achieve other design objectives. Furthermore, the compensator can be determined from the linear algebraic equation in (10.9).
Example 10.3.2
Consider a plant with transfer function

G(s) = N(s)/D(s) = (s - 2)/(s^2 - 1) = (-2 + s + 0·s^2)/(-1 + 0·s + 1·s^2)

Find a compensator of degree 1,

C(s) = (B0 + B1s)/(A0 + A1s)

in the unity-feedback configuration of Figure 10.1 so that the resulting system has poles at -3 and -2 ± j, that is,

D0(s) = (s + 3)(s + 2 + j)(s + 2 - j) = s^3 + 7s^2 + 17s + 15

From the coefficients of N(s), D(s), and D0(s), we form the linear algebraic equation (10.9):

[ -1 -2  0  0 ] [A0]   [15]
[  0  1 -1 -2 ] [B0] = [17]   (10.13)
[  1  0  0  1 ] [A1]   [ 7]
[  0  0  1  0 ] [B1]   [ 1]

Its solution is
A0 = 79/3   B0 = -62/3   A1 = 1   B1 = -58/3

Thus the compensator is

C(s) = (-62/3 - (58/3)s)/(79/3 + s) = -(58s + 62)/(3s + 79) = -(19.3s + 20.7)/(s + 26.3)   (10.14)
and the resulting overall system is, from (10.4),

G0(s) = B(s)N(s)/D0(s) = -(58s + 62)(s - 2)/(3(s^3 + 7s^2 + 17s + 15))

Note that the zero 58s + 62 is solved from the Diophantine equation and we have no control over it. Because G0(0) = 124/45, if we apply a unit-step reference input, the output will approach 124/45 = 2.76. See (4.25). Thus the output of this overall system will not track asymptotically step-reference inputs.
The design in Example 10.3.2 achieves pole placement but not tracking of step-reference inputs. This problem, however, can be easily corrected by introducing a constant gain k as shown in Figure 10.3. If we choose k so that kG0(0) = k × 124/45 = 1, or k = 45/124, then the plant output in Figure 10.3 will track any step-reference input. We call the constant gain in Figure 10.3 a precompensator. In practice, the precompensator may be incorporated into the reference input r by calibration or by resetting. For example, in temperature control, suppose r0, which corresponds to 70°, yields a steady-state temperature of 67°, and r1 yields a steady-state temperature of 70°. We can simply change the scale so that r0 corresponds to 67° and r1 corresponds to 70°. By so doing, no steady-state error will be introduced in tracking step-reference inputs.
Example 10.3.3
Consider a plant with transfer function 1/s 2 . Find a compensator in Figure 10.1 so
that the resulting system has all poles at s = - 2. This plant transfer function has
degree 2. If we choose a compensator of degree m = n - 1 = 2 1 = 1, then
we can achieve arbitrary pole placement. Clearly we have
D 0 (s) = (s + 2? = s 3 + 6s 2 + 12s + 8
From the coefficients of
1+ O· s + O· s 2
s2 O + O · s + l · s2
and D 0 (s), we form
Its solution is
A0 = 6 1 8 12
Thus the compensator is
12s + 8
C(s)
S + 6
Note that the zero 12s + 8 is solved from the Diophantine equation and we have
no control over it. Because the constant and s terms of the numerator and denomi-
nator of G0 (s) are the same, the overall system has zero position error and zero
velocity error. See Section 6.3.1. Thus, the system will track asymptotically any
ramp-reference input. For this problem, there is no need to introduce a precompen-
sator as in Figure 10.3. The reason is that the plant transfer function is of type 2 or
has double poles at s = O. In this case, the unity-feedback system, if it is stable,
will automatically have zero velocity error.
From the preceding two examples, we see that arbitrary pole placement in the
unity-feedback configuration can be used to achieve asymptotic tracking. If a plant
transfer function is of type O, we need to introduce a precompensator to achieve
tracking of step-reference inputs. In practice, we can simply reset the reference input
rather than introduce the precompensator. If G(s) is of type 1, after pole placement,
the unity-feedback system will automatically track step-reference inputs. If G(s) is
of type 2, the unity-feedback system will track ramp-reference inputs. If G(s) is of
type 3, the unity-feedback system will track acceleration-reference inputs. In pole
placement, some zeros will be introduced. These zeros will affect the transient re-
sponse of the system. Therefore, it is important to check the response of the resulting
system before the system is actually built in practice.
To conclude this section, we mention that the algebraic equation in (10.9) can
be solved by using MATLAB. For example, to solve (10.13), we type
a = [-1 -2 0 0; 0 1 -1 -2; 1 0 0 1; 0 0 1 0];  b = [15; 17; 7; 1];
a\b

Then MATLAB will yield
26.3333
-20.6667
1.0000
-19.3333
This yields the compensator in (10.14).
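The same idea extends to any n with m = n - 1: build the matrix Sm of (10.9) column pair by column pair and solve it. A possible MATLAB sketch (illustrative only; coefficient vectors are stored here in ascending order of powers, a convention chosen for this sketch):

D = [-1 0 1];  N = [-2 1 0];          % plant of Example 10.3.2: (s-2)/(s^2-1), ascending coefficients
F = [15 17 7 1]';                     % desired D0(s) = s^3 + 7s^2 + 17s + 15
n = length(D) - 1;  m = n - 1;
S = zeros(n+m+1, 2*(m+1));
for k = 0:m                           % each column pair holds D and N shifted down by k rows
  S(k+1:k+n+1, 2*k+1) = D(:);
  S(k+1:k+n+1, 2*k+2) = N(:);
end
x = S\F;                              % x = [A0 B0 A1 B1 ...]'
A = x(1:2:end)'                       % coefficients of A(s): [26.33 1]
B = x(2:2:end)'                       % coefficients of B(s): [-20.67 -19.33]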
Exercise 10.3.1

Exercise 10.3.2
Consider a plant with transfer function

G(s) = 1/(s^2 - 1)

Design a proper compensator C(s) and a gain k such that the overall system in Figure 10.3 has all poles located at -2 and will track asymptotically step-reference inputs.
To check the robustness of this design, suppose the plant transfer function changes to (s - 2.1)/((s + 1)(s - 0.9)). With the same compensator and k = 45/124, the overall transfer function becomes

G0(s) = (45/124) · [((s - 2.1)/((s + 1)(s - 0.9))) · (-(58s + 62)/(3s + 79))] / [1 + ((s - 2.1)/((s + 1)(s - 0.9))) · (-(58s + 62)/(3s + 79))]   (10.16)
      = (45/124) · (-58s^2 + 59.8s + 130.2)/(3s^3 + 21.3s^2 + 65s + 59.1)
To obtain a design that tracks step-reference inputs robustly, we next increase the degree of the compensator to 2 and write

C(s) = (B0 + B1s + B2s^2)/(A0 + A1s + A2s^2)   (10.17)
Both the plant and compensator have degree 2; therefore the unity-feedback system in Figure 10.1 has four poles. We assign the four poles arbitrarily as -3, -3, and -2 ± j; then we have

D0(s) = (s + 3)^2(s + 2 - j)(s + 2 + j) = s^4 + 10s^3 + 38s^2 + 66s + 45   (10.18)

The compensator that achieves this pole placement can be solved from the following linear algebraic equation:

[ -1 -2  0  0  0  0 ]   [A0]   [45]
[  0  1 -1 -2  0  0 ]   [B0]   [66]
[  1  0  0  1 -1 -2 ] · [A1] = [38]   (10.19)
[  0  0  1  0  0  1 ]   [B1]   [10]
[  0  0  0  0  1  0 ]   [A2]   [ 1]
                        [B2]
This equation has six unknowns and five equations. After deleting the first column, the remaining square matrix of order 5 in (10.19) still has a full row rank. Therefore A0 can be arbitrarily assigned. (See Appendix B.) If we assign it as zero, then the compensator in (10.17) has a pole at s = 0 or becomes type 1. With A0 = 0, the solution of (10.19) can be computed as B0 = -22.5, A1 = 68.83, B1 = -78.67, A2 = 1, and B2 = -58.83. Therefore, the compensator in (10.17) becomes

C(s) = -(58.83s^2 + 78.67s + 22.5)/(s(s + 68.83))   (10.20)
and the overall transfer function is

G0(s) = B(s)N(s)/(A(s)D(s) + B(s)N(s)) = -(58.83s^2 + 78.67s + 22.5)(s - 2)/(s^4 + 10s^3 + 38s^2 + 66s + 45)

Because G0(0) = 45/45 = 1, the system tracks asymptotically any step-reference input. Now suppose the plant transfer function again changes to (s - 2.1)/((s + 1)(s - 0.9)). With the same compensator, the overall transfer function becomes

G0(s) = -(58.83s^2 + 78.67s + 22.5)(s - 2.1)/(s^4 + 10.1s^3 + 50.856s^2 + 80.76s + 47.25)

We first check the stability of G0(s). If G0(s) is not stable, it cannot track any signal. The application of the Routh test to the denominator of G0(s) yields

s^4    1        50.856    47.25
s^3    10.1     80.76
s^2    42.86    47.25
s^1    69.63
s^0    47.25
G (0)
0
= 47.25/47.25 = 1, the overall system with the perturbed plant transfer
function still tracks asymptotically any step-reference input. In fact, because the
compensator is of type 1, no matter how large the changes in the coefficients of the
plant transfer function, so long as the unity-feedback system remains stable, the
system will track asymptotically any step-reference input. Therefore the tracking
property of this design is robust.
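The robustness claim can be checked directly. The following MATLAB sketch forms A(s)D(s) + B(s)N(s) for the perturbed plant and verifies stability and zero steady-state error (illustrative only; ascending coefficient order is a convention of this sketch):

A = [0 68.83 1];                      % A(s) = s(s + 68.83)
B = -[22.5 78.67 58.83];              % B(s) = -(22.5 + 78.67s + 58.83s^2)
D = [-0.9 0.1 1];                     % perturbed D(s) = (s+1)(s-0.9)
N = [-2.1 1 0];                       % perturbed N(s) = s - 2.1
den = conv(A, D) + conv(B, N);        % denominator of the perturbed G0(s)
roots(fliplr(den))                    % all roots lie in the open left half plane
num = conv(B, N);
num(1)/den(1)                         % G0(0) = 47.25/47.25 = 1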
Exercise 10.3.3
Exercise 10.3.4
Given a plant with transfer function 1/(s - 1), design a compensator of degree 1 so that the poles of the unity-feedback system in Figure 10.3 are -2 and -2. In this design, do you have freedom in choosing some of the coefficients of the compensator? Can you choose compensator coefficients so that the system in Figure 10.3 will track asymptotically step-reference inputs with k = 1? In this design, will the system remain stable and track any step-reference input after the plant transfer function changes to 1/(s - 1.1)?

[Answers: [(5 - a)s + (4 + a)]/(s + a); yes; yes, by choosing a = 0; yes.]
placement can be computed from
3 o 0
1 o 3 B0 157
o][A ] [300
o
~ ~: 2~.7
-1
o 1
as

C(s) = (20.175s + 100)/(s + 3.525)
Figure 10.4 (a) Unit-step response. (b) Actuating signal.
10.4 TWO-PARAMETER COMPENSATORS

In the unity-feedback configuration of Figure 10.1, the actuating signal is generated as u = C(s)(r - y). That is, the same compensator is applied to the reference input and plant output to generate the actuating signal. Now we shall generalize it to

u = C1(s)r - C2(s)y

as shown in Figure 10.5(a). This is the most general form of compensators. We call C1(s) the feedforward compensator and C2(s) the feedback compensator. Let C1(s) and C2(s) be

C1(s) = L(s)/A1(s)   C2(s) = M(s)/A2(s)
where L(s), M(s), A1(s), and A2(s) are polynomials. In general, A1(s) and A2(s) need not be the same. It turns out that even if they are chosen to be the same, the two compensators can be used to achieve any model matching. Furthermore, simple and straightforward design procedures can be developed. Therefore we assume A1(s) = A2(s) = A(s) and the compensators become

C1(s) = L(s)/A(s)   C2(s) = M(s)/A(s)   (10.25)

and the configuration in Figure 10.5(a) becomes the one in Figure 10.5(b). If A(s), which is yet to be designed, contains unstable roots, the signal at the output of L(s)/A(s) will grow to infinity and the system cannot be totally stable. Therefore the configuration in Figure 10.5(b) cannot be used in actual implementation. If we move L(s)/A(s) into the feedback loop, then the configuration becomes the one shown in Figure 10.5(c). This configuration is also not satisfactory for two reasons: First, if L(s) contains unstable roots, the design will involve unstable pole-zero cancellations and the system cannot be totally stable. Second, because the two compensators L(s)/A(s) and M(s)/L(s) have different denominators, if they are implemented using operational amplifier circuits, they will use twice as many integrators as the one to be discussed immediately. See Section 5.6.1. Therefore the configuration in Figure 10.5(c) should not be used. If we move M(s)/L(s) outside the loop, then the resulting system is as shown in Figure 10.5(d). This configuration should not be used for the same reasons as for the configuration in Figure 10.5(c). Therefore, the three configurations in Figures 10.5(b), (c), and (d) will not be used in actual implementation.
Figure 10.5 Various two-parameter configurations.
The compensator

C(s) := [C1(s)  -C2(s)] = [L(s)/A(s)  -M(s)/A(s)]   (10.27)

is a 1 × 2 rational matrix; it has two inputs r and y and one output u, and can be plotted as shown in Figure 10.6. The minus sign in (10.27) is introduced to take care of the negative feedback in Figure 10.6. Mathematically, this configuration is no different from the ones in Figure 10.5. However, if we implement C(s) in (10.27) as a unit, then the problem of possible unstable pole-zero cancellation will not arise. Furthermore, its implementation will use the minimum number of integrators. Therefore, the configuration in Figure 10.6 will be used exclusively in implementation. We call the compensator in Figure 10.6 a two-parameter compensator [63]. The configurations in Figure 10.5 are called two-degree-of-freedom structures in [36].

We can use the procedure in Section 5.6.1 to implement a two-parameter compensator as a unit. For example, consider
We first expand it as
From its coefficients and (5.44), we can obtain the following two-dimensional state-variable equation realization

ẋ(t) = [ -15.2  1 ] x(t) + [  752   -1491.28 ] [ r(t) ]
       [    0   0 ]        [ 9000   -9000    ] [ y(t) ]

From this equation, we can easily plot a basic block diagram as shown in Figure 10.7. It consists of two integrators. The block diagram can be built using operational amplifier circuits. We note that the adder in Figure 10.6 does not correspond to any adder in Figure 10.7.
Figure 10.6 Two-parameter configuration.

Figure 10.7 Basic block diagram of the two-parameter compensator.
Exercise 10.4. 1
Now we show that this configuration can be used to achieve any model matching. For convenience, we discuss only the case where G(s) is strictly proper.

Problem   Given G(s) = N(s)/D(s), where N(s) and D(s) are coprime, deg N(s) < deg D(s) = n, and given an implementable G0(s) = N0(s)/D0(s), find proper compensators L(s)/A(s) and M(s)/A(s) such that

G0(s) = N0(s)/D0(s) = L(s)N(s)/(A(s)D(s) + M(s)N(s))   (10.29)
Procedure:

Step 1: Compute

G0(s)/N(s) = N0(s)/(D0(s)N(s)) =: Np(s)/Dp(s)   (10.30)

where Np(s) and Dp(s) are coprime. Since N0(s) and D0(s) are coprime by assumption, common factors may exist only between N0(s) and N(s). Cancel all common factors between them and denote the rest Np(s) and Dp(s). Note that if N0(s) = N(s), then Dp(s) = D0(s) and Np(s) = 1. Using (10.30), we rewrite (10.29) as

G0(s) = Np(s)N(s)/Dp(s) = L(s)N(s)/(A(s)D(s) + M(s)N(s))   (10.31)

From this equation, one might be tempted to set L(s) = Np(s) and to solve for A(s) and M(s) from Dp(s) = A(s)D(s) + M(s)N(s). Unfortunately, the resulting compensators are generally not proper. Therefore, some more manipulation is needed.

Next we introduce a polynomial D̄p(s), to be chosen by the designer, and write

G0(s) = Np(s)D̄p(s)N(s)/(Dp(s)D̄p(s))   (10.32)

Now we set

L(s) = Np(s)D̄p(s)   (10.33)

and solve A(s) and M(s) from

A(s)D(s) + M(s)N(s) = Dp(s)D̄p(s)   (10.34)

If we write

A(s) = A0 + A1s + ··· + Amsm   M(s) = M0 + M1s + ··· + Mmsm

and

F(s) := Dp(s)D̄p(s) = F0 + F1s + F2s^2 + ··· + Fn+ms^(n+m)   (10.36)
with m ≥ n - 1, then A(s) and M(s) in (10.34) can be solved from the following linear algebraic equation:

[ D0  N0   0    0    ···   0   0  ] [A0]   [F0]
[ D1  N1   D0   N0   ···   0   0  ] [M0]   [F1]
[ ⋮   ⋮    ⋮    ⋮          ⋮   ⋮  ] [A1] = [F2]   (10.37)
[ Dn  Nn  Dn-1 Nn-1  ···   D0  N0 ] [M1]   [⋮ ]
[ 0   0    Dn   Nn   ···   D1  N1 ] [⋮ ]   [Fn+m]
[ ⋮   ⋮    ⋮    ⋮          ⋮   ⋮  ] [Am]
[ 0   0    0    0    ···   Dn  Nn ] [Mm]

The solution and (10.33) yield the compensators L(s)/A(s) and M(s)/A(s). This completes the design.
Now we show that the compensators are proper. Equation (10.37) becomes (10.9) if Mi is replaced by Bi. Thus, Theorem 10.1 is directly applicable to (10.37). Because deg N(s) < deg D(s) and deg F(s) ≥ 2n - 1, Theorem 10.1 implies the existence of M(s) and A(s) in (10.37) or (10.34) with deg M(s) ≤ deg A(s). Thus, the compensator M(s)/A(s) is proper. Furthermore, (10.34) implies

deg A(s) = deg [Dp(s)D̄p(s)] - deg D(s) = deg F(s) - n

Now we show deg L(s) ≤ deg A(s). Applying the pole-zero excess inequality of G0(s) to (10.32) and using (10.33), we have

deg [Dp(s)D̄p(s)] - (deg N(s) + deg L(s)) ≥ deg D(s) - deg N(s)

which implies

deg L(s) ≤ deg [Dp(s)D̄p(s)] - deg D(s) = deg A(s)

Thus the compensator L(s)/A(s) is also proper.

The design always involves the pole-zero cancellation of D̄p(s). The polynomial D̄p(s), however, is chosen by the designer. Thus if D̄p(s) is chosen to be Hurwitz, or to have its roots lying inside the region C shown in Figure 6.13, then the two-parameter system in Figure 10.6 is totally stable. The condition for the two-parameter configuration to be well posed is 1 + G(∞)C2(∞) ≠ 0, where C2(s) = M(s)/A(s). This condition is always met if G(s) is strictly proper and C2(s) is proper. Thus the system is well posed. The configuration clearly has no plant leakage. Thus this design is acceptable.
Example 10.4.1

Consider the plant with transfer function

G(s) = N(s)/D(s) = (s + 3)/s(s - 1)

studied in (9.44). Its ITAE optimal system was found in (9.59) as

G0(s) = 600.25/(s^2 + 34.3s + 600.25)   (10.38)

Note that the zero (s + 3) of G(s) does not appear in G0(s); thus the design will involve the pole-zero cancellation of s + 3. Now we implement G0(s) by using the two-parameter configuration in Figure 10.6. We compute

G0(s)/N(s) = 600.25/((s^2 + 34.3s + 600.25)(s + 3)) =: Np(s)/Dp(s)

Because deg Dp(s) = 3 = n + m with m = 1, there is no need to introduce D̄p(s); that is, D̄p(s) = 1, L(s) = Np(s) = 600.25, and F(s) = Dp(s) = s^3 + 37.3s^2 + 703.15s + 1800.75. The polynomials A(s) = A0 + A1s and M(s) = M0 + M1s can be solved from

[  0  3  0  0 ] [A0]   [1800.75]
[ -1  1  0  3 ] [M0] = [ 703.15]
[  1  0 -1  1 ] [A1]   [  37.3 ]
[  0  0  1  0 ] [M1]   [   1   ]

The solution is A(s) = A0 + A1s = 3 + s and M(s) = M0 + M1s = 600.25 + 35.3s. Thus the compensator is

C(s) = [C1(s)  -C2(s)] = [600.25/(s + 3)   -(35.3s + 600.25)/(s + 3)]
Example 10.4.2
Consider the same plant transfer function in the preceding example. Now we shall
implement its quadratic optimal transfer function G0 (s) = 10(s + 3)/(s 2 + 12.7s
+ 30) developed in (9.46). First we compute
(This issue will be discussed further in the next section.) Thus we have
and
F(s) = Dp(s)Dp(s) = (s 2 + 12.7s + 30)(s + 3)
= s 3 + 15.7s 2 + 68.1s + 90
The polynomials A(s) and M(s) can be solved from
~]¡;~] ¡:~.1]
3 1
1
1
o
1
1
o
[-I
1
1
1 (10.41)
o ! -1
1
1 A1 15.7
o 1 1
1 o MI 1
as A(s) = A 1s + A 0 = s + 3 and M(s) = M 1s + M 0 = 13.7s + 30. Thus, the
compensator is
10(s + 3) _13.7s + 30]
[ s+3 s+3 (10.42)
_ 13.7s + 30]
[ 10 S + 3
This completes the desi~. Note that C 1(s) reduces to 10 because Dp(s) was chosen
as s + 3. For different DP(s), C 1(s) is not a constant, as is seen in the next section.
Example 10.4.3
Considera plant with transfer function G(s) = 1/s(s + 2). Implement its ITAE
optimal system G0 (s) = 3/(s 2 + 2.4s + 3). This G0 (s) was implemented by using
the unity-feedback system in Example 10.2.1. The design had the pole-zero cancel-
lation of s + 2, which was dictated by the given plant transfer function. Now we
41 Q CHAPTER 1O IMPLEMENTATION-LINEAR ALGEBRAIC METHOD
implement G0 (s) in the two-parameter configuration and show that the designer has
the freedom of choosing canceled poles. First we compute
G0 (s) 3 =: Nis)
N(s) (s 2 + 2.4s + 3) · 1 Dis)
Exercise 10.4.2
N(s)NP(s)DpCs) N(s)L(s)
DP(s)DpCs) A(s)D(s) + M(s)N(s)
to insure that the resulting compensators are proper. Because DP(s) is completely
canceled in G 0 (s), the tracking of r(t) by the plant output y(t) is not affected by the
choice of DP(s). Neither is the actuating signal affected by DP(s), because the transfer
function from r to u is
where DpCs) does not appear directly or indirectly. Therefore the choice of DP(s)
does not affect the tracking property of the overall system and the magnitude of the
actuating sign~
Although DP(s) does not appear in G0 (s), it will appear in the closed-loop transfer
functions of sorne input/output pairs. We compute the transfer function from the
disturbance input p to the plant output y in Figure 10.6:
Y(s) G(s)
H(s) : = -
P(s) 1 + G(s)M(s)A- \s) (10.44)
N(s)A(s) N(s)A(s)
A(s)D(s) + M(s)N(s)
We see thatDP(s) appears directly inH(s); it ~so affectsA(s) through the Diophantine
equation in (10.34). Therefore the choice of DP(s) will affect the disturbance rejection
property of the system. This problem will be studied in this section by using
examples.
In this section, we also study the effect of DP(s) on the stability range of the
overall system. As discussed in Section 6.4, the plant transfer function G(s) may
change due to changes of the load, power supplies, or other reasons. Therefore it is
of practica! interest to see how much the coefficients of G(s) may change before the
overall system becomes unstable. The larger the region in which the coefficients of
G(s) are permitted to change, the more robust the overall system is. In the following
examples, we also study this problem.
2
May be skipped without loss of continuity.
412 CHAPTER 1O IMPLEMENTATION-LINEAR ALGEBRAIC METHOD
Example 10.5. 1
N(s)A(s) (s + 3)(s + 3) S + 3
H(s) = = ---,---'------'--'---'---- (10.45)
Dp(s)Dp(s)
2
(s + 12.7s + 30)(s + 3) s2 + 12.7s + 30
+ 30), using the same procedure as in Example 10.4.2, we
If DP(s) is chosen as (s
can compute the compensator as
(s + 3)(s + 5.025)
H (s) - --:::---'-----'-.:..____ _____:__ (10.47)
- (s 2 + 12.7s + 30)(s + 30)
Next we choose DP(s) = s + 300, and compute the compensator as
10(s + 300) _ 288.425s + 3000]
[C¡(s) -Cz(s)] = [ s + 25.275 S + 25.275
(10.48)
H(s) -
(s + + 25.275)
3)(s
---=----'-----'--'----------'-- (10.49)
- (s 2 + 12.7s + 30)(s + 300)
Now we assume the disturbance p to be a unit-step function and compute the plant
outputs for the three cases in (10.45), (10.47), and (10.49). The results are plotted
in Figure 10.8 with the solid line _!or DP(s) = s + 3, the dashed line for Dp(s) =
~ + 30, and the dotted line for Dp(s) = s + 300. We see that the system with
Dp(s) = s + 300 attenuates the disturbance most. We plot in Fi~re 10.9 the am-
plitude characteristics of the three H(s). The one corresponding to DP(s) = s + 300
again has the best attenuation pr~erty for all w. Therefore, we conclude that, for
this example, the fas ter the root of DP(s ), the better the disturbance rejection property.
Now we study the robustness property of the system. First we consider the case
DP(s) = s + 300 with the compensator in (10.48). Suppose that after the imple-
l 0.5 EFFECT OF Op(s) ON DISTURBANCE REJECTION AND ROBUSTNESS 413
y(t)
0.12~------.----.---.--~----~--~--~--~---.
0.1
0.08
0.06
0.04
S+ 30
------------------------
s+300
With this perturbed transfer function, the transfer function from r to y becomes
L(s)N(s)
A(s)D(s) + M(s)N(s)
1 H(jw) 1
dB
-10
D/s) =s+3
-20~~------------
-30
S+ 30
----- -- ~
-40 S+ 300
------------------------------------------
----------------
-50
-60
-70
is a Hurwitz polynomial. The application of the Routh test to (10.51) yields the
following stability conditions:
and
3000(3 + E2Á
3840 + 25.275E¡ + 288.425E2 - > 0 (10.52b)
312.7 + E¡
(See Exercise 4.6.3.) These conditions can be simplified to
if E¡ > -117.7
and
15~--~--~--~-.-.~--~--~--.
10 s+3
5
S+ 30
-5L---~--~--~--~--~--~-~L_~E¡
-400 -300 -200 -100 o 100 200 300
Figure 1O. 1O Effect of canceled poles on stability range.
10.5 EFFECT OF Op(s) ON DISTURBANCE REJECTION AND ROBUSTNESS 415
Example 10.5.2
Considera plant with transfer function G(s) = N(s)/D(s) = (s - 1)/s(s - 2).
Implement its quadratic optimal system G0 (s) = -10(s - 1)/(s 2 + 11.14s + 10)
in the two-parameter configuration. First, we compute
Then we have
and
l
The polynomials A(s) and M(s) can be so1ved from
~~
-1 1
1
1
o 0
1
1
1
1
o !-2
o - 1
1
o][At:!_q
A1
] = [ 10 +1013
11.14{3
11.14 + f3
1
o1
1 O M1 1
[-~
-1 o
¡-1 -1 -1
1
o
o
-2
o
r
-1
1
o
-1
o
o
o
o
o
2
-f]
416 CHAPTER 1O IMPLEMENTATION-UNEAR ALGEBRAIC METHOD
Thus we have
-1
o
o
1
-1
o
o
2
-2][
o 10
1
4
11.14
+ 10~11.14~
+ ~
l ¡-23.14-10~
36.28
1
+
22.14~]
23.14~
L(s) _ M(s)J
[C 1(s)
[ A(s) A(s)
(10.54)
This completes the implementation of the quadratic ~timal system. Note that no
matter what value ~ assumes, as long as it is positive, DP(s) = s + ~ will not affect
the tracking property of the overall system. Neither will it affect the magnitude of
the actuating signa!.
Now we study the effect of DP(s) on disturbance rejection. The transfer function
from the disturbance p to the plant output y is
Y(s) N(s)A(s) (s - 1)(s - 23.14 - 22.I4m
H(s) : = P(s) = DP(s)DP(s) = (s 2 + 11.14s + 10)(s + ~) (10.55)
y(t)
5~----~------,-------.------.------,------,
-------------------------- -------------------
s+ 100
-2L------L------~----~-------L------~----~-+
o 2 3 4 5 6
Figure 10.11 Effect of canceled poles on disturbance rejection (time domain).
10.5 EFFECT OF Dp(s) ON DISTURBANCE REJECTION ANO ROBUSTNESS 417
s + 1 (so1id line), DpCs) = s ""±=._ 10 (dashed line), and DP(s) = s + 100 (dotted
line). We see that the choice of DpCs) does affect ~e disturbance rejection property
of the system. A1though the one corresponding to DpCs) = s + 100 has the smallest
steady-state va1ue, its undershoot is the 1argest. We p1ot in Figure 10.12 the amp1itude
characteristics of H(s) for {3 = 1, 10, and 100 respective1y with the so1id 1ine, dashed
line, and dotted 1ine. The one corresponding to {3 = 100 has the 1argest attenuation
for small ~ but it has 1ess attenuation for w ;:::: 2. Therefore, for this examp1e, the
choice of DP(s) is not as clear-cut as in the preceding examp1e. To have a small
steady-state effect, we shou1d choose a 1arge {3. If the frequency spectrum of dis-
turbance 1ies main1y between 2 and 1000 radians per second, then we shou1d choose
a small {3.
Now we study the effect of DpCs) on the robustness of the overall system. Sup-
pose after the imp1ementation of G0 (s), the p1ant transfer function G(s) changes to
N(s) s - 1 + E2
G(s) = =- = -----='- (10.56)
D(s) s(s - 2 + E1)
With this p1ant transfer function and the compensators in ( 10.54 ), the overall transfer
function becomes
L(s)N(s)
A(s)D(s) + M(s)N(s)
with
A(s)D(s) + M(s)N(s)
(s - 23.14 - 22.14{3) · s(s - 2 + E 1) (10.57)
1 H(jw) 1
dB
-80
We compute its stability ranges for the three cases with DP(s) = s + 1,
+ 10, and DpCs) = s + 100. If DP(s) = s + 1, (10.57) becomes
DP(s) = s
12.14 + E¡ > 0
and
_1_0..::....(1_-_E.;z):._ >
(21.14 - 45.28E¡ + 59.42E2) 0
(12.14 + E¡)
20~-~~-~~-~--~--~--~---,
s+ 1
o ¡----------------------------------------------------------------------------1--c?
i s+ 10 1
' \ -::/ '
-20
1 ///~
-40
, Dp(s) = s + 1 __ __.-_/_/.---_./
-60
-80 --------------------------
--~-------,.,.-
-IOOL---~--~-~~-~~-~~-~~-~~E
1
-120 -100 -80 -60 -40 -20 o 20
dashed line and dotted line. We see that, roughly speaking, the larger {3, the larger
the stability range. (Note _!!lat in the neighborhood of E 1 = O and E2 = O, the stability
region correspon<!!_ng to DP(s) = s + .!_00 does not include completely the regions
corresponding to Dp(s_)_ = s + 10 and Dp(s) = s + 1.) Therefore we conclude that
the faster the root of DP(s), the more robust the overall system.
From the preceding two examples, we conclude that the choice of DP(s) in the
two-parameter configuration does affect the disturbance rejection and robustness
properties of the resulting system. For the syste~ in Example 10.5.1, which has no
non-minimum-phase zeros, the faster the root of DP(s), the better the step disturbance
rejection and the more robust the resulting system. Fo~J.he system in Example 10.5.2,
which has a non-minimum-phase zero, the choice of DP(s) is no longer clear-cut. In
conclusion, in using th~ two-parameter configuration to achieve model matching,
although the choice of DP(s) does not affect the tracking property of the system and
the magnitude of the actuating signal, it does affect the disturbance rejection and
robustness properties of the system. '!:!_lerefore, we should utilize this freedom in the
design. No general rule of choosing DP(s) seems available at present. However, we
may always choose it by trial and error.
where yp(t) is the plant output excited by the disturbance p(t) shown in Figure 10.6.
~ the examples in the preceding section, we showed that by choosing the root of
DP(s) appropriately, the effect of step disturbances can be reduced. However, no
matter where the root is chosen, the steady-state effect of step disturbances can ~ever
be completely eliminated. Now we shall show that by increasing the degree of DP(s),
step disturbances can be completely eliminated as t----O>oo.
The transfer function from p to y in Figure 10.6 is
N(s)A(s)
H (s) = --'-'-=,.:.-..:- (10.59)
Dp(s)DpCs)
N(s)A(s) a
Y¡,(s) = H(s)P(s) = ---'-'=---' (10.60)
DP(s)DP(s) s
The application of the final-value theorem to (10.60) yields
. N(s)A(s) a aN(O)A(O)
hm s · ·- (10.61)
s--"o DP(s)Dp(s) s D 0 (0)DP(O)
420 CHAPTER 1O IMPLEMENTATION-UNEAR ALGEBRAIC METHOD
This becomes ~ro for any a if and only if__!V(O) = O or A(O) = O. Note that
DiO) =F- O and Dp(O) =F- O because Dp(s) and Dp(s) are Hurwitz. The constant N(O)
is given and is often nonzero. Therefore, the only way to achieve disturbance rejec-
tion is to design A(s) with A(O) = O. Recall that A(s) is to be solved from the
Diophanti~ equation in (10.34) or the linear algebra~ equation in (10.37). If the
degree of Dp(s) is chosen so that the degree of DP(s)Dp(s) is 2n - 1, where n =
deg D(s), then the solution A(s) is ~ique and we have no control over A(O). How-
ever, if we increase the degree of Dp(s), then solutions A(s) are no longer unique
and we may have the freedom of choosing A(O). This will be illustrated by an
example.
Example 10.5.3
Consider a plant with transfer function G(s) = (s + 3)/s(s - 1). Implement its
quadratic optimal system G0 (s) = 10(s + 3)/(s 2 + 12.7s + 30). This was imple-
mented in Example 1<2:_5.1 by choosing the degree of DP(s) as l. Now we shall
increase the degree of Dp(s) to 2 and repeat the design. First we compute
G0 (S) 10
N(s) s 2
+ 12.7s + 30
Arbitrarily, we choose
(s + 30) 2 (10.62)
Then we have
(s 2 + 12.7s + 30)(s + 30?
s4 + 72.7s 3 + 1692s 2 + 13,230s + 27,000
and
L(s) = Np(s)DP(s) = IO(s + 30) 2
The polynomials A(s) = A 0 + A 1s + A 2 s 2 and M(s) = M 0 + M 1s + M 2s 2 can
be solved from
Ao
' o 3 o o o o 27,000
Mo
-1 o 3 o o 13,230
Al
1 o -1 1 o 3 1692 (10.63)
MI
o o 1 o -1 ---- 72.7
Az
o o o o 1 o
Mz
This has 5 equations and 6 unknowns. Because the first column of the 5 X 6 matrix
is linearly dependent on the remaining columns, A 0 can be arbitrarily assigned, in
particular, assigned as O. With A 0 = O, the solution of (10.63) can be computed as
10.5 EFFECT OF Op(s) ON DISTURBANCE REJECTION ANO ROBUSTNESS 421
y(t)
0.01 ~--~--,----r---.--~----r---.---,----.--~
0.005
-0.005
-0.01
H(s) : = -
Y(s)
=
N(s)A(s) (s + 3)s(s - 15.2)
= --::---'----'--'------'----, (10.65)
P(s) Dp(s)Dp(s) (s2
+ 12.7s + 2
30)(s + 30)
Because H(O) = O, if the disturbance is a step function, the excited plant output will
approach zero as t ~ oo, as shown in Figure 10.14 with the solid line. As a com-
parison, we also show in Figure 10.14 ~ith the dashed line the plant output dueto
a step disturb~e for the design using Dp(s) = s + 300. We see that by increasing
the degree of Dp(s), it is p~ssible to achieve step disturbance rejection. In actual
design, we may try several Dp(s) of degree 2 and then choose~ne which suppresses
most disturbances. In conclusion, by increasing the degree of Dp(s), we may achieve
disturbance rejection.
Exercise 10.5. 1
Consider the configuration shown in Figure 10.15(a) in which G(s) is the plant
transfer function and C 1(s), C 2 (s), and C 0 (s) are proper compensators. This config-
uration introduces feedback from the plant input and output; therefore, it is called
the plant input/ output feedback configuration or plant 1/ O feedback configuration
for short. This configuration can be used to implement any implementable G0 (s).
Instead of discussing the general case, we discuss only the case where
deg D 0 (s) - deg N 0 (s) = deg D(s) - deg N(s) (10.66)
In other words, the pole-zero excess of GJs) equals that of G(s). In this case, we
can always assume C0 (s) = 1 and
L(s)
Cz(s) = M(s) (10.67)
C¡(s) = A(s) A(s)
and the plant I/0 feedback configuration can be simplified as shown in Figure
10.15(b). Note that A(s), L(s), and M(s) in Figure 10.15(b) are different from those
in the two-parameter configuration in Figure 10.6. The two compensators enclosed
by the dashed line can be considered as a two-input, one-output compensator and
must be implemented as a unit as discussed in Section 10.4. The configuration has
two loops, one with loop gain - L(s)/A(s), the other - G(s)M(s)/A(s). Thus its
(a)
-r + u
r- -----.,
y
1 1
_ _j
L- - - -
(b)
Problem Given G(s) = N(s)/D(s), where N(s) and D(s) are coprime and deg N(s)
:S deg D(s). Implementan implementable G0 (s) = N0 (s)/D0 (S) with deg D 0 (s) -
deg N 0 (s) = deg D(s) - deg N(s).
Procedure:
Step 1: Compute
F(s) = 2n - l. Let
L(s) = L0 + L 1s + · · · + Ln_,sn-l (10.740)
and
(10.74c)
Then L(s) and M(s) can be solved from the following linear algebraic
equation:
Do No! o o 1
1
1
o o Lo
1 1
Dl N,:1 Do No 1
1 Mo Fo
1
1 11 • • •
1
1
o o Ll Fl
1
Dn Nn Dn-l Nn-l 1
1 Do No __A!_l__ Fz (10.75)
1
o o Dn Nn
1
1
1
Dl Nl
1
1
------·
1
1
Ln-l Fzn-l
o o o o 1
! Dn Nn Mn-l
Example 10.6. 1
Consider G(s) = (s + 3)/s(s - 1). lmplement its quadratic optimal system G0 (s)
= 10(s + 3)/(s + 12.7s + 30). This problem was implemented in the two-
2
Then we have
A(s) = NP(s)A(s) 10(s + 3)
and, from (10.73),
F(s) = A(s)(Dp(s) - Np(s)D(s)) = (s + 3)[s 2 + 12.7s + 30 - 10s(s - 1)]
= -9s 3 - 4.3s 2 + 98.1s + 90
10.7 SUMMARY ANO CONCLUDING REMARKS 425
[-I
3
1
1
1
1
1
1
1
1
o ! -1
o!
1
1
o
o
This completes the design. ~ote that C 1(s) reduces to a constant because A(s) was
chosen as s + 3. Different A(s) will yield nonconstant C 1(s).
We see that the design using the plant 1/O feedba~ configuration is quite similar
to that of the two-p~ameter configuration. Because A(s) is completely canceled in
G0 (s), the choice of A(s) will not affect the tracking property and actuating signal of
the system. However, its choice may affect disturbance rejection and stability ro-
bustness of the resulting system. The idea is similar to the two-parameter case and
will not be repeated.
Exercise 10.6. 1
Considera plant with transfer function G(s) = 1/s(s - 1). Find compensators in
the plant 1/0 feedback configuration to yield (a) G0 (s) = 4/(s 2 + 2.8s + 4) and
(b) GJs) = (13s + 8)/(s3 + 3.5s 2 + 13s + 8). All canceled poles are to be chosen
at s = -4.
[Answers: (a) L(s)/A(s) = ( -3s - 8.2)/4(s + 4), M(s)/A(s) =
(23s + 16)/4(s + 4). (b) L(s)/A(s) = ( -12s - 3.5)/(13s + 8),
M(s)/A(s) = (17.5s + 8)/(13s + 8).]
to achieve pole placement. If the degree of G(s) is n, the mínimum degree of com-
pensators to achieve arbitrary pole placement is n - 1 if G(s) is strictly proper, or
n if G(s) is biproper. If we increase the degree of compensators, then the unity-
feedback configuration can be used to achieve pole placement and robust tracking.
The two-parameter configuration can be used to achieve any model matching.
In this configuration, generally we have freedom in choosing canceled poles. The
choice of these poles will not affect the tracking property of the system and the
magnitude of the actuating signal. Therefore, these canceled poles can be chosen to
suppress the effect of disturbance and to increase the stability robustness of the
system. If we increase the degree of compensators, then it is possible to achieve
model matching and disturbance rejection.
Finally we introduced the plant input/output feedback configuration. This con-
figuration is developed from state estimator (or observer) and state feedback (or
controller) in state-variable equations. See Chapter 11. The configuration can also
be used to achievf any model matching. For a comparison of the two-parameter
configuration and the plant 1/0 feedback configuration, see References [16, 44].
In this chapter all compensators for pole placement and model matching are
obtained by solving sets of linear algebraic equations. Thus the method is referred
to as the linear algebraic method.
We now compare the inward approach and the outward approach. We intro-
duced the root-locus method and the frequency-domain method in the outward ap-
pro,ach. In the root-locus method, we try to shift the poles of overall systems to the
desired pole region. The region is developed from a quadratic transfer function with
a constant numerator. Therefore, if an overall transfer function is not of the form,
even if the poles are shifted into the region, there is no guarantee that the resulting
system has the desired performance. In the frequency-domain method, because the
relationship among the phase margin, gain margin, and time response is not exact,
even if the design meets the requirement on the phase and gain margins, there is no
guarantee that the time response of the resulting system will be satisfactory. Fur-
thermore, if a plant has open right-half-plane poles, the frequency-domain method
is rarely used. The constraint on actuating signals is not considered in the root-locus
method, nor in the frequency-domain method.
In the inward approach, we first choose an overall transfer function; it can be
chosen to minimize the quadratic or ITAE performance index or simply by engi-
neering judgment. The constraint on actuating signals can be included in the choice.
Once an overall transfer function is chosen, we may implement it in the unity-
feedback configuration. If it cannot be so implemented, we can definitely implement
it in the two-parameter or plant input/ output feedback configuration. In the imple-
mentation, we may also choose canceled poles to improve disturbance rejection
property and to increase stability robustness property of the resulting system. Thus,
the inward approach appears to be more general and more versatile than the outward
approach. Therefore, the inward approach should be a viable altemative in the design
of control systems.
We give a brief history about various design methods to conclude this chapter.
The earliest systematic method to design feedback systems was developed by Bode
10.7 SUMMARY ANO CONCLUDING REMARKS 427
in 1945 [7]. lt is carried out by using frequency plots, which can be obtained by
measurement. Thus, the method is very useful to systems whose mathematical equa-
tions are difficult to develop. The method, however, is difficult to employ if a system,
such as an aircraft, has unstable poles. In order to overcome the unstable poles of
aircrafts, Evans proposed the root-locus method in 1950 [27]. The method has since
been widely used in practice. The inward approach was first discussed by Truxal in
1955 [57]. He called the method synthesis through pole-zero configuration. The
conditions in Corollary 9.1 were developed for the unity-feedback configuration. In
spite of its importance, the method was mentioned only in a small number of control
texts. IT AE optimal systems were developed by Graham and Lathrop [33] in 1953.
Newton and colleagues [48] and Chang [10] were among the earliest to develop
quadratic optimal systems by using transfer functions.
The development of implementable transfer functions for any control configu-
ration was attempted in [12]. The conditions of proper compensators and no plant
leakage, which was coined by Horowitz [36], were employed. Total stability was
not considered. Although the condition of well-posedness was implicitly used, the
concept was not fully understood and the proof was incomplete. lt was found in [ 14]
that, without imposing well-posedness, the plant input/output feedback configura-
tion can be used to imp1ement G (s) = 1 using exclusively proper compensators. lt
0
was a clear violation of physical constraints. Thus the well-posedness condition was
explicitly used in [15, 16] to design control systems. By requiring proper compen-
sators, total stability, well-posedness and no plant leakage, the necessity of the im-
plementability conditions follows immediately for any control configuration. Al-
though these constraints were intuitively apparent, it took many years to be fully
understood and be stated without any ambiguity. A similar problem was studied by
Youla, Bongiomo, and Lu [68], where the conditions for G (s) to be implementable
0
PROBLEMS
10.1. Consider a plant with transfer function 1/s(s + 3). Implement the overall
transfer function G 0 (s) = 4/(s 2 + 4s + 4) in the unity-feedback configu-
ration in Figure 10.1. Does the implementation involve pole-zero cancella-
tions? Do you have the freedom of choosing canceled poles?
10.2. Consider a plant with transfer function 2/ s(s - 3). Can you implement the
overall transfer function G0 (s) = 4/(s 2 + 4s + 4) in the unity-feedback
configuration in Figure 10.1? Will the resulting system be well posed? totally
stable?
10.3. Given G(s) = 1/(s - 1). Can G0 (s) = 1/(s + 1) be implemented in the
unity-feedback configuration in Figure 10.1? Can G0 (s) be implemented in
the single-loop feedback system shown in Figure P10.3 with C 1(s) = 1?
r + y
Figure P10.3
10.4. Considera plant with transfer function 1/s(s + 2). Find a proper compensator
of degree 1 in the unity-feedback configuration such that the overall system
has poles at -1 + j, -1 - j, and -3. What is the compensator? Use the
root-locus method to give an explanation of the result.
10.5. Considera plant with transfer function 1/s(s - 2). Find a proper compensator
of degree 1 in the unity-feedback configuration such that the overall system
has poles at - 1 + j, - 1 - j, and - 3. Will the resulting system track
asymptotically every step-reference input? Is this tracking robust?
10.6. a. Considera plant with transfer function 1/s(s + 2). Designa compensator
in the unity-feedback configuration such that the overall system has the
dominant poles - 1.4 ± j 1.43 and a polea with a = -4. What is u(O+)?
b. Repeat (a) for a = -5. Can you conclude that, for the same dominant
poles, the farther away the pole a, the larger the actuating signal?
10.7. a. Considera plant with transfer function G(s) = 1/s(s + 2) and its quadratic
optimal system G 0 (s) = 3/(s 2 + 3.2s + 3). Can G0 (s) be implemented
in Figure P10.7 by adjusting k 1 and k2 ?
b. Considera plant with transfer function G(s) = (s - 1)/(s 2 - 4) and its
quadratic optimal system G0 (s) = - 1.8(s - 1)/(s 2 + 5.2s + 5). Can
G0 (s) be implemented in Figure P10.7 by adjusting k 1 and k2 ? Will this
optimal system track asymptotically step-reference inputs? Note that if
PROBLEMS 429
Figure P10.7
A(s)D(s) + B(s)N(s) = 1
b. For any two polynomials D(s) and N(s), there exist two polynomials A(s)
and B(s) such that
A(s)D(s) + B(s)N(s) = O
where D(s) and N(s) are coprime. Show that, for any polynomial Q(s),
A(s) = A(s)D0 (s) + Q(s)Á(s) B(s) = B(s)D0 (s) + Q(s)B(s)
is a general solution of the Diophantine equation. For D(s) and N(s) in (a)
and D 0 (s) = s3 + 7 s 2 + 17s + 15, show that the following set of two
polynomials
l
A(s) = - (s 3 + 7s 2 + 17s + 15) + Q(s)( -s + 2)
3
1
B(s) = - - (s + 2)(s3 + 7s 2 + 17s + 15) + Q(s)(s 2 - 1)
3
is a general solution.
d. Show that if Q(s) = (s 2 + 9s + 32)/3, then the degrees of A(s) and B(s)
are smallest possible. Compare the result with the one in Example 10.3.2.
Which procedure is simpler? (lt is true that many engineering problems
can be solved by applying existing mathematical results. However, em-
phasis in mathematics is often different from that in engineering. Mathe-
maticians are interested in existence conditions of solutions and general
forms of solutions. Engineers are interested in particular solutions and in
efficient methods of solving them. This problem illustrates well their dif-
ferences in emphases.)
State Space Design
11.1 INTRODUCTION
In this chapter we discuss the design of control systems using state-variable equa-
tions. We use simple networks to introduce the concepts of controllability and ob-
servability, and we develop their conditions intuitively. We then discuss equivalence
transformations. Using an equivalence transformation, we show how to design pole
placement using state feedback under the assumption of controllability. Because the
concept and condition of controllability are dual to those of observability, design of
full-dimensional state estimators can then be established. We also discuss the design
of reduced-dimensional state estimators by solving Lyapunov eguations.
o;
The con-
nection of state feedback to the output of state estimators is justified by establishing
the separation property. This design is also compared with the transfer function
approach discussed in Section 10.6. Finally, we introduce the Lyapunov stability
theorem and then apply it to establish the Routh stability test.
The state in state-variable equations forms a linear space, called state space;
therefore, the design using state-variable equations is also called state space design.
y ex+ du (11.1 b)
432
11.2 CONTROLLABILITY AND OBSERVABILITY 433
Example 11.2. 1
Consider the network shown in Figure 11.1 (a). The input is a current source, and
the outpl,!t is the voltage across the 2-0 resistor shown. The voltage x across the
capacitor is the only state variable of the network. If x(O) = O, no matter what input
is applied, because of the symmetry of the four resistors, the voltage across the
capacitor is always zero. Therefore, it is not possible to transfer x(O) = O to any
nonzero x. Thus, the state-variable equation describing the network is not control-
lable. If x(O) is different from zero, then it will generate a response across the output
y. Thus, it is possible that the equation is observable, as will be established later.
y
2
2Q
+T
y
u
t -1
u
(a) (b)
Example 11.2.2
Consider the network shown in Figure 11.1 (b ). The input is a current source and the
output y 1 is the voltage across the resistor. The network has two state variables: the
voltage x 1 across the capacitor, and the current x2 through the inductor. Nonzero
x 1(0) and/or xiO) will excite a response inside the LC loop. However, the current
i(t) always equals u(t) and the output always equals 2u(t) no matter what x 1(0) and
x 2 (0) are. Therefore, there is no way to determine the initial state from u(t) and y 1(t).
Thus, the state-variable equation describing the network will not be observable.
Because the LC loop is connected directly to the input, it is possible that the
equation is controllable, as will be shown later. We mention that controllability and
observability depend on what are considered as input and output. If y2 in Figure
11.1 (b) is considered as the output, then the equation will be observable.
434 CHAPTER 11 STATE SPACE DESIGN
x(t) = 1
eA x(O) + L eA(t- rlbu( 7)d7
x(t) - eA
1
x(O) = L [1 + A(t - 7) + A
lt -
2"
7)2
+ · · ·]bu(7)d7
L u(7)d7
Using Theorem B.1, we conclude that for any x(O) and x(t), a solution u(t) exists if
and only if the matrix
(11.2)
has rank n. The matrix has n rows but infinitely many columns. Fortunately, using
the Cayley-Hamilton theorem (see Problem 2.37), we can show that (11.2) has rank
n if and only if the n X n matrix
(11.3)
has rank n. Thus we conclude that (11.1) is controllable if and only if the matrix in
(11.3) has rank n or, equivalently, its determinant is different from zero. The matrix
in ( 11.3) is called the controllability matrix. The first column is b, the second column
is Ab, and the last column is An- 1b. Similarly, we define the n X n matrix
V (11.4)
Its first row is e, second row is cA, and the last row is cAn- 1 • Then the n-dimensional
equation in (11.1) is observable if and only ifthe matrix V has rank n or, equivalently,
11 .2 CONTROLLABILITY AND OBSERVABILITY 435
Example 11.2.3
Consider
N(s) 2s - 1 2s - 1
G(s) (11.5)
D(s) s 2
- 1.5s - (s - 2)(s + 0.5)
(11.6a)
y = [2 -1]x (11.6b)
Because the dimension of (11.6) equals the number of poles of G(s) in (11.5), (11.6)
is a minimal state-variable equation (see Section 2.8). Every minimal equation is
controllable and observable. We demonstrate this fact for the equation in (11.6).
To check controllability, we first compute
1.5
Ab = [ (11.7)
1
~]
1.5
cA = [2 -1] [ [2 2] (11.9)
1
(11.10)
Example 11.2.4
~ ~ Jx + [ ~ Ju
5
(11.12a)
x [ ·
y = [2 1]x (11.12b)
This equation differs from (11.6) only in the output equation; therefore, its control-
lability matrix is the same as (11.8), and (11.12) is controllable. To check observa-
bility, we compu~e
cA = [2 1][~· 5 ~] = [4 2]
V= L:J [! ~] (11.13)
Its determinant is 4 - 4 = O. Thus, (11.12) is not observable. Note that the number
of poles of G(s) in (11.11) is 1 (why?), whereas the dimension of (11.12) is 2. Thus
(11.12) is nota minimal equation. The equation is controllable but not observable.
The observable-form realization of G(s) in (11.11) is
(11.14a)
y = [1 O]x (11.14b)
V [~.5 ~]
=
Because det U = O and det V = 1, where det stands for the determinant, the equation
is observable but not controllable.
5.5.1. This is done for convenience in developing the design procedure in Section
11.4. The controllable-form realization of (11.15) is, as shown in (5.17),
y
[T
= [b 1 b2 b3 b4 ]x
(ll.l6a)
(11.16b)
o
o
with e2 = -a 2 + ai and e3 = -a3 + 2a 1a 2 ai. lt is a triangular matrix; its
determinant is always 1 no matter what the a; are (see (B.2)). Therefore, it is always
controllable, and this is the reason for calling it controllable-form. To check whether
( 11.16) is observable, we must first compute the observability matrix V in ( 11.4)
and then check its rank. An altemative method is to check whether N(s) and D(s)
have common factors or not. If they have common factors, then the equation cannot
be observable. This is the case in Example 11.2.4. If N(s) and D(s) have no common
factors, then (11.16) will be observable as well. Similarly, the observable-form
realization of ( 11.15) is, as discussed in (5.18),
[-a, o
Il m·
1
-a2 o 1
i + (ll.l7a)
-a3 o o
X
-a4 o o
y = [1 o o O]x (ll.l7b)
This equation is always observable. It is controllable if N(s) and D(s) have no com-
mon factors; otherwise, it is not controllable.
Exercise 11.2. 1
Show that the network in Figure 1l.l(a) can be described by the following state-
variable equation
i = -0.75x + O· u y = 0.5x + u
Is the equation controllable? observable?
[Answers: No, yes.]
438 CHAPTER 11 STATE SPACE DESIGN
Exercise 11.2.2
Show that the network in Figure 11.1 (b) with y 1 as the output can be described by
x [~ - ~] x + [~] u
y 1 = [O O]x + 2u
Is the equation controllable? observable?
[Answers: Yes, no.]
Exercise 11.2.3
Show that the network in Figure 11.1 (b) with y 2 as the output can be described by
x = [~ - ~] x + [~] u
y2 = [1 O]x
In addition to (11.3) and (11.4), there are many other controllability and ob-
servability conditions. Although ( 11.3) and ( 11.4) are most often cited conditions in
control texts, they are very sensitive to parameter variations, and consequently not
suitable for computer computation. For a discussion of this problem, see Reference
[15, p. 217]. If state-variable equations are in diagonal or, more generally, Jordan
forro, then controllability can be determined from b alone and observability, from e
alone. See Problems 11.2, 11.3, and Reference [15, p. 209].
Example 11.2.5
Consider the block diagram in Figure 11.2(a) with input u and output y. It consists
of two blocks with transfer functions 1/(s - 1) and (s - 1)/(s + 1). If we assign
the output of the first block as x 1, then we have
X¡ = X¡ + u
11.2 CONTROLLABILITY AND OBSERVABILITY 439
(a)
(b)
Figure 11.2 Pole-zero cancellations.
See Section 5.5.2. The second block has input x 1 and output y, thus we have
Y(s) S - l -2
= 1 +
X 1(s) S + 1 S + 1
Therefore, it can be realized as
y = x2 + x1
Thus, the tandem connection of the two blocks can be described by
y2 = [1 l]x
Because its controllability matrix
U = [b Ab] = [ ~ _ ~J
is nonsingular, the equation is controllable. The observability matrix is
V = [e:J [ -: -:J
which is singular. Thus, the equation is not observable. The equation has two state
variables, but its overall transfer function
1 S - 1
G(s) = - - . - -
o s-1 s+1 S + 1
has only one pole. Therefore, the equation is not minimal and cannot be both con-
trollable and observable. Similarly, we can show that the equation describing Figure
11.2(b) is observable but not controllable. In general, if there are pole-zero cancel-
lations in tandem connection, the state-variable equation describing the connection
cannot be both controllable and observable. If poles of the input block are canceled
by zeros of the output block, the equation cannot be observable; if poles of the output
block are canceled by zeros of the input block, the equation cannot be controllable.
440 CHAPTER ll STATE SPACE DESIGN
(11.180)
y = [O 1]x (11.18b)
For the same network, we now choose the two loop currents :X1 and :X2 shown
in Figure 11.3(b) as state variables. Then the voltages across the inductor, resistor,
and capacitor are respectively 2x1, 2(:X 1 - x2 ) and fh x2 ( r)dr. From the left-hand
side loop, we have
which implies
x1 -x1 + x2 + 0.5u
From the right-hand side loop, we have
which implies
+
~2x 1 -J
o -
j!-2:t 1 ~
A
+T +T + 2Q
1F
+T
2Q
u 1F xz Y u
q) y
(a)
xz
2 -1-1 (b)
-1
Figure 11.3 Network with different choices of state variables.
11 .3 EQUIVALENT STATE-VARIABLE EQUATIONS 441
y = [2 -2]x (11.19b)
Both ( 11.18) and ( 11.19) describe the same network; therefore, they must be related
in sorne way. Indeed, from the currents through the inductor and resistor in Figure
11.3(a) and (b), we have x 1 = x1 and x 2 /2 = x1 - x2 • Thus we have
(11.20)
The square matrix in (11.20) is nonsingular because its determinant is -2. Two
state-variable equations are said to be equivalent if their states can be related by a
nonsingular matrix such as in (11.20). Thus (11.18) and (11.19) are equivalent.
Consider the n-dimensional state-variable equation
x= Ax + bu (11.21a)
y=ex+du (11.21b)
y= eP- 1 x + du
which become
X Ax +bu (11.22a)
y ex + du (11.22b)
with
Equations (11.21) and (11.22) are said to be equivalent, and Pis called an equiva-
lence transformation. For (11.18) and (11.19), we have, from (11.20),
p-l = [1 o]
2 -2
and
442 CHAPTER 11 STATE SPACE DESIGN
and
PAP- 1
[~ -~.5] [~ -0.5]
-0.5 [21 -~J [-1
-1 ~.5] A
Pb o ][0.5] [0.5]
[~ -0.5 0.5 = b
o
eP- 1 = [O l]G -2
o] = [2 -2] =e
and
det (si - A) det (sPP- 1 - PAP- 1) = det [P(sl - A)P- 1]
(11.23)
det P det (si - A) det p-I = det (si - A)
Thus A and A have the same characteristic polynomial and, consequently, the same
eigenvalues. Equivalent state-variable equations also have the same transfer function.
lndeed, the transfer function of (11.22) is
G(s) c(sl- A)-fiJ = eP- 1 [P(sl- A)P- 1] - 1Pb
eP- 1P(sl- A)- 1P- 1Pb = e(sl- A)- 1b = G(s)
Thus any equivalence transformation will not change the transfer function. The prop-
erties of controllability and observability are also preserved. Indeed, using
A 2 = AA = PAP- 1PAP- 1 = PA 2 p-l
and, in general, An = P Anp- 1, we have
u:= [b Ab A 2 b ... An-li)]
[Pb PAP- 1Pb PA2 P- 1Pb · ·. PAn-Ip- 1Pb] (11.24)
2 1
= P[A Ab A b · · · An- b] = PU
Because P is nonsingular, the rank of U equals the rank of U. Thus, (11.22) is
controllable if and only if (11.21) is controllable. This shows that the property of
controllability is invariant under any equivalence transformation. The observability
part can be similarly established.
In this section, we discuss the design of pole placement by using state feedback. We
first use an example to illustrate the basic idea.
11.4 POLE PLACEMENT 443
Example 11.4. 1
and
i 2 = -x2 + lOu
(see Section 5.5.2). Thus, the plant can be described by the following state-variable
equation
y = [1 O]x (11.25b)
Now we introduce feedback from x 1 and x 2 as shown in Figure 11.4(a) with real
constant gains k 1 and k2. This is called state feedback. With the feedback, the transfer
function from r to y becomes, using Mason's formula,
10
s(s + 1) 10
1 +-l0k2 lOk¡
- + --"-- s(s + 1) + l0k2s + l0k 1
S + 1 s(s + 1) (11.26)
10
s2 + (1 + l0k2)s + l0k 1
We see that by choosing k 1 and k2 , the poles of G0 (s) can be arbitrarily assigned
(a) (b)
Because the feedback paths consist of only gains (transfer functions with degree
0), the number of the poles of the resulting system remains the same as the original
plant. Thus, the j.ntroduction of constant-gain state feedback does not increase the
number of poles of the system, it merely shifts the poles of the plant to new positions.
Note that the numerator of G0 (s) equals that of G(s). Thus, the state feedback does
not affect the numerator or the zeros of G(s). This is a general property and will be
established later.
Exercise 11.4. 1
Although Example 11.4.1 illustrates the basic idea of pole placement by state
feedback, the procedure cannot easily be extended to general G(s) = N(s)/D(s). In
the following, we use state-variable equations to discuss the design. Consider the
n-dimensional state-variable equation
x= Ax + bu (11.28a)
y = ex (11.28b)
1 1
L - - - _ _j
y ex (11.30b)
(11.30c)
(11.31)
and
(11.32)
~ [T
-a2 -a3
y
= Ai +bu
ex = [b¡ b2 b3 b4Ji
o
o
1
o
o
Tl· +m· (11.33a)
(11.33b)
446 CHAPTER ll STATE SPACE DESIGN
-¡~
The controllability matrix of (11.33) can be computed as
-a¡
1
U= (11.340)
o o 1
o o o
with e2 = -a2 + ai and e3 = -a3 + 2a 1a 2 - ai. It is triangular and its inverse
is also triangular and equals
ij-1
[ ~ ~1
0
o o o
0
:: ::]
a1
1
(11.34b)
a 3]
1 2
1 a a
O 1 a 1 a2
S:= p-I uu- 1
= [b Ah A2 b A3 b]
0 0 1 a1
(11.36)
[
o o o 1
[-a, 1
-k, -az
o
kz -a3
o
k3
We see that (11.38) is still of the controllable form, so its transfer function from r
to y is
11.4 POLE PLACEMENT 447
b 1s 3 + b 2s 2 + b 3s + b4
G0 (S) = (11.39)
s 4 + (a 1 + k 1)s 3 + (a 2 + k2)s 2 + (a 3 + k3)s + (a 4 + k4)
Now by choosing k¡, the denominator of G0 (s), and consequently the poles of G0 (S),
can be arbitrarily assigned. We also see that the numerator of G0 (s) equals that of
G(s). Thus the state feedback does not affect the zeros of G(s).
To relate k in (11.37) with k in (11.29), we substitute i = Px into (11.37) to
yield
u = r - k:X = r - kPx
Thus we have k = kP. The preceding procedure is summarized in the following.
A'b{~
a¡ az
S:= p-I [b Ab A2 b
1
o 1
o o
a¡
"']
az
a¡
1
(11.40)
It is clear that the preceding procedure can be easily extended to the general
case. There are other design procedures. For example, the following formula
k [O O O 1][b Ab A2 b A3 b] - 1ll(A)
[O O O 1][b Ab A2 b A3 br 1 [A4 + a1A3 + a2 A2 + a3 A + a41]
called the Ackermann formula, is widely quoted in control texts. Note that Ll(s)
is not the characteristic polynomial of A (it is the characteristic polynomial of
(A - bk)), thus ll(A) ~ O. The derivation of the Ackermann formula is more
complex; its computation is no simpler. This is why we introduced the preceding
procedure. The procedure is probably the easiest to introduce but is not necessarily
the best for computer computation. See Reference [15].
448 CHAPTER 11 STATE SPACE DESIGN
Example 11.4.2
[~J = [ ~ - ; J X + L~J u
(11.4la)
y [1 O]x (11.4lb)
G(s) = - 2- -
10 (11.4lc)
s + S
Find the feedback gain k in u = r - kx such that the resulting equation has
- 2 ± j2 as its eigenvalues.
We compute.the characteristic polynomial of A
o 10]
[ 10 o
Thus we have
k = -kP = [3 8] [ o
10
10 ] -]
o
= [3 8]
[o 0.1
0.1]
o = [0.8 0.3]
As we expected, this result is the same as the one in Example 11.4.1. To verify the
result, we compute
si - A + bk = [ ~ ~ J - [~ -1
1 J+ [ 0}0.8
10
0.3] [; -1
S+ 4
J
Thus, the transfer function of the resulting system is
G0 (S) = [1 o{; 1
s+4
-l o]
J 10
~ J[1~ J
= [1 O] z
1 [S + 4 10
S +4s+8 -8 sz + 4s + 8
11.5 QUADRATIC OPTIMAL REGULATOR 449
as expected. We see that the state feedback shifts the poles of G(s) from O and -1
to -2 ± j2. However, it has no effect on the numerator of G(s).
If (A, b) is not controllable, then the matrix in (11.36) is not nonsingular and
(11.28) cannot be transformed into the controllable-form equation in (11.33). In this
case, it is not possible to assign all eigenvalues of (A - bk); however, it is possible
to assign sorne of them. See Problem 11.8 and Reference [15].
To conclude this section, we mention that state feedback gain can be obtained
by using MATLAB. For the problem in Example 11.4.2, we type
a=[O 1;0 -1];b=[0;10];
i=sqrt(-1);
p=[ -2+2*i;-2-2*i];
k= place(a,b,p)
Then k= [0.8 0.3] will appear on the screen. The command acker(a,b,p), which
uses the Ackermann formula, will also yield the same result. But the MA TLAB
manual states that acker(a,b,p) is not numerically reliable and starts to break down
rapidly for equations with dimension larger than 10.
Consider
Ax +bu (11.420)
y ex (11.42b)
In the preceding section, the input u was expressed as r kx, where r is the
reference input and k is the feedback gain, also called the control law. Now we
assume that the reference input r is zero and that the response of the system is excited
by nonzero initial state x(O), which in tum may be excited by externa! disturbances.
The problem is then to find a feedback gain to force the response to zero as quickly
as possible. This is called the regulator problem. If r = O, then (11.29) reduces to
u = - kx, and (11.42a) becomes
x= (A - bk)x (11.43)
erally, the larger the radius, the larger the feedback gain k and the larger the arnpli-
tude of u(t). Therefore, the constraint on u(t) will limit the rate for the response to
approach zero. The most systematic and popular method is to find k to minimize
the quadratic performance index
where the prime denotes the transpose, Q is a symmetric positive semidefinite matrix,
and R is a positive constant. Before proceeding, we digress to discuss positive definite
and semidefinite matrices.
A matrix Q is symmetric if its transpose equals itself, that is, Q' = Q. All
eigenvalues of symmetric matrices are real, that is, no symmetric matrices can have
complex eigenvalues. See Reference [15, p. 566.]. A symmetric matrix is positive
definite if x'Qx > O for all nonzero x; it is positive semidefinite if x'Qx :2: O for all
x and the equality.holds for sorne nonzero x. Then we have the following theorem.
See Reference [15, p. 413.].
THEOREM 11. 1
A symmetric matrix Q of order n is positive definite (positive semidefinite) if
and only if any one of the following conaitions holds:
l. All n eigenvalues of Q are positive (zero or positive).
2. It is possible to decompose Q as Q = N'N, where N is a nonsingular square
matrix (where N is an m X n matrix with O< m< n).
3. All the leading principal minors of Q are positive (all the principal minors
of Q are zero or positive). •
q¡¡ det [q 11
det Q
qz¡
that is, the determinants of the submatrices by deleting the last k rows and the last
k columns for k = 2, 1, O. The principal minors of Q are
det [q 11
qz¡
det [q 11
det Q
q31
11 .5 QUADRATIC OPTIMAL REGULATOR 451
that is, the determinants of all submatrices whose diagonal elements are also diagonal
elements of Q. Principal minors include allleading principal minors but not con-
versely. To check positive definiteness, we check only the leading principal minors.
To check positive semidefiniteness, however, it is not enough to check only the
leading principal minors. We must check all principal minors. For example, the
leading principal minors of
are 1, O, andO, which are zero or positive, but the matrix is not positive semidefinite
because one principal minor is - 1 (which one?).
lf Q is positive semidefinite and R is positive, then the two integrands in (11.44)
will not cancel each other and J is a good performance criterion. The reasons for
choosing the quadratic index in (11.44) are similar to those in (9.13). It yields a
simple analytical solution, and if Q and R are chosen properly, the solution is ac-
ceptable in practice.
lf Q is chosen as e' e, then (11.44) becomes
This performance index is the same as (9.13) with r(t) = O and R = 1/q. Now, if
the state-variable equation in (11.42) is controllable and observable, then the feed-
back gain that minimizes (4.45) is given by
(11.46)
This is called the algebraic Riccati equation. This equation may have one or more
solutions, but only one solution is symmetric and positive definite. The derivation
. of (4.46) and (4.4 7) is beyond the scope of this text and can be found in References
[1, 5]. We show in the following its application.
Example 11.5. 1
Consider the plant with transfer function 1/s(s + 2) studied in Example 9.4.1. lts
controllable form realization is
(11.48a)
y = [0 1]x (11.48b)
K = [k¡¡ kz¡J
kz¡ kzz
k 11
[k
21
k21 J[-2
k22 1
o]_
O
[-2 1J[k11
O O k21
kk2221 J
+ 9 [k
k21
11 21
k J [1][1
k 22 O
O][k
k21
11
kz¡J _
k22
[o]1 ro 1] = [o0 o0 J
Equating the corresponding entries yields
4k 11 - 2k 21 + 9ki 1 = O (11.50a)
2k21 - k 22 + 9k 11 k 21 = O (11.50b)
and
9k~ 1 - 1 = O (11.50c)
From (11.50c), we have k21 = ± 1/3. If k21 = - 1/3, then the resulting K will not
be positive definite. Thus we choose k21 1/3. The substitution of k21 = 1/3 into
(11.50a) yields
2
9ki¡ + 4k¡¡ - - =
3
o
whose solutions are 0.129 and - 0.68. If k 11 = - 0.68, then the resulting K will
not be positive definite. Thus we choose k 11 = 0.129. From (11.50b), we can solve
k22 as 1.05. Therefore, we have
K = [0.129 0.333]
0.333 1.05
which can be easily verified as positive definite. Thus the feedback gain is given by
0.129 0.333]
k = R~ 1 b'K = 9[1 O] [ = [1.2 3] (11.51)
0.333 1.05
and (11.43) becomes
[ -~.2 -~] X
11 .6 STATE ESTIMATORS 453
S + 3.2 3]
det [ _ s = s2 + 3.2s + 3
1
which equals the denominator of the optimal transfer function in (9.24). This is not
surprising, because the performance index in (11.49) is essentially the same as (9.21)
with zero reference input. Therefore the quadratic optimal regulator problem using
state-variable equations is closely related to the quadratic optimal transfer function
in Chapter 9. In fact, it can be shown that D 0 (s) obtained by spectral factorization
in Chapter 9 equals the characteristic polynomial of (A - bk). See Reference [1].
We also mention that the conditions of controllability and observability are essential
here. These condítions are equivalent to the requirement in Chapter 9 that N(s) and
D(s) in G(s) = N(s)/D(s) have no common factors.
The optimal gain in quadratic regulators, also called linear quadratic regulator
or lqr, can be obtained by using MATLAB. For the example, we type
a=[ -2 0;1 O];b=[1 ;O];
q=[O 0;0 1];r= 1/9;
k= lqr(a,b,q,r)
then k= [1.1623 3.000] will appear on the screen. Thus the use of MATLAB is
very simple.
The state feedback in the preceding sections is introduced under the assumption that
all state variables are available for connection to a gain. This assumption may or
may not hold in practice. For example, for the de motor discussed in Figure 11.4,
the two state variables can be generated by using a potentiometer and a tachometer.
However, if no tachometer is available or if it is available but is very expensive and
we have decided to use only a potentiometer in the design, then the state feedback
cannot be directly applied. In this case, we must design a state estimator. This and
the following sections will discuss this problem.
Consider
Ax +bu (11.53a)
y ex (11.53b)
with known A, b, and c. The problem is to use the available input u and output y to
drive a system, called a state estimator, whose output :X approaches the actual state
x. The easiest way of building such an estimator is to simulate the system, as shown
in Figure 11.6. Note that the original system could be an electromechanical one, and
the estimator in Figure 11.6 may be built using operational amplifier circuits. Be-
454 CHAPTER 11 STATE SPACE DESIGN
,---------------,
1 • 1
1 1
1 1
L ------------- _ _j
cause the original :;ystem and the estimator are driven by the same input, their states
x(t) and x(t) should be equal for all t if their initial states are the same. Now if
(11.53) is observable, its initial state can be computed and then applied to the esti-
mator. Therefore, in theory, the estimator in Figure 11.6 can be used, especially if
both systems start with x(O) = x(O) = O. We call the estimator in Figure 11.6 the
open-loop state estimator.
Let the output of the estimator in Figure 11.6 be denoted by x. Then it is
described by
i = Ax + bu (11.54)
Define e(t) : = x(t) - x(t). It is the error between the actual state and the estimated
state at time t. Then it is govemed by
é = Ae (11.55)
Although it is possible to estímate x(O) and then set x(O) = x(O), in practice e(O) is
often nonzero due to estimation error or disturbance. Now if A has eigenvalues in
the open right half plane, then the error e(t) will grow with time. Even if all eigen-
values of A have negative real parts, we have no control over the rate at which e(t)
approaches zero. Thus, the open-loop state estimator in Figure 11.6 is not desirable
in practice.
Although the output y is available, it is not utilized in the open-loop estimator
in Figure 11.6. Now we shall compare it with eX and use the difference to drive an
estimator through a constant vector 1 as shown in Figure 11.7(a). Then the output x
of the estimator is govemed by
i = Ax + bu + 1( y - ex)
11 .6 STATE ESTIMATORS 455
or
x = (A - lc)x + bu + Iy (11.56)
and is replotted in Figure 11.7(b). We see that the estimator is now driven by u as
well as y. We show in the following that if (A, e) is observable, then ( 11.56) can be
designed so that the estimated state x will approach the actual state x as quickly as
desired.
The subtraction of (11.56) from (11.53a) yields, using y = ex,
x- x Ax + bu - (A - lc)x - bu - lcx
(A - lc)x - (A - lc)x = (A - lc)(x - x)
u y
,-----------------,
1 1
1
1 1
L ______________ _j
(a)
u y
1 1
L . - - - - - - - - - - - - _ _j
(b)
Figure 11.7 State estimator.
456 CHAPTER 11 STATE SPACE DESIGN
Now we show that if (A, e) is observable, then the eigenvalues of (A - le) can
be arbitrarily assigned (provided complex-conjugate eigenvalues are assigned in
pairs) by choosing a suitable vector l. If (A, e) is observable, its observability matrix
in (11.4) has rank n. The transpose of (11.4) is
V' = [e' A'e' (A') 2 e'
and is also of rank n. By comparing this with (11.3), we conclude that (A', e') is
controllable. Consequently, the eigenvalues of (A' - e'l') or its transpose (A - le)
can be arbitrarily assigned by choosing a suitable l. This completes the argument.
We list in the following a procedure of designing l. It is a simple modification of
the procedure for pole placement.
3. Compute i' = [a 1 - a 1 a 2 - a 2 a 3 - a 3 a 4 - a 4 ].
4. Compute the equivalence transformation
[;, ~][::,]
o o
S:= p-1
o
(11.58)
a2 a¡ 1
a3 a2 a¡ 1 eA 3
S. Compute 1 = Pi = S - 1i.
Now if the eigenvalues of (11.57) are designed to have large negative real parts
by choosing a suitable 1, then no matter what e(O) is, e(t) will approach zero rapidly.
Therefore, in using the estimator in Figure 11.7, there is no need to estímate x(O).
The state estimator in (11.56) has the same dimensionas (11.53) and is called afull-
dimensional estimator.
y ex (11.59b)
for any x(O), z(O), and u(t). The conditions for (11.60) to be an estimate of Tx are
l. TA- FT = gc
2. h = Tb
3. All eigenvalues ofF have negative real parts.
Indeed, if we define e : = z - Tx, then
e= z- Tx = Fz + gy + hu - T(Ax + bu)
which becomes, after the substitution of z = e + Tx and y = ex,
e= Fe + (FT - TA + gc)x + (h - Tb)u
This equation reduces to, after using the conditions in 1 and 2,
e == Fe
If all eigenvalues ofF have negative parts, then e(t) = eF1e(O) approaches zero for
any e(O). This shows that under the three conditions, (11.60) is an estimate of Tx.
The matrix equation TA - FT = gc is called a Lyapunov equation. Now we list
the design procedure in the following.
p := [;]
(ll.ólb)
is an estimator of x in (11.59).
We give sorne remarks about the procedure. The conditions that the eigenvalues
ofF differ from those of A and that (F, g) be controllable are introduced to insure
that a solution T in TA - FT = gc exists and has full rank. The procedure can
also be used to design a full-dimensional estimator if F is chosen to be of order n.
For a more detailed discussion, see Reference [15]. In this design, we have freedom
in choosing the form ofF. lt can be chosen as one of the companion forms shown
in (2.77); it can also be chosen as a diagonal matrix. In this case, all eigenvalues
must be distinct, otherwise no g exists so that (F, g) is controllable. See Problem
11.2. If F is chosen as a Jordan-form matrix, then its eigenvalues can be repeated.
If F is of Jordan form, all solutions of TA - FT = gc can be parameterized. See
Reference [59]. \Ye mention that Lyapunov equations can be solved in MATLAB
by calling the command lyap.
Example 11.6. 1
Consider the equation in (11.41) or
y = [1 O]x (ll.62b)
1 X [1 O]
or
[O t1 - t2 ] + [4t 1 4t2 ] = [1 O]
P·-
.- [e] [1
T
-
- 0.25
11.7 CONNECTION OF STATE FEEDBACK AND STATE ESTIMATORS 459
~.25 - ~.083 J~
1
We compute
Exercise 11.6. 1
y = ex (11.64b)
The feedback gain is designed for the original state x. Now it is connected to the
estimated state x. Will the resulting system still have the desired eigenvalues? This
will be answered in the following.
The substitution of (11.66) and y = ex into (11.64a) and (11.65) yields
x = Ax + b(r - kx)
and
x (A - le)x + Icx + b(r - kx)
They can be combined as
y = ex = [e o{:J (11.67b)
P= [1 o] =
1 -1
p-I (11.68)
y = [e o{~] (11.69b)
Because any equivalence transformation will not change the characteristic poly-
nomial and transfer function, the characteristic polynomial of (11.67) equals that
11.7 CONNECTION OF STATE FEEDBACK AND STATE ESTIMATORS 461
[e o{ si - ~+ bk si - - : \ IJ -l~ J
+
lc)_ 1 J[~J
(si - A bk)- 1
[e 0] [ (11.71)
o (si- Aa +
where a= (si- A+ bk)- 1bk(sl- A+ lc)- 1• From (11.70), we see that the
eigenvalues of the overall system in Figure 11.8 consist of the eigenvalues of the
state feedback and the eigenvalues of the state estimator. Thus the connection of the
feedback gain to the output of the estimator does not change the original designs.
Thus the state feedback and the state estimator can be designed separately. This is
often referred to as the separation property.
The transferfunction ofthe overall system in Figure 11.8 is computed in (11.71).
It equals ( 11.30c) and has only n poles. The overall system, however, has 2n eigen-
values. Therefore, the state-variable equations in (11.67) and (11.69) are not minimal
equations. In fact, they can be shown to be uncontrollable and unobservable. The
transfer function of the state feedback system with a state estimator in Figure 11.8
equals the transfer function of the state feedback system without a state estimator in
Figure 11.5; thus, the state estimator is hidden from the input r and output y and its
transfer function is canceled in the design. This can be explained physically as
follows. In computing transfer functions, all initial states are assumed to be zero,
therefore, we have x(O) = x(O) and, consequently, x(t) = x(t) for all t and for any
u(t). Thus the estimator does not appear in (11.71). This situation is similar to the
pole-zero cancellation design in Chapter 10.
We have shown the separation property by using the full-dimensional estimator.
The property still holds if we use reduced-dimensional estimators. The proof, how-
ever, is slightly more complicated. See Reference [15].
G (s) - N(s)
o Do(s)
where D 0 (s) has the same degree and the same leading coefficient as D(s). Note that
the numerator of G0 (s) is the same as that of G(s). Clearly GJs) is implementable
for the given G(s) (see Section 9.2) and the linear algebraic methods discussed in
Chapter 10 can be used to implement such G0 (s). In this subsection, we use an
example to compare the design using the state-variable method with that using the
linear algebraic method.
Consider the minimal equation in Example 11.4.2 or
y [1 O]x
with transfer function
N(s) 10
G(s) =- = (11.72)
D(s) s 2
+ s
It is computed in Example 11.4.2 that the feedback gain k = [0.8 0.3] in u =
r - kx will shift the poles of G(s) to -2 ± j2 and that the resulting transfer
function from r to y is
N (s) 10
G (s) = -0 - = -2: : - - - - (11.73)
o D 0 (s) s + 4s + 8
Now if the state is not available for feedback, we must design a state estimator.
A reduced-dimensional state estimator with eigenvalue -4 is designed in Example
11.6.1 as
:i -4z + y - 0.83u
x [ 3y !_ 12z]
lts basic block diagram is plotted in Figure 11.9(a). Now we apply the feedback gain
k to x as shown in Figure 11.9(b). This completes the state-variable design.
In order to compare with the linear algebraic method, we compute the transfer
functions from u to w and y to w of the block bounded by the dashed line in Figure
11.9(b). There is no loop inside the block; therefore, using Mason's formula, we
have
W(s) 1 3
C 1(s) ( -0.83). - - . ( -12). (0.3) - - -
U(s) S + 4 S + 4
and
W(s) (- 12) . (0.3) 1.7s + 3.2
C 2 (s) + 0.3 . 3 + 0.8
Y(s) S + 4 S +4
11 .7 CONNECTION OF STATE FEEDBACK ANO STATE ESTIMATORS 463
u y
r--------------- 1
1 1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
L-------------------~
(a)
w
1
1 1
L _______ l ______________ ~
Figure 11.9 (a) State estimator. (b) State feedback and state estimator.
These two transfer functions are plotted in Figure 11.10(a) and then rearranged
in Figure ll.lO(b ). lt is the plant input/ output feedback configuration studied in
Figure 10.15.
Now we shall redesign the problem using the method in Section 10.6, namely,
given G(s) in (11.72) and implementable G0 (s) in (11.73), find two compensators of
the form
L(s) M(s)
e 1(s) =-
A(s)
C2 (s) = -
A(s)
so that the resulting system in Figure 10.15(b) or Figure 11.10 has G0 (s) as its transfer
464 CHAPTER 11 STATE SPACE DESIGN
(a)
w
r - - - - - - - - - - - - - - - - --,
L - - - - - - - - - - - - - - - - - - - _ _j
(b)
G (S) =
0 10 =: NP(s)
N(s) (s 2 + 4s + 8) · 10 s2 + 4s + 8 DP(s)
[i ~ ~ U~l [1] 1
(11.74)
11.8 LYAPUNOV STABILITY THEOREM 465
The first equation of (11.74) yields IOM0 = 32 or M0 = 3.2. The fourth equation
of (11.74) yields L 1 O. The second and third equations of (11.74) are L 0 +
IOM 1 = 20 and L 0 + L 1 3, which yield L 0 = 3 and M 1 = 1.7. Thus the
compensators are
L(s) 3 M(s) 1.7s +
3.2
C 1(s)
A(s) S+ 4 A(s) S+ 4
They are the same as those computed using state-variable equations.
Now we compare the state-variable approach and the transfer function approach
in designing this problem. The state-variable approach requires the concepts of con-
trollability and observability. The design requires computing similarity transforma-
tions and solving Lyapunov matrix equations. In the transfer function approach, we
require the concept of coprimeness (that is, two polynomials have no common fac-
tors). The design is completed by solving a set of linear algebraic equations. There-
fore, the transfer function approach is simpler in concept and computation than the
state-variable approach. The transfer function approach can be used to design any
implementable transfer function. The design of any implementable transfer function
by using state-variable equations would be more complicated and has not yet ap-
peared in any control text.
A square matrix A is said to be stable if all its eigenvalues have negative real parts.
One way to check this is to compute its characteristic polynomial
á(s) = det (si - A)
We then apply the Routh test to check whether or not á(s) is a Hurwitz polynomial.
If it is, A is stable; otherwise, it is not stable.
In addition to the preceding method, there is another popular method of checking
the stability of A. It is stated as a theorem.
COROLLARY 11.2
All eigenvalues of A have negative real parts if, for any symmetric positive
semidefinite matrix N = n'n with the property that (A, n) is observable, the
Lyapunov equation
A'M + MA = -N (11.75)
lts solution is x(t) = eA x(O). If the eigenvalues of A are c 1, c2 , and c3 , then every
1
component of x(t) is a linear combination of ec 11 , ec21 , and ec31• These time functions
will approach zero as t~oo if and only if every e; has a negative real part. Thus we
conclude that the response of (11.76) dueto any nonzero initial state will approach
zero if and only if A is stable.
We define
V(x) := x'Mx (11.77)
If M is symmetric positive definite, V(x) is positive for any nonzero x and is zero
only at x = O. Thus the plot of V(x) will be bowl-shaped, as is shown in Figure
11.11. Such a V(x) is called a Lyapunov function.
Now we consider the time history of V(x(t)) along the trajectory of (11.76).
Using x' = x'A', we compute,
d d
- V(x(t)) - (x'Mx) = x'Mx + x'Mx
dt dt
x' A'Mx + x'MAx = x' (A'M + MA)x
which becomes, after the substitution of (11.75),
d
- V(x(t)) = - x'Nx (11.78)
dt
If N is positive definite, then dV(x)/ dt is strictly negative for all nonzero x. Thus,
for any initial state x(O), the Lyapunov function V(x(t)) decreases monotonically
until it reaches the originas shown in Figure 11.11(a). Thus x(t) approaches zero as
t~oo and A is stable. This establishes the Lyapunov theorem.
If N is symmetric positive semidefinite, then dV(x)/ dt ::o:; O, and V(x(t)) may not
decrease monotonically to zero. lt may stay constant along sorne part of a trajectory
such as AB shown in Figure 11.11 (b ). The condition that (A, n) is observable,
however, will prevent dV(x(t))/dt = O for all t. Therefore, even if dV(x(t))/dt = O
V(x) V(x)
(a) (b)
for sorne t, V(x(t)) will eventually continue to decrease until it reaches zero as t~oo.
This establishes the corollary. For a more detailed discussion of these results, see
Reference [15].
k3
a¡
s2 a¡ a3 [o a2 - a3 =:e
a¡
J
a¡
k2 = e
S e
e
k¡= a3
a3
Then the Routh test states that the polynomial is Hurwitz if and only if
a 1 >O (11.80)
Now we use Corollary 11.2 to establish the conditions in (11.80). Consider the
block diagram in Figure 11.12 with k; defined in the Routh tab1e. The block diagram
has three loops with loop gains -1/k 3s, - 1/k1 k 2s 2, and - 1/k2k 3s 2, where we
have assumed imp1icitly that all k; are different from O. The loop with loop gain
-1/k 3s and the one with - 1/k1k 2s 2 do not touch each other. Therefore, the char-
acteristic function in Mason's formula is
1 1 1 ) (-1)( -1)
á = 1 - ( - k 3s - k 1k 2s 2 - k 2k 3s 2 + k 3s k 1k 2s 2
Let us consider x 3 as the output, that is, y = x 3 • Then the forward path gain from u
to y is
and, because the path does not touch the loop with loop gain -1/k1k2s 2 , the cor-
responding Ll 1 is,
G(s)
G(s)
s3 + _!_ s2 + k¡ + k3 s + _1_
k3 k¡k2k3 k¡k2k3
From k¡ in the Routh table, we can readily show 1/k3 = a 1 , 1/k 1k 2k 3 = a 3, and
(k 1 + k 3)/k1 k 2k 3 = a 2. Thus, the transfer function from u to y of the block diagram
is
2
a 1s + a 3 _ _
G(s) = - --.!....__-----=:_ __ •• N(s)
(11.81)
s 3 + a 1s 2 + a 2 s + a 3 D(s)
Clearly, N(s) and D(s) have no common factor. Thus G(s) has three poles. Now we
develop a state-variable equation to describe the system in Figure 11.12. We assign
the state variables as shown. Then we have
k 1i= x2
1 k2i2 = -x¡ + x3
k3i3 = -xz - x 3 +u y = x3
These can be expressed in matrix form as
1
o k¡
o
x=
-1
k2
o
-1
o
k3
1
k2
-1
k3
x+ [JJ (11.82a)
y = [O o 1]x (11.82b)
Both (11.81) and (11.82) describe the same block diagram, thus (11.81) is the transfer
function of the state-variable equation in ( 11.82). Because the dimension of ( 11.82)
equa1s the number of po1es of G(s) in (11.81), (11.82) is a minimal realization
11 .8 l YAPUNOV STABILITY THEOREM 469
o o V2
[::,] o
V2
-\12 -\12
k3
V2
k3
d
k2k3 k2k3
with d = - V2(k 3 - k 2 )/k 2 k~, is nonsingular, the pair (A, n) is observable. There-
fore Corollary 11.2 can be used to establish a stability condition for A. It is straight-
forward to verify the following
-1
o o
k2 o
[~ ~]
1 -1
o k2
k¡ k3
-1 o
o
k2 k3
(11.83)
1
o o
[k, o k¡ o
n [~ ~]
-1
+ o k2 o o
k2 k2
o o -1 -1 o
o
k3 k3
Therefore the symmetric matrix
[k, o
M= ~ k2
o ~J
is a solution of the Lyapunov equation in (11.83). Consequently, the condition for
A in (11.82) to be stable is M positive definite or, equivalently,
470 CHAPTER 11 STATE SPACE DESIGN
kl >o
which implies
e a 1
kl =->o _l >o ->0
a3 e al
a 1 >O
which is the same as (11.80). This is one way to establish the Routh stability test.
For a more general discussion, see Reference [15].
This chapter introduced state space designs. We first introduced the concepts of
controllability and observability. We showed by examples that a state-variable equa-
tion is minimal if and only if it is controllable and observable. We then introduced
equivalent state-variable equations; they are obtained by using a nonsingular matrix
as an equivalence transformation. Any equivalent transformation will not change the
eigenvalues of the original equation or its transfer function. Neither are the properties
of controllability and observability affected by any equivalence transformation.
If a state-variable equation is controllable, then it can be transformed, using an
equivalence transformation, into the controllable-form equation. Using this form, we
developed a procedure to achieve arbitrary pole placement by using constant-gain
state feedback. Although state feedback will shift the eigenvalues of the original
system, it does not affect the numerator of the system's transfer function.
If (A, e) is observable, then (A', e'), where the prime denotes the transpose, is
controllable. Using this property, o that if (A, e) is observable, a state
estimator with any eigenvalue can be designed. We also discussed a method of
designing estimators by solv· g Lyapunov equations. The connection of state feed-
back gains to estimated st tes, rather than to the original state, was justified by
establishing the separating property. We then compared the state space design with
the linear algebraic metho developed in Chapter 10. lt was shown that the transfer
function approach is simpler, in both concept and computation, than the state-
variable approach. Finally, we introduced the Lyapunov stability theorem.
To conclude this chapter, we discuss constant gain output feedback. In constant-
gain state feedback, we can assign all n poles arbitrarily. In constant-gain output
feedback of single-variable systems, we can arbitrarily assign only one pole; the
remaining poles cannot be assigned. For example, consider the constant-gain output
feedback system shown in Figure 11.13(a). We can assign one pole in any place.
For example, if we assign it at - 3, then from the root loci shown in Figure 11.13(b),
we can see that the other two poles will move into the unstable region. Therefore,
the design is useless. For constant-gain output feedback, it is better to use the root-
PROBLEMS 471
Ims
(a) (b)
Figure 11.13 (a) Constant gain output feedback. (b) Root loci.
locus method in Chapter 7 to carry out the design. The design using a compensator
of degree 1 or higher in the feedback path is called dynamic output feedback. The
design of dynamic output feedback is essentially the same as the design of state
estimators and the design in Chapter 10. Therefore, it will not be discussed.
PROBLEMS
11. l. Check the controllability and observability of the following state-variable
equations:
a. i = - x + u y = 2x
b. X = [ ~ ~J X + [ ~J U
y = [2 -2]x
C. X = [ =~ ~J + [ ~J X U
y = [1 O O]x
y [2 O]x
472 CHAPTER 11 STATE SPACE DESIGN
is controllable if and only if ,.\ 1 #- ,.\ 2 • Show that the equation is always not
observable.
11.3. Show that the equation
x
y
[ ~~
[2
;J
O]x
x + [~:] u
is controllable if and only if b2 #- O. lt is independent of b 1 • Show that the
equation is always observable.
Find equivalent state-variable equations for the equations in Problem 11.1 (b)
and (e).
11.5. Check the controllability and observability of the equations in Problem 11.4.
Also compute their transfer functions. Does the equivalence transformation
change these properties and transfer functions?
use the procedure in Examp1e 11.4.1 to find the feedback gain such that the
resu1ting system has po1es at - 2, - 3, and - 4.
11.7. Redesign Prob1em 11.6 using the state-variab1e method. Is the feedback gain
the same?
x= [A¿¡ ~::] x + [: Ju 1
Show that the equation is not controllab1e. Show a1so that the eigenva1ues of
A22 will not be affected by any state feedback. If all eigenva1ues of A22 have
negative real parts and if (A 11 , b 1) is controllab1e, then the equation is said
to be stabilizable.
11.9. Consider
x [; ;] x+ [~]u
y [2 -1]x
PROBLEMS 473
Find the feedback gain k in u r - kx such that the resulting system has
eigenvalues at - 2 and - 3.
11.10. Consider
x [ =~ ~] x + [ -~J u
y = [1 O]x
Find the feedback gain k in u = r - kx such that the resulting system has
eigenvalues at - 2 ± 2}.
11.11. Design a full-dimensional state estimator with eigenvalues -3 and -4 for
the state-variable equation in Problem 11.9.
11.12. Design a full-dimensional state estimator with eigenvalues -3 and -4 for
the state-variab1e equation in Problem 11.1 O.
11. 13. Design a reduced-dimensional state estimator with eigenvalue - 3 for the
state-variable equation in Problem 11.9.
11.14. Design a reduced-dimensional state estimator with eigenvalue - 3 for the
state-variable equation in Problem 11.1 O.
11.15. Consider a co~rollable n-dimensional (A, b). Let F be an arbitrary n X n
matrix and let k be an arbitrary n X 1 vector. Show that if the solution T of
AT - TF = bk is nonsingular, then (A - bkT- 1) has the same eigenvalues
as F.
11.16. Connect the state feedback in Problem 11.9 to the estimator designed in Prob-
lem 11.11. Compute the compensators from u to w and from y to w in Figure
11.8. Also compute the overall transfer function from r to y. Does the overall
transfer function completely characterize the overall system? What are the
missing poles?
11.17. Repeat Problem 11.16 by using the estimator in Problem 11.13. Does the
overall transfer function equal the one in Problem 11.16?
11. 18. Connect the state feedback in Problem 11.1 O to the estimator designed in
Problem 11.12. Compute the compensators from u to w and from y to w in
Figure 11.8. Also compute the overall transfer function from r to y. Does the
overall transfer function completely characterize the overall system? What are
the missing poles?
11.19. Repeat Problem 11.18 by using the estimator in Problem 11.14. Does the
overall transfer function equal the one in Problem 11.18?
11.20. Redesign Problem 11.17 using the linear algebraic method in Section 10.6.
Which method is simpler?
11.21. Redesign Problem 11.19 using the linear algebraic method in Section 10.6.
Which method is simpler?
474 CHAPTER 11 STATE SPACE DESIGN
11.22. Check whether the following matrices are positive definite or semidefinite.
[~
-2
~ - ~]
o 1
[~
-2
~ - ~]
o 1
11.23. Compute the eigenvalues of the matrices in Problem 11.22. Are they all real?
From the eigenvalues, check whether the matrices are positive definite or
semidefinite.
11.24. Consider the system in Problem 11.9. Find the state feedback gain to minimize
the quadratic performance index in (11.45) with R = l.
11.25. Consider
Discrete-Time
System Analysis
12.1 INTRODUCTION
r+~
-~1
(a)
(b)
Figure 12_ 1 (a) Analog control system. (b) Digital control system.
'
y(t) y(t)
(a) (b)
y(kT) y(kT)
o 2 3 4
(a) (b)
signal a nonanalog signal. A continuous-time signal usually has the same waveform
as the physical variable, thus it is also called an analog signal.
Systems that receive and generate analog signals are called analog systems.
Systems that receive and generate digital signals are called digital systems. However,
an analog system can be modeled as a digital system for convenience of analysis
and design. For example, the system described by (2.90) is an analog system. How-
ever, ifthe input is stepwise, as shown in Figure 2.23 (which is still an analog signal),
and if we consider the output only at sampling instants, then the system can be
modeled as a digital system and described by the discrete-time equation in (2.92).
This type of modeling is used widely in digital control systems. A system that has
an analog input and generates a digital output, such as the transducer in Problem
3.11, can be modeled as either an analog system or a digital system.
We compare analog and digital techniques in the following:
l. Digital signals are coded in sequences of O and 1, which in terms are represented
by ranges of ':oltages (for example, O from Oto 1 volt and 1 from 2 to 4 volts).
This representation is less susceptible to noise and drift of power supply.
2. The accuracy of analog systems is often limited. For example, if an analog
system is to be built using a resistor with resistance 980.5 ohms and a capacitor
with capacitance 81.33 microfarads, it would be difficult and expensive to obtain
components with exactly these values. The accuracy of analog transducers is
also limited. lt is difficult to read an exact value if it is less than 0.1% of the
full scale. In digital systems, there is no such problem, however. The accuracy
of a digital device can be increased simply by increasing the number of bits.
Thus, digital systems are generally more accurate and more reliable than analog
systems.
3. Digital systems are more flexible than analog systems. Once an analog system
is built, there is no way to alter it without replacing sorne components or the
entire system. Except for special digital hardware, digital systems can often be
changed by programming. If a digital computer is used, it can be used not only
as a compensator but also to collect data, to carry out complex computation,
and to monitor the status of the control system. Thus, a digital system is much
more flexible and versatile than an analog system.
4. Because of the advance of very large scale integration (VLSI) technology, the
price of digital systems has been constantly decreasing during the last decade.
Now the use of a digital computer or microprocessor is cost effective even for
small control systems.
For these reasons, it is often desirable to design digital compensators in control
systems.
Although compensators are becoming digital, most plants are still analog. In order
to connect digital compensators and analog plants, analog signals must be converted
into digital signals and vice versa. These conversions can be achieved by using
12.3 A!D AND D/A CONVERSIONS 479
eout
R=5KQ
OL___ _ _ _ __
(a) (b)
vo
-(x !!_ +
1
2R
X
2
!!_ +
4R
X
3
!!_ +
8R
X
4
_!i_)
16R
E
-(x 12- 1 + x 2 2- 2 + x 3 2- 3 + x 4 2- 4 )E
where E is the supplied voltage, and X; is either 1 or O, closed or open. The bit x0 is
called the sign bit. If x 0 = O; then E > O; if x 0 = 1, then E < O. If x 0 x 1 x 2 x 3 x 4 =
11011, and if E = 10 volts, then
vo = -(1. T 1 + 1. 2- 3 + 1. 2- 4 ) . (-10) = 0.6875
The circuit will hold this value until the next set of X; is received. Thus the circuit
changes a five-bit digital signal x 0 x 1 x 2 x 3 x 4 into an analog signal of magnitude
0.6875, as shown in Figure 12.4(b). Thus the circuit can convert a digital signal into
an analog signal, and is called a D/A con verter. The D /A converter in Figure 12.4
is used only to illustrate the basic idea of conversion; practica! D /A converters
usually use different circuit arrangements so that resistors have resistances closer to
each other and are easier to fabricate. The output of a D /A con verter is discontinuous,
as is shown in Figure 12.4. lt can be smoothed by passing through a low-pass filter.
This may not be necessary if the converter is connected to a plant, because most
plants are low-pass in nature and can act as low-pass filters.
The analog-to-digital conversion can be achieved by using the circuit shown in
Figure 12.5(a). The circuit consists of a D/A converter, a counter, a comparator, and
controllogic. In the conversion, the counter starts to drive the D/A converter. The
output of the converter is compared with the analog signal to be converted. The
counter is stopped when the output of the D/A converter exceeds the value of the
analog signal, as shown in Figure 12.5(b). The value ofthe counter is then transferred
to the output register and is the digital representation of the analog signal.
480 CHAPTER 12 DISCRETE-TIME SYSTEM ANALYSIS
Comparator
Analog signa! u
DIA Converter
We see from Figure 12.5(b) that the A/D conversion cannot be achieved in-
stantaneously; it takes a small amount of time to complete the conversion (for ex-
ample, 2 microseconds for a 12-bit A/D converter). Because of this conversion time,
if an analog signal changes rapidly, then the value converted may be different from
the value intended for conversion. This problem can be resolved by connecting a
sample-and-hold circuit in front of an A/D converter. Such a circuit is shown in
Figure 12.6. The field-effect transistor (FET) is used as a switch; its on and off states
are controlled by a controllogic. The voltage followers [see Figure 3.14(a)] in front
and in back of the switch are used to eliminate the loading problem or to shield the
capacitor from other parts of circuits. When the switch is closed, the input voltage
will rapidly charge the capacitor to the input voltage. When the switch is off, the
capacitor voltage remains almost constan t. Hence the output of the circuit is stepwise
as shown. Using this device, the problem dueto the conversion time can be elimi-
nated. Therefore, a sample-and-hold circuit is often used together with an A/D
con verter.
With A/D and D/A converters, the analog control system in Figure l2.l(a) can
be implemented as shown in Figure 12.l(b). We call the system in Figure 12.l(b) a
digital control system. In the remainder of this chapter, we discuss digital system
analysis; design will be discussed in the next chapter.
[V
1 +
Voltage
follower
Control
logic
I
Figure 12.6 Sample-and-hold circuit.
12.4 THE z-TRANSFORM 481
u(k) y(k)
Consider the discrete-time system shown in Figure 12.7. If we apply an input se-
quence u( k) : = u(kT), k = O, 1, 2, ... , then the system will generate an output
sequence y(k) : = y(kT). This text studies only the class of discrete-time systems
whose inputs and outputs can be described by linear difference equations with con-
stant real coefficients such as
3y(k + 2) + 2y(k + 1) - y(k) = 2u(k + 1) - 3u(k) (12.1)
or
3y(k) + 2y(k - 1) - y(k - 2) = 2u(k - 1) - 3u(k - 2) (12.2)
then its response due to the initial conditions y(- 2) = 1, y(- 1) = -2 and the
unit-step input sequence u(k) = 1, for k = O, 1, 2, ... , and u(k) = O for k < O,
can be computed recursively as
1
y(O) = - [ -2y( -1) + y( -2) + 2u( -1) - 3u( -2)]
3
= 31 [- 2 X ( - 2) + 1 + 2 X 0 - 3 X 0]
5
3
= ~ [ -2 X ~ - 2 + 2] = -
9
10
= 31 [ - 2 -10
X -9- + 35 + 2 - 3
]
= 27
26
and so forth. Thus, the solution of difference equations can be obtained by direct
substitution. The solution obtained by this process is generally not in closed form,
482 CHAPTER 12 DISCRETE-TIME SYSTEM ANALYSIS
and it is difficult to abstract from the solution general properties of the equation. Por
this and other reasons, we introduce the z-transform.
Consider a sequence f(k). The z-transform of f(k) is defined as
1 + r + r
2
+ r
3
+ ··· = 2:o rk 1 - r
(12.4)
where r is a real or complex constant with amplitude less than 1, or lrl < l.
Example 12.4. 1
This holds only if le-aTz- 1 1 < 1 or ie-ar¡ < lzl. This condition, called the region
of convergence, will be disregarded, however, and (12.5) is considered to be defined
for all z except at z = e-aT_ See Reference [18] for a justification.
If a = O, e-akT equals 1 for all positive k. This is called the unit-step sequence,
as is shown .in Figure 12.8(a), and will be denoted by q(k). Thus we have
q(k) 8(k-3)
-.--+--+--L--L--L--L------.k
-2 -1 o 1 2 3 4 5 -1 o 1 2 3 4 5
(a) (b)
1 z
Z[q(k)] = -1---z-_-1 - -z---1
1 z
Z[bk] = ---
1 - bz- 1 z - b
z sin wT
z2 - 2(cos wT)z +
An impulse sequence or a Kronecker sequence is defined as
1 if k = o
. 8(k) = {
o if k *o (12.6a)
8(k - n) = {~ if k
if k*
= n
n
(12.6b)
All sequences, except the impulse sequence, studied in this text will be obtained
from sampling of analog signals. For example, the sequence f(kT) = e-akT in
~xample 12.4.1 is the sampled sequence of f(t) = e -at with sampling period T. Let
F(s) be the Laplace transform of f(t) and F(z) be the z-transform of f(kT). Note that
we must use different notations to denote the Laplace transform and z-transform, or
confÚsion will arise. lf f(kT) is the sample of f(t), then we have
F(z) = Z[f(kT)] = Z[f(t)lt=kT] = Z[[.'i:- 1F(s)Jit=kT] (12.7a)
Example 12.4.2
Consider f(t) e-at. Then we have
F(s) = .:i[f(t)]
s + a
and
z
F(z) Z[f(kT)]
Thus we have
z
Exercise 12.4. 1
From the preceding example and exercise, we see that a polea in F(s) is mapped
into the pole eaT in F(z) by the analog-to-digital transformation in (12.7). This prop-
erty wil~be further established in the nex~subsection. We list in Table 12.1 sorne
pairs of F(s) and F(z). In the table, we use 8(t) to denote the impulse defined for the
continuous-time case in Appendix A and 8(k) _!_o denote the impulse sequence defined
for the discret~time case in (12.6). Because 8(t) is not defined at t = O, 8(k) is not
the sample of 8(t). The sixth and eighth pairs of the table are obtained by using
Z[kf(k)] = - z dF(z)
dz
Z[kbk] = -z!!:__
dz
(-z-) =
z - b
-z. (z-
(z -
b)- z
b?
bz
B(t)
e-Ts 8(t - T)
8(kT)
8((k - n)T) z-n
z
S z - 1
Tz
kT
sz (z- V
e-at e-akT z
s + a Z - e-aT
Tze-aT
te-a' kTe-akT
(s + af (z _ e-aT)2
w z sin wT
sin wt sin wkT
sz + wz z2 - 2z(cos wT) +
S z(z - cos wT)
cos wt cos wkT
sz + wz z2 - 2z(cos wT) +
w ze-aT sin wT
e-ar sin wt e- akT sin wkT
(s + a) 2 + wz z2 - 2ze-aT(cos wT) + e-zar
s + a z2 - ze-aT(cos wT)
e-ateos wt e- akT cos wkT
(s + +
a) 2 w2 z2 - 2ze-aT(cos wT) + e-zar
so that the Laplace transform can be applied. Consider f(kT), for integer k ~ O and
positive sampling period T > O. We define
F*(s)jz=eTs = L f(kT)z-k
k=O
Its right-hand side is the z-transform of f(kT). Thus the z-transform of f(kT) is the
Laplace transform of f*(t) with the substitution of z = eTs or
Z[J(kT)] = H;[J*(t)]jz=eTs (12.10)
for all w. This implies that the imaginary axis of the s-plane is mapped into the unit
circle on the z-plane. If s = a + jw, then
jzj = je<a+jw)TI = leaTIIejwTI = eaT
Imz
3n
T Imz
-1
Thus, a verticalline in the s-plane is mapped into the circle in the z-plane with radius
eaT_ If a < O, the vertical line is in the left half s-plane and the radius of the circle
is smaller than 1; if a > O, the radius is larger than l. Thus the entire open left half
s-plane is mapped into the interior of the unit circle on the z-plane. To be more
specific, the strip between - 7T/T and 7T/T shown in Figure 12.9 is mapped into the
unit circle. The upper and lower strips shown will also be mapped into the unit circle.
We call the strip between - 7T/T and 7T/T the primary strip.
-13
-3-
+ az
2 -1
-13
- 26 -1
3 - - gZ + 1 3z-2
9
then we have
2z- 3 2 32 3 + ...
13 -2 + -z-
F( ) --:::------ = O + -z- 1 -z
z = 3z2 + 2z - 3 9 27
Thus, the inverse z-transform of F(z) is
~· f(2)
13 32
f(O) = O, f(l) = -9, f(3) 27' ... (12.13)
Therefore, the inverse z-transform of F(z) can be easily obtained by direct division.
The inverse z-transform can also be obtained by partial fraction expansion and
table look-up. We use the z-transform pairs in Table 12.1. Although the procedure
is similar to the Laplace transform case, we must make one modification. Instead of
expanding F(z), we expand F(z)/z. For example, for F(z) in (12.12), we expand
F(z) 2z- 3 2z - 3
z z(3z + 2z
2
1) z(3z - l)(z + 1) (12.14)
k¡ k2 k3
-+ 3z - +--
z z + 1
488 CHAPTER 12 DISCRETE-TIME SYSTEM ANALYSIS
with
k¡
2z-3 1
(3z - 1)(z + 1) z=O = =
3
1 =
3
2
3
2z- 31
= _3__ = -21
z(z + 1) z=t 1 4 4
3 3
and
k3 = 2z - 3 1
-5 -5
z(3z - 1) z= _
1
(-1)(-4) 4
f(k) = 38(k) - -7
4
(1)k-
3
- -5 ( -l)k
4
for k = O, 1, 2, 3, .... For example, we have
7 5
f(O) = 3 - - - - = O
4 4
+
f(1) o - -7 o
1
- -
5
- o (- 1) = -
-7
+-
5
=
-7 15 2
4 3 4 12 4 12 3
f(2) =
7 (1) 5 -13
o- 4 9 - 4= o -9-
f(k) f(k-1)
-3 -1 o1 2 3 4 5 -2-1 o1 2 3 4 5
(a) (b)
f(k+ 1)
-3-101234
(e)
Figure 12.10 (a) Sequence. (b) Time delay. (e) Time advance.
It is defined only for f(k) with k::::: O, and f(- 1), f(- 2), ... do not appear in F(z).
Consider f(k - 1). It is f(k) shifted to the right or delayed by one sampling period
as shown in Figure 12.10(b). Its z-transform is
Z[f(k - 1)] = z- 1 k~ 1
J(k)z-k = z- 1 [t( -1)z + k~O f(k)z-k]
(12.16a)
= z- 1[f(- 1)z + F(z)]
This has a simple physical interpretation. F(z) consists of f(k) with k ::::: O. If f(k) is
delayed by one sampling period, f( -1) will rhove into k = O and must be included
in the z-transform of f(k - 1). Thus we add f( -1)z to F(z) and then delay it
(multiplying by z- 1) to yield Z[f(k - 1)]. Using the same argument, we have
Z[f(k - 2)] z- 2 [J(-2)z 2 + f(-l)z + F(z)] (12.16b)
Z[f(k - 3)] 3
z- [J(-3)z 3
+ f(-2)z + f(-1)z + F(z)]
2
(12.16c)
and so forth.
Now we consider f(k + 1). It is the shifting of f(k) to the left (or advancing
by one sampling period) as shown in Figure 12.10(c). Because f(O) is moved to
k = - 1, it will not be included in the z-transform off(k + 1), so it must be excluded
from F(z). Thus, we subtract f(O) from F(z) and then advance it (multiplying by z),
490 CHAPTER 12 DISCRETE-TIME SYSTEM ANALYSIS
Similarly, we have
Z[f(k + 2)] = z2[F(z) - f(O) - f(l)z- 1] (12.17b)
and so forth.
As was discussed ear1ier, its solution can be obtained by direct substitution. The
solution, however, will not be in closed form, and it will be difficu1t to develop from
the solution general properties of the equation. Now we apply the z-transform to
study the equation. The equation is of second order; therefore, the response y(k)
depends on the input u(k) and two initial conditions. To simplify discussion, we
assume that u(k) = O for k :s: O and that the two initial conditions are y( -1) and
y(- 2). The application of the z-transform to (12.18) yields, using (12.16),
Now if y(- 2) = 1, y( -1) = -2, and if u(k) is a unit-step sequence, then U(z) =
z/(z - 1) and
5 - 2z- 1 (2z- 1 - 3z- 2) z
Y(z)= + 1 2 .--
3+2z-1-z-2 3+2z- -z- z-1
5z2 - z z(2z - 3) z(5z 2 - 4z - 2)
~-------- + ------~--~~---
3z2 + 2z - 1 (3z - 1)(z + 1)(z - 1) (3z - l)(z + l)(z - 1)
12.5 SOLVING LTIL DIFFERENCE EQUATIONS 491
y(k) = -19(1)k
- + -9 ( -l)k - -1 (l)k (12.21)
24 3 8 4
for k = O, 1, 2, .... We see that using the z-transform, we can obtain closed-form
solutions of LTIL difference equations.
and
N(p) := bmpm + bm-!Pm-1 + ... + b¡p + bo (12.23b)
This is the homogeneous equation. Its solution is excited exclusively by initial con-
ditions. The application of the z-transform to (12.26) yields, as in (12.20),
Y(z) = /(z)
D(z)
492 CHAPTER 12 DISCRETE-TIME SYSTEM ANALYSIS
The transfer function describes only the zero-state responses of L TIL systems.
The transfer function can easily be obtained from difference equations. For
example, if a system is described by the difference equation
D(p)y(k) = N(p)u(k)
where D(p) and N(p) are defined as in (12.23), then its transfer function is
Poles and zeros of G(z) are defined exactly as in the continuous-time case. For
example, given
N(z) 2(z + 3)(z - 1)(z + 1) 2(z + 3)
112 30
G(z) = D(z) = (z - l)(z + 2)(z + I? = (z + 2)(z + 1) 2 . )
lts poles are -2, -1 and -1; its zero is -3. Thus, if N(z) and D(z) in G(z) =
N(z)/D(z) have no common factors, then the roots of N(z) are the zeros and the roots
of D(z) are the poles of G(z).
12.5 SOLVING LTil DIFFERENCE EQUATIONS 493
with D(p) and N(p) defined in (12.23). The zero-input response of the system is
govemed by the modes, the roots of the characteristic polynomial D(z). If N(p) and
D(p) have no common factors, then the set of the poles of the transfer function in
(12.29) equals the set of the modes. In this case, the system is said to be completely
characterized by its transfer function and there is no loss of essential information in
using the transfer function to study the system. On the other hand, if D(z) and N(z)
have common factors-say, R(s)-then
N(z) N(z)R(z) N(z)
G(z) = - = = =--
D(z) D(z)R(z) D(z)
In this case, the poles of G(z) consist of only the roots of D(z). The roots of R(z) are
not poles of G(z), even though they are modes of the system. Therefore, if D(z) and
N(z) have common factors, not every mode will be a pole of G(z) (G(z) is said to
have missing poles), and the system is not completely characterized by the transfer
function. In this case, we cannot disregard the zero-input response, and care must
be exercised in using the transfer function.
To conclude this section, we plot in Figure 12.11 the time responses of sorne
poles. If a simple or repeated pole lies inside the unit circle of the z-plane, its time
response will approach zero as k _..,. oo. If a simple or repeated pole líes outside the
unit circle, its time response will approach infinity. The time response of a simple
pole at z = 1 is a constant; the time response of a simple pole on the unit circle
other than z = 1 is a sustained oscillation. The time response of a repeated pole on
the unit circle will approach infinity as k_..,. oo. In conclusion, the time response of
a simple or repeated pole approaches zero if and only if the pole lies inside the unit
circle. The time response of a pole approaches a nonzero constant if and only if the
pole is simple and is located at z = l.
Imz Imz
(a) (b)
with deg N(z) = m and deg D(z) = n. The transfer function is improper if m > n
and proper if n ;:::: m. A system with an improper transfer function is called a non-
causal or an anticipatory system because the output of the system may appear before
the application of an input. For example, if G(z) = z2 /(z - 0.5), then its unit-step
response is
z2 z
Y(z) = G(z)U(z) = · -- = z + 1.5 + 1.75z- 1 + 1.875z- 2 + · · ·
z- 0.5 z- 1
and is plotted in Figure 12.12(a). We see that the output appears at k = -1, before
the application of the input at k = O. Thus the system can predict what will be
applied in the future. No physical system has such capability. Therefore no physical
discrete-time system can have an improper digital transfer function.
The output of a noncausal system depends on future input. For example, if
G(z) = (z 3 + 1)/(z - 0.1), then y(k) depends on past input u(m) with m :S k and
future input u(k + 1) and u(k + 2). Therefore, a noncausal system cannot operate
on real time. If we store the input on a tape and start to compute y(k) after receiving
u(k + 2), then the transfer function can be used. However, in this case, we are not
using G(z), but rather G(z)/z2 = (z 3 + 1)/z2 (z - 0.1), which is no longer improper.
If we introduce enough delay to make an improper transfer function proper, then it
can be used to process signals. Therefore, strictly speaking, transfer functions used
in practice are all proper transfer functions.
If a system has a proper transfer function, then no output can appear befare the
application of an input and the output y(k) depends on the input u(mT), with
u(k) u(k)
- - -. -.-~.
.j____l___j____L_I
I I -'---I- · k ---.-..•-+--'-IJ !]- l....LI..II~.
.L.....L.......! k
y(k) u(k)
-101234 -1 o1 2 3 4 5 6
(a) (b)
Figure 12.12 (a) Response of noncausal system. (b) Response of causal system with r = 5.
l
12.6 DISCRETE-TIME STATE EQUATIONS 495
m :::::: k. Such systems are called causal systems. We study in the remainder of this
text only causal systems with proper digital transfer functions. Recall that in the
continuous-time case, we also study only systems with proper transfer functions.
However, the reasons are different. First, an improper analog transfer function cannot
be easily built in practice. Second, it will amplify high-frequency noise, which often
exists in analog systems. In the discrete-time case, we study proper digital transfer
functions because of causality_
Consider a proper (biproper or strictly proper) transfer function G(z) =
N(z)/D(z). Let r = deg D(z) - deg N(z)_ lt is the difference between the degrees
of the denominator and numerator_ We call r the pole-zero excess of G(z) because
it equals the difference between the number of poles and the number of zeros. Let
y(k) be the step response of G(z). If r = O or G(z) is biproper, then y(O) ~ O. If
r = 1, then y( O) = O and y( 1) ~ O. In general, the step response of a digital transfer
function with pole-zero excess r has the property
y(O) = O y(l) = O · · · y(r - 1) = O y(r) ~O
This is a set of algebraic equations. Therefore the solution of the equation due to
x(O) and u(k), k 2:: O, can be obtained by direct substitution as
and, in general,
k-1
x(k) = Akx(O) + L Ak-l-mbu(m) (12.33)
m=O
'--v---'
Zero-Input Zero-State
Response Response
496 CHAPTER 12 DISCRETE-TIME SYSTEM ANALYSIS
The equation is controllable if we can transfer any state to any other state in a finite
number of sampling instants by applying an input. The equation is observable if we
can determine the initial state from the knowledge of the input and output over a
finite number of sampling instants. The discrete-time equation is controllable if and
only if the controllability matrix
(12.38)
has rank n. The equation is observable if and only if the observability matrix
V (12.39)
12.7 BASIC BLOCK DIAGRAMS AND REALIZATIONS 497
has rank n. These conditions are identical to the continuous-time case. We prove the
controllability part in the following. We write (12.33) explicitly for k = n as
n~!
2: An~ 1 ~mbu(m)
m=O
u(n 1)
u(n 2)
[b Ab A2b···An~ 1 b] u(n 3)
u(O)
For any x(O) and x(n), a so1ution u(k), k = O, 1, ... , n - 1, exists in (12.40) if and
only if the matrix U has rank n (Theorem B.1). This completes the proof. If an
equation is controllab1e, then the transfer of a state to any other state can be achieved
in n sampling periods and the input sequence can be computed from (12.40). Thus,
the discrete-time case is considerab1y simpler than the continuous-time case. The
observability part can be similarly established. See Problem 12.13.
If a state-variable equation is controllable and observable, then the equation is
said to be a minimal equation. In this case, if we write
Every discrete-time state-variable equation can be easily built using the three ele-
ments shown in Figure 12.13. They are called multipliers, summers or adders, and
unit-delay elements. The gain a of a multiplier can be positive or negative, larger or
smaller than l. A» adder has two or more inputs and one and only one output. The
output is simply the sum of all inputs. If the output of the unit-delay element is x(k),
then the input is x(k + 1). A unit-delay element will be denoted by z- 1. These
elements are quite similar to those in Figure 5.3. A block diagram which consists of
only these three types of elements is called a basic block diagram.
We use an example to illustrate how to draw a basic block diagram for a discrete-
time state-variable equation. Consider
X¡(k
[ x (k
+
+
1)] [2
1) O
-0.3][x 1(k)]
-8 x 2 (k)
+ [-2]
O
uk
( )
(12.41a)
2
This means that y(k) will depend on the future input u(k + /) with l ;::: 1 and the
system is not causal. Thus, we study only proper rational G(z).
Y(z)
G(z) = - = G(oo) + b3z 3 + b2 z2 + b1z + b0 __ • d + _N(z)
(12.46)
U(z) z + a 3z 3 + a 2z2 + a 1z + a 0
4
D(z)
Then the following state-variable equation, similar to (5.17),
x(k + 1)
[-[, -a2
o
1
-a¡
o
o Tl m x(k)
+ u(k)
(12.47a)
o o
y(k) = [b3 b2 b¡ b0 ]x(k) + du(k) (12.47b)
with d = G(oo), is a realization of (5.12). The value of G(oo) yields the direct
transmission part. If G(z) is strictly proper, then d = O. Equation (12.47) is always
controllable and is therefore called the controllable-form realization. If N(z) and
D(z) in (12.45) have no common factors, then (12.46) is observable as well and the
equation is called a minimal equation. Otherwise, the equation is not observable.
The following equation, which is similar to (5.18),
x(k + 1) ~ ~ ~]
o o 1
x(k) + ¡;:]
b¡
u(k) (12.48a)
O O O b0
Exercise 12.7.1
The tandem and parallel realizations discussed in Section 5.5.2 can again be
applied directly to discrete transfer functions, and the discussion will not be repeated.
To conclude this section, we mention that the same command tf2ss in MATLAB
can be used to realize analog transfer functions and digital transfer functions. For
example, if
3z2 - z + 2 O+ 3z~ 1 - z~ 2+ 2z~ 3
(12.49)
G(z) = z3 + 2z2 + 1 1 + 2z ~ 1 + O · z~ 2 + z~ 3
then the following
num = [3 -1 2];den = [1 2 O 1];
[a,b,c,d] = tf2ss(num,den)
will generate its controllable-form realization. The command to compute the re-
sponse of G(s) due to a unit-step function is "step"; and G(s) is expressed in de-
scending powers of s. The command to compute the response of G(z) dueto a unit-
step sequence is "dstep". Furthermore, G(z) must be expressed in ascending powers
of z ~ 1 . Therefore,
num=[O 3 -1 2];den=[1 2 O 1];
y= dstep(num,den,20);
plot(y)
will generate 20 points of the unit-step response of (12.49).
12.8 STABILITY
ao a¡ a2 a3 a4
a4 a3 a2 a¡ ao
,'.- ~'.
.._bo.: b, b2 b3 o (1st a row) - k1(2nd a row)
b3
b3 b2 b¡ bo k2 = bo
pole of G(z) must lie inside the unit circle of the z-plane or have a magnitude less
than l. This condition can be deduced from the continuous-time case, where stability
requires every pole to lie inside the open left half s-plane. Because z = esT maps
the open left half s-plane into the interior of the unit circle in the z-plane, discrete
stability requires every pole to lie inside the unit circle of the z-plane.
In the continuous-time case, we can use the Routh test to check whether all
roots of a polynomiallie inside the open left half s-plane. In the discrete-time case,
we have a similar test, called the Jury test. Consider the polynomial
a 0 >O (12.50)
Although this test is stated for a polynomial of degree 4, it can be easily extended
to the general case. Wf? use an example to illustrate its application.
Example 12.8. 1
Consider a system with transfer function
(z - 2)(z + lO)
G(z) = 3 2 (12.51)
z - 0.1z - 0.12z - 0.4
To check its stability, we use the denominator to form the table
The three leading coefficients 0.84, 0.8096, and 0.771 are all positive; thus, all roots
of the denominator of G(z) lie inside the unit circle. Thus the system is stable.
This is called the final-value theorem. The theorem holds only if f(k) approaches a
constant. For example, if f(k) = 2k, then F(z) = z/(z - 2). For this z-transform
pair, we have f(oo) = oo, but
z
lim (z - 1) · - - = O · (- 1) = O
z--->1 Z - 2
Thus (12.52) does not hold. The condition for f(k) to approach a constant is that
(z - 1)F(z) is stable or, equivalently, all poles of (z - 1)F(z) lie inside the unit
12.9 STEADY-STATE RESPONSES OF STABLE SYSTEMS 503
circle. This implies that all poles of F(z), except for a possible simple pole at z =
1, must líe inside the unit circle. As discussed in Figure 12.11, if all poles of F(z)
lie inside the unit circle, then the time response will approach zero. In this case, F(z)
has no pole (z - 1) to cancel the factor (z - 1), and the right-hand side of (12.52)
is zero. If F(z) has one pole at z = 1 and remaining poles inside the unit circle such
as
F(z) = N(z)
(z - 1)(z - a)(z - b)
then it can be expanded, using partial fraction expansion, as
z z z
F(z) k1 - - + k2 -- + k3 - -
z-l z-a z-b
z - 1
with k1 lim - - F(z) = lim (z - 1) F(z). The inverse z-transform of F(z) is
Z----7-} Z Z----7-}
Considera discrete-time system with transfer function G(z). The response of G(z)
as k ....¿. oo is called the steady-state response of the system. If the system is stable,
then the steady-state response of a step sequence will be a step sequence, not nec-
essarily of the same magnitude. The steady-state response of a ramp sequence will
be a ramp sequence; the steady-state response of a sinusoidal sequence will be a
sinusoidal sequence with the same frequency. We establish these in the following.
Considera system with discrete transfer function G(z). Let the input be a step
sequence with magnitude a-that is, u(k) = a, for k = O, 1, 2, .... Then U(z)
az/(z - 1) and the output y(k) is given by
az
Y(z) = G(z)U(z) = G(z) · - -
z - 1
504 CHAPTER 12 DISCRETE-TIME SYSTEM ANALYSIS
To find the time response of Y(z), we expand, using partial fraction expansion,
Y(z) aG(z) aG(l)
- = - - = - - + (Terms due to poles of G(z))
z z-1 z-1
which implies
z
Y(z) aG(l) - - + (Terms dueto poles of G(z))
z - 1
If G(z) is stable, then every pole of G(z) líes inside the unit circle of the z-plane and
its time response approaches zero as k~ oo. Thus we have
y.(k) : = lim y(k) = aG(l)(1/ = aG(1) (12.54)
k-'>'"'
Thus, the steady-state response of a stable system with transfer function G(z) dueto
a unit-step sequence equals G(1). This is similar to (4.25) in the continuous-time
case. Equation (1t54) can also be obtained by applying the final-va1ue theorem. In
order for the fina1-value theorem to be applicable, the poles of
aG(z)z
(z - 1)Y(z) = (z - 1) - - = azG(z)
z - 1
must alllie inside the unit circle. This is the case because G(z) is stable by assump-
tion. Thus we have
y/k) : = lim y(k) = lim (z - 1)Y(z) lim azG(z) = aG(l)
k--->00 z--->1 z--->1
Example 12.9.1
G(z) = -=----.:.(z_-....,......:2)....:..(z_+_l0..::...)_ _
z3 - 0.1z 2 - 0.12z - 0.4
It is stable as shown in Examp1e 12.8.1. If u(k) = 1, then the steady-state output is
k = G = (1- 2)(1 + 10) _ -11 __
Ys( )
1
( ) 1 - 0.1 - 0.12 - 0.4 - 0.38 - 28 ·95
and
aTz
Y(z) = G(z)U(z) G(z) (z - 1)2
with
and (12.57)
In other words, if G(z) is stable, its steady-state response dueto a sinusoidal sequence
approaches a sinusoidal sequence with the same frequency; its amplitude is modified
by A(w0 ) and its phase by O(w0 ). The derivation of (12.56) is similar to (4.32) and
will not be repeated.
The plot of G(efwT) with respect to w is called the frequency response of the
discrete-time system. The plot of its amplitude A(w) is called the amplitude char-
acteristic and the plot of O(w), the phase characteristic. Because
efwT is periodic with period 2TT/T. Consequently, so are G(efwT), A(w), and O(w).
Therefore, we plot A(w) and O(w) only for w from - 1rjT to 1rjT. If all coefficients
of G(z) are real, as is always the case in practice, then A(w) is symmetric and O(w)
is antisymmetric with respect to w as shown in Figure 12.15. Therefore, we usually
plot A(w) and fJ(w) only for w from O to 1r/T or, equivalently, we plot G(z) only
along the upper circumference of the unit circle on the z-plane.
506 CHAPTER 12 DISCRETE-TIME SYSTEM ANAlYSIS
A(w) O(w)
-------------b---------+--~ú)
:rr o -
T T
(a) (b)
This is the transfer function of the discrete system whose impulse response equals
the sample of the impulse response of the analog system. Now we discuss the re-
lationship between the frequency response G(jw) of the analog system and the fre-
quency response G(eiwT) ofthe corresponding discrete system. It tums out that they
are related by
G(eiwT) = .!.
T k=
i G(1 (w -
-oo
k
2
T
1T)) = .!.
T k=
i-oo
G(j(w - kw,)) (12.59)
where ws = 27T/T is called the sampling frequency. See Reference [18, p. 371; 13,
p. 71.]. G(j(w - w.)) is the shifting ofG(jw) to ws and G(j(w + W 5 )) is the shifting
of (j_(jw) to - W 5 • Thus, except for the factor 1/T, G(~wT) is the sum of repetitions
of G(jw) at kws for all integers k. For example, if G(jw) is as shown in Figure
12.16(a), then the sum will be as shown in Figure 12.16(b). Note that the factor
1/T is not included in the sum, thus the vertical coordinate of Figure 12.16(b) is
TG(eiwT). The plot G(jw) in Figure 12.16(a) is zero for lwl;:::: 1r/T, and its repetitions
do not overlap with each other. In this case, sampling does not introduce aliasing
and we have
for lwl :s; 1r/T (12.60)
The plot G(jw) in Figure 12.16(c) is not zero for lwl ;:::: 1rjT, and its repetitions do
overlap with each other as shown in Figure 12.16(d). In this case, the sampling is
said to cause aliasing and (12.60) does not hold. However, if the sampling period T
is chosen to be sufficiently small, we have
for lwl :s; 1r/T (12.61)
12.1 O LYAPUNOV STABILITY THEOREM 507
G(jw) TG(ejwT)
------~--~~-+~--~-----·
(a)
-~ To ~
(J)
___.l,_{j---+------'---2;
(b)
- -...,.__..,____; ~~9~,.
4f----l----+o ro
-
G(jw)
----=~='---------+------==---..
~
(J)
-- -
T T
(e) (d)
Figure 12.16 Frequency responses of analog and digital systems.
If all eigenvalues of A lie inside the unit circle, then A is said to be stable. If we
compute its characteristic polynomial
Ll(z) = det (zl - A) (12.63)
then the stability of A can be determined by applying the Jury test. Another way of
checking the stability of A is applying the following theorem.
where we have substituted (12.64). The rest of the proof is similar to the continuous-
time case in Section 11.8 and will not be repeated. We call (12.64) the discrete
Lyapunov equation. The theorem can be used to establish the Jury test just as the
continuous-time Lyapunov theorem can be used to establish the Routh test. The
proof, however, is less transparent than in the continuous-time case. See Reference
[15, p. 421].
PROBLEMS
12. 1. Find the z-transforms of the following sequences, for k = O, 1, 2, ... ,
a. 38(k - 3) + (- 2t
b. sin 2k + e-o.zk
c. k(0.2t + (0.2t
12.2. Find the z-transforms of the sequences obtained from sampling the following
continuous-time signals with sampling period T = 0.1:
a. e-o.ztsin 3t + cos 3t
b. teO.it
12.3. Use the direct division method and partial fraction method to find the inverse
z-transforms of
z - 10
a.
(z + 1)(z - 0.1)
z
b.
(z + 0.2)(z - 0.3)
c. -:3:-----
z (z - 0.5)
12.4. Find the solution of the difference equation
y(k) + y(k - 1) - 2y(k - 2) = u(k 1) + 3u(k - 2)
dueto the initial conditions y(- 1) = 2, y(- 2) = 1, and the unit-step input
sequence.
12.5. Repeat Problem 12.4 for the difference equation
y(k + 2) + y(k + 1) - 2y(k) = u(k + 1) + 3u(k)
Is the result the same as the one in Problem 12.4?
PROBLEMS 509
due to zero initial conditions (that is, y(- 1) = O, y(- 2) = O) and the unit-
step input sequence. This is called the unit-step response.
12.7. Repeat Problem 12.6 for the difference equation
y(k + 2) + y(k + 1) - 2y(k) = u(k + 1) + 3u(k)
Will the response appear before the application of the input? A system with
improper transfer function is a noncausal system. The output y(k) of such a
system depends on u([) with l 2': k-that is, present output depends on future
input.
12.9. Consider
x 1(k
[ x (k
+ 1)] [~ ~] x(k) + [~] u(k)
2 + 1)
y(k) = [2 1]x(k)
Compute its transfer function.
12. 1O. Consider
[
X¡(k
x 2 (k
x 3(k
+
+
+
1)]
1)
1)
n m x(k) + u(k)
y(k) = [2 1]x(k)
Compute its transfer function.
12.11. Is the equation in Problem 12.9 controllable? observable?
l[
y(O)
l
y(1) - cbu(O)
e; ] x(O)
12.14. Draw basic block diagrams for the equations in Problems 12.9 and 12.10.
12.15. Find realizations for the following transfer functions
z2 + 2
a - -3 -
. 4z
b. 2z4 + 3z 3 + 4z 2 + z + 1
(z + 3) 2
c. (z + 1f(z + 2)
13. 1 INTRODUCTION
Plants of control systems are mostly analog systems. However, because digital com-
pensators ha ve many advantages over analog ones, we may be asked to design digital
compensators to control analog plants. In this chapter, we study the design of such
compensators. There are two approaches to carrying out the design. The first ap-
proach uses the design methods discussed in the preceding chapters to design an
analog compensator and then transform it into a digital one. The second approach
first transforms analog plants into digital plants and then carries out design using
digital techniques. The first approach performs discretization after design; the second
approach performs discretization before design. We discuss the two approaches in
order.
In this chapter, we encounter both analog and digital systems. To differentiate
them, we use variables with an overbar to denote analog systems or signals and
~riables without an overbar to denote digital systems or signals. For example,
G(s) is an analog transfer function and G(z) is a digital transfer function; y(t) is an
analog output and y(kT) is a digital output. However, if y(kT) is a sample of y(t),
then y(kT) = y(kT) and the overbar will be dropped. If the same input is applied to
an analog and a digital system, then we use u(t) and u(kT) to denote the inputs; no
overbar will be used.
511
512 CHAPTER 13 DISCRETE-TIME SYSTEM DESIGN
Consider the analog compensator with proper transfer function C(s) shown in Figure
13.1(a). The arrangement in Figure 13.1(b) implements the analog compensator digi-
tally. lt consists of three parts: an A/D converter, a digital system oran algorithm,
and a D j A con verter. The problem is to find a digital system such that for any input
e(t), the output u(t) of the analog compensator and the output u(t) of the digital
compensator are roughly equal. From Figure 13.1(b), we see that the output of the
A/D converter equals e(kT), the sample of e(t) with sampling period T. We then
search for a digital system which operates on e(kT) to yield a sequence ú(kT). The
D j A con verter then holds the value of ú constant until the arrival of next data. Thus
the output u(t) of the digital compensator is stepwise as shown. The output of the
analog compensator is generally not stepwise; therefore, the best we can achieve is
that u(t) approximately equals u(t).
In designing a'digital system, ideally, for any input e(t), ú(kT) in Figure 13.1(b)
should equal the sample of u(t). 1t is difficult, if not impossible, to design such a
digital compensator that holds for all e(t). It is, however, quite simple to design such
a digital compensator for specific e(t). In this section, we design such compensators
for e(t) to be an impulse and a step function.
lmpulse-Invariance Method
Consider an ~alog compensator with a strictly proper transfer function Cs(s). If
the input of Cs(s) is an impulse (its Laplace transform is 1), then the output is
U(s) = C,(s) · 1 = C,(s)
lts inverse Laplace transform is actually the impulse response of the analog com-
pensator. The z-transform of the sample of the impulse response yields a digital
compensator with discrete transfer function
C(z) = Z[.P- 1 [C,(s)ll 1 ~d
or, using the notation in (12.7),
C(z) Z[C,(s)] (13.1)
u(t) u(t)
t==_t t--hn-
l__l___C___l_ t
e(t) ~
-~
(a) (b)
As discussed in Section 12.9.1, if the sampling ~riod is very small and the aliasing
is negligible, then the frequency responses of C5 (s) and C(z) will be of the same
form but differ by the factor 1/T in the frequency range Jwl : : : 7T/T. To take care of
this factor, we introduce Tinto (13.1) to yield
C(z) = TZ[C5 (s)]
!_his yields an impuls~invariant digital compensator for a strictly proper C5 (s). If
C(s) is biproper, then C(s) can be decomposed as
C(s) = k +C 5
(S)
The inverse Laplace transform of k is kf>(t). The value of f>(t) is not defined at
t = O; therefore, its sample is meaningless. If we require the frequency response of
C(z) to equal the frequency resp~se of k, the~ C(z) is simply k. Thus the impulse-
invariant digital compensator of C(s) = k + C 5 (s) is
C(z) = k + TZ[C,(s)] (13.2)
Note that the poles of C(z) are obtained from the poles of C(s) or C5 (s) by the
transformation z = esT which maps the open left h~ s-plane into the interior of the
unit circle of the z-plane; therefore, if all poles of C(s) lie inside the open left half
~plane, then all poles of C(z) willlie inside the unit circle on the z-plane. Thus, if
C(s) is stable, so is C(z).
Example 13.2. 1
C(z) = T [5 ·
z -
z _3 - 3 ·
e T z
z
e-T
J (13.4)
The compensator depends on the sampling period. Different sampling periods yield
different digital compensators. For example, if T = 0.5, then (13.4) becomes
05 [
5z 3z J 0.5z(2z - 2.366)
113 51
C(z) = " z - 0.223 z 0.607 (z - 0.223)(z - 0.607) "
Step-Invariance Method
Consider an analog compensator with transfer function C(s). We now develop a
digital compensator C(z) whose step response equals the samples of the step response
of C(s). The Laplace transform of a unit-step function is 1/s; the z-transform of a
unit-step sequence is z/(z - 1). Thus, the step responses of both systems in the
transform domains are
1 z
C(s) · - and C(z) · - -
s z - 1
Example 13.2.2
, [C(s)J 4z + _3_z_ 5z
Z -s- = - 3(z - 1) z - e-T
C(z) = ..:...(9_·_0_.6_0_7_-_5_·_0_.2_2_3_-_4....:..)z_-__,_(4_·_0_._13_5_-_9_·0_._22_3_+_5_·_0._60_7....:..)
3(z - 0.607)(z - 0.223)
0.116z - 0.523 (13.8)
2
z - 0.830z + 0.135
13.2 DIGITAL IMPLEMENTATIONS OF ANALOG COMPENSATORS 515
This is a strictly proper transfer function. Although (13.5) and (13.8) have the same
set of poles, their numerators are quite different. Thus the impulse-invariance and
step-invariance methods implement a same analog compensator differently.
As can be seen from this example that the z-transforrn of C(s)/ s will introduce
an unstable pole at 1, which, however, will be cancelled by (1 - z- 1). T.!!_us the
poles of C(z) in (13.6) consist of only the transforrnations of the poles of C(s) by
z = esT. Thus if C(s) is stable, so is the step-i.!!_variant digital compensator.
The step-invariant digital compensator of C(s) can also be obtained using state-
variable equations. Let
be a realization of C(s). Note that the input of the analog compensator is e(t) and
the output is u(t). If the input is stepwise as shown in Figure 2.23(a), then the
continuous-time state-variable equation in (13.9) can be described by, as derived in
(2.89),
x(k + 1) Áx(k) + be(k) (13.10a)
with
e= e a= d (13.10c)
The output u(k) of (13.10) equals the sample of (13.9) if the input e(t) is stepwise.
Because a unit-step function is stepwise, the discrete-time state-variable equation in'
(13.10) describes the step-invariant digital compensator. The discrete transfer func-
tion of the compensator is
G(z) = c(zl - Á)- 1b + a (13.11)
Example 13.2.3
Find the step-invariant digital compensator for the analog compensator in (13.3).
The controllable-forrn realization of C(s) = (2s - 4)/(s 2 + 4s + 3) is
This transfer function is the same as (13.8), other than the discrepancy dueto trun-
cation errors. Therefore, step-invariant digital compensators can be obtained using
either transfer functions or state-variable equations. In actual implementation, digital
transfer functions must be realized as state-variable equations. Therefore, in using
state-variable equations, we may stop after obtaining (13.13). There is no need to
compute its transfer function.
Forward Approximation
The integration of (13.15a) from loto lo + T yields, with e(l) = O,
i t0
to+T dx(l)
-- =
dl
x(l0 + T) - x(l0 ) =
f'o+T
to
Ax(l)dl (13.16)
Z [
x(l + T) - x(l)]
=
zX(z) - X(z)
=
z - 1
--X(z)
T T T
in the transform domains, Equation (13.18) is equivalent to
z - 1
s=-- (Forward difference) (13.19)
T
U sing this transformation, an analog compensator can easily be changed into a digital
compensator. This is called the forward-difference or Euler' s melhod. This trans-
formation may not preserve the stability of C(s). For example, if C(s) = 1/(s + 2),
then
T
C(z) = - - - -
z - 1 z - 1 + 2T
+ 2
T
r:r--
1 1
Ti 1 1
,/i' 1 1
which is un~able for T > l. Therefore, forward difference may not preserve the
stability of C(s). In gene~l, if the sampling period is sufficiently large, C(z) may
become unstable even if C(s) is stable.
The forward-difference transformation can easily be achieved using state-
variable equations as in Section 5.2. Let
x(t) = Ax(t) + be(t) (13.20a)
Example 13.2.4
2 (-z-;-1 - 2)
C(z) = C(s)IF(z-1)/T = (
z - 1
)2 z- 1
-- +4--+3
T T (13.22)
1 - 2T)
2T(z -
2
z + (4T - 2)z + (3T 2 - 4T + 1)
is a digital compensator obtained using the forward-difference method. If T = 0.5,
then (13.22) becomes
2 ·_
C(z) = _ _ _ _ _ 0.5(z - _
_:.__ 1 -_1)
....:....__ __
z2 + (4 · 0.5 - 2)z + (3 · 0.25 - 2 + 1) (13.23)
z - 2 z- 2
z2 - 0.25 (z + 0.5)(z - 0.5)
This is the digital compensator. If we realize C(s) as
Then
u( k) [2 -4]x(k) (13.24b)
13.2 DIGITAL IMPLEMENTATIONS OF ANALOG COMPENSATORS 519
is the digital compensator. lt can be shown that the transfer function of ( 13.24) equals
(13.22). See Problem 13.5.
Backward Approximation
In the backward approximation, the integration in (13.16) is approximated by the
shaded area shown in Figure 13.2(b). In this approximation, (13.16) becomes
x(t0 + T) - x(t0 ) = Ax(t0 + T)T
If the pole is stable (that is, a > 0), then the magnitude of (1 + aT) + jf3T is
always larger than l; thus, the pole in (13.26) always lies inside the unit circle. Thus
the transformation in (13.25) will transforma stable pole in C(s) into a stable pole
in C(z). Thus, if C(s) is stable, so is C(z).
Trapezoid Approximation
In this method, the integration in (13.16) is approximated by the trapezoid shown in
Figure 13.2(c) and (13.16) becomes
x(t0 + T) + x(t0 )
x(t0 + T) - x(t0 ) = A T
2
lts z-transform domain is
which implies
2 z - 1
- · - - X(z) = AX(z)
T z + 1
The Laplace transform of (13.15a) with e(t) = O is sX(s) AX(s). Thus, the
approximation can be achieved by setting
2 z- 1
S=--- (Trapezoidal approximation) (13.27)
T z + 1
This is called the trapezoidal approximation method. Equation (13.27) implies
T
(z + 1)s- = (z - 1) or
2
Thus we have
Ts
+-
2
z- - - - (13.28)
Ts
1
2
Por every z, we can compute a unique s from (13.27); for every s, we can compute
a unique z from (13.28). Thus the mapping in (13.27) is a one-to-one mapping,
called a bilinear transformation. Let s = a + jw. Then (13.28) becomes
+-
Ta + jTw
-
2 2
z =
1 - --j-
Ta Tw
2 2
and
J(1 +Tay
- + (T2wy
2
lzl (13.29)
2 2 j sin 0.5wT 2} wT
-
T 2 cos 0.5wT = -Tt a n2-
13.2 DIGITALIMPLEMENTATIONS OF ANALOG COMPENSATORS 521
which implies
_ 2 wT
w=-tan- (13.30)
T 2
This is plotted in Figure 13.3. We see that the analog frequency from w = O to
w= oo is compressed into the digital frequency from w = Oto w = 1r/T. This is
called frequency warping. Because of this warping and the nonlinear relationship
between wand w, sorne simple manipulation, called prewarping, is needed in using
bilinear transformations. This is a standard technique in digital filterdesign. See, for
example, Reference [13].
Pole-Zero Mapping
Consideran analog compensator with pole P; and zero q¡. In the pole-zero mapping,
pole P; is mapped into eP;T and zero q¡ is mapped into eq;T_ For example, the
compensator
2(s - 2) 2(s - 2) 2(s - 2)
C(s) = s 2 + 4s + 3 (s + 3)(s + 1) (s - (- 3))(s - (- 1))
is mapped into
2(z - e2 T)
C(z) = --....:......,;.;:;---:....._--;;;-
(z - e- 3T)(z - e-T)
(13.32)
Let y(t) be the step response of (13.31). Because C(s) has three more poles than
y(t) y(kt)
o o 1 2 3 4 5 6 7
(a) (b)
Figure 13.4 (a) Step response of (13.31). (b) Step response of (13.32).
zeros, it can be shown that y(O) = O, y(O) O, and y(O) = O (Problem 13.7) and
the unit-step response will be as shown in Figure 13.4(a). Let y(kT) be the response
of (13.32) dueto a unit-step sequence. Then, because C(z) has three more poles than
zeros, the response will start from k = 3, as shown in Figure 13.4(b). In other words,
there is a delay o~ 3 sampling instants. In general, if there is a pole-zero excess
of r in C(z), then there is a delay of r sampling instants and the response will start
from k = r. In order to eliminate this delay, a polynomial of degree r - 1 is
introduced into the numerator of C(z) so that the response of C(z) will start at
k = l. For example, we may modify (13.32) as
with deg N(z) = 2. If the zeros at s = oo in C(s) are considered to be mapped into
z = O, then we may choose N(z) = z2 . lt is suggested to choose N(z) = (z + 1f
in Reference [52]_!nd N(z) = z2 + 4z + 1 in Reference [3]. Note that the steady-
state response of C(s) in ( 13.31) due to a unit -step input is in general different from
the steady-state response of C(z) in (13.32) or (13.33). See Prob1em 13.8. If they are
required to be equal, we may modify b in (13.33) to achieve this.
13.3 AN EXAMPLE
Consider the control system shown in Figure 13.5(a). The plant transfer function is
1
(13.34)
G(s) - s(s + 2)
~ (a)
(b)
Figure 13.5 (a) Analog control system. (b) Digital control system.
3.6Tz
e 0 (Z) = k + TZ[Cs(s)] = 3 - ------=-==
z e-3.2T
(13.37)
(1 z -lz[l.875
) --
S
+
S
1.125
+ 3.2
J
(13.38)
(1 - 1.875z
1 [ -
z-) - + l.125z J
z - 1 z _ e-3.2T
3z - 1.125 - 1.875e- 3 ·2 T
z _ e-3.2T
e (z) __3(.:_z_-_1_+_2_T-'-z)
(13.40)
d - z - 1 + 3.2Tz
524 CHAPTER 13 DISCRETE-TIME SYSTEM DESIGN
6((1 + T)z - 1 + T)
ee (z) -
(2 + 3.2T)z - 2 + 3.2T
(13.41)
Impulse-invariant A A e e F
Step-invariant A A B B B
Forward difference B B B F F
Backward difference e e F F F
Bilinear A- A- B B B+
Pole-zero B+ B e e e
where A denotes the best or closest to the analog system and F the worst or not
acceptable. For T = 0.1 and 0.2, the responses in Figure 13.6(a) and (b) are very
close to the analog response; therefore, they are given a grade of A. Although the
responses in Figure 13.6(e) are quite good, they show sorne overshoot; therefore
they are given a grade of A-. If T is large, the responses in Figure 13.6(a), (e), and
(d) are unacceptable. Overall, the compensators obtained by using the step-invariant
and bilinear methods yield the best results. The compensator obtained by pole-zero
transformation is acceptable but not as good as the previous two. In conclusion,
if the sampling period is sufficiently small, then any method can be used to digitize
analog compensators. However, for a larger sampling period, it is better to use
the step-invariant and bilinear transformation methods to digitize an analog
compensator.
2.0 2.0
1.5 1.5
: ¡;~~~"~'"""'5'-
0.0
0.1 T= O
0.5
0~--~----~----L---~----~
0.0 2.0 4.0 6.0 8.0 10.0
(a) (b)
2.0 2.0
1.5 1.5
1.0
0.5
:: ,;.~;:.~' : m ••:::=
2.0 4.0 6.0 8.0 10.0 0.0 2.0 4.0 6.0 8.0 10.0
(e) (d)
2.0 2.0
1.5 1.5
o/ 0~--~-----L----~--~----~
0.0 2.0 4.0 6.0 8.0 10.0 0.0 2.0 4.0 6.0 8.0 10.0
(e) (f)
Figure 13.6 Unit-step responses of analog system with various digital compensators.
introduce computational problems, as is discussed in the next section. See also Ref-
erence [46]. How to choose an adequate sampling period has been widely discussed
in the literature. References [46, 52] suggest that the sampling frequency or 1/T be
chosen about ten times the bandwidth of the closed-loop transfer function. Reference
[3] suggests the following rules: If the pole of an overall system is real-say,
526 CHAPTER 13 DISCRETE-TIME SYSTEM DESIGN
Because 1/a is the time constant, (13.43) implies that two to four sampling points
be chosen in one time constant. If the poles of an overall system are complex and
have a damping ratio in the neighborhood of 0.7, then the sampling period can be
chosen as
0.5 ~ 1
T=--- (13.44)
where wn is the natural frequency in radians per second. If an overall transfer function
has more than one real pole and/or more than one pair of complex-conjugate poles,
then we may use the real or complex-conjugate poles that are closest to the imaginary
axis as a guide in ~hoosing T.
The bandwidth of G0 (s) in (13.35) is found, using MATLAB, as B = 1.2 ra-
dians. If the sampling frequency 1/T is chosen as ten times of 1.2/27T, then T =
27T/12 = 0.52. The poles of the closed-loop system in (13.35) are -1.6 ± j0.65.
Its natural frequency is wn = V3 = 1.73, and its damping ratio is ? = 3.2/2wn =
0.46. The damping ratio is not in the neighborhood of 0.7; therefore, strictly speak-
ing, (13.44) cannot be used. However, as a comparison, we use it to compute the
sampling period which ranges from 0.29 to 0.58. It is comparable to T = 0.52. If
we use T = 0.5 for the system in the preceding section, then, as shown in Figure
3.6, the system with the digital compensator in (13.38) or (13.41) has a step response
close to that of the original analog system but with a larger overshoot. If overshoot
is not desirable, then the sampling period for the system should be chosen as 0.2,
about 20 times the closed-loop bandwidth. If T = 0.2, the compensators in (13.37),
(13.39), and (13.42) can also be used. Thus the selection of a sampling period de-
pends on which digital compensator is used. In any case, the safest way of choosing
a sampling period is to use computer simulations.
13.7(e), the conversion is called the first-order hold. Clearly, higher-order holds are
also possible. However, the zero-order is the hold most widely used. The D/ A
converter discussed in Section 12.3 implements zero-order hold.
The Laplace transform of the pulse p(t) with height 1 and width T shown in
Figure A.3 is computed in (A.23) as
_1 (1 _ e-sT)
S
If the input of a zero-order hold is 1, then the output is p(t). Therefore, the zero-
order hold can be considered to have transfer function
(13.45)
S
as shown in Figure 13.8(a). Note that the input u(kT) of Figure 13.8(a) must be
modified as
u*(t) = 2:
k~O
u(kT)o(t - kT)
Because e-Ts introduces only a time delay, we may move it outside the z-transform
as
528 CHAPTER 13 DISCRETE-TIME SYSTEM DESIGN
y(kT) u (kT)
~ Z [G(s)] y(kT)
Z S
(a) (b)
Figure 13.8 (a) Equivalent analog plant. (b) Equivalent digital plant.
[- J [--s- J
G(s) z - 1 G(s)
G(z) = (1 - z- 1)Z -s- = - - Z (13.48)
2
This is a discrete transfer function. lts input is u(kT) and its output is y(kT), the
sample of the analog plant output in Figure 13.8(a). The discrete system shown in
Figure 13.8(b) is called the equivalent discrete or digital plant, and its discrete
transfer function i~ given by (13.48).
By comparing (13.48) with (13.6), we see that the equivalent ~crete plant
transfer function G(z) and the original analog plant transfer function G(s) are step
invariant. As discussed in Secti~_n 13.2, the step-invariant digital plant G(z) can also
be obtained from analog plant G(s) by using state-variable equations. Let
x(t) Ax(t) + bu(t) (13.49a)
be a realization of G(s). If the input is stepwise as in digital control, then the input
y(t) and output u(t) at t = kT can be described, as derived in (2.89), by
x(k + 1) Áx(k) + bu(k) (13.50a)
with
e= e a= d (13.50c)
The transfer function of (13.50) equals (13.48). As shown in Example 13.2.3, this
computation can easily be carried out by using the commands tf2ss, c2d, and ss2tf
in MATLAB.
(a) (b)
Figure 13.9 Direct design of digital control system.
Example 13.4. 1
Consider an analog plant with transfer function
101 101
(13.51)
G(s) = (s + 1)2 + 100 = s2 + 2s + 101
with its poles plotted in Figure 13.1 O. Its step-invariant digital transfer function is
given by
G(z) (1 _ z_ 1 )Z [ 101 J
s((s + 1)2 + 100)
1 [1 S + 1
Z -
-z- z ~ - (s + 1)2 + 100 (s + 1)2 + wo]
_z_-_1
z
[-z-____
z - 1 z2 -
2
z _-_z_e_-_T_c_o_s_1_0_T_--=-=
2ze-T cos lOT + e- 2T
(13.52)
where we have used the z-transform pairs in Table 12.1. Now if the sampling period
T is chosen as IOT = 27T, then cos 10T = 1, sin IOT = O and e-T = e- 0 ·2 7T =
Ims
X 10
-10 -1 o
.L L .L L -n!T
X -10
z -
- - -1 [ - -z-
G(z)
z z - 1
z -
- - -1 [ - -z- (13.53)
z z - 1
z -
- -1 [ - -z- 0.47
-
z z - 1 z - 0.53
lt is a transfer function with only one real pole, whereas the original analog plant
transfer function in (13.51) has a pair of complex-conjugate poles. Figure 13.11
shows the unit-step responses of (13.51) and (13.53). We see that the oscillation in
the analog plant does not appear in its step-invariant digital plant. Thus, sorne dy-
namics of an analog plant may disappear from or become hidden in its equivalent
digital plant.
1.8 1.8
1.6 1.6
1.4 1.4
1.2 1.2
1
0.8
0.6
~ 1
0.8
0.6
~~
0.4 0.4
0.2 0.2
o o
o 2 4 6 8 10 12 14 16 18 20 o 2 4 6 8 10 12 14 16 18 20
w 00
Figure 13.11 Unit-step responses of G(s) and G(z).
The reason for the disappearance of the dynamics can easily be explained from
the plot in Figure 13.10. Recall from Figure 12.9 that the mapping z = esT is nota
one-to-one mapping. If the sampling period T is chosen so that 7T/T equals half of
the imaginary part of the complex poles as shown in Figure 13.10, then the complex
poles will be mapped into real poles. Furthermore, the two poles are mapped into
the same location. This is the reason for the disappearance of the dynamics. Knowing
the reason, it becomes simple to avoid the problem. If the sampling period is chosen
to be small enough that the primary strip (the region bounded between - 7T/T and
7T/T as shown in Figure 12.9) covers all poles of G(s), then no dynamic will be lost
in the sampling and its equivalent digital plant can be used in design.
13.4 EQUIYALENT DIGITAL PLANTS 531
Example 13.4.2
Loss of dynamics can also be explained using state-variable equations. Consider the
transfer function in Example 13.4.1 or
101 101
G(s)
(s + 1f + 100 (s + 1 + j10)(s + 1 - }10) (13.54)
5.05} 5.05}
S + 1 + 10} S + 1 - 10}
-1 0- 10} o 1
x(t) [ x(t) + [ ] u(t) (13.55a)
-1 + 10} ] 1
y(t) = [5.05} - 5.05j)x(t) (13.55b)
y( k) [5.05} - 5.05j]x(k)
1
Although we consider state-variable equations with only real coefficients in the preceding chapters, all
concepts and results are equally applicable to equations with complex coefficients without any modifi-
cation. See Reference [15].
532 CHAPTER 13 DISCRETE-TIME SYSTEM DESIGN
If T is chosen as lOT = 2 7T or T = 0.2 7T, then e- IOjT = 1, and the equation reduces
to
1
x(k + 1)
-T
[ eO
0
e-T
J x(k) + _1_1_
1
[1 + lOj ( - e-T)l
u(k) (13.56a)
- 10j (1 e-T)
Because Á is diagonal and has the same eigenvalues, the equation is neither con-
trollable nor observable. See Problem 11.2. This can also be verified by checking
the ranks of the controllability and observability matrices. The transfer function of
(13.56) can be computed as (1 - e-T)j(z - e-T ), which is the same as (13.53).
Thus controllability and observability of a state-variable equation may be destroyed
after sampling. For a more general discussion, see Reference [15, p. 559].
Example 13.4.3
Consider
G(s) (13.57)
s(s 2 + 2s + 2)
lts pole-zero excess is 3; therefore, the step-invarient discretization of G(s) will
introduce two zeros into G(z). We use MATLAB and state-variable equations to
carry out discretization. The commands
nu = 1 ;de= [1 2 2 O]; (Express G(s) in numerator and denomina~r.)
[a,b,c,d] = tf2ss(nu,de); (Yield controllable-form realization of G(s).)
[da,db] = c2d(a,b,0.5); (Discretize a and b with sampling period 0.5.)
[dn,dd] = ss2tf(da,db,c,d, 1); (Compute the discrete transfer function G(z).)
[z,p,k] =tf2zp(dn,dd) (Express G(z) in zero and pole form.)
13.4 EQUIVALENT DIGITAL PLANTS 533
yield 2
G(z) = _____0.0161(z
____:._ _2.8829)(z
____:_:.....__ +
0.2099)
_____:__ _ _ __ +
(13.58)
(z 1.0000)(z - 0.5323 + 0.2908j)(z - 0.5323 - 0.2908))
The discretization does introduce two zeros. If the sampling period is chosen as
T = 0.1, then
G(z) = _ _ _ _ 1.585
__ . w- 4
(z _
_....:....,_ _____:_.:.___+
+ 3.549)(z _0.255)
____:._ __
(13.59)
(z l)(z - 0.9003 + 0.0903j)(z - 0.9003 - 0.0903))
lt also introduces two zeros.
Now we discuss why sampling will introduce additional zeros. The Laplace
transform of the unit-step response y(t) of G(s) is G(s)/ s. Using the initial value
theorem in Appendix A, its value at t = O can be computed as
_ G(s) -
y(O) = lim s - = G(oo)
t---?oo S
Because y(kT) is the sample of y(t), we have y(O) = y(O) and, consequently, G(oo)
= G(oo). Now if G(s) is biproper, that is, G(oo) #- O, then G(oo) #- O and G(z) is
biproper. Similarly, if G(s) is strictly proper, sois G(z). Let G(z) be a strictly proper
transfer function of degree 4:
(13.60)
2
The two steps ss2tf and tf2zp can be combined as ss2zp. However, I was not able to obtain correct
answers using ss2zp for this problem, even though ss2zp is successfully used in Example 13.5.1.
Therefore, the difficulty may be caused by numerical problems.
534 CHAPTER 13 DISCRETE-TIME SYSTEM DESIGN
Thus w~have y(O) = O and y(T) = h 1 • The unit-step response y(t) of the analog
system G(s) is generally of the forro shown in Figure 13.4(a). No matter what its
pole-zero excess is, generally y(T) is different from zero. If y(kT) is a sample of
y(t), then y(T) = b 1 = y(T) #- O. Therefore, the pole-zero excess ~ G(z) in (13.60)
is l. This establishes the assertion that if the pole-zero excess of G(s) is r 2': 1, the
pole-zero excess of G(z) is always l. Thus, sampling will generate r - 1 additional
zeros into G(z).
In the analog case, a zero is called a minimum-phase zero if it líes inside the
open left half s-plane, a non-minimum-phase zero if it lies inside the closed right
half s-plane. Following this terminology, we calla zero inside the interior of the unit
circle on the z-plane a minimum-phase zero, a zero on or outside the unit circle a
non-minimum-phase zero. For the analog plant transfer function in (13.57), sampling
introduces a mínimum- and a non-minimum-phase zero as shown in (13.58) and
(13.59). Non-minimum-phase zeros will introduce constraints in design, as will be
discussed in a later section.
To conclude 'this section, we mention that the poles of G(z) are transformed
from the poles of G(s) by z = esr. Once the poles of G(z) and, consequently, the
l
coefficients a; in (13.60) are computed, then the coefficients h; in (13.60) can be
computed from a; and the samples of the unit-step response y(t) as
o o
h¡l [
h2 a1 1
- 1 o o][y(T)
O y(2T)
(13.62)
[b4 a3
b3 a2 a1
az
O
1 y(4T)
y(3T)
See Problem 13.11. Thus, the numerator of G(z) in (13.60) is determined by the first
four samples ofy(t). If T is small, then these four samples are hardly distinguishable,
as can be seen from Figure 13.4(a). Therefore, the possibility of introducing errors
in b; is large. Thus, using an unnecessarily small sampling period will not only
increase the amount of computer computation, it will also increase computational
error. Therefore, selection of a sampling period is not simple.
In this section we discuss how to use the root-locus method to design a digital
compensator directly from a given equivalent digital plant. The root-locus design
method actually consists of two parts: searching for a desired pole region, and plot-
ting the roots of p(s) + kq(s) as a function of real k. The plot of root loci discussed
in Section 7.4 for the continuous-time case is directly applicable to the discrete-time
case; therefore, we discuss only the desired pole region in the digital case.
The desired pole region in Figure 7.4 for analog systems is developed from the
specifications on the settling time, overshoot, and rise time. Settling time requires
closed-loop poles to lie on the left-hand side of the vertical line passing through
-a = - 4.5/ ts, where ts denotes the settling time. The verticalline is transformed
by z = esr into a circle with radius e-aras shown in Figure 13.13(a). Note that the
Ims lmz
n/T
?
e-a 1T
o
Res Rez
-az-at
1
1
e-a2T
-n/T
(a)
lms lmz
S't rl
rz
Sz
r2
81
Res Rez
Sz
(b)
Ims lmz
(e)
Imaxis
1
90°
-1.0 -0.8 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8 1.0
(d)
536 CHAPTER 13 DISCRETE-TIME SYSTEM DESIGN
mapping z = esT is not one-to-one, therefore we map only the primary strip (or
- n/T:::; w :::; TT/T) in the s-plane into the interior of the unit circle on the z-plane.
The poles denoted by X on the s-plane are mapped into the positive real axis of the
z-plane inside the unit circle. The poles with imaginary part TT/T shown with small
squares are mapped into the negative real axis inside the unit circle.
The overshoot is govemed by the damping ratio ? or the angle (J in the analog
case. If we substitute s = rei 0 into z = esT and plot z as a function of r (for a fixed
(J) and as a function of (J (for a fixed r), then we will obtained the solid lines and
dotted lines in Figure 13.13(b). Because the overshoot is govemed by e, the solid
line in Figure 13.13(b) determines the overshoot. The distance from the origin or r
is inversely proportional to the rise time; therefore, the dotted line in Figure 13.13(b)
determines the rise time. Consequently, the desired pole region in the analog case
can be mapped into the one shown in Figure 13.13(c) for the digital case. For con-
venience of design, the detailed relationship in Figure 13.13(b) is plotted in Figure
13.13(d). With the preceding discussion, we are ready to discuss design of digital
compensators using the root-locus method. We use an example to illustrate the
design.
Example 13.5. 1
Consider the problem in Section 7.2.2-that is, given a plant with transfer function
1
G(s) - -s(-s_+_2_) (13.63)
[0.5 0.25
(1 - Z-
1
)Z 7 - -s- + s0.25
+ 2J (13.64)
-
"''·~ h G(z)
This can also be obtained using MATLAB by typing nu = [1]; de= [1 2 O];
[a,b,c,d] = tf2ss(nu,de); [da,db] = c2d(a,b, 1 ); [z,p,k] = ss2zp(da,db,c,d). The
result is z = -0.5232, p = 1, 0.1353, and k = 0.2838, and is the same as (13.65).
Next we choose the unity-feedback configuration in Figure 13.14 and find, if pos-
sible, a gain h to meet the design specifications. First we compute the overall transfer
function:
hG(z) 0.2838h(z + 0.5232)
GJz) = _+_h....:....G"--(-z) (13.66)
(z - l)(z - 0.1353) + 0.2838h(z + 0.5232)
eP = lim
k--->oo
lr(k) -a Ys(k)l = la -al
a
= O
and the overall system will automatically meet the specification on the position error
so long as the system is stable. This situation is similar to the continuous-time case
because the analog plant transfer function is of type l. Thus if a digital plant transfer
function has one pole at z = 1, and if the unity-feedback system in Figure 13.14 is
stable, then the plant output will track asymptotically any step-reference input.
As discussed in Section 7.2.1, in order to have the overshoot less than 5%, the
damping ratio Cmust be larger than O. 7. This can be translated into the curve denoted
by 0.7 in Figure 13.15. In order to have the settling time less than 9 seconds, we
require a 2: 4.5/9 = 0.5. This can be translated into the circle with radius e-o.sT
= 0.606 as shown in Figure 13.15. We plot in the figure also the root loci of
-1 0.2838(z + 0.5232)
(13.67)
h (z - l)(z - 0.1353)
The one in Figure 13.15(a) is the complete root loci; the one in Figure 13.15(b)
shows only the critica! part. The root loci have two breakaway points at 0.48 and
- 1.52 and consist of a circle centered at - 0.5232 and with radius l. From the root
loci, we see that if
538 CHAPTER 13 DISCRETE-TIME SYSTEM DESIGN
Imz
(a)
lmz
/
/
1
--~~----------~4---*-+---~+-~--~----+-~~*-~Rez
-0.523 o 0.2 0.4 1 0.6
0.48
(b)
where h 1 and h2 are indicated on the plot, the system will meet the specifications on
overshoot and settling time. Now the system is required to have a rise time as small
as possible, therefore the closest pole should be as far away as possible from the
origin of the s-plane or on the dotted line in Figure 13.13(d) with wn as large as
possible. Thus we choose h = h2 . By drawing vectors from the two poles and one
zero to h2 as shown in Figure 13.15(b) and measuring their magnitudes, we obtain,
from (13.67),
0.67 X 0.42
h2 = = 1.03
0.2838 X 0.96
Thus by choosing h 1.03, the overall system will meet all design specifications.
This example shows that the root-locus method discussed in Chapter 7 can be
directly applied to design digital control systems. Note that the result h = 1.03 in
digital design is quite different from the result h = 2 obtained in analog design in
Section 7 .2.2. This discrepancy may be caused by the selection of the sampling
13.5 ROOT-LOCUS METHOD 539
period T = l. To see the effect of the sampling period, we repeat the design by
choosing T = 0.2. Then the equivalent digital transfer function is
From its root loci and using the same argument, the gain h that meets all the spec-
ifications can be found as h = l. 71. This is closer to the result of the analog design.
To compare the analog and digital designs, we plot in Figure 13.16 the plant outputs
and actuating signals of Figure 13.14 dueto a unit-step reference input for T = 1
with h = 1.03 and T = 0.2 with h = 1.71. We also plot in Figure 13.16 the unit-
step response and actuating signal for the analog design, denoted by T = O and
h = 2. We see that the digital design with T = 0.2 and h = 1.71 is almost identical
to the analog design with T = O and h = 2. The maximum value of the actuating
signal in the digital design, however, is smaller.
In conclusion, the root-locus method can be applied to design digital control
systems. The result, however, depends on the sampling period. lf the sampling period
is sufficiently small, then the result will be close to the one obtained by analog design.
There is one problem in digital design, however. If the sampling period is small,
then the possibility of introducing numerical error will be larger. For example, for
the problem in the preceding example, as T decreases, the design will be carried out
in a region closer to z = 1, where the solid lines in Figure 13. l3(d) are more
clustered. Therefore, the design will be more sensitive to numerical errors.
2.0~-.----------~--------~----------~----------~--------~
\ - . \ u(t) (T= O, h = 2)
Frequency-domain design methods, specially, the Bode plot method, are useful in
designing analog control systems. Because the frequency response G(jw) of analog
transfer functions is a rational function of jw and because the Bode plot of linear
factors can be approximated by two asymptotes-which are obtained by considering
two extreme cases: w ~ O and w ~ oo-the plot of Bode plots is relatively slmple.
In the digital case, the frequency response G(ejwT) is an irrational function of w.
Furthermore, the frequency of interest is limited to the range from Oto 7T/T. There-
fore, the procedure of plotting Bode plots in Chapter 8 cannot be directly applied to
the discrete-time case.
One way to overcome this difficulty is to carry out a transformation. Consider
2 z - 1
w= - - - (13.69a)
Tz+ 1
or
wT
+ -2
z- (13.69b)
wT
1 -
2
This is the bilinear transformation studied in (13.27). Let z ejwT and w = jv.
Then, we have, similar to (13.30),
2 wT
V -tan- (13.70)
T T
Thus the transformation transforms w from Oto 7T/T to v from Oto oo as shown in
Figure 13.3. Define
G(w) = G(z)lz=(l +wT/2)/(1-wT/2) (13.71)
Then G(jv) will be a rationa1 function ofv and v ranges from Oto oo. Thus the Bode
design method can be applied. In conclusion, design of digital compensators using
the Bode plot method consists of the following steps: (1) Sample the ana1og plant
to obtain an equivalent digital plant transfer function G(z). (2) Transform G(z) to
G(w) using (13.69). (3) Use the Bode method on G(jv) to find a compensator
C(w). (4) Transform C(w) to C(z) using (13.69). This completes the design.
The key criteria in the Bode design method are the phase margin, gain margin,
and gain-crossover frequency. Because the transformations from G(s) to G(z) by
z = esT and from G(z) to G(w) by (13.69) are not linear, considerable distortions
occur. Furthermore, the sampling of G(s) may introduce non-minimum-phase zeros
into G(z); this may also cause difficulty in using the Bode method. Thus, even though
the Bode method can be used to design digital compensators, care must be exer-
cised and the result must be simulated before actual implementation of digital
compensators.
4
13.7 STATE FEEDBACK, STATE ESTIMATOR, AND DEAD·BEAT DESIGN 541
The design methods in Sections 11.4 and 11.6 for analog systems can be applied to
digital systems without any modification. Consider the discrete-time state-variable
equation
x(k + 1) Ax(k) + bu(k) (13.72a)
y( k) ex( k) (13.73b)
is called the dead-beat design. In this case, the overall transfer function is of the
form
(13.75)
N (z)
0
z
Y(z) = - ---
zn z - 1
(13.76)
N (l)z
= ea + e 1z- 1
+ e2 z- 2
+ ··· + en-I z-n+ 1
+ en z-n + -z 0 _- 1
of (13.75) will become a step sequence after the nth sampling instant. This cannot
happen in analog systems; the step response of any stable analog system will become
a step function only at t ~ oo. Note that a pole at z = Oin digital systems corresponds
to a pole at s = - 00 in analog systems. It is not possible to design an analog system
with a proper transfer function that has all poles at s = - oo. Thus dead-beat design
is not possible in analog systems.
Example 13.7.1
with transfer function G(z) = 10/z(z + 1). This equation is identical to (11.41)
except that it is in the discrete-time domain. Find a feedback gain k in u = r - kx
so that the resulting system has all eigenvalues at z = O. We use the procedure in
Section 11.4 to carry out the design. We compute
and
Ll(z) z2 + O· z + O
Thus we have
k = [O - 1 O - O] = [ - 1 O]
The similarity transformation is identical to the one in Example 11.4.2 and equals
p-1 = L~ ~~J
Thus the feedback gain is
This completes the design of state feedback. Next we use the procedure in Section
11.6.1 to design a reduced-dimensional state estimator. The procedure is identical
to the one in Example 11.6.1. Because (13.77) has dimension n = 2, its reduced-
dimensional estimator has dimension l. Because the eigenvalue of F is required to
be different from those of A, we cannot choose F = O. If we choose F = O, then
TA - FT = gc has no solution. See Problem 13.14. We choose, rather arbitrarily,
13.7 STATE FEEDBACK, STATE ESTIMATOR, AND DEAD-BEAT DESIGN 543
or
[O t1 - t2 ] - [0.1t 1 0.1t2 ] = [1 O]
Thus we have -0.1t 1 = 1 and t 1 - l.lt2 = O which imply t 1 -10 and t2
t¡/1.1 = -9.09. The matrix
1~ - ~.09]-
1
[-
We compute
0
h = Tb = [ -10 -9.09] [ ] -90.9
10
Thus the }-dimensional state estimator is
z(k + 1) 0.1z(k) + y(k) - 90.9u(k) (13.79a)
This completes the design. W e see that the design procedure is identical to the analog
case. We plot in Figure 13.17 the state estimator in (13.79) and the state feedback
-------~-------------------~
State feedback State estimator
Figure 13.17 State feedback and state estimator.
544 CHAPTER 13 DISCRETE-TIME SYSTEM DESIGN
in (13.78) applying at the output of the estimator. To verify the result, we use Ma-
son's formula to compute the transfer function from r to y. It is computed as
Pole placement and model matching, discussed in Chapter 1O for analog systems,
can be applied directly to digital systems. To discuss model matching, we must
discuss physical constraints in implementation. As in the analog case, we require
(13.81)
13.8 MODEL MATCHING 545
is said to be implementable if there exists a control configuration such that G0 (z) can
be implemented without violating any of the preceding four constraints. This defi-
nition is similar to the analog case. The necessary and sufficient conditions for G0 (z)
= N 0 (z)/D0 (z) to be implementable are
l. deg D 0 (z) - deg N 0 (z) 2: deg D(z) - deg N(z) (pole-zero excess inequality).
2. All zeros of N(z) on and outside the unit circle must be retained in NJz) (re-
tainment of non-minimum-phase zeros).
3. All roots of DJz) must líe inside the unit circle.
Condition (1) implies that if the plant has a delay of r : = [deg D(z) - deg N(z)]
sampling instants, then G0 (z) must have a delay of at least r sampling instants. If a
control configuration has no plant leakage, then the roots of N(z) will not be affected
by feedback. Therefore, the only way to eliminate zeros of N(z) is by direct pole-
zero cancellations. Consequently, zeros on or outside the unit circle of the z-plane
should be retained in N0 (z). In fact, zeros that do not líe inside the desired pole region
discussed in Figure 13.13(c) should be retained even if they líe inside the unit circle.
Thus the definition and conditions of implementable transfer functions in the digital
case are identical to the analog case discussed in Section 9.2.
As in the analog case, the unity-feedback configuration in Figure 10.1 can be
used to implement every pole placement but not every model matching. The two-
parameter configuration in Figure 10.6 and the plant input/output feedback config-
uration in Figure 10.15 can be used to implement any model matching. The design
procedures in Chapter 1O for the analog case are directly applicable to the digital
case. We use an example to illustrate the procedures.
Example 13.8. 1
Consider
z + 0.6 N(z)
G(z) = - - - - - - (13.82)
(z - 1)(z + 0.5) D(z)
0.25(z + 0.6)
(13.83)
Go(z) = z 2 - 0.8z + 0.2
-
,,,,.~
C(z) G(z)
(a)
,--------------,
y(k)
)
-
1
1
1 1
L--------------~
(b)
replaced by z, we have
C(z) = Go(z)
G(z)(l - G0 (Z))
0.25(z + 0.6)
z2 - 0.8z + 0.2
(z + 0.6) . ( _ 0.25(z + 0.6) )
1
(z - 1)(z + 0.5) z2 - 0.8z + 0.2 (13.84)
0.25(z - 1)(z + 0.5)
z2 - 0.8z + 0.2 - 0.25(z + 0.6)
0.25(z - 1)(z + 0.5) 0.25(z + 0.5)
(z - l)(z - 0.05) z - 0.05
This is a proper compensator. We mention that the design has a pole-zero cancel-
lation of (z + 0.5}. The pole is dictated by the plant transfer function. Because the
pole is stable, the system is totally stable.
Next we implement G0 (z) in the two-parameter configuration shown in Figure
13.18(b). We use the procedure in Section 10.4.1 with s replaced by z. We compute
D N O NO][At:!_~
0
D N !D
1
0
1
:
[DO NO i DD NN MA
2 2 :
1
0
2
0
2
0
1
] ¡-0.5
-0.5
1
O
0.6:1
o
o :
!-0.5
1
o
!-0.5
1
~-6] ¡~~] ~2]
1
O
A1
M1
[- 0.8
1
From this example, we see that the design procedures in Chapter lO for analog
systems can be directly applied to design digital systems.
This chapter introduced two approaches to design digital compensators. The first
approach is to design an analog compensator and then transform it into a digital one.
We discussed six methods in Section 13.2 to carry out the transformation. Among
these, the step-invariant and bilinear transformation methods appear to yield the best
results. The second approach is to transform an analog plant into an equivalent step-
invariant digital plant. We then design digital compensators directly by using the
root-locus method, state-space method, or linear algebraic method. The Bode design
method cannot be directly applied because G(efwT) is an irrational function of w
and because w is limited to (- 7T/T, 7T/T). However, it can be so used after a bilinear
transformation.
A question may be raised at this point: Which approach is simpler and yields a
better result? In the second approach, we first discretize the plant and then carry out
design. lf the result is found to be unsatisfactory, we must select a different sampling
period, again discretize the analog plant, and then repeat the design. In the first
approach, we carry out analog design and then discretize the analog compensator.
548 CHAPTER 13 DISCRETE-TIME SYSTEM DESIGN
If the discretized compensator does not yield a satisfactory result, we select a dif-
ferent sampling period and again discretize the analog compensator. There is no
need to repeat the design; therefore, the first approach appears to be simpler. There
is another problem with the second approach. In discretization of an analog plant, if
the sampling period is small, the resulting digital plant will be more prone to nu-
merical error. In analog design, the desired pole region consists of a good portion
of the left half s-plane. In digital design, the desired pole region consists of only a
portion of the unit circle of the z-plane. Therefore, digital design is clustered in a
much smaller region and, consequently, the possibility of introducing numerical error
is larger. In conclusion, the first approach (discretization after design) may be simpler
than the second approach (discretization before design).
PROBLEMS
13.1. Find the impulse-invariant and step-invariant digital compensators of the fol-
lowing two analog compensators with sampling period T = 1 and T = 0.5:
2
a.------
(s + 2)(s + 4)
2s- 3
b.---
s + 4
13.2. Use state-variable equations to find step-invariant digital compensators for
Problem 13.1.
13.3. Use forward-difference, backward-difference, and bilinear transforrnation
methods to find digital compensators for Problem 13.1.
13.4. Use state-variable equations to find forward-difference digital compensators
for Problem 13.1.
13.5. a. Find the controllable-forrn realization for the transfer function in (13.22).
b. Show that the realization and the state-variable equation in (13.24) are
equivalent. Can you conclude that the transfer function of (13.24) equa1s
(13.22)? [Hint: The simi1arity transforrnation is
[~ -TJ
Tz
13.6. Use pole-zero mapping to find digital compensators for Problem 13.1.
13.7. Use the analog initial value theorem to show that the step response of (13.31)
has the property y(O) = O, y(O) = O, and y(O) = O. Thus the value of y(D
is small if T is small.
13.8. a. Compute the steady-state response of the analog compensator in (13.31)
due to a unit-step input.
PROBLEMS 549
a. 2
S
10
b.---
s(s + 2)
8
c.------
s(s + 4)(s - 2)
1
G(s) = .
s(s + 1 + JlO)(s + 1 - jlO)
Find its equivalent digital plant transfer functions for T = 0.27T and T = 0.1.
Can they both be used in design?
13.11. Use (13.61) to establish (13.62).
z + 1
G(z) = - - - - - -
(z - 0.3)(z - 1)
Use the root-locus method to designa system to meet (1) position error = O,
(2) overshoot :510%, (3) settling time :55 seconds, and (4) rise time as small
as possible.
13.13. Design a state feedback and a full-dimensional state estimator for the plant
in Problem 13.12 such that all poles of the state feedback and state estimator
are at z = O.
13.14. In the design of a reduced-dimensional state estimator for the digital plant
in Example 13.7.1, show that if F = O and g = 1, then the equation TA -
FT = gc has no solution T.
13.15. Design a state feedback and a reduced-dimensional state estimator for the
plant in Prob1em 13.12 such that all poles of the state feedback are at z = O.
Can you choose the eigenvalue of the estimator as z = O? If not, choose it
asz = 0.1.
550 CHAPTER 13 DISCRETE-TIME SYSTEM DESIGN
13.16. Repeat Problem 13.15 using the linear algebraic method. Are the results the
same? Which method is simpler?
13.17. Consider the plant in Problem 13.12. Use the unity-feedback configuration to
find a compensator such that all poles are located at z = O. What zero, if any,
is introduced in G0 (z)? Will G0 (z) track any step-reference input without an
error?
13.18. Consider the plant in Problem 13.12. Implement the following overall transfer
function
0.5(z + 1)
Go(z) = 2
z
Will this system track any step-reference input?
13.19. Consider
z + 2 h(z + 2)
G(z) = - - - - - G0 (z) -
(z - 1)(z - 3) z(z + 0.2)
Find h so that G0 (1) = l. Implement G0 (z) in the unity-feedback configuration
and the two-parameter feedback configuration. Are they both acceptable in
practice?
PID Controllers
14. 1 INTRODUCTION
In this chapter, we discuss first analog PID controllers and the adjustment of
their parameters. We then discuss digital implementations of these controllers. Be-
fore proceeding, we mention that what will be discussed for nonlinear control sys-
tems is mostly extrapolated from linear control systems. Therefore, basic knowledge
of linear control systems is essential in studying nonlinear systems. We also mention
that, unlike linear systems, a result in a nonlinear system in a particular situation is
not necessarily extendible to other situations. For example, the response of the non-
linear system in Figure 6.14 due to r(t) = 0.3 approaches 0.3, but the response due
to r(t) = 4 X 0.3 does not approach 4 X 0.3. Instead, it approaches infinity. Thus
the response of nonlinear systems depends highly on the magnitude of the input. In
this chapter, all initial conditions are assumed to be zero and the reference input is
a unit-step function.
Consider the system shown in Figure 14.1. lt is assumed that the plant cannot be
adequately modeled as a linear time-invariant lumped system. This is often the case
if the plant is a chemical process. Most chemical processes have a long time delay
in responses and measurements; therefore, they cannot be modeled as lumped sys-
tems. Chemical reactions and liquid f\ow in pipes may not be describable by linear
equations. Control valves are nonlinear elements~ they saturate when fully o\)ened.
Therefore, many chemical processes cannot be adequately modeled as linear time-
invariant and lumped systems.
Many industrial processes, including chemical processes, can be controlled in
the open-loop configuration shown in Figure 14.1(a). In this case, the plant and the
controller must be stable. The parameters of the controller can be adjusted manually
or by a computer. In the open-loop system, the parameters generally must be read-
justed whenever there are changes in the set point and load. If we introduce feedback
around the nonlinear plant as shown in Figure 14.1 (b ), then the readjustment may
not be needed. Feedback may also improve performance, as in the linear case, and
make the resulting systems less sensitive to plant perturbations and externa! disturb-
ances. Therefore, it is desirable to introduce feedback for nonlinear plants.
y(t)
(a)
y(t)
(b)
Figure 14.1 (a) Open-loop nonlinear system. (b) Unity-feedback nonlinear system.
14.2 PID CONTROLLERS IN INDUSTRIAL PROCESSES 553
The design of nonlinear feedback systems is usually carried out by trial and
error. Whenever we use a trial-and-error method, we begin with simple control
configurations such as the unity-feedback configuration shown in Figure 14.l(b) and
use simple compensators or controllers. If not successful, we then try more complex
configurations and controllers. We discuss in the following sorne simple controllers.
Proportional Controller
The transfer function of a proportional controller is simply a gain, say kP. If the input
of the controller is e(t), then the output is u(t) = kpe(t) or, in the Laplace transform
domain, U(s) = kPE(s). To abuse the terminology, we calla nonlinear system stable 1
if its output excited by a unit-step input is bounded and approaches a constant as
t ~ oo. Now if a plant is stable, as is often the case for chemical processes, the
feedback system in Figure 14.1 will remain stable for a range of kP. As kP increases,
the unit-step response may become faster and eventually the feedback system may
become unstable. This is illustrated by an example.
Example 14.2.1
Consider the system shown in Figure 14.2. The plant consists of a saturation non-
linearity, such as a valve, followed by a linear system with transfer function
-s + 1
G(s)- - - - - -
(5s + 1)(s + 1)
The controller is a proportional controller with gain kP. We use Protoblock2 to simu-
late the feedback system.lts unit-step responses with kP = 0.1, 1, 4.5, 9, are shown
r------------,
~
~~
:-[I]--
1
1
1
L _ _ _ _ _ _ _ _ _ _ _ _ _j
G(s) :
1
Plant
1
The stability of nonlinear systems is much more complex than that of linear time-invariant lumped
systems. The response of a nonlinear system depends on whether or not initial conditions are zero and
on the magnitude of the input. Therefore, the concept of bounded-input, bounded-output stability defined
for linear systems is generally not used for nonlinear systems. Instead, we have asymptotic stability,
Lyapunov stability, and stability of limit cycles. This is outside the scope of this text and will not be
discussed.
2
A trademark of the Grumman Corporation. This author is grateful to Dr. Chien Y. Huang for carrying
out the simulation.
554 CHAPTER 14 PID CONTROLLERS
1.4.---~----~----~--~----~----~--~----~----~---.
k=9
1.2
0.8
0.6
1
1
1
0.4 1
0.2 1
1
k=0.1
------ ----------------
1_--
o
-0.2
\
- 1
o 2 4 6 8 10 12 14 ' 16 18 20
Figure 14.3 Unit-step responses.
in Figure 14.3. We see that as kP increases, the response becomes faster but more
oscillatory. If kP is less than 9, the response will eventually approach a constant, and
the system is stable. If kP is larger than 9, the response will approach a sustained
oscillation and the system is not stable. Recall that we have defined a nonlinear
system to be unstable if its unit-step response does not approach a constant. The
smallest kP at which the system becomes unstable is called the ultimate gain.
For sorne nonlinear plants, it may be possible to obtain good control systems
by employing only proportional controllers. There is, however, one problem in using
such controllers. We see from Figure 14.3 that, for the same unit-step reference
input, the steady-state plant outputs are different for different kP. Therefore, if the
steady-state plant output is required to be, for example, 1, then for different kP' the
magnitude of the step-reference input must be different. Consequently, we must
adjust the magnitude of the reference input or reset the set point for each kP. There-
fore, manual resetting of the set point is often required in using proportional
controllers.
The preceding discussion is in fact extrapolated from the linear time-invariant
lumped systems discussed in Section 6.3.2. In the unity-feedback configuration in
Figure 6.4(a), if the plant is stable and the compensator is a proportional controller,
then the loop transfer function is of type O. In this case, the position error is different
from zero and calibration or readjustment of the magnitude of the step-reference
input is needed to have the plant output reach a desired value. If the loop transfer
14.3 PID CONTROLLERS IN INDUSTRIAL PROCESSES 555
function is of type 1, then the position error is zero and the readjustment of the
reference input is not necessary. This is the case if the controller consists of an
integrator, as will be discussed in the next paragraph.
Integral Controller
If the input of an integral controller with gain k; is e(t), then the output is
or U(s) = k;E(s)/ s. Thus the transfer function of the integral controller is kJ s. For
linear unity-feedback systems, if the forward path has a pole at S = 0 Of Of type 1,
and if the feedback system is stable, then the steady-state error due to any step-
reference input is zero for any k;. We may have the same situation for nonlinear
unity-feedback systems. In other words, if we introduce an integral controller in
Figure 14.1 (b) and if the system is stable, then for every k;, the steady-state error
may be zero, and there is no need to reset the set point. For this reason, the integral
control is also called the reset control and k; is called the reset rate.
Although the integral controller will eliminate the need of resetting the set point,
its presence will make the stabilization of feedback systems more difficult. Even if
it is stable, the speed of response may decrease and the response may be more
oscillatory, as is generally the case for linear time-invariant lumped systems. In
addition, it may generate the phenomenon of integral windup or reset windup. Con-
sider the system shown in Figure 14.4(a) with an integral controller and a valve
saturation nonlinearity with saturation level um. Suppose the error signal e(t) is as
shown in Figure 14.4(a). Then the corresponding controller output u(t) and valve
output u(t) are as shown. ldeally, ifthe error signal changes from positive to negative
at t0 , its effect on u(t) will appear at the same instant t0 . However, because of the
integration, the value of u(t) at t = t0 is quite large as shown, and it will take awhile,
say until time t2, for the signal to unwind to um. Therefore, the effect of e(t) on
u(t) will appear only after t2 as shown in Figure 14.4(a) and there is a delay of
t2 - t0 • In conclusion, because of the integration, the signal u(t) winds up over the
saturation level and must be unwound before the error signal can affect on u(t). This
is called integral windup or reset windup.
lf the error signal e(t) in Figure 14.4 is small and approaches zero rapidly,
integral windup may not occur. If integral windup does occur and makes the feedback
system unsatisfactory, it is possible to eliminate the problem. The basic idea is to
disable the integrator when the signal u(t) reaches the saturation level. For example,
consider the arrangement shown Figure 14.4(b). The input e(t) of the integral con-
troller is e(t) = e(t) X w(t), where w(t) = 1 if iu(t)i :::::; um and w(t) = 0 if iu(t)i
> um. Such w(t) can be generated by a computer program or by the arrangement
shown in Figure 14.4(b). The arrangement disables the integrator when its output
reaches the saturation level, thus u(t) will not wind up over the saturation level, and
when e(t) changes sign at t0 , its effect appears immediately at u(t). Hence, the per-
formance of the feedback system may be improved. This is called antiwindup or
integral windup prevention.
556 CHAPTER 14 PID CONTROLLERS
e(t) u
L~'¡*=LG¡ "·~
-
-
S
" )'
(a)
~
-u- m
Plant
e(t)
(b)
Derivative Controller
If the input of a derivative controller with derivative constant kd is e(t), then its
output is kdde(t)/dt or, in the Laplace transform domain, kdsE(s). Therefore, the
transfer function of the derivative controller is kds. This is an improper transfer
function and is difficult to implement. In practice, it is built as
kds
(14.2)
+ kds
N
14.3 PID CONTROLLERS IN INDUSTRIAL PROCESSES 557
where N, ranging from 3 to 1O, is determined by the manufacturer and is called the
taming factor. This taming factor makes the controller easier to build. lt also limits
high-frequency gain; therefore, high-frequency noise will not be unduly amplified.
For control signals, which are generally of low frequency, the transfer function in
(14.3) can be approximated as kds.
The derivative controller is rarely used by itself in feedback control systems.
Suppose the error signa! e(t) is very large and changes slowly or, in the extreme
case, is a constant. In this case, a good controller should generate a large actuating
signa! to force the plant output to catch up with the reference signal so that the error
signal will be reduced. However, if we use the derivative controller, the actuating
signal will be zero, and the error signal will remain large. For this reason, the deriv-
ative controller is not used by itself in practice. If we write the derivative of e(t) at
t = t0 as
with a > O, then its value depends on the future e(t). Thus the derivative or rate
controller is also called the anticipatory controller.
The combination of the proportional, integral, and derivative controllers is called
a PID controller. Its transfer function is given by
(14.3)
where T¡ is called the integral time constant and Td the derivative time constant. The
PID controller can be arranged as shown in Figure 14.5(a). This arrangement is
discussed in most control texts and is called the "textbook PID controller" in Ref-
erence [3]. The arrangement is not desirable if the reference input r contains dis-
continuities, such as in a step function. In this case, e(t) will be discontinuous and
its differentiation will generate an impulse or a very large actuating signa!. An al-
temative arrangement, called the derivative-of-output controller, is shown in Figure
14.5(b) where only the plant output y(t) is differentiated. In this arrangement, the
discontinuity of r will appear at u through the proportional gain but will not be
Figure 14.5 (a) Textbook PID controller. (b) Derivative-of-output controller. (e) Set-point-
on-1 controller.
558 CHAPTER 14 PID CONTROllERS
with r(t) = 1, is minimized. This is called the quarter-decay criterion. Ziegler and
Nichols used this criterion to develop their rules. These rules were developed mainly
from experiment.
Closed-Loop Method Consider the system shown in Figure 14.1 (b ). The controller
consists of only a proportional controller with gain kP. It is assumed that the system
is stable for O :5 kP < ku and the unit-step response of the system with kP = ku is
of the form shown in Figure 14.6(b). It has a sustained oscillation with period Tu.
We call ku the ultimate gain and Tu the ultimate period. Then the rules of Ziegler
and Nichols for tuning the PID controller are as shown in Table 14.1.
e(t) e(t)
'!_
a 4
"1-
1+-----Tu-1
o (a) o (b)
Figure 14.6 (a) Quarter-decay response. (b) Ultimate gain and period.
14.3 PID CONTROLLERS IN INDUSTRIAL PROCESSES 559
Controller kp T;
p 0.5Ku
PI 0.4Ku 0.8Tu
PID 0.6Ku O.STU O.l2Tu
y(t)
o~~~------L------------------.
---1 L ~ T ----1
Figure 14.7 Unit-step response.
560 CHAPTER 14 PID CONTROLLERS
Controller kp T; Td
p TL
PI 0.9TL 0.3L
PID 1.2TL 2L 0.5L
e-Ls = - = (14.6a)
eLs + Ls
Ls
e-Ls/2 1 - -
e-Ls 2 2 Ls
eLs/2 = (14.6b)
Ls 2 + Ls
1 +
2
and
Ls (Ls?
e-Ls/2 +
e-Ls 2 8 8 4Ls + L2s2
eLs/2 = (14.6c)
Ls (Ls? 8 + 4Ls + L2s2
1 + +
2 8
Note that these approximations are good for s small or s approaching zero. Because
s small govems the time response as t approaches infinity, as can be seen from the
fina1-value theorem, these equations give good approximations for the steady-state
response but, generally, poor approximations for the transient response. For example,
Figure 14.8 shows the unit-step responses of
e-3s
G(s) = 2s + 1
All of them approach the same steady-state value, but their transient responses are
quite different. The unit-step response of G 3 (s) is closest to the one of G(s).
Once a plant transfer function with time delay is approximated by a rational
transfer function, then all methods introduced in this text can be used to carry out
the design. This is the second approach mentioned in Section 14.1.
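The following MATLAB sketch (not part of the original text) compares the unit-step response of G(s) = e^{-3s}/(2s + 1) with the responses obtained from the approximations (14.6a)-(14.6c); it assumes the Control System Toolbox functions tf and step are available.

    L = 3;  t = 0:0.01:15;
    G = tf(1, [2 1]);  G.InputDelay = L;                 % exact plant with time delay
    G1 = tf(1, [L 1]) * tf(1, [2 1]);                    % using (14.6a)
    G2 = tf([-L 2], [L 2]) * tf(1, [2 1]);               % using (14.6b)
    G3 = tf([L^2 -4*L 8], [L^2 4*L 8]) * tf(1, [2 1]);   % using (14.6c)
    step(G, G1, G2, G3, t)    % all responses approach the same steady-state value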
14.3 PID CONTROLLERS FOR LINEAR TIME-INVARIANT LUMPED SYSTEMS
The PID controller certainly can also be used in the design of linear time-invariant
lumped systems. In fact, the proportional controller is always the first controller to
be tried in using the root-locus and Bode-plot design methods. If we use the PID
controller for linear time-invariant lumped systems, then the tuning formulas in Table
14.1 can also be used. In this case, the ultimate gain K_u and ultimate period T_u can
be obtained by measurement or from the Bode plot or root loci. If the Bode plot of
the plant is as shown in Figure 14.9(a), with phase-crossover frequency ω_p and gain
margin a dB, then the ultimate gain and ultimate period are given by
    K_u = 10^{a/20}        T_u = 2π/ω_p
If the root loci of the plant are as shown in Figure 14.9(b), then from the intersection
of the root loci with the imaginary axis, say at s = ±jω_u, we can readily obtain the
ultimate gain K_u and the ultimate period as T_u = 2π/ω_u. Once K_u and T_u are
obtained, the tuning rules in Table 14.1 can be employed.
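As a sketch (not part of the original text), K_u and T_u can be read off a Bode computation in MATLAB; margin returns the gain margin as a ratio and the phase-crossover frequency, so K_u equals the gain margin and T_u = 2π/ω_p. The plant below is an assumed example.

    G = tf(1, [1 3 3 1]);                  % an assumed plant 1/(s+1)^3
    [Gm, Pm, wcg, wcp] = margin(G);        % wcg is the phase-crossover frequency
    Ku = Gm;                               % ultimate gain
    Tu = 2*pi/wcg;                         % ultimate period
    kp = 0.6*Ku;  Ti = 0.5*Tu;  Td = 0.12*Tu;    % PID row of Table 14.1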
Even though PID controllers can be directly applied to linear time-invariant
lumped systems, there seems to be no reason to restrict compensators to them. The
transfer function of PI controllers is
    k_P [1 + 1/(T_i s)] = k_P (s + 1/T_i)/s
This is a special case of the phase-lag compensator with transfer function k(s + b)/
(s + a) and b > a ≥ 0. Therefore, phase-lag compensators are more general than
PI compensators, and it should be possible to design better systems without restricting
a = 0. This is indeed the case for the system in Example 8.10.1. See the responses
Figure 14.9 (a) Bode plot of the plant. (b) Root loci of the plant.
in Figure 8.39. The phase-lag controller yields a better system than the PI controller
does.
If we use the more realistic derivative controller in (14.2), then the transfer
function of PD controllers is
    k_P + k_d s/(1 + k_d s/N) = k(s + ac)/(s + a)    (14.7)
where k = N + k_P, a = N/k_d, and c = k_P/(N + k_P) < 1. This is a special case
of the phase-lead compensator with transfer function k(s + b)/(s + a) and
0 ≤ b < a. Similarly, the transfer function of realistic PID controllers is a special
case of the following compensator of degree 2:
    k(s² + b_1 s + b_0)/(s² + a_1 s + a_0)    (14.8)
Thus if we use general controllers, then the resulting systems should be at least as
good as those obtained by using PID controllers. Furthermore, systematic design
methods are available to compute general controllers. Therefore, for linear time-
invariant lumped systems, there seems no reason to restrict controllers to PID
controllers.
There is, however, one situation where PID controllers are useful even if a plant
can be adequately modeled as an LTIL system. PID controllers can be built using
hydraulic or pneumatic devices; general proper transfer functions, however, cannot
be so built. If control systems are required to use hydraulic systems, such as in all
existing Boeing commercial aircraft, then we may have to use PID controllers. A
new model of Airbus uses control by wire (by electrical wire) rather than hydraulic
tubes. In such a case there seems to be no reason to restrict controllers to PID controllers,
because controllers with any proper transfer function can easily be built using elec-
trical circuits, as discussed in Chapter 5. Using control by wire, the controller can be
more complex and the performance of the system can be improved.
14.4 DIGITAL PID CONTROLLERS

Consider the analog PID controller
    G(s) = k_P [1 + 1/(T_i s) + T_d s]    (14.9)
If we replace the integrator 1/s by T/(z − 1) and the differentiator s by (z − 1)/T,
where T is the sampling period, then (14.9) becomes
    G(z) = k_P [1 + T/(T_i(z − 1)) + T_d(z − 1)/T]    (14.10)
This is the simplest digital implementation of the PID controller. Another possibility
is to use the trapezoidal approximation in (13.27) for the integrator and the backward
difference in (13.25) for the differentiator; then (14.9) becomes
    G(z) = k_P [1 + T(z + 1)/(2T_i(z − 1)) + T_d(z − 1)/(Tz)]    (14.11)
If we define
    K_P := k_P [1 + T/(2T_i)]    (Proportional gain)    (14.12a)
    K_I := k_P T/T_i             (Integral gain)        (14.12b)
    K_d := k_P T_d/T             (Derivative gain)      (14.12c)
then (14.11) can be written as
    G(z) = K_P + K_I/(z − 1) + K_d(z − 1)/z    (14.13)
This is one commonly used digital PID controller. Note that the digital proportional
gain K_P differs from the analog k_P by the amount k_P T/(2T_i), which is small if the
sampling period T is small. However, if T is small, then the derivative gain K_d will
be large. This problem will not arise if the analog differentiator is implemented as
in (14.2). In this case, the
transfer function of analog PID controllers becomes
    G(s) = k_P [1 + 1/(T_i s) + T_d s/(1 + T_d s/N)]
         = k_P [1 + 1/(T_i s) + N s/(s + N/T_d)]    (14.14a)
If we use the impulse-invariant method for the integrator and the pole-zero mapping
for the differentiator, then we have
    G(z) = k_P [1 + T/(T_i(z − 1)) + N(z − 1)/(z − a)]    (14.15a)
with
    a = e^{−NT/T_d}    (14.15b)
If we use the forward difference for the integrator and the backward difference for
the differentiator, then we have
    G(z) = k_P [1 + T/(T_i(z − 1)) + (N T_d/β)(z − 1)/(z − T_d/β)]    (14.16a)
with
    β = T_d + NT    (14.16b)
This is a commonly used digital PID controller. We mention that if T is very small,
then (14.15) and (14.16) yield roughly the same transfer function. In (14.16a), if we
use forward difference for both the integrator and differentiator, then the resulting
digital differentiator may become unstable. This is the reason we use forward dif-
ference for the integrator and backward difference for the differentiator.
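A minimal sketch (not part of the original text) of the controller in (14.16) written as difference equations is given below; e is an assumed error sequence and the gains are arbitrary illustrative values.

    kp = 1;  Ti = 2;  Td = 0.5;  N = 10;  T = 0.05;
    beta = Td + N*T;                                   % (14.16b)
    e = [zeros(1,5) ones(1,95)];                       % assumed error sequence
    xi = 0;  xd = 0;  eprev = 0;  u = zeros(size(e));
    for k = 1:length(e)
      xd = (Td/beta)*xd + (N*Td/beta)*(e(k) - eprev);  % filtered derivative term
      u(k) = kp*(e(k) + xi + xd);                      % controller output, per (14.16a)
      xi = xi + (T/Ti)*e(k);                           % forward-difference integrator
      eprev = e(k);
    end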
The digital PID controllers in (14.10), (14.13), (14.15), and (14.16) are said to
be in position form. Now we develop a different form, called the velocity form. Let
the input and output of the digital PID controller in (14.13) be e(k) := e(kT) and
u(k) := u(kT), where e(k) = r(k) − y(k), r(k) is the reference input sequence, and
y(k) is the plant output. If the reference input is a step sequence, then
r(k) = r(k − 1) = r(k − 2) and
    e(k) − e(k − 1) = r(k) − y(k) − [r(k − 1) − y(k − 1)]
                    = −[y(k) − y(k − 1)]    (14.20)
and
    e(k) − 2e(k − 1) + e(k − 2) = r(k) − y(k) − 2[r(k − 1) − y(k − 1)] + r(k − 2) − y(k − 2)
                                = −[y(k) − 2y(k − 1) + y(k − 2)]    (14.21)
which implies
    U(z) = −K_P Y(z) + [K_I/(z − 1)] E(z) − K_d(1 − z^{−1}) Y(z)
This is plotted in Figure 14.10. We see that only the integration acts on the error
signal; the proportional and derivative actions act only on the plant output. This is
called the velocity-form PID controller. It is the set-point-on-I-only controller
shown in Figure 14.5(c).
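The velocity form can be written directly as an increment on u(k). The sketch below (not part of the original text) applies it to an assumed first-order discrete plant purely for illustration; kp, ki, and kd are illustrative gains.

    kp = 0.5;  ki = 0.2;  kd = 0.1;                    % illustrative digital gains
    r = ones(1,100);  y = zeros(1,101);  u = zeros(1,100);
    for k = 3:100
      du = -kp*(y(k) - y(k-1)) + ki*(r(k) - y(k)) ...
           - kd*(y(k) - 2*y(k-1) + y(k-2));            % only the integral term sees r - y
      u(k) = u(k-1) + du;
      y(k+1) = 0.9*y(k) + 0.1*u(k);                    % assumed plant
    end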
Figure 14.10 Velocity-form PID controller.
If the sampling period is small, then the effects of analog and digital PID con-
trollers will be close; therefore, the tuning methods discussed for the analog case
can be used to tune digital PID controllers. Because the dynamics of industrial
processes are complex and not necessarily linear, no analytical methods are available
to determine the parameters of PID controllers; therefore, their determination involves
trial and error. At present, active research is going on to tune these parameters
automatically. See, for example, References [3, 31].
The Laplace Transform

A.1 DEFINITION
In this appendix, we give a brief introduction to the Laplace transform and discuss
its application in solving linear time-invariant differential equations. The introduc-
tion is not intended to be complete; it covers only the material used in this text.
Consider a function f(t) defined for t ≥ 0. The Laplace transform of f(t),
denoted by F(s), is defined as
    F(s) := ℒ[f(t)] = ∫_{0−}^{∞} f(t) e^{−st} dt    (A.1)
Example A.1.1
Consider f(t) = e^{−at} for t ≥ 0. Its Laplace transform is
    F(s) = ℒ[e^{−at}] = ∫_{0−}^{∞} e^{−at} e^{−st} dt = [−1/(s + a)] e^{−(a+s)t} |_{t=0−}^{t→∞}
         = [−1/(s + a)] [e^{−(a+s)t}|_{t→∞} − e^{−(a+s)t}|_{t=0−}]    (A.2)
         = [−1/(s + a)] [0 − 1] = 1/(s + a)
where we have used e^{−(s+a)t}|_{t→∞} = 0. This holds only if Re s > Re(−a), where
Re stands for the real part. This condition, called the region of convergence, is often
disregarded, however. See Reference [18] for a justification. Thus, the Laplace trans-
form of e^{−at} is 1/(s + a). This transform holds whether a is real or complex.
Consider the two functions defined in Figure A.1(a) and (b). The one in Figure
A.1(a) is a pulse with width ε and height 1/ε. Thus the pulse has area 1 for all
ε > 0. The function in Figure A.1(b) consists of two triangles with total area equal
to 1 for all ε > 0. The impulse or delta function is defined as
    δ(t) = lim_{ε→0} δ_ε(t)
where δ_ε(t) can be either the function in Figure A.1(a) or that in (b). The impulse is
customarily denoted by an arrow, as shown in Figure A.1(c). If the area of the
function in Figure A.1(a) or in (b) is 1, then the impulse is said to have weight 1.
Note that δ(t) = 0 for t ≠ 0. Because δ(0) may assume the value of ∞, if Figure
A.1(a) is used, or 0, if Figure A.1(b) is used, δ(t) is not defined at t = 0.
The impulse has the following properties:
    ∫_{−∞}^{∞} f(t) δ(t) dt = ∫_{−∞}^{∞} f(t) δ(t − 0) dt = f(0)    (A.4)
Figure A.1 (a) Pulse of width ε and height 1/ε. (b) Two triangles with total area 1. (c) Impulse represented by an arrow.
Note that if the lower limit of (A.1) is 0 rather than 0−, then the Laplace transform
of δ(t) could be 0.5 or some other value. If we use 0−, then the impulse will be
included wholly in the Laplace transform and no ambiguity will arise in defining
Δ(s). This is one of the reasons for using 0− as the lower limit of (A.1).
The Laplace transform is defined as an integral. Because the integral is a linear
operator, so is the Laplace transform; that is,
    ℒ[a_1 f_1(t) + a_2 f_2(t)] = a_1 ℒ[f_1(t)] + a_2 ℒ[f_2(t)]
for any constants a_1 and a_2. We list in Table A.1 some of the often used Laplace-
transform pairs.
Table A.1

    f(t)                              F(s)
    δ(t) (impulse)                    1
    1 (unit-step function)            1/s
    t^n (n = positive integer)        n!/s^{n+1}
    t^n e^{−at}                       n!/(s + a)^{n+1}
    sin ωt                            ω/(s² + ω²)
    cos ωt                            s/(s² + ω²)
    e^{−at} sin ωt                    ω/[(s + a)² + ω²]
    e^{−at} cos ωt                    (s + a)/[(s + a)² + ω²]
    f(t) e^{at}                       F(s − a)
The computation of f(t) from its Laplace transform F(s) is called the inverse Laplace
transform. Although f(t) can be computed from
    f(t) = (1/2πj) ∫_{c−j∞}^{c+j∞} F(s) e^{st} ds
the formula is rarely used in engineering. It is much simpler to find the inverse of
F(s) by looking it up in a table. However, before using a table, we must express F(s)
as a sum of terms available in the table.
Consider the Laplace transform
    F(s) = N(s)/D(s) = N(s)/[(s − a)(s − b)² D̄(s)]    (A.6)
where N(s) and D(s) are two polynomials with deg N(s) ≤ deg D(s), where deg
stands for the degree. We assume that D(s) has a simple root at s = a and a repeated
root with multiplicity 2 at s = b, as shown in (A.6). Then F(s) can be expanded as
    F(s) = k_0 + k_a/(s − a) + k_{b1}/(s − b) + k_{b2}/(s − b)²
           + (terms due to the roots of D̄(s))    (A.7)
with
    k_0 = F(∞)    (A.8a)
    k_a = F(s)(s − a) |_{s=a}    (A.8b)
    k_{b2} = F(s)(s − b)² |_{s=b}    (A.8c)
and
    k_{b1} = d/ds [F(s)(s − b)²] |_{s=b}    (A.8d)
This procedure is called partial fraction expansion. Using Table A.1, the inverse
Laplace transform of F(s) is
    f(t) = k_0 δ(t) + k_a e^{at} + k_{b1} e^{bt} + k_{b2} t e^{bt} + (terms due to the roots of D̄(s))
Note that (A.8b) is applicable to any simple root; (A.8c) and (A.8d) are applicable
to any repeated root with multiplicity 2. Formulas for repeated roots with multi-
plicity 3 or higher and alternative formulas are available in Reference [18].
Example A.2.1
Consider
    F(s) = (s³ − 2s + 3)/[s²(s − 2)(s + 1)]
We expand it as
    F(s) = k_1/(s + 1) + k_2/(s − 2) + k_{31}/s + k_{32}/s²
with
    k_1 = F(s)(s + 1) |_{s=−1} = (−1 + 2 + 3)/[1 · (−3)] = −4/3
    k_2 = F(s)(s − 2) |_{s=2} = (8 − 4 + 3)/(4 · 3) = 7/12
    k_{32} = F(s) s² |_{s=0} = (s³ − 2s + 3)/[(s − 2)(s + 1)] |_{s=0} = 3/(−2) = −1.5
    k_{31} = d/ds [F(s) s²] |_{s=0} = 7/4 = 1.75
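The expansion can be checked with the MATLAB command residue (this check is not part of the original text):

    num = [1 0 -2 3];                                  % s^3 - 2s + 3
    den = conv([1 0 0], conv([1 -2], [1 1]));          % s^2 (s - 2)(s + 1)
    [r, p, k] = residue(num, den)
    % residues: 7/12 at s = 2, -4/3 at s = -1, and 1.75, -1.5 for the
    % repeated pole at s = 0 (the rows may appear in a different order)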
Exercise A.2.1
For a more detailed discussion of partial fraction expansion, see Reference [18].
Differentiation in Time
Let F(s) = ℒ[f(t)]. Then
    ℒ[df(t)/dt] = sF(s) − f(0−)    (A.9a)
    ℒ[d²f(t)/dt²] = s²F(s) − s f(0−) − f^{(1)}(0−)    (A.9b)
and, in general,
    ℒ[d^n f(t)/dt^n] = s^n F(s) − s^{n−1} f(0−) − s^{n−2} f^{(1)}(0−) − ⋯ − f^{(n−1)}(0−)
where f^{(i)}(0−) denotes the ith derivative of f(t) at t = 0−. We see that if
f^{(i)}(0−) = 0 for i = 0, 1, 2, …, then differentiation in the time domain is converted
into multiplication by s in the Laplace-transform domain.
Integration in Time
The Laplace transform of the integral of f(t) is given by
    ℒ[∫_{0−}^{t} f(τ) dτ] = F(s)/s
Hence integration in the time domain is converted into division by s in the Laplace-
transform domain.
Final-Value Theorem
Let f(t) be a function defined for t ≥ 0. If f(t) approaches a constant as t → ∞, or
if sF(s) has no pole in the closed right half s-plane, then
    lim_{t→∞} f(t) = lim_{s→0} sF(s)    (A.10)
If sF(s) has a pole in the closed right half s-plane, however, f(t) may approach
infinity as t → ∞ and the equality in (A.10) does not hold.
Initial-Value Theorem
Let f(t) be a function defined for t ≥ 0, and let F(s) be its Laplace transform. It is
assumed that F(s) is a rational function of s. If F(s) = N(s)/D(s) is strictly proper,
that is, deg D(s) > deg N(s), then
    f(0+) = lim_{s→∞} sF(s)
Consider
    F_1(s) = (s + 3)/(s³ + 2s² + 4s + 2)
and
    F_2(s) = (2s² + 3s + ⋯)/(s³ + 2s² + 4s + 2)
They are both strictly proper. Therefore the initial-value theorem can be applied.
The application of the theorem yields
    f_1(0+) = lim_{s→∞} sF_1(s) = 0
and
    f_2(0+) = lim_{s→∞} sF_2(s) = 2
The rational function F_3(s) = (2s + 1)/(s + 1) is not strictly proper. The application
of the initial-value theorem yields
    lim_{s→∞} sF_3(s) = lim_{s→∞} s(2s + 1)/(s + 1) = ∞    (A.11)
A.4 SOLVING LTIL DIFFERENTIAL EQUATIONS

In this section we apply the Laplace transform to solve linear differential equations
with constant coefficients. This is illustrated by examples.
Example A.4.1
Consider the first-order differential equation
    dy(t)/dt + 2y(t) = 3 du(t)/dt + 2u(t)    (A.12)
The problem is to find y(t) due to the initial condition y(0−) = 2 and the input
u(t) = 1, for t ≥ 0. The application of the Laplace transform to (A.12) yields, using
(A.9a),
    sY(s) − y(0−) + 2Y(s) = 3sU(s) − 3u(0−) + 2U(s)    (A.13)
which implies, with y(0−) = 2, u(0−) = 0, and U(s) = 1/s,
    Y(s) = 5/(s + 2) + 2/[s(s + 2)]
which can be simplified, after partial fraction expansion of the second term, to
    Y(s) = 5/(s + 2) − 1/(s + 2) + 1/s = 4/(s + 2) + 1/s
Thus the solution is
    y(t) = 4e^{−2t} + 1    (A.14)
for t ≥ 0.
This example gives another reason for using 0− rather than 0 as the lower limit
in defining the Laplace transform. From (A.14), we have y(0) = 5, which is different
from the initial condition y(0−) = 2. If we had used y(0) = 2, then confusion would
have arisen. In conclusion, the reason for using 0− as the lower limit in (A.1) is
twofold: first, to include impulses at t = 0 in the Laplace transform, and second,
to avoid possible confusion in using initial conditions. If f(t) does not contain im-
pulses at t = 0 and is continuous at t = 0, then there is no difference in using either
0 or 0− in (A.1).
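The partial fraction expansion leading to (A.14) can also be checked with residue (not part of the original text), since Y(s) = 5/(s + 2) + 2/[s(s + 2)] = (5s + 2)/(s² + 2s):

    [r, p] = residue([5 2], [1 2 0])
    % r and p give residue 4 at the pole -2 and residue 1 at the pole 0,
    % so y(t) = 4*exp(-2*t) + 1, which is (A.14)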
Exercise A.4.1
Find the solution of (A.12) due to y(0−) = 2, u(0−) = 0, and u(t) = δ(t).
Example A.4.2
Consider the differential equation
    d²y(t)/dt² + 2 dy(t)/dt + 5y(t) = du(t)/dt    (A.15)
It is assumed that all initial conditions are zero. Find the response y(t) due to
u(t) = e^{−t}, t ≥ 0. The application of the Laplace transform to (A.15) yields
    (s² + 2s + 5)Y(s) = sU(s)
or
    Y(s) = [s/(s² + 2s + 5)] U(s)    (A.16)
With U(s) = 1/(s + 1), this becomes
    Y(s) = s/[(s² + 2s + 5)(s + 1)] = s/[(s + 1)(s + 1 − j2)(s + 1 + j2)]    (A.17)
Thus, the remaining task is to compute the inverse Laplace transform of Y(s). We
expand it as
    Y(s) = k_1/(s + 1) + k_2/(s + 1 − j2) + k_3/(s + 1 + j2)
with
    k_1 = Y(s)(s + 1) |_{s=−1} = (−1)/[(−j2)(j2)] = −1/4
    k_2 = Y(s)(s + 1 − j2) |_{s=−1+j2} = (−1 + j2)/[(j2)(j4)] = (1 − j2)/8
and
    k_3 = Y(s)(s + 1 + j2) |_{s=−1−j2} = (−1 − j2)/[(−j2)(−j4)] = (1 + j2)/8
Note that k_3 = k_2^*. If we write k_2 =: a − jb, then k_3 = a + jb.
In the subsequent development, it is simpler to use the polar form re^{jθ}. The polar form
of a + jb is
    x = re^{jθ}
where r = √(a² + b²) and θ = tan⁻¹(b/a). For example, if x = −2 + j1, then
    x = √(4 + 1) e^{j tan⁻¹[1/(−2)]} = √5 e^{j tan⁻¹(−0.5)}
Because tan(−26.5°) = tan 153.5° = −0.5, one may incorrectly write x as
√5 e^{−j26.5°}. The correct x, however, is
    x = √5 e^{j153.5°}
as can be seen from Figure A.2. In the complex plane, x is a vector with real part
−2 and imaginary part 1 as shown. Thus its phase is 153.5° rather than −26.5°. In
computing the polar form of a complex number, it is advisable to draw a rough graph
to ensure that we obtain the correct phase.
Figure A.2 The complex number x = −2 + j1 in the complex plane.
Now we shall express k_2 and k_3 in polar form as
    k_2 = (√5/8) e^{−j1.1}    and    k_3 = (√5/8) e^{j1.1}
Using the identity e^{jα} + e^{−jα} = 2 cos α, we obtain
    y(t) = −0.25 e^{−t} + (√5/4) e^{−t} cos(2t − 1.1)    (A.19)
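A numerical check of (A.19) (not part of the original text) can be made with lsim, simulating Y(s) = sU(s)/(s² + 2s + 5) with u(t) = e^{−t} and comparing with the closed-form expression:

    t = 0:0.01:10;
    u = exp(-t);
    y = lsim(tf([1 0], [1 2 5]), u, t);                             % simulated response
    yf = -0.25*exp(-t) + (sqrt(5)/4)*exp(-t).*cos(2*t - atan(2));   % (A.19)
    max(abs(y(:) - yf(:)))                                          % small; the two agree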
Exercise A.4.2
Consider a time function f(t) which is zero for t < 0. Then f(t − T) with T ≥ 0 is
a delay of f(t) by T seconds, as shown in Figure A.3. Let F(s) be the Laplace
transform of f(t). Then we have
    ℒ[f(t − T)] = e^{−Ts} F(s)    (A.20)
Figure A.3 A function f(t) and its delayed version f(t − T).
Consider the pulse p(t) shown in Figure A.4. Let q(t) be a unit-step function,
that is, q(t) = 1 for t ≥ 0 and q(t) = 0 for t < 0. Then p(t) can be expressed as
    p(t) = q(t) − q(t − T)    (A.22)
Thus we have
    P(s) = (1 − e^{−Ts})/s
Figure A.4 A pulse p(t) of width T.
Linear Algebraic Equations

B.1 MATRICES
Consider the matrix
    A = [a_11 a_12 ⋯ a_1m; a_21 a_22 ⋯ a_2m; ⋯ ; a_n1 a_n2 ⋯ a_nm] = [a_ij]    (B.1)
All a_ij are real numbers. The matrix has n rows and m columns and is called an
n × m matrix. The element a_ij is located at the ith row and jth column and is called
the (i, j)th element or entry. The matrix is called a square matrix of order n if m =
n, a column vector or simply a column if m = 1, and a row vector or a row if n = 1.
Two n × m matrices are equal if and only if all corresponding elements are the
same. The addition and multiplication of matrices are defined as follows:
    A + B = [a_ij + b_ij]_{n×m}            (A and B both n × m)
    cA = Ac = [c a_ij]_{n×m}               (c a 1 × 1 scalar)
    AB = [ Σ_{k=1}^{m} a_ik b_kj ]_{n×p}   (A n × m, B m × p)
For a square matrix A = [a_ij], the entries a_ii, i = 1, 2, 3, …, on the diagonal
are called the diagonal elements of A. A square matrix is called a lower triangular
matrix if all entries above the diagonal elements are zero, and an upper triangular matrix
if all entries below the diagonal elements are zero. A square matrix is called a
diagonal matrix if all entries, except the diagonal entries, are zero. A diagonal matrix
is called a unit matrix if all diagonal entries equal 1. The transpose of A, denoted
by A′, is obtained by interchanging the rows and columns of A.

B.2 DETERMINANT AND INVERSE

The determinant of an n × n matrix A = [a_ij] is defined as
    det A = Σ (−1)^i a_{1i_1} a_{2i_2} ⋯ a_{ni_n}
where i_1, i_2, …, i_n are all possible orderings of the second subscripts 1, 2, …, n,
and the integer i is the number of interchanges of two digits required to bring the
ordering i_1, i_2, …, i_n into the natural ordering 1, 2, …, n. For example, we have
    det [a_11 a_12; a_21 a_22] = a_11 a_22 − a_12 a_21
and
    det [a_11 0 0; a_21 a_22 0; a_31 a_32 a_33]
        = det [a_11 a_12 a_13; 0 a_22 a_23; 0 0 a_33] = a_11 a_22 a_33    (B.2)
For block triangular matrices we have
    det [A 0; C B] = det [A D; 0 B] = det A · det B    (B.4)
where A and B need not be of the same order. For example, if A is n × n and B is
m × m, then C is m × n and D is n × m. The composite matrices in (B.4) may be
called block triangular matrices.
A square matrix is called nonsingular if its determinant is nonzero, and singular if
it is zero. For nonsingular matrices, we may define the inverse. The inverse of A,
denoted by A⁻¹, has the property A⁻¹A = AA⁻¹ = I. It can be computed by using
the formula
    A⁻¹ = (1/det A) Adj A = (1/det A) [c_ij]    (B.5)
where
    c_ij = (−1)^{i+j} · (determinant of the submatrix of A obtained by deleting its jth row and ith column)
The matrix [c_ij] is called the adjoint of A. For example, we have
    [a_11 a_12; a_21 a_22]⁻¹ = 1/(a_11 a_22 − a_12 a_21) · [a_22 −a_12; −a_21 a_11]
Consider the lower triangular matrix
    A = [a_11 0 0; a_21 a_22 0; a_31 a_32 a_33]
Its inverse is of the form
    B := A⁻¹ = [b_11 0 0; b_21 b_22 0; b_31 b_32 b_33]
By definition, we have BA = I, that is,
    [b_11 0 0; b_21 b_22 0; b_31 b_32 b_33] [a_11 0 0; a_21 a_22 0; a_31 a_32 a_33] = [1 0 0; 0 1 0; 0 0 1]
Equating the (1, 1) elements yields b_11 a_11 = 1. Thus we have b_11 = a_11⁻¹. Equating
the (2, 2) and (3, 3) elements yields b_22 = a_22⁻¹ and b_33 = a_33⁻¹. Equating the
(2, 1) elements yields
    b_21 a_11 + b_22 a_21 = 0
which implies
    b_21 = −a_22⁻¹ a_21 a_11⁻¹
The remaining entries of B can be obtained in the same way. Similarly, we have
    [A D; 0 B]⁻¹ = [A⁻¹ α; 0 B⁻¹]    (B.6)
where Aα + DB⁻¹ = 0. Thus we have
    α = −A⁻¹ D B⁻¹
B.3 THE RANK OF MATRICES

Consider the matrix in (B.1). Let a_ir denote its ith row, that is,
    a_ir := [a_i1 a_i2 ⋯ a_im]
The set of n row vectors in (B.1) is said to be linearly dependent if there exist n real
numbers α_1, α_2, …, α_n, not all zero, such that
    α_1 a_1r + α_2 a_2r + ⋯ + α_n a_nr = 0    (B.7)
For example, consider
    [a_1r; a_2r; a_3r] = [1 2 3 4; 2 −1 0 0; 2 4 6 8]    (B.8)
We have
    1 × a_1r + 0 × a_2r + (−0.5) × a_3r = [0 0 0 0]
Therefore the three row vectors in (B.8) are linearly dependent. Consider
    [a_1r; a_2r; a_3r] = [1 2 3 4; 2 −1 0 0; 1 2 0 0]    (B.9)
We have
    α_1 a_1r + α_2 a_2r + α_3 a_3r
        = [α_1 + 2α_2 + α_3   2α_1 − α_2 + 2α_3   3α_1   4α_1] = [0 0 0 0]    (B.10)
The only α_i meeting (B.10) are α_i = 0 for i = 1, 2, and 3. Therefore, the three row
vectors in (B.9) are linearly independent.
If a set of vectors is linearly dependent, then at least one of them can be ex-
pressed as a linear combination of the others. For example, the first row of (B.8) can
be expressed as
    a_1r = 0 × a_2r + 0.5 × a_3r
This first row is a dependent row. If we delete all dependent rows in a matrix, the
remaining rows will be linearly independent. The maximum number of linearly inde-
pendent rows in a matrix is called the rank of the matrix. Thus, the matrix in (B.8)
has rank 2 and the matrix in (B.9) has rank 3.
The rank of a matrix can also be defined as the maximum number of linearly
independent columns in the matrix. Additionally, it can be defined from deter-
minants as follows: an n × m matrix has rank r if the matrix has an r × r submatrix
with nonzero determinant and all square submatrices of higher order have zero deter-
minants. Of course, these definitions all lead to the same rank. A consequence
of these definitions is that, for an n × m matrix, we have
    rank(A) ≤ min(n, m)    (B.11)
An n × m matrix is said to have a full row rank if it has rank n or, equivalently, all its rows
are linearly independent. A necessary condition for the matrix to have a full row
rank is m ≥ n. Thus, if a matrix has fewer columns than rows, then it cannot have
a full row rank. If a square matrix has a full row rank, then it also has a full column
rank and is called nonsingular.
To conclude this section, we discuss the use of MATLAB to compute the rank
of matrices. Matrices are represented in MATLAB row by row, separated by semi-
colons. For example, the matrix in (B.8) is represented as
    a = [1 2 3 4;2 -1 0 0;2 4 6 8];
The command
    rank(a)
yields 2, the rank of the matrix in (B.8). The command
    rank([1 2 3 4;2 -1 0 0;1 2 0 0])
yields 3, the rank of the matrix in (B.9). Thus the use of the computer software is
very simple.
The number of bits used in digital computers is finite; therefore numerical errors
always occur in computer computation. As a result, two issues are important in
computer computation. The first issue is whether or not the problem is ill conditioned.
For example, we have
    rank [1/3 1; 1 3] = 1        rank [0.33333 1; 1 3] = 2
We see that small changes in parameters yield an entirely different result. Such a
problem is said to be ill conditioned. Thus, the computation of the rank is not a
simple problem on a digital computer. The second issue is the computational method.
A method is said to be numerically stable if the method suppresses numerical
errors in the process of computation. It is numerically unstable if it amplifies
numerical errors and yields erroneous results. The most reliable method of computing
the rank is to use the singular value decomposition. See Reference [15].
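A small illustration of this ill conditioning (not part of the original text): perturbing one entry of an exactly rank-deficient matrix changes the computed rank, and the singular values show how close the perturbed matrix is to being singular.

    A1 = [1 2; 2 4];            % exactly rank 1
    A2 = [1 2; 2 4.00001];      % rank 2, but nearly singular
    rank(A1), rank(A2)
    svd(A2)                     % one singular value is very small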
B.4 LINEAR ALGEBRAIC EQUATIONS

Consider the set of linear algebraic equations
    a_i1 x_1 + a_i2 x_2 + ⋯ + a_im x_m = y_i        i = 1, 2, …, n
where a_ij and y_i are known and x_i are unknown. This set of equations can be written
in matrix form as
    Ax = y    (B.12)
where
    A = [a_11 a_12 ⋯ a_1m; a_21 a_22 ⋯ a_2m; ⋯ ; a_n1 a_n2 ⋯ a_nm] = [a_ij]
    x = [x_1; x_2; ⋯ ; x_m]        y = [y_1; y_2; ⋯ ; y_n]
The set has n equations and m unknowns. A is an n × m matrix, x is an m × 1
vector, and y is an n × 1 vector.
THEOREM B.1
For every y, a solution x exists in Ax = y if and only if A has a full row
rank. •
For a proof of this theorem, see Reference [15]. We use an example to illustrate
its implication. Consider
    [1 2 3 4; 2 −1 0 0; 2 4 6 8] x = y
Although this equation has a solution for some y (for example, x′ = [1 0 2 0] yields
y′ = [7 2 14]), it does not have a solution for y′ = [0 0 1]. In other words, the equation has
solutions for some y, but not for every y. This follows from Theorem B.1 because
the 3 × 4 matrix does not have a full row rank.
Consider again (B.12). It is assumed that A has a full row rank. Then for any
y, there exists an x to meet the equation. Now if n = m, the solution is unique. If
n < m or, equivalently, (B.12) has more unknowns than equations, then solutions
are not unique; (m − n) of the parameters of the solutions can be arbitrarily
assigned. For example, consider
    Ax = [2 1 −4; −1 0 2] [x_1; x_2; x_3] = [3; −1]    (B.13)
The matrix A in (B.13) has a full row rank. It has three unknowns and two equations,
therefore one of x_1, x_2, and x_3 can be arbitrarily assigned. It is important to mention
that not every one of x_1, x_2, or x_3 can be assigned. For example, if we assign x_2 =
3, then (B.13) becomes
    2x_1 + 3 − 4x_3 = 3        −x_1 + 2x_3 = −1
or
    2x_1 − 4x_3 = 0        −x_1 + 2x_3 = −1
The first equation yields x_1 = 2x_3, and substituting this into the second gives
0 = −1, a contradiction. Thus x_2 cannot be assigned arbitrarily; only x_1 or x_3 can.
Exercise B.1
THEOREM B.2
A nontrivial solution exists in Ax = 0 if and only if A is singular. Or,
equivalently, x = 0 is the only solution of Ax = 0 if and only if A is
nonsingular. •
B.5 ELIMINATION AND SUBSTITUTION

There are many ways to compute the solution of Ax = y. We discuss in this section
the method of Gaussian elimination. This method is applicable whether or not A is
nonsingular. It can also be used to compute nontrivial solutions of
Ax = 0. This is illustrated by examples.
Example B.1
Find a solution of
    x_1 + 2x_2 + x_3 = 10    (B.15)
    2x_1 + 5x_2 − 2x_3 = 3    (B.16)
    x_1 + 3x_2 = 0    (B.17)
Subtraction of the product of 2 and (B.15) from (B.16), and subtraction of (B.15)
from (B.17), yield
    x_1 + 2x_2 + x_3 = 10    (B.15′)
    x_2 − 4x_3 = −17    (B.16′)
    x_2 − x_3 = −10    (B.17′)
Subtraction of (B.16′) from (B.17′) then yields
    x_1 + 2x_2 + x_3 = 10    (B.15″)
    x_2 − 4x_3 = −17    (B.16″)
    3x_3 = 7    (B.17″)
This process is called Gaussian elimination. Once this step is completed, the solution
can easily be obtained as follows. From (B.17″), we have
    x_3 = 7/3
Substitution of x_3 into (B.16″) yields
    x_2 = −17 + 4x_3 = −17 + 28/3 = −23/3
Substitution of x_3 and x_2 into (B.15″) yields
    x_1 = 10 − 2x_2 − x_3 = 10 + 46/3 − 7/3 = 23
This process is called back substitution. Thus, the solution of linear algebraic equa-
tions can be obtained by Gaussian elimination and then back substitution.
Example B.2
Find a nontrivial solution, if it exists, of
    x_1 + 2x_2 + 3x_3 = 0    (B.18)
    2x_1 + 5x_2 − 2x_3 = 0    (B.19)
    3x_1 + 7x_2 + x_3 = 0    (B.20)
Subtraction of the product of 2 and (B.18) from (B.19), and subtraction of the product
of 3 and (B.18) from (B.20), yield
    x_1 + 2x_2 + 3x_3 = 0    (B.18′)
    x_2 − 8x_3 = 0    (B.19′)
    x_2 − 8x_3 = 0    (B.20′)
We see that (B.19′) and (B.20′) are identical. In other words, the two unknowns x_2
and x_3 are governed by only one equation. Thus either one can be arbitrarily assigned.
Let us choose x_3 = 1. Then x_2 = 8. The substitution of x_3 = 1 and x_2 = 8 into
(B.18′) yields
    x_1 = −2x_2 − 3x_3 = −16 − 3 = −19
Thus x_1 = −19, x_2 = 8, x_3 = 1 is a nontrivial solution.
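The result can be checked in MATLAB (this check is not part of the original text); the command null returns a basis of the null space, which is a scalar multiple of the nontrivial solution found above.

    A = [1 2 3; 2 5 -2; 3 7 1];       % coefficient matrix of (B.18)-(B.20)
    A*[-19; 8; 1]                     % returns [0; 0; 0]
    null(A)                           % proportional to [-19; 8; 1]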
Gaussian elimination is not a numerically stable method and should not be used
in computer computation. The procedure, however, is useful in hand computation.
In hand calculation, there is no need to eliminate the x_i in the order of x_1, x_2, and x_3.
They should be eliminated in the order which requires less computation. In Example
B.1, for instance, x_3 does not appear in (B.17). Therefore we should use (B.15) and
(B.16) to eliminate x_3 to yield
    4x_1 + 9x_2 = 23
We use this equation and (B.17) to eliminate x_2 to yield x_1 = 23. The substitution
of x_1 into (B.17) yields x_2 = −23/3. The substitution of x_1 and x_2 into (B.15) yields
x_3 = 7/3. This modified procedure is simpler than Gaussian elimination and is
suitable for hand calculation. For a more detailed discussion, see Reference [18].
B.6 GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING

Consider a set of four linear algebraic equations in four unknowns, written in matrix
form as
    [a_ij]_{4×4} x = y    (B.21)
Before carrying out elimination in the first column (corresponding to the elimination
of x_1 from the 2nd, 3rd, and 4th equations), we search for the element with the largest
magnitude in the first column, say a_31, and then interchange the first and third equa-
tions. This step is called partial pivoting. The element a_31, which is now located at
position (1, 1), is called the pivot. We then divide the first equation by a_31 to nor-
malize the pivot to 1. After partial pivoting and normalization, we carry out elimi-
nation to yield
    [1 ā_12 ā_13 ā_14; 0 a_22^(1) a_23^(1) a_24^(1); 0 a_32^(1) a_33^(1) a_34^(1); 0 a_42^(1) a_43^(1) a_44^(1)]
        [x_1; x_2; x_3; x_4] = [ȳ_1; y_2^(1); y_3^(1); y_4^(1)]    (B.22)
In the elimination, the same operations must be applied to the y_i. Next we repeat the
same procedure on the submatrix bounded by the dashed lines. If the element with
the largest magnitude among a_22^(1), a_32^(1), and a_42^(1) is nonzero, we bring it to position
(2, 2), normalize it to 1, and then carry out elimination to yield
    [1 ā_12 ā_13 ā_14; 0 1 ā_23 ā_24; 0 0 a_33^(2) a_34^(2); 0 0 a_43^(2) a_44^(2)]
        [x_1; x_2; x_3; x_4] = [ȳ_1; ȳ_2; y_3^(2); y_4^(2)]    (B.23a)
If the three elements a_22^(1), a_32^(1), and a_42^(1) in (B.22) are all zero, (B.22) becomes
    [1 ā_12 ā_13 ā_14; 0 0 a_23^(1) a_24^(1); 0 0 a_33^(1) a_34^(1); 0 0 a_43^(1) a_44^(1)]
        [x_1; x_2; x_3; x_4] = [ȳ_1; y_2^(1); y_3^(1); y_4^(1)]    (B.23b)
In this case, no elimination is needed and the pivot is zero. We repeat the same
procedure on the submatrix bounded by the solid lines in (B.23). Finally we obtain
    [1 p p p; 0 1 p p; 0 0 1 p; 0 0 0 1] [x_1; x_2; x_3; x_4] = [ȳ_1; ȳ_2; ȳ_3; ȳ_4]    (B.24)
where p denotes possible nonzero elements. The transformation of (B.21) into the
form in (B.24) is called Gaussian elimination with partial pivoting. It is a numeri-
cally stable method. It can easily be programmed on a digital computer and is widely
used in practice. Once (B.21) is transformed into (B.24), the solution x_i can be easily
obtained by back substitution.
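A minimal MATLAB sketch of the procedure (not part of the original text) is given below; it performs the row interchanges and eliminations described above and then back substitution, without the explicit normalization of each pivot to 1. It assumes A is square and nonsingular.

    % x = gausspiv([1 2 1; 2 5 -2; 1 3 0], [10; 3; 0]) returns [23; -7.6667; 2.3333]
    function x = gausspiv(A, y)
      n = length(y);
      for j = 1:n-1
        [~, m] = max(abs(A(j:n, j)));  m = m + j - 1;    % partial pivoting
        A([j m], :) = A([m j], :);     y([j m]) = y([m j]);
        for i = j+1:n
          f = A(i, j)/A(j, j);
          A(i, :) = A(i, :) - f*A(j, :);                 % elimination
          y(i) = y(i) - f*y(j);
        end
      end
      x = zeros(n, 1);
      for i = n:-1:1
        x(i) = (y(i) - A(i, i+1:n)*x(i+1:n))/A(i, i);    % back substitution
      end
    end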
Now we discuss the use of MATLAB to solve a set of linear algebraic equations.
We rewrite (B.15), (B.16), and (B.17) in matrix form as
    [1 2 1; 2 5 −2; 1 3 0] [x_1; x_2; x_3] = [10; 3; 0]
The commands
    a = [1 2 1;2 5 -2;1 3 0]; b = [10;3;0]; a\b
yield x_1 = 23.000, x_2 = −7.6667, x_3 = 2.3333. Thus the use of MATLAB is very
simple.
References
[14] - - - . ''A contribution to the design of linear time-invariant multivariable
systems," Proc. Am. Automatic Control Conf., June, 1983.
[15] - - - . Linear System Theory and Design, New York: Holt, Rinehart and
Winston, 1984.
[16] - - - . Control System Design: Conventional, Algebraic and Optimal Meth-
ods, Stony Brook, NY: Pond Woods Press, 1987.
[17] - - - . "Introduction to the Linear Algebraic Method for Control System
Design," IEEE Control Systems Magazine, Vol. 7, No. 5, pp. 36-42, 1987.
[18] - - - . System and Signal Analysis, New York: Holt, Rinehart and Winston,
1989.
[19] Chen, C. T. and B. Seo. "Applications of the Linear Algebraic Method for
Control System Design," IEEE Control Systems Magazine, Vol. 10, No. 1,
pp. 43-47, 1989.
[20] - - - . ''The Inward Approach in the Design of Control Systems,'' IEEE
Trans. on Education, Vol. 33, pp. 270-278, 1990.
[21] Chen, C. T. and S. Y. Zhang. "Various implementations of implementable
transfer matrices," IEEE Trans. Automatic Control, Vol. AC-30,
pp. 1115-1118, 1985.
[22] Coughanowr, D. R. Process Systems Analysis and Control, 2nd ed., New
York: McGraw-Hill, 1991.
[23] Craig, J. J. Introduction to Robotics, 2nd ed., Reading, MA: Addison-Wesley,
1989.
[24] Daryanani, G. Principles of Active Network Synthesis and Design, New York:
Wiley, 1976.
[25] Doebelin, E. O. Control System Principles and Design, New York: John
Wiley, 1985.
[26] Doyle, J. and G. Stein. "Multivariable feedback design: Concepts for clas-
sical/modern analysis," IEEE Trans. on Automatic Control, Vol. 26, No. 1,
pp. 4-16, 1981.
[27] Evans, W. R. "Control system synthesis by the root locus method," Trans.
AIEE, Vol. 69, pp. 67-69, 1950.
[28] Franklin, G. F., J. D. Powell, and M. L. Workman. Digital Control of Dynamic
Systems, 2nd ed., Reading, MA: Addison-Wesley, 1990.
[29] Franklin, G. F., J. D. Powell, and A. Emami-Naeini. Feedback Control of
Dynamic Systems, Reading, MA: Addison-Wesley, 1986.
[30] Frederick, D. K., C. J. Herget, R. Kool, and C. M. Rimvall. "The extended
list of control software," Electronics, Vol. 61, No. 6, p. 77, 1988.
[31] Gawthrop, P. J. and P. E. Nomikos. "Automatic tuning of commercial PID
controllers for single-loop and multiloop applications,'' IEEE Control Systems
Magazine, Vol. 10, No. 1, pp. 34-42, 1990.
[32] Gayakwad, R. and L. Sokoloff. Analog and Digital Control Systems, Engle-
wood Cliffs, NJ: Prentice-Hall, 1988.
[33] Graham, D. and R. C. Lathrop. "The synthesis of optimum response: Criteria
and standard forms," AIEE, Vol. 72, Pt. II, pp. 273-288, 1953.