Lecture Notes Control System Engineering-II
VEER SURENDRA SAI UNIVERSITY OF TECHNOLOGY BURLA, ODISHA, INDIA
DEPARTMENT OF ELECTRICAL ENGINEERING
CONTROL SYSTEM ENGINEERING-II (3-1-0)
Lecture Notes
Subject Code: CSE-II
For 6th Sem. Electrical Engineering & 7th Sem. EEE Students
DISCLAIMER
COPYRIGHT IS NOT RESERVED BY AUTHORS. AUTHORS ARE NOT RESPONSIBLE
FOR ANY LEGAL ISSUES ARISING OUT OF ANY COPYRIGHT DEMANDS AND/OR
REPRINT ISSUES CONTAINED IN THIS MATERIAL. THIS IS NOT MEANT FOR ANY
COMMERCIAL PURPOSE & ONLY MEANT FOR PERSONAL USE OF STUDENTS
FOLLOWING THE SYLLABUS. READERS ARE REQUESTED TO REPORT ANY TYPING
ERRORS CONTAINED HEREIN.
Department of Electrical Engineering,
CONTROL SYSTEM ENGINEERING-II (3-1-0)
MODULE-I (10 HOURS)
State Variable Analysis and Design: Introduction, Concepts of State, State Variables and State Model,
State Models for Linear Continuous-Time Systems, State Variables and Linear Discrete-Time
Systems, Diagonalization, Solution of State Equations, Concepts of Controllability and Observability,
Pole Placement by State Feedback, Observer based state feedback control.
MODULE-II (10 HOURS)
Introduction of Design: The Design Problem, Preliminary Considerations of Classical Design,
Realization of Basic Compensators, Cascade Compensation in Time Domain(Reshaping the Root
Locus), Cascade Compensation in Frequency Domain(Reshaping the Bode Plot),
Introduction to Feedback Compensation and Robust Control System Design.
Digital Control Systems: Advantages and disadvantages of Digital Control, Representation of
Sampled process, The z-transform, The z-transfer Function. Transfer function Models and dynamic
response of Sampled-data closed loop Control Systems, The Z and S domain Relationship, Stability
Analysis.
MODULE-III (10 HOURS)
Nonlinear Systems: Introduction, Common Physical Non-linearities, The Phase-plane Method: Basic
Concepts, Singular Points, Stability of Nonlinear System, Construction of Phase-trajectories, The
Describing Function Method: Basic Concepts, Derivation of Describing Functions, Stability analysis
by Describing Function Method, Jump Resonance, Signal Stabilization.
Liapunov's Stability Analysis: Introduction, Liapunov's Stability Criterion, The Direct Method of
Liapunov and the Linear System, Methods of Constructing Liapunov Functions for Nonlinear
Systems, Popov's Criterion.
MODULE-IV (10 HOURS)
Optimal Control Systems: Introduction, Parameter Optimization: Servomechanisms, Optimal Control
Problems: State Variable Approach, The State Regulator Problem, The Infinite-time Regulator
Problem, The Output regulator and the Tracking Problems, Parameter Optimization: Regulators,
Introduction to Adaptive Control.
BOOKS
[1]. K. Ogata, "Modern Control Engineering", PHI.
[2]. I.J. Nagrath, M. Gopal, “Control Systems Engineering”, New Age International Publishers.
[3]. J.J.Distefano, III, A.R.Stubberud, I.J.Williams, “Feedback and Control Systems”, TMH.
[4]. K.Ogata, “Discrete Time Control System”, Pearson Education Asia.
MODULE-I
State space analysis.
State space analysis is an excellent method for the design and analysis of control systems.
The conventional, older method for the design and analysis of control systems is the
transfer function method, which has many drawbacks.
Advantages of state variable analysis:
It can be applied to nonlinear systems.
It can be applied to time invariant systems.
It can be applied to multiple input multiple output systems.
It gives an idea about the internal state of the system.
State Variable Analysis and Design
State: The state of a dynamic system is the smallest set of variables, called state variables, such that
the knowledge of these variables at time t = t0 (initial condition), together with the knowledge of the
input for t >= t0, completely determines the behaviour of the system for any time t >= t0.
State vector: If n state variables are needed to completely describe the behaviour of a given system,
then these n state variables can be considered the n components of a vector X. Such a vector is called
a state vector.
State space: The n-dimensional space whose coordinate axes consist of the x1 axis, x2 axis, ..., xn
axis, where x1, x2, ..., xn are state variables, is called a state space.
State Model
Let us consider a multi-input, multi-output system having
r inputs u1(t), u2(t), ..., ur(t)
m outputs y1(t), y2(t), ..., ym(t)
n state variables x1(t), x2(t), ..., xn(t)
Then the state model is given by the state & output equations
Ẋ(t) = AX(t) + BU(t)    (state equation)
Y(t) = CX(t) + DU(t)    (output equation)
where
A is the state matrix of size (n×n)
B is the input matrix of size (n×r)
C is the output matrix of size (m×n)
D is the direct transmission matrix of size (m×r)
X(t) is the state vector of size (n×1)
Y(t) is the output vector of size (m×1)
U(t) is the input vector of size (r×1)
(Block diagram of the linear, continuous time control system represented in state space)
Ẋ(t) = AX(t) + BU(t)
Y(t) = CX(t) + DU(t)
STATE SPACE REPRESENTATION OF nTH ORDER SYSTEMS OF LINEAR
DIFFERENTIAL EQUATIONS IN WHICH THE FORCING FUNCTION DOES NOT
INVOLVE DERIVATIVE TERMS
Consider the following nth order LTI system relating the output y(t) to the input u(t):
d^n y/dt^n + a1 d^(n-1)y/dt^(n-1) + a2 d^(n-2)y/dt^(n-2) + ... + a(n-1) dy/dt + an y = u
Phase variables: The phase variables are defined as those particular state variables which are
obtained from one of the system variables & its (n-1) derivatives. Often the variable used is
the system output, & the remaining state variables are then derivatives of the output.
Let us define the state variables as
x1 = y
x2 = dy/dt = ẋ1
x3 = d²y/dt² = ẋ2
...
xn = d^(n-1)y/dt^(n-1) = ẋ(n-1)
From the above equations we can write
ẋ1 = x2
ẋ2 = x3
...
ẋ(n-1) = xn
ẋn = -an x1 - a(n-1) x2 - ... - a1 xn + u
Writing the above state equation in vector matrix form
Ẋ(t) = AX(t) + Bu(t)
where X = [x1 x2 ... xn]^T is of size (n×1),
A =
[   0      1      0    ...    0   ]
[   0      0      1    ...    0   ]
[   :                         :   ]
[   0      0      0    ...    1   ]
[  -an  -a(n-1) -a(n-2) ... -a1   ]   (n×n)
and B = [0 0 ... 0 1]^T is of size (n×1).
The output equation can be written as
Y(t) = CX(t), where C = [1 0 ... 0] is of size (1×n).
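The phase-variable model can be assembled mechanically from the coefficients a1, ..., an. A small plain-Python sketch (the example coefficients below are assumed purely for illustration):

```python
def phase_variable_model(a):
    """Phase-variable (companion-form) state model for
    y^(n) + a[0]*y^(n-1) + ... + a[n-1]*y = u.
    Returns (A, B, C) as nested lists."""
    n = len(a)
    # A: ones on the superdiagonal, negated coefficients (reversed) in the last row
    A = [[1.0 if j == i + 1 else 0.0 for j in range(n)] for i in range(n)]
    A[n - 1] = [-a[n - 1 - j] for j in range(n)]   # last row: [-a_n ... -a_1]
    B = [[0.0] for _ in range(n - 1)] + [[1.0]]
    C = [[1.0] + [0.0] * (n - 1)]
    return A, B, C

# Assumed example: y''' + 6y'' + 5y' + y = u
A, B, C = phase_variable_model([6.0, 5.0, 1.0])
```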
Example: Direct Derivation of State Space Model (Mechanical Translating)
Derive a state space model for the system shown. The input is fa and the output is y.
We can write free body equations for the system at x and at y.
Freebody Diagram
Equation
There are three energy storage elements, so we expect three state equations. The energy
storage elements are the spring k2, the mass m, and the spring k1. Therefore we choose
as our state variables x (the energy in spring k2 is ½k2x²), the velocity at x (the energy in
the mass m is ½mv², where v is the first derivative of x), and y (the energy in spring k1 is
½k1(y-x)², so we could pick y-x as a state variable, but we'll just use y, since x is already a
state variable; recall that the choice of state variables is not unique). Our state variables
become:
Now we want equations for their derivatives. The equations of motion from the free body
diagrams yield
or
with the input u = fa.
Example: Direct Derivation of State Space Model (Electrical)
Derive a state space model for the system shown. The input is ia and the output is e2.
There are three energy storage elements, so we expect three state equations. Try
choosing i1, i2 and e1 as state variables. Now we want equations for their derivatives. The
voltage across the inductor L2 is e1 (which is one of our state variables),
so our first state variable equation is
If we sum currents into the node labeled n1 we get
This equation has our input (ia), two state variables (iL2 and iL1), and the current
through the capacitor. So from this we can get our second state equation
Our third, and final, state equation we get by writing an equation for the voltage across
L1 (which is e2) in terms of our other state variables.
We also need an output equation:
So our state space representation becomes
State Space to Transfer Function
Consider the state space system:
Now, take the Laplace Transform (with zero initial conditions since we are finding a
transfer function):
We want to solve for the ratio of Y(s) to U(s), so we need to remove Q(s) from the
output equation. We start by solving the state equation for Q(s).
The matrix Φ(s) is called the state transition matrix. Now we put this into the output
equation
Now we can solve for the transfer function:
Note that although there are many state space representations of a given system, all
of those representations will result in the same transfer function (i.e., the transfer
function of a system is unique; the state space representation is not).
Example: State Space to Transfer Function
Find the transfer function of the system with state space representation
First find (sI-A) and then Φ = (sI-A)^-1 (this calculation requires inverting a 3x3
matrix).
Now we can find the transfer function
To make this task easier, MatLab has a command (ss2tf) for converting from state space
to transfer function.
>> % First define state space system
>> A=[0 1 0; 0 0 1; -3 -4 -2];
>> B=[0; 0; 1];
>> C=[5 1 0];
>> D=0;
>> [n,d]=ss2tf(A,B,C,D)
n =
        0        0   1.0000   5.0000
d =
   1.0000   2.0000   4.0000   3.0000
>> mySys_tf=tf(n,d)
Transfer function:
s + 5
----------------------
s^3 + 2 s^2 + 4 s + 3
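As a sanity check, G(s) = C(sI-A)^-1 B + D can also be evaluated numerically at a sample frequency and compared with the derived transfer function. The plain-Python sketch below does this for the same example, solving (sI-A)x = B by Gaussian elimination:

```python
# Numeric spot check of G(s) = C (sI - A)^{-1} B + D against the transfer
# function derived above, for the same example system.
A = [[0, 1, 0], [0, 0, 1], [-3, -4, -2]]
B = [0, 0, 1]
C = [5, 1, 0]
D = 0

def solve3(M, b):
    """Solve the 3x3 (possibly complex) system M x = b by Gaussian elimination
    with partial pivoting."""
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            M[r] = [M[r][k] - f * M[col][k] for k in range(4)]
    x = [0, 0, 0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def G(s):
    """Evaluate C (sI - A)^{-1} B + D at the complex frequency s."""
    sIA = [[s * (i == j) - A[i][j] for j in range(3)] for i in range(3)]
    x = solve3(sIA, B)             # x = (sI - A)^{-1} B
    return sum(C[i] * x[i] for i in range(3)) + D

s = 1 + 1j
g_state = G(s)
g_tf = (s + 5) / (s ** 3 + 2 * s ** 2 + 4 * s + 3)   # transfer function above
```

Both forms agree at any s that is not a pole of the system.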
Transfer Function to State Space
Recall that state space models of systems are not unique; a system has many state space
representations. Therefore we will develop a few methods for creating state space models
of systems.
Before we look at procedures for converting from a transfer function to a state space
model of a system, let's first examine going from a differential equation to state space.
We'll do this first with a simple system, then move to a more complex system that will
demonstrate the usefulness of a standard technique.
First we start with an example demonstrating a simple way of converting from a single
differential equation to state space, followed by a conversion from transfer function to state
space.
Example: Differential Equation to State Space (simple)
Consider the differential equation with no derivatives on the right hand side. We'll use
a third order equation, though it generalizes to nth order in the obvious way.
For such systems (no derivatives of the input) we can choose as our n state variables the
variable y and its first n-1 derivatives (in this case the first two derivatives)
Taking the derivatives we can develop our state space model
Note: For an nth order system the matrices generalize in the obvious way (A has ones above the
main diagonal and the differential equation constants for the last row, B is all zeros with b0 in the
bottom row, C is zero except for the leftmost element which is one, and D is zero).
Repeat Starting from Transfer Function
Consider the transfer function with a constant numerator (note: this is the same system
as in the preceding example). We'll use a third order equation, though it generalizes to
nth order in the obvious way.
For such systems (no derivatives of the input) we can choose as our n state variables the
variable y and its first n-1 derivatives (in this case the first two derivatives)
Taking the derivatives we can develop our state space model (which is exactly the same
as when we started from the differential equation).
Note: For an nth order system the matrices generalize in the obvious way (A has ones
above the main diagonal and the coefficients of the denominator polynomial for the last
row, B is all zeros with b0 (the numerator coefficient) in the bottom row, C is zero except
for the leftmost element which is one, and D is zero).
If we try this method on a slightly more complicated system, we find that it
initially fails (though we can succeed with a little cleverness).
Example: Differential Equation to State Space (harder)
Consider the differential equation with a single derivative on the right hand side.
We can try the same method as before:
The method has failed because there is a derivative of the input on the right hand side,
and that is not allowed in a state space model.
Fortunately we can solve our problem by revising our choice of state variables.
Now when we take the derivatives we get:
The second and third equations are not correct, because ÿ is not one of the state
variables. However we can make use of the fact:
The second state variable equation then becomes
In the third state variable equation we have successfully removed the derivative of the
input from the right side of the third equation, and we can get rid of the ÿ term using the
same substitution we used for the second state variable.
The process described in the previous example can be generalized to systems with
higher order input derivatives but unfortunately gets increasingly difficult as the order
of the derivative increases. When the order of derivatives is equal on both sides, the
process becomes much more difficult (and the variable "D" is no longer equal to
zero). Clearly more straightforward techniques are necessary. Two are outlined
below: one generates a state space model known as the "controllable canonical form"
and the other generates the "observable canonical form" (the meaning of these terms
derives from Control Theory but is not important to us).
Controllable Canonical Form (CCF)
Probably the most straightforward method for converting from the transfer
function of a system to a state space model is to generate a model in "controllable
canonical form." This term comes from Control Theory but its exact meaning is not
important to us. To see how this method of generating a state space model works,
consider the third order transfer function:
We start by multiplying by Z(s)/Z(s) and then solving for Y(s) and U(s) in terms of
Z(s). We also convert back to a differential equation.
We can now choose z and its first two derivatives as our state variables
Now we just need to form the output
From these results we can easily form the state space model:
In this case, the order of the numerator of the transfer function was less than that of
the denominator. If they are equal, the process is somewhat more complex. A result
that works in all cases is given below. For a general nth order transfer function:
the controllable canonical state space model form is
Key Concept: Transfer function to State Space (CCF)
For a general nth order transfer function:
the controllable canonical state space model form is
Observable Canonical Form (OCF)
Another commonly used state variable form is the "observable canonical form."
This term comes from Control Theory but its exact meaning is not important to us.
To understand how this method works consider a third order system with transfer
function:
We can convert this to a differential equation and solve for the highest order
derivative of y:
Now we integrate twice (the reason for this will be apparent soon), and collect terms
according to order of the integral:
Choose the output as our first state variable
Looking at the right hand side of the differential equation we note that y = q1 and we
call the two integral terms q2:
so
This is our first state variable equation.
Now let's examine q2 and its derivative:
Again we note that y = q1 and we call the integral terms q3:
so
This is our second state variable equation.
Now let's examine q3 and its derivative:
This is our third, and last, state variable equation.
Our state space model now becomes:
In this case, the order of the numerator of the transfer function was less than that of
the denominator. If they are equal, the process is somewhat more complex. A result
that works in all cases is given below. For a general nth order transfer function:
the observable canonical state space model form is
Key Concept: Transfer function to State Space (OCF)
For a general nth order transfer function:
the observable canonical state space model form is
Φ(s) = (sI - A)^-1 = Adj(sI - A) / |sI - A|
|sI - A|, when equated to zero, is also known as the characteristic equation.
MATLab Code
Transfer Function to State Space(tf2ss)
Y(s)/U(s) = s / (s^3 + 14s^2 + 56s + 160)
num=[1 0];
den=[1 14 56 160];
[A,B,C,D]=tf2ss(num,den)
A =
-14 -56 -160
1 0 0
0 1 0
B =
1
0
0
C =
0 1 0
D =
0
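The same conversion can be sketched in plain Python. MATLab's tf2ss places the negated denominator coefficients in the first row of A (a top-companion ordering), which is what the construction below reproduces for a proper SISO transfer function:

```python
def tf2ss_ccf(num, den):
    """Convert a SISO transfer function to state space using the same
    top-companion ordering that MATLab's tf2ss produces.
    num, den are coefficient lists in descending powers of s."""
    n = len(den) - 1                                 # system order
    num = [0.0] * (n + 1 - len(num)) + [float(c) for c in num]  # pad numerator
    num = [c / den[0] for c in num]                  # normalize: make den monic
    den = [c / den[0] for c in den]
    d = num[0]                                       # direct feedthrough D
    A = [[-den[j + 1] for j in range(n)]]            # first row: -den coefficients
    A += [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n - 1)]
    B = [[1.0]] + [[0.0] for _ in range(n - 1)]
    C = [[num[j + 1] - d * den[j + 1] for j in range(n)]]
    return A, B, C, d

# Same transfer function as the MATLab example above: s / (s^3+14s^2+56s+160)
A, B, C, D = tf2ss_ccf([1, 0], [1, 14, 56, 160])
```

The matrices returned match the MATLab output shown above.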
Concept of Eigen Values and Eigen Vectors
The roots of characteristic equation that we have described above are known as eigen values
of matrix A.
Now there are some properties related to eigen values, and these properties are written below:
1. Any square matrix A and its transpose A^T have the same eigen values.
2. The sum of the eigen values of any matrix A is equal to the trace of the matrix A.
3. The product of the eigen values of any matrix A is equal to the determinant of the matrix A.
4. If we multiply matrix A by a scalar quantity, then the eigen values also get multiplied by
the same scalar.
5. If we invert the given matrix A, then its eigen values also get inverted.
6. If all the elements of the matrix are real, then the eigen values corresponding to that matrix are
either real or exist in complex conjugate pairs.
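Properties 2 and 3 are easy to verify numerically. For a 2×2 matrix the eigen values are the roots of λ² - (trace)λ + det = 0; the sketch below uses an assumed example matrix:

```python
import math

# Numerical check of eigen value properties 2 and 3 on an assumed 2x2 matrix.
A = [[4.0, 1.0], [2.0, 3.0]]
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Eigen values from the characteristic equation lambda^2 - trace*lambda + det = 0
disc = math.sqrt(trace ** 2 - 4 * det)
lam1, lam2 = (trace + disc) / 2, (trace - disc) / 2
```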
Eigen Vectors
Any non-zero vector mi that satisfies the matrix equation (λiI - A)mi = 0 is called the eigen
vector of A associated with the eigen value λi, where λi, i = 1, 2, 3, ..., n denotes the ith
eigen value of A.
This eigen vector may be obtained by taking cofactors of the matrix (λiI - A) along any row &
transposing that row of cofactors.
Diagonalization
Let m1, m2, ..., mn be the eigenvectors corresponding to the eigen values λ1, λ2, ..., λn
respectively.
Then M = [m1 m2 ... mn] is called the diagonalizing or modal matrix of A.
Consider the nth order MIMO state model
Ẋ(t) = AX(t) + BU(t)
Y(t) = CX(t) + DU(t)
System matrix A is non diagonal, so let us define a new state vector V(t) such that
X(t)=MV(t).
Under this assumption original state model modifies to
V̇(t) = ΛV(t) + B̃U(t)
Y(t) = C̃V(t) + DU(t)
where
Λ = M^-1 A M (a diagonal matrix with the eigen values on the diagonal),
B̃ = M^-1 B,  C̃ = CM
The above transformed state model is in canonical form. The transformation described
above is called a similarity transformation.
If the system matrix A is in companion form & if all its n eigen values are distinct, then the
modal matrix will be a special matrix called the Vandermonde matrix.
   =
1 1
1
1
1
2
3
1
2
1
1
2
2
3
2
2
1
3
1
.
2
1
×
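A small numerical check of diagonalization, on an assumed 2×2 companion-form matrix with distinct eigen values -1 and -2 (for a companion matrix the modal matrix is the Vandermonde matrix built from the eigen values):

```python
# Diagonalization via the modal matrix: M^{-1} A M should come out diagonal.
A = [[0.0, 1.0], [-2.0, -3.0]]      # companion form of s^2 + 3s + 2 (assumed example)
lam = [-1.0, -2.0]                  # its distinct eigen values

M = [[1.0, 1.0], [lam[0], lam[1]]]  # Vandermonde modal matrix, columns [1, lam_i]
detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[M[1][1] / detM, -M[0][1] / detM],
        [-M[1][0] / detM, M[0][0] / detM]]

def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Lambda = matmul(Minv, matmul(A, M))  # similarity transformation M^{-1} A M
```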
State Transition Matrix and Zero State Response
We are here interested in deriving the expressions for the state transition matrix and zero state
response. Again taking the state equations that we have derived above and taking their
Laplace transformation we have,
Now on rewriting the above equation we have
Let [sI - A]^-1 = θ(s); taking the inverse Laplace of the above equation we have θ(t).
The expression θ(t) is known as the state transition matrix (STM), and
L^-1[θ(s)BU(s)] = zero state response.
Now let us discuss some of the properties of the state transition matrix.
1. If we substitute t = 0 in the above equation then we get the identity matrix. Mathematically,
θ(0) = I.
2. If we substitute -t for t in θ(t) then we get the inverse of θ(t). Mathematically,
θ(-t) = [θ(t)]^-1.
3. We also have another important property: [θ(t)]^n = θ(nt).
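Properties 1 and 2 can be checked numerically by computing θ(t) = e^(At) from its power series (the 2×2 matrix A below is an assumed example):

```python
# Numerical check of STM properties using the power series
# e^{At} = I + At + (At)^2/2! + ...
A = [[0.0, 1.0], [-2.0, -3.0]]      # assumed example matrix

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def theta(t, terms=30):
    """State transition matrix e^{At} from a truncated power series."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at I
    term = [[1.0, 0.0], [0.0, 1.0]]     # current series term (At)^k / k!
    for k in range(1, terms):
        At_over_k = [[A[i][j] * t / k for j in range(2)] for i in range(2)]
        term = matmul(term, At_over_k)
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

identity_check = theta(0.0)                       # property 1: theta(0) = I
inverse_check = matmul(theta(0.7), theta(-0.7))   # property 2: theta(t) theta(-t) = I
```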
Computation of STM using the Cayley-Hamilton Theorem
The Cayley-Hamilton theorem states that every square matrix A satisfies its own
characteristic equation.
This theorem provides a simple procedure for evaluating functions of a matrix.
To determine the matrix polynomial
f(A) = c0 I + c1 A + c2 A^2 + ... + cn A^n + c(n+1) A^(n+1) + ...
consider the corresponding scalar polynomial
f(λ) = c0 + c1 λ + c2 λ^2 + ... + cn λ^n + c(n+1) λ^(n+1) + ...
Here A is a square matrix of size (n×n). Its characteristic equation is given by
q(λ) = |λI - A| = λ^n + a1 λ^(n-1) + a2 λ^(n-2) + ... + a(n-1) λ + an = 0
If f(λ) is divided by the characteristic polynomial q(λ), then
f(λ) = q(λ)g(λ) + R(λ)    ... (1)
where R(λ) is the remainder polynomial of the form
R(λ) = α0 + α1 λ + α2 λ^2 + ... + α(n-1) λ^(n-1)    ... (2)
If we evaluate f(λ) at the eigen values λ1, λ2, ..., λn, then q(λi) = 0 and we have from
equation (1)
f(λi) = R(λi);  i = 1, 2, ..., n    ... (3)
The coefficients α0, α1, ..., α(n-1) can be obtained by successively substituting
λ1, λ2, ..., λn into equation (3).
Substituting A for the variable λ in equation (1), we get
f(A) = q(A)g(A) + R(A)
As q(A) = 0,
f(A) = R(A) = α0 I + α1 A + α2 A^2 + ... + α(n-1) A^(n-1)
CONCEPTS OF CONTROLLABILITY & OBSERVABILITY
State Controllability
A system is said to be completely state controllable if it is possible to transfer the
system state from any initial state X(t0) to any desired state X(t) in a specified finite time by a
control vector u(t).
Kalman's test
Consider the nth order multi-input LTI system with an m-dimensional control vector
Ẋ(t) = AX(t) + BU(t)
It is completely controllable if & only if the rank of the composite matrix Qc is n, where
Qc = [B  AB  A^2B  ...  A^(n-1)B]
Observability
A system is said to be completely observable if every state X(t0) can be completely
identified by measurements of the outputs y(t) over a finite time interval (t0 <= t <= t1).
Kalman's test
Consider the nth order multi-input LTI system with an m-dimensional output vector
Ẋ(t) = AX(t) + BU(t)
Y(t) = CX(t) + DU(t)
It is completely observable if & only if the rank of the observability matrix Qo is n, where
Qo = [C^T  A^T C^T  (A^T)^2 C^T  ...  (A^T)^(n-1) C^T]
Principle of Duality: It gives relationship between controllability & observability.
The pair (A, B) being controllable implies that the pair (A^T, B^T) is observable.
The pair (A, C) being observable implies that the pair (A^T, C^T) is controllable.
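Both Kalman tests reduce to rank computations. A plain-Python sketch for the third-order pair used in the pole-placement example later in these notes, with an assumed output matrix C = [1 0 0]:

```python
# Kalman rank tests for the assumed pair
# A = [0 1 0; 0 0 1; -1 -5 -6], B = [0; 0; 1], C = [1 0 0].
A = [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [-1.0, -5.0, -6.0]]
B = [0.0, 0.0, 1.0]
C = [1.0, 0.0, 0.0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def rank(rows, tol=1e-9):
    """Rank of a small matrix (list of rows) by Gaussian elimination."""
    rows = [r[:] for r in rows]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if abs(rows[i][col]) > tol), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and abs(rows[i][col]) > tol:
                f = rows[i][col] / rows[r][col]
                rows[i] = [rows[i][k] - f * rows[r][k] for k in range(len(rows[0]))]
        r += 1
    return r

# Controllability: Qc = [B  AB  A^2B] as columns; rank must equal n = 3
cols = [B, matvec(A, B), matvec(A, matvec(A, B))]
rank_Qc = rank([[cols[j][i] for j in range(3)] for i in range(3)])

# Observability: rows C, CA, CA^2 (computed as A^T acting on C); rank must be 3
AT = [[A[j][i] for j in range(3)] for i in range(3)]
rank_Qo = rank([C, matvec(AT, C), matvec(AT, matvec(AT, C))])
```

Duality can be seen in the code itself: the observability rows of (A, C) are exactly the controllability columns of (A^T, C^T).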
Design of Control System in State Space
Pole placement at State Space
Assumptions:
The system is completely state controllable.
The state variables are measurable and are available for feedback.
The control input is unconstrained.
Objective:
The closed loop poles should lie at μ1, μ2, ..., μn, which are their "desired locations".
Necessary and sufficient condition: The system is completely state controllable.
Consider the system
X
t
= AX
t
+ BU
t
The control vector U is designed in the following state feedback form U =-KX
This leads to the following closed loop system
Ẋ(t) = (A - BK)X(t) = ACL X(t)
The gain matrix K is designed in such a way that
|sI - A + BK| = (s - μ1)(s - μ2) ... (s - μn)
Pole Placement Design Steps: Method 1 (low order systems, n ≤ 3):
Check controllability.
Define K = [k1 k2 k3].
Substitute this gain into the desired characteristic polynomial equation
|sI - A + BK| = (s - μ1)(s - μ2)(s - μ3)
Solve for k1, k2, k3 by equating the like powers of s on both sides.
MATLab Code
Finding State Feedback gain matrix with MATLab
The MATLab function acker is based on Ackermann's formula and works for single input
single output systems only.
The MATLab function place works for single- or multi-input systems.
Example
Consider the system with state equation
Ẋ(t) = AX(t) + BU(t)
where
A = [0 1 0; 0 0 1; -1 -5 -6],  B = [0; 0; 1]
By using state feedback control u = -Kx, it is desired to have the closed loop poles at
μ1 = -2 + j4, μ2 = -2 - j4, μ3 = -10.
Determine the state feedback gain matrix K with MATLab.
Determine the state feedback gain matrix K with MATLab
A=[0 1 0;0 0 1;-1 -5 -6];
B=[0;0;1];
J=[-2+i*4 -2-i*4 -10];
k=acker(A,B,J)
k =
199 55 8
A=[0 1 0;0 0 1;-1 -5 -6];
B=[0;0;1];
J=[-2+i*4 -2-i*4 -10];
k=place(A,B,J)
k =
199.0000 55.0000 8.0000
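The result can be reproduced from Ackermann's formula, K = [0 0 1] Qc^-1 φd(A), where φd(s) = (s+2-j4)(s+2+j4)(s+10) = s^3 + 14s^2 + 60s + 200 is the desired characteristic polynomial. A plain-Python sketch:

```python
# Ackermann's formula K = [0 0 1] Qc^{-1} phi_d(A) for the example above.
A = [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [-1.0, -5.0, -6.0]]
B = [0.0, 0.0, 1.0]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def inv3(M):
    """Inverse of a 3x3 matrix via cofactors (cyclic-index form)."""
    c = [[M[(i + 1) % 3][(j + 1) % 3] * M[(i + 2) % 3][(j + 2) % 3]
          - M[(i + 1) % 3][(j + 2) % 3] * M[(i + 2) % 3][(j + 1) % 3]
          for j in range(3)] for i in range(3)]
    det = sum(M[0][j] * c[0][j] for j in range(3))
    return [[c[j][i] / det for j in range(3)] for i in range(3)]

# Desired poles -2+j4, -2-j4, -10  =>  phi_d(s) = s^3 + 14 s^2 + 60 s + 200
c1, c2, c3 = 14.0, 60.0, 200.0
I = [[float(i == j) for j in range(3)] for i in range(3)]
A2 = matmul(A, A)
A3 = matmul(A2, A)
phi_dA = [[A3[i][j] + c1 * A2[i][j] + c2 * A[i][j] + c3 * I[i][j]
           for j in range(3)] for i in range(3)]

cols = [B, matvec(A, B), matvec(A2, B)]               # Qc = [B AB A^2B]
Qc = [[cols[j][i] for j in range(3)] for i in range(3)]
last_row = inv3(Qc)[2]                                # [0 0 1] Qc^{-1}
K = [sum(last_row[k] * phi_dA[k][j] for k in range(3)) for j in range(3)]
```

This reproduces K = [199 55 8], agreeing with acker and place above.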
State Estimators or Observers
One should note that although state feedback control is very attractive because of the precise
computation of the gain matrix K, implementation of a state feedback controller is possible
only when all state variables are directly measurable with the help of some kind of sensors.
• Due to the excess number of required sensors or the unavailability of states for measurement,
in most practical situations this requirement is not met.
Only a subset of state variables or their combinations may be available for measurement.
Sometimes only the output y is available for measurement.
Hence the need for an estimator or observer is obvious which estimates all state variables
while observing input and output.
Full Order Observer: If the state observer estimates all the state variables, regardless of
whether some are available for direct measurement or not, it is called a full order
observer.
Reduced Order Observer: An observer that estimates fewer than "n" states of the
system is called a reduced order observer.
Minimum Order Observer: If the order of the observer is the minimum possible then it is
called a minimum order observer.
Observer Block Diagram
Design of an Observer
The governing equations for a dynamic system (plant) in state space representation may be
written as:
Ẋ(t) = AX(t) + BU(t)
Y(t) = CX(t)
The governing equation for the observer, based on the block diagram, is shown below. The
hat (^) refers to estimation.
X̂̇ = AX̂ + BU + Ke(Y - Ŷ)
Ŷ = CX̂
Define the error in estimation of the state vector as
e = (X - X̂)
The error dynamics can now be derived from the observer governing equation and the state
space equations for the system as:
ė = (A - KeC)e
The corresponding characteristic equation may be written as:
|sI - (A - KeC)| = 0
You need to design the observer gains such that the desired error dynamics is obtained.
Observer Design Steps: Method 1 (low order systems, n ≤ 3):
Check the observability.
Define Ke = [ke1; ke2; ke3].
Substitute this gain into the desired characteristic polynomial equation
|sI - (A - KeC)| = (s - μ1)(s - μ2)(s - μ3)
Solve for ke1, ke2, ke3 by equating the like powers of s on both sides.
Here μ1, μ2, ..., μn are the desired eigen values of the observer matrix.
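For a low-order plant the observer gains can be read off directly by matching coefficients. The sketch below uses an assumed double-integrator plant, for which A - KeC has characteristic polynomial s² + ke1·s + ke2:

```python
# Observer design (Method 1) on an assumed double-integrator plant:
# A = [[0,1],[0,0]], C = [1 0].  With Ke = [ke1; ke2], the error-dynamics
# matrix A - Ke C has characteristic polynomial s^2 + ke1*s + ke2, so the
# gains are read off the desired polynomial directly.
A = [[0.0, 1.0], [0.0, 0.0]]
C = [1.0, 0.0]

# Desired observer poles both at s = -2  =>  (s+2)^2 = s^2 + 4s + 4
desired = [4.0, 4.0]        # coefficients [of s^1, of s^0]
ke1, ke2 = desired          # by matching like powers of s

Aobs = [[A[0][0] - ke1 * C[0], A[0][1] - ke1 * C[1]],
        [A[1][0] - ke2 * C[0], A[1][1] - ke2 * C[1]]]

# Verify: the 2x2 characteristic polynomial is s^2 - trace*s + det
trace = Aobs[0][0] + Aobs[1][1]
det = Aobs[0][0] * Aobs[1][1] - Aobs[0][1] * Aobs[1][0]
```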
MODEL QUESTIONS
Module-1
Short Questions each carrying Two marks.
1. The system matrix of a continuous time system, described in the state variable form, is
A = [x 0 0; 0 y 1; 0 1 2]
Determine the range of x & y so that the system is stable.
2. For a single input system
Ẋ = AX + BU, Y = CX
A = [0 1; 1 2]; B = [0; 1]; C = [1 1]
Check the controllability & observability of the system.
3. Given the homogeneous state space equation Ẋ = [0 1; 1 2]X,
determine the steady state value x(∞) = lim t→∞ x(t), given the initial state value
X(0) = [10; 10].
4. State Kalman's test for observability.
The figures in the right-hand margin indicate marks.
5. For a system represented by the state equation Ẋ = AX, where
A = [0 1 0; 3 0 2; 12 7 6]
Find the initial condition state vector X(0) which will excite only the mode
corresponding to the eigenvalue with the most negative real part. [10]
6. Write short notes on Properties of state transition matrix. [3.5]
7. Investigate the controllability and observability of the following system:
Ẋ = [1 0; 0 2]X + [0; 1]u; Y = [0 1]X [8]
8. Write short notes on [4×5]
(a) Pole placement by state feedback.
(b) state transition matrix
(c) MIMO systems
(d) hydraulic servomotor
(e) Principle of duality due to Kalman
9. A system is described by the following differential equation. Represent the system in
state space:
d³x/dt³ + 3 d²x/dt² + 4 dx/dt + 4x = u1(t) + 3u2(t) + 4u3(t)
and the outputs are
y1 = 4 dx/dt + 3u1
y2 = d²x/dt² + 4u2 + u3 [8]
10. Find the time response of the system described by the equation
Ẋ(t) = [1 1; 0 2]X + [0; 1]u(t)
X(0) = [1; 0], u(t) = 1, t > 0 [14]
11. (a) Obtain a state space representation of the system
C(s)/U(s) = 10(s+2) / (s³ + 3s² + 5s + 15) [7]
(b) A linear system is represented by
Ẋ = [6 4; 2 0]X + [1; 1]u; Y = [1 0; 1 1]X
(i) Find the complete solution for Y(t) when U(t) = 1(t), x1(0) = 1, x2(0) = 0.
(ii) Draw a block diagram representing the system. [5+3]
12. Discuss the state controllability of the system
[ẋ1; ẋ2] = [3 1; 2 1.5][x1; x2] + [1; 4]u
Prove the conditions used. [3+4]
13. If a continuous-time state equation is represented in discrete form as
X[(K+1)T] = G(T)X(KT) + H(T)U(KT)
deduce the expressions for the matrices G(T) & H(T).
Discretise the continuous-time system described by
Ẋ = [0 1; 0 0]X + [0; 1]u
Assume the sampling period is 2 secs. [5+4]
14. (a) Choosing x1 = current through the inductor and x2 = voltage across the capacitor,
determine the state equation for the system shown in the fig below. [8]
(b) Explain controllability and observability. [8]
15. A linear system is represented by
Ẋ = [6 4; 2 0]X + [1; 1]u
Y = [1 0; 1 1]X
(a) Find the complete solution for y(t) when U(t) = 1(t), x1(0) = 1 and x2(0) = 0.
(b) Determine the transfer function.
(c) Draw a block diagram representing the system. [9+4+3]
16.(a) Derive an expression for the transfer function of a pump controlled hydraulic system.
State the assumption made. [8]
(b) Simulate a pneumatic PID controller and obtain its linearized transfer function. [8]
17. Describe the constructional features of a rate gyro, explain its principle of operation and
obtain its transfer function. [8]
18. (a) Explain how poles of a closed loop control system can be placed at the desired points
on the s plane. [4]
(b) Explain how diagonalisation of a system matrix A helps in the study of controllability
of control systems. [4]
19. Construct the state space model of the system whose signal flow graph is shown in fig 2.
[7]
20. (a)Define state of a system, state variables, state space and state vector. What are the
advantages of state space analysis? [5]
(b) A two input two output linear dynamic system is governed by
Ẋ(t) = [0 1; 2 3]X(t) + [2 1; 0 1]R(t)
Y(t) = [1 0; 1 1]X(t)
(i) Find out the transfer function matrix. [5]
(ii) Assuming X(0) = 0, find the output response Y(t) if [5]
R(t) = [0; 3] for t ≥ 0
21. (a) A system is described by [8]
Ẋ(t) = [4 1 0; 0 3 1; 0 0 2]X(t)
Diagonalise the above system making use of a suitable transformation X = PZ.
(b) Show how you can compute e^(At) using the results of (a). [7]
22. Define controllability and observability of control systems. [4]
23. A feedback system has a closed loop transfer function:
C(s)/R(s) = (10s + 40) / (s³ + s² + 3s)
Construct three different state models, showing a block diagram in each case. [5×3]
24. Explain the method of pole placement by state feedback. Find the matrix K = [k1 k2],
the state feedback gain matrix, for the closed loop poles to be located at -1.8 ± j2.4 for the
original system governed by the state equation:
Ẋ = [0 1; 20.6 0]X + [0; 1]u [6]
25. (a) For a system represented by the state equation
Ẋ = AX(t)
the response is
X(t) = [e^(-2t); -2e^(-2t)] when X(0) = [1; -2]
and X(t) = [e^(-t); -e^(-t)] when X(0) = [1; -1].
Determine the system matrix A and the state transition matrix φ(t). [12]
(b) Prove the non-uniqueness of the state space model. [4]
26. (a) Show the following system is always controllable:
Ẋ = [0 1 0; 0 0 1; -a3 -a2 -a1]X + [0; 0; 1]u
(b) Explain the design of a state observer.
(c) Illustrate and explain pole placement by state feedback. [4+4+4]
MODULE-II
COMPENSATOR DESIGN
Every control system which has been designed for a specific application should
meet certain performance specifications. There are always some constraints which are
imposed on the control system design in addition to the performance specifications. The
choice of a plant depends not only on the performance specifications but also on the
size, weight & cost. Although the designer of the control system is free to choose a new
plant, it is generally not advised due to cost & other constraints. Under these
circumstances it is possible to introduce some kind of corrective sub-systems in order to
force the chosen plant to meet the given specifications. We refer to these sub-systems as
compensators, whose job is to compensate for the deficiency in the performance of the
plant.
REALIZATION OF BASIC COMPENSATORS
Compensation can be accomplished in several ways.
Series or Cascade compensation
The compensator can be inserted in the forward path as shown in the fig below. The transfer
function of the compensator is denoted as Gc(s), whereas that of the original process or
plant is denoted by G(s).
Feedback compensation
Combined Cascade & feedback compensation
A compensator can be an electrical, mechanical, pneumatic or hydraulic type of device.
Mostly electrical networks are used as compensators in control systems. The very
simplest of these are the lead, lag & lead-lag networks.
Lead Compensator
Lead compensators are used to improve the transient response of a system.
Fig: Electric Lead Network
Taking i2 = 0 & applying the Laplace Transform, we get
E2(s)/E1(s) = R2(R1Cs + 1) / (R2R1Cs + R1 + R2)
Let τ = R1C and α = R2/(R1 + R2) < 1; then
E2(s)/E1(s) = α(τs + 1)/(1 + ατs)
Transfer function of Lead Compensator
Fig: S-Plane representation of Lead Compensator
Bode plot for Lead Compensator
Maximum phase lead occurs at ωm = 1/(τ√α).
Let φm = maximum phase lead:
sin φm = (1 - α)/(1 + α),  i.e.  α = (1 - sin φm)/(1 + sin φm)
Magnitude at maximum phase lead: |Gc(jωm)| = 1/√α
Fig: Bode plot of Phase Lead network with amplifier of gain = 1/α
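The relations for ωm and φm can be confirmed numerically. The assumed values α = 0.2 and τ = 1 below are for illustration only:

```python
import math

# Numerical check of the maximum-phase-lead relations for a lead network
# (1 + j*w*tau) / (1 + j*alpha*w*tau), with assumed alpha = 0.2, tau = 1.
alpha, tau = 0.2, 1.0

w_m = 1.0 / (tau * math.sqrt(alpha))          # predicted peak frequency
phi_m = math.asin((1 - alpha) / (1 + alpha))  # predicted peak phase lead (rad)

def phase(w):
    """Phase (rad) contributed by the lead network at frequency w."""
    return math.atan(w * tau) - math.atan(alpha * w * tau)
```

The phase evaluated at w_m equals phi_m and is no smaller than the phase slightly to either side, confirming it is the maximum.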
Lag Compensator
Lag compensators are used to improve the steady state response of a system.
Fig: Electric Lag Network
Taking i2 = 0 & applying the Laplace Transform, we get
E2(s)/E1(s) = (R2Cs + 1) / ((R1 + R2)Cs + 1)
Let τ = R2C and β = (R1 + R2)/R2 > 1; then
E2(s)/E1(s) = (τs + 1)/(1 + βτs)
Transfer function of Lag Compensator
Fig: S-Plane representation of Lag Compensator
Bode plot for Lag Compensator
Maximum phase lag occurs at ωm = 1/(τ√β).
Let φm = maximum phase lag:
sin φm = (1 - β)/(1 + β),  i.e.  β = (1 - sin φm)/(1 + sin φm)
Fig: Bode plot of Phase Lag network
Cascade compensation in Time domain
Cascade compensation in the time domain is conveniently carried out by the root-locus
technique. In this method of compensation, the original design specifications on dynamic
response are converted into the damping ratio ζ and natural frequency ω_n of a pair of
desired complex conjugate closed-loop poles, based on the assumption that the system is
dominated by these two complex poles and therefore its dynamic behaviour can be
approximated by that of a second-order system.
A compensator is now designed so that the least damped complex poles of the
resulting transfer function correspond to the desired dominant poles, and all other closed-loop
poles are located very close to open-loop zeros or relatively far away from the jω-axis.
This ensures that the poles other than the dominant poles make a negligible contribution to
the system dynamics.
Lead Compensation
Consider a unity feedback system with an unalterable forward-path transfer function
G(s), and let the dynamic response specifications be translated into a desired location S_d
for the dominant complex closed-loop poles.
If the angle criterion at S_d is not met, i.e. ∠G(S_d) ≠ ±180°, the uncompensated root
locus with variable open-loop gain will not pass through the desired root location,
indicating the need for compensation.
The lead compensator D(s) has to be designed so that the compensated root locus
passes through S_d. In terms of the angle criterion this requires that
∠D(S_d) + ∠G(S_d) = ±180°
i.e. ∠D(S_d) = φ = ±180° − ∠G(S_d)
Thus, for the root locus of the compensated system to pass through the desired root
location, the lead compensator pole-zero pair must contribute an angle φ at S_d.
For a given angle φ required of the lead compensation there is no unique location for the
pole-zero pair. The best compensator pole-zero location is the one which gives the
largest value of a, where a = z_c/p_c.
Fig: Angle contribution of Lead compensator
The compensator zero is located by drawing a line from S_d making an angle ψ with the
line S_d–O. The compensator pole is then located by drawing a line making the further
angle φ required to be contributed at S_d by the pole-zero pair. From the geometry of the
figure, with l = |S_d| and θ = ∠S_d,

z_c / sin ψ = l / sin(θ − ψ)  ⟹  z_c = l sin ψ / sin(θ − ψ)

and, from the bigger triangle,

p_c / sin(ψ + φ) = l / sin(θ − ψ − φ)  ⟹  p_c = l sin(ψ + φ) / sin(θ − ψ − φ)

a = z_c/p_c = [sin ψ sin(θ − ψ − φ)] / [sin(θ − ψ) sin(ψ + φ)]

To find the ψ that maximizes a, set da/dψ = 0, which gives

ψ = ½(θ − φ)
Though the above method of locating the lead compensator pole-zero pair yields the largest
value of a, it does not guarantee the dominance of the desired closed-loop poles in the
compensated root locus. The dominance condition must be checked before completing the
design. With the compensator pole-zero pair so located, the system gain at S_d is computed to
determine the error constant. If the value of the error constant so obtained is unsatisfactory,
the above procedure is repeated after readjusting the compensator pole-zero location while
keeping the angle contribution fixed at φ.
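The maximizing condition ψ = ½(θ − φ) can be verified by brute force. The short Python sketch below (θ = 120° and φ = 30° are illustrative values, not from the notes) scans ψ and checks that a = z_c/p_c peaks at (θ − φ)/2 = 45°:

```python
import math

def a_ratio(psi, theta, phi):
    # a = zc/pc as a function of psi (all angles in radians)
    return (math.sin(psi) * math.sin(theta - psi - phi)) / \
           (math.sin(theta - psi) * math.sin(psi + phi))

theta = math.radians(120)   # angle of Sd (illustrative)
phi   = math.radians(30)    # required lead angle (illustrative)

# Coarse numerical search over psi = 1.00 deg ... 79.99 deg
best = max((a_ratio(math.radians(p / 100), theta, phi), p / 100)
           for p in range(100, 8000))
psi_star = best[1]          # maximizing psi, in degrees

# The analytic optimum is psi = (theta - phi)/2 = 45 deg here
assert abs(psi_star - 45.0) < 0.1
```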
Lag Compensation
Consider a unity feedback system with forward-path transfer function

G(s) = K ∏_{i=1}^{m} (s + z_i) / ∏_{j=m+1}^{n} (s + p_j)

At a certain value of K this system has a satisfactory transient response, i.e. its root-locus plot
passes through (or close to) the desired closed-loop pole location S_d. It is required to
improve the system error constant to a specified value K_e^c without
damaging its transient response. This requires that after compensation the root locus should
continue to pass through S_d while the error constant at S_d is raised to K_e^c. To accomplish
this, consider adding a lag compensator pole-zero pair with the zero to the left of the pole. If this
pole-zero pair is located close to each other, it will contribute a negligible angle at S_d, such
that S_d continues to lie on the root locus of the compensated system.
Fig:Locating the Lag Compensator Pole-zero
As seen from the figure above, apart from being close to each other, the pole-zero pair is
also located close to the origin, for reasons which will become obvious from the discussion below.
The gain of the uncompensated system at S_d is given by

K(S_d) = ∏_{j=m+1}^{n} |S_d + p_j| / ∏_{i=1}^{m} |S_d + z_i|

For the compensated system the gain at S_d is given by

K_c(S_d) = [∏_{j=m+1}^{n} |S_d + p_j| / ∏_{i=1}^{m} |S_d + z_i|] · |S_d + p_c| / |S_d + z_c|

Since the compensator pole and zero are located close to each other, they are nearly
equidistant from S_d, i.e. |S_d + z_c| ≈ |S_d + p_c|, so K_c(S_d) ≈ K(S_d).
The error constant for the compensated system is given by

K_e^c = K_c(S_d) (z_c/p_c) ∏_{i=1}^{m} z_i / ∏_{j=m+1}^{n} p_j = (z_c/p_c) K_e

where K_e is the error constant at S_d for the uncompensated system and K_e^c is the error
constant at S_d for the compensated system. Thus

β = z_c/p_c = K_e^c / K_e   ... (1)
The parameter β of the lag compensator is thus nearly equal to the ratio of the specified
error constant to the error constant of the uncompensated system.
Any value of β = z_c/p_c > 1, with z_c and p_c close to each other, can be realized by
keeping the pole-zero pair close to the origin.
Since the lag compensator does contribute a small negative angle at S_d, the actual error
constant will fall somewhat short of the specified value if the β obtained from equation (1) is
used. Hence for design purposes β is chosen somewhat larger than that given by
equation (1).
The effect of the small lag angle is to give the closed-loop pole S_d with the specified ζ
but a slightly lower ω_n. This can be anticipated and counteracted by taking the ω_n of S_d to
be somewhat larger than the specified value.
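The two approximations underlying the design can be spot-checked numerically. In the Python sketch below, a lag pair placed near the origin (z_c = 0.1, p_c = 0.01, so β = 10, with S_d = −2 + j2 — all illustrative values, not from the notes) contributes only a small negative angle at S_d and leaves the gain there essentially unchanged:

```python
import cmath, math

Sd = complex(-2.0, 2.0)      # desired dominant pole (illustrative)
zc, pc = 0.1, 0.01           # lag pair near the origin, beta = zc/pc = 10

# Angle contributed at Sd by the pole-zero pair (degrees)
angle = math.degrees(cmath.phase(Sd + zc) - cmath.phase(Sd + pc))
# Magnitude ratio |Sd + zc| / |Sd + pc| at Sd
ratio = abs(Sd + zc) / abs(Sd + pc)

assert -2.0 < angle < 0.0        # small negative angle: Sd stays on the locus
assert abs(ratio - 1.0) < 0.05   # nearly equidistant: gain at Sd unchanged
```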
Cascade compensation in Frequency domain
Lead Compensation
Procedure of Lead Compensation
Step 1: Determine the value of loop gain K to satisfy the specified error constant. Usually
the error constant (K_p, K_v or K_a) and the phase margin are the specifications given.
Step 2: For this value of K draw the Bode plot and determine the phase margin of the
system.
Step 3: Let
φ_s = specified phase margin,
φ₁ = phase margin of the uncompensated system (found from the Bode plot drawn),
ε = margin of safety (since the crossover frequency may increase due to compensation).
ε is the unknown reduction in the phase angle ∠G(jω) on account of the increase in
crossover frequency. A guess is made on the value of ε depending on the slope of the
dB–log ω plot of the uncompensated system in this region.
For a slope of −40 dB/decade, ε = 10° is a good guess. The guess value may have
to be as high as 15°–20° for a slope of −60 dB/decade.
Phase lead required: φ_l = φ_s − φ₁ + ε
Step 4: Let φ_m = φ_l and determine
α = (1 − sin φ_m)/(1 + sin φ_m)
If φ_m > 60°, two identical networks, each contributing a maximum lead of φ_m/2, are used.
Step 5: From the Bode plot, find the frequency ω_m at which the uncompensated system has a
gain of −10 log(1/α) dB. Take ω₂ = ω_m = crossover frequency of the compensated system.
Step 6: The corner frequencies of the network are calculated as
1/τ = ω_m √α  and  1/(ατ) = ω_m/√α
The transfer function of the compensated system with the lead network is
G_c(s)G(s) = [(s + 1/τ)/(s + 1/(ατ))] G(s)
Step 7: Draw the magnitude and phase plots of the compensated system and check the
resulting phase margin. If the phase margin is still low, raise the value of ε and repeat the
procedure.
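The steps above can be sketched programmatically. The Python fragment below uses G(s) = 10/(s(s + 1)) as a hypothetical plant, a 45° phase-margin specification and ε = 10° (none of these numbers come from the notes), and walks Steps 2-6 with a simple bisection in place of reading values off the Bode plot:

```python
import math

# Hypothetical uncompensated open loop: G(s) = K/(s(s+1)), K = 10
K = 10.0
mag   = lambda w: K / (w * math.sqrt(1 + w * w))     # |G(jw)|
phase = lambda w: -90.0 - math.degrees(math.atan(w)) # angle of G(jw), deg

def solve(f, lo, hi, tol=1e-9):
    # Bisection for f(w) = 0 on [lo, hi]; f is monotone decreasing here
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0: hi = mid
        else: lo = mid
    return 0.5 * (lo + hi)

# Step 2: phase margin of the uncompensated system
wg = solve(lambda w: mag(w) - 1.0, 0.1, 100.0)   # gain crossover
pm = 180.0 + phase(wg)

# Step 3: required lead for a 45 deg spec with 10 deg safety margin
phi_m = math.radians(45.0 - pm + 10.0)
# Step 4: alpha from the maximum-lead formula
alpha = (1 - math.sin(phi_m)) / (1 + math.sin(phi_m))
# Step 5: new crossover where |G| = sqrt(alpha), i.e. -10*log10(1/alpha) dB
wm = solve(lambda w: mag(w) - math.sqrt(alpha), 0.1, 100.0)
# Step 6: corner frequencies of the lead network
w1, w2 = wm * math.sqrt(alpha), wm / math.sqrt(alpha)

assert pm < 45.0        # uncompensated PM falls short of the spec
assert 0 < alpha < 1    # valid lead network
assert w1 < wm < w2     # wm is the geometric mean of the corners
```

Step 7 would then re-evaluate the compensated phase margin and, if needed, repeat with a larger ε.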
Lag Compensation
Procedure of Lag Compensation
Step 1: Determine the value of loop gain K to satisfy the specified error constant.
Step 2: For this value of K draw the Bode plot and determine the phase margin of the
system.
Step 3: Let
φ_s = specified phase margin,
φ₁ = phase margin of the uncompensated system (found from the Bode plot drawn),
ε = margin of safety (about 10°).
For a suitable ε find φ₂ = φ_s + ε, where φ₂ is measured above the −180° line.
Step 4: Find the frequency ω₂ at which the uncompensated system makes a phase-margin
contribution of φ₂.
Step 5: Measure the gain of the uncompensated system at ω₂ and find β from the equation
|G(jω₂)| in dB = 20 log β
Step 6: Choose the upper corner frequency 1/τ of the network one octave to one decade
below ω₂ (i.e. ω₂/10 ≤ 1/τ ≤ ω₂/2).
Step 7: τ and β are thus determined and can be used to find the transfer function of the lag
compensator
G_c(s) = (1/β) (s + 1/τ)/(s + 1/(βτ))
Compensated transfer function: G_c(s)G(s)
Draw the Bode plot of the compensated system and check whether the given specifications
are met.
MATLAB Code
Plotting the root locus with MATLAB (rlocus)
Consider a unity-feedback control system with the following feedforward transfer function:
G(s) = K / (s(s + 1)(s + 2))
Using MATLAB, plot the root locus.
G(s) = K / (s(s + 1)(s + 2)) = K / (s³ + 3s² + 2s)
num=[1];
den=[1 3 2 0];
h = tf(num,den);
rlocus(h)
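As a cross-check of what the root-locus plot shows, the closed-loop poles are the roots of s³ + 3s² + 2s + K = 0. A Python sketch using numpy (the specific gains tested are arbitrary) confirms that the locus crosses the jω-axis at K = 6, s = ±j√2:

```python
import numpy as np

# Characteristic equation 1 + K/(s(s+1)(s+2)) = 0  ->  s^3 + 3s^2 + 2s + K = 0
def closed_loop_poles(K):
    return np.roots([1, 3, 2, K])

# For small K all closed-loop poles lie in the left half plane ...
assert max(p.real for p in closed_loop_poles(1.0)) < 0
# ... at K = 6 the locus crosses the jw-axis at s = +/- j*sqrt(2) ...
crossing = closed_loop_poles(6.0)
assert min(abs(p - 1j * np.sqrt(2)) for p in crossing) < 1e-6
# ... and beyond K = 6 the system is unstable.
assert max(p.real for p in closed_loop_poles(10.0)) > 0
```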
Plotting the Bode diagram with MATLAB (bode)
Consider the following transfer function:
G(s) = 25 / (s² + 4s + 25)
Plot the Bode diagram for this transfer function.
num=[25];
den=[1 4 25];
bode(num,den)
grid on
Digital Control Systems
Sampling operation in sampled data and digital control system is used to model either the
sample and hold operation or the fact that the signal is digitally coded. If the sampler is used
to represent S/H (Sample and Hold) and A/D (Analog to Digital) operations, it may involve
delays, finite sampling duration and quantization errors. On the other hand if the sampler is
used to represent digitally coded data the model will be much simpler. Following are two
popular sampling operations:
1. Single rate or periodic sampling
2. Multi-rate sampling
We would limit our discussions to periodic sampling only.
1.1 Finite pulse width sampler
In general, a sampler is the one which converts a continuous time signal into a pulse
modulated or discrete signal. The most common type of modulation in the sampling and hold
operation is the pulse amplitude modulation.
The symbolic representation, block diagram and operation of a sampler are shown in
Figure 1. The pulse duration is p second and sampling period is T second. Uniform rate
sampler is a linear device which satisfies the principle of superposition. As in Figure 1, p(t) is
a unit pulse train with period T, where u_s(t) represents the unit step function. Assuming
that the leading edge of the pulse at k = 0 coincides with t = 0, p(t) can be written as
p(t) = Σ_{k=−∞}^{∞} [u_s(t − kT) − u_s(t − kT − p)]
Figure : Finite pulse width sampler : (a) Symbolic representation (b) Block diagram (c) Operation
According to Shannon's sampling theorem, "if a signal contains no frequency higher
than ω_c rad/sec, it is completely characterized by the values of the signal measured at
instants of time separated by T = π/ω_c sec."
The sampling rate should be greater than the Nyquist rate, which is twice the highest
frequency component of the original signal, in order to avoid aliasing.
If the sampling rate is less than twice the input frequency, the output frequency will be
different from the input, which is known as aliasing. The output frequency in that case is
called the alias frequency, and the period is referred to as the alias period.
The overlapping of the high-frequency components with the fundamental component in the
frequency spectrum is sometimes referred to as folding, and the frequency ω_s/2 is often
known as the folding frequency. The frequency ω_c is called the Nyquist frequency.
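Aliasing is easy to demonstrate numerically. In the Python sketch below (illustrative numbers: f_s = 10 Hz, input at 9 Hz), the samples of the 9 Hz sine are indistinguishable from those of a 1 Hz sine of opposite sign, since 9 Hz folds down to |9 − f_s| = 1 Hz:

```python
import math

fs = 10.0              # sampling frequency, Hz (illustrative)
T  = 1.0 / fs
f_in = 9.0             # input above fs/2 -> aliases to |f_in - fs| = 1 Hz

for k in range(50):
    x  = math.sin(2 * math.pi * f_in * k * T)   # sampled 9 Hz sine
    xa = -math.sin(2 * math.pi * 1.0 * k * T)   # 1 Hz alias (sign flipped)
    assert abs(x - xa) < 1e-9                   # identical sample sequences
```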
A low sampling rate normally has an adverse effect on the closed loop stability. Thus, often
we might have to select a sampling rate much higher than the theoretical minimum.
Ideal Sampler : In case of an ideal sampler, the carrier signal is replaced by a train of unit
impulses as shown in Figure 2. The sampling duration p approaches 0, i.e., its operation is
instantaneous.
The output of an ideal sampler can be expressed as
f*(t) = Σ_{k=0}^{∞} f(kT) δ(t − kT)
One should remember that practically the output of a sampler is always followed by a hold
device, which is the reason behind the name "sample and hold" device. Now, the output of a
hold device will be the same regardless of the nature of the sampler, and the attenuation
factor p can be dropped in that case. Thus the sampling process can always be
approximated by an ideal sampler or impulse modulator.
Z- Transform
Let the output of an ideal sampler be denoted by f*(t). Its Laplace transform is

F*(s) = L[f*(t)] = Σ_{k=0}^{∞} f(kT) e^{−kTs}

If we substitute z = e^{Ts}, then we get F(z), the Z-transform of f(t) at the sampling
instants kT.
Z-transforms of some elementary functions
Unit step function is defined as f(t) = 1 for t ≥ 0 and f(t) = 0 for t < 0. Assuming that the
function is continuous from the right, the Z-transform is
F(z) = Σ_{k=0}^{∞} z^{−k} = 1/(1 − z^{−1}) = z/(z − 1)
The above series converges if |z| > 1.
Unit ramp function is defined as f(t) = t for t ≥ 0, so that f(kT) = kT. The Z-transform is
F(z) = T Σ_{k=0}^{∞} k z^{−k} = Tz/(z − 1)²
The above series converges if |z| > 1.
For a polynomial function f(k) = a^k, k ≥ 0, the Z-transform is
F(z) = Σ_{k=0}^{∞} a^k z^{−k} = z/(z − a)
with ROC |z| > |a|.
The exponential function is defined as f(t) = e^{−at}, t ≥ 0, for which f(kT) = e^{−akT} and
F(z) = z/(z − e^{−aT})
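Each closed form above can be checked against a partial sum of its defining series at a point inside the region of convergence. A small Python sketch (z = 2, T = 1 and a = 0.5 chosen arbitrarily):

```python
# Partial sums of the defining series versus the closed forms,
# evaluated inside the region of convergence (z = 2 here).
z, T, N = 2.0, 1.0, 200

step_series = sum(z ** -k for k in range(N))
assert abs(step_series - z / (z - 1)) < 1e-12            # z/(z-1)

ramp_series = sum(k * T * z ** -k for k in range(N))
assert abs(ramp_series - T * z / (z - 1) ** 2) < 1e-12   # Tz/(z-1)^2

a = 0.5
geo_series = sum(a ** k * z ** -k for k in range(N))
assert abs(geo_series - z / (z - a)) < 1e-12             # z/(z-a), |z|>|a|
```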
Properties of Z-transform
Inverse Z-transforms
f(t) is the continuous-time function whose Z-transform is F(z). The inverse transform
is not necessarily equal to f(t); rather, it is equal to f(kT), which equals f(t) only at the
sampling instants. Once f(t) is sampled by an ideal sampler, the information between
the sampling instants is totally lost and we cannot recover the actual f(t) from F(z).
The inverse transform can be obtained by using
1. Partial fraction expansion
2. Power series
3. The inversion formula.
The inverse Z-transform (inversion) formula is given as
f(kT) = (1/2πj) ∮_Γ F(z) z^{k−1} dz
where Γ is a closed contour enclosing all the poles of F(z)z^{k−1}.
MATLAB code to obtain the inverse Z-transform (filter)
Example
Obtain the inverse Z-transform of
X(z) = z(z + 2)/(z − 1)²
X(z) can be written as
X(z) = (z² + 2z)/(z² − 2z + 1)
num=[1 2 0];
den=[1 -2 1];
u=[1 zeros(1,30)];%If the values of x(k) for k=0,1,2,....,30 are desired
filter(num,den,u)
ans =
Columns 1 through 15
1 4 7 10 13 16 19 22 25 28 31 34 37 40 43
Columns 16 through 30
46 49 52 55 58 61 64 67 70 73 76 79 82 85 88
Column 31
91
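The same computation can be reproduced outside MATLAB with scipy.signal.lfilter, which implements the identical difference-equation filtering. The recovered sequence is x(k) = 3k + 1, matching the 1, 4, 7, ... output above:

```python
import numpy as np
from scipy.signal import lfilter

num = [1, 2, 0]                # z^2 + 2z
den = [1, -2, 1]               # z^2 - 2z + 1
u = np.r_[1.0, np.zeros(30)]   # Kronecker delta input, k = 0..30

x = lfilter(num, den, u)
# The inverse transform is the arithmetic sequence x(k) = 3k + 1
assert np.allclose(x, 3 * np.arange(31) + 1)
```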
Application of Z-transform in solving Difference Equation
One of the most important applications of Z-transform is in the solution of linear difference
equations. Let us consider that a discrete time system is described by the following difference
equation.
The initial conditions are y(0) = 0 and y(1) = 0.
We have to find the solution y(k) for k > 0.
Taking z-transform on both sides of the above equation:
Using partial fraction expansion:
To emphasize the fact that y(k) = 0 for k < 0, it is a common practice to write the solution
as y(k)u_s(k), where u_s(k) is the unit step sequence.
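As a concrete illustration of the technique (the notes' own difference equation is not reproduced in this copy, so the example below is hypothetical), take y(k+2) + 3y(k+1) + 2y(k) = u(k) with y(0) = y(1) = 0 and a unit-step input. Partial-fraction expansion of Y(z) gives the closed form y(k) = 1/6 − (1/2)(−1)^k + (1/3)(−2)^k, which direct recursion confirms:

```python
from fractions import Fraction as F

# Hypothetical example:
#   y(k+2) + 3 y(k+1) + 2 y(k) = u(k),  y(0) = y(1) = 0,  u(k) = 1 for k >= 0
# Closed form obtained from partial fractions of Y(z):
closed = lambda k: F(1, 6) - F(1, 2) * (-1) ** k + F(1, 3) * (-2) ** k

# Cross-check by running the recursion y(k+2) = u(k) - 3 y(k+1) - 2 y(k)
y = [F(0), F(0)]
for k in range(30):
    y.append(1 - 3 * y[k + 1] - 2 * y[k])

assert all(y[k] == closed(k) for k in range(32))
```

Exact rational arithmetic (Fraction) is used so the comparison is free of rounding error.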
Relationship between s-plane and z-plane
In the analysis and design of continuous time control systems, the pole-zero configuration of
the transfer function in s-plane is often referred. We know that:
Left half of the s-plane: stable region.
Right half of the s-plane: unstable region.
For relative stability again the left half is divided into regions where the closed loop transfer
function poles should preferably be located.
Similarly the poles and zeros of a transfer function in z-domain govern the performance
characteristics of a digital system.
One of the properties of F*(s) is that it has an infinite number of poles, located periodically
with intervals of ±jmω_s, m = 0, 1, 2, ..., in the s-plane, where ω_s is the sampling frequency
in rad/sec.
If the primary strip is considered, the path, as shown in Figure below, will be mapped into a unit
circle in the z-plane, centered at the origin.
Figure : Primary and complementary strips in s-plane
The mapping is shown in Figure below.
Figure : Mapping of primary strip in z-plane
Since e^{(s ± jmω_s)T} = e^{sT}, where m is an integer, all the complementary strips will also
map into the unit circle.
Mapping guidelines
1. All the points in the left half s-plane correspond to points inside the unit circle in z-plane.
2. All the points in the right half of the s-plane correspond to points outside the unit circle.
3. Points on the jω axis in the s-plane correspond to points on the unit circle |z| = 1 in the
z-plane.
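The mapping guidelines follow directly from z = e^{sT} and can be spot-checked in a few lines of Python (T = 0.1 and the sample points are arbitrary):

```python
import cmath, math

T  = 0.1                       # sampling period (illustrative)
ws = 2 * math.pi / T           # sampling frequency, rad/s

def s_to_z(s):
    return cmath.exp(s * T)    # z = e^{sT}

# Left-half-plane point -> inside the unit circle
assert abs(s_to_z(complex(-1.0, 5.0))) < 1.0
# Right-half-plane point -> outside the unit circle
assert abs(s_to_z(complex(2.0, -3.0))) > 1.0
# jw-axis point -> on the unit circle
assert abs(abs(s_to_z(complex(0.0, 7.0))) - 1.0) < 1e-12
# Complementary strip: shifting s by j*ws leaves z unchanged
s = complex(-0.5, 1.0)
assert abs(s_to_z(s) - s_to_z(s + 1j * ws)) < 1e-9
```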
Pulse Transfer Function
Pulse transfer function relates Z-transform of the output at the sampling instants to the Z-
transform of the sampled input.
When the same system is subject to a sampled data or digital signal r*(t), the corresponding
block diagram is given in Figure 1 .
Figure 1: Block diagram of a system subject to a sampled input
The output of the system is C(s) = G(s)R*(s). The transfer function of the above system is
difficult to manipulate because it contains a mixture of analog and digital components.
Thus, for ease of manipulation, it is desirable to express the system characteristics by a
transfer function that relates r*(t) to c*(t), a fictitious sampler output, as shown in Figure 1.
One can then write:

C*(s) = Σ_{k=0}^{∞} c(kT) e^{−kTs}

Since C*(s) is periodic in s with period jω_s,

C*(s) = (1/T) Σ_{n=−∞}^{∞} C(s + jnω_s), assuming c(0) = 0.
The detailed derivation of the above expression is omitted. Similarly,

C*(s) = (1/T) Σ_{n=−∞}^{∞} G(s + jnω_s) R*(s + jnω_s)

Since R*(s) is periodic, R*(s + jnω_s) = R*(s). Thus

C*(s) = R*(s) · (1/T) Σ_{n=−∞}^{∞} G(s + jnω_s)

If we define

G*(s) = (1/T) Σ_{n=−∞}^{∞} G(s + jnω_s)

then C*(s) = G*(s)R*(s). G*(s) is known as the pulse transfer function; sometimes it is also
referred to as the starred transfer function.
If we now substitute z = e^{Ts} in the previous expression, we will directly get the z-transfer
function G(z) as

G(z) = C(z)/R(z)
Pulse transfer of discrete data systems with cascaded elements
1. Cascaded elements are separated by a sampler
The block diagram is shown in Figure below.
Figure: Discrete data system with cascaded elements, separated by a sampler
The input-output relations of the two systems G₁ and G₂ are described by

D(z) = G₁(z)R(z)

and

C(z) = G₂(z)D(z)

Thus the input-output relation of the overall system is

C(z) = G₁(z)G₂(z)R(z)

We can therefore conclude that the z-transfer function of two linear systems separated by a
sampler is the product of the individual z-transfer functions.
2. Cascaded elements are not separated by a sampler
The block diagram is shown in Figure below
Figure : Discrete data system with cascaded elements, not separated by a sampler
The continuous output C(s) can be written as

C(s) = G₁(s)G₂(s)R*(s)

The output of the fictitious sampler is

C*(s) = [G₁(s)G₂(s)]* R*(s)

The z-transform of the product G₁(s)G₂(s) is denoted as G₁G₂(z) = Z[G₁(s)G₂(s)].
Note: G₁G₂(z) ≠ G₁(z)G₂(z).
The overall output is thus

C(z) = G₁G₂(z)R(z)
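The inequality G₁G₂(z) ≠ G₁(z)G₂(z) can be seen numerically. Taking G₁(s) = 1/s and G₂(s) = 1/(s + 1) as illustrative elements with T = 1, the table transforms give G₁(z) = z/(z − 1) and G₂(z) = z/(z − e^{−T}), while Z{G₁(s)G₂(s)} = Z{1/s − 1/(s + 1)} = z/(z − 1) − z/(z − e^{−T}). Evaluating both at an arbitrary point shows they differ:

```python
import math

T = 1.0
E = math.exp(-T)

# Illustrative elements: G1(s) = 1/s, G2(s) = 1/(s+1)
G1 = lambda z: z / (z - 1)                  # Z{1/s}
G2 = lambda z: z / (z - E)                  # Z{1/(s+1)}
# Z{G1(s)G2(s)} = Z{1/(s(s+1))} = Z{1/s - 1/(s+1)}
G1G2 = lambda z: z / (z - 1) - z / (z - E)

z0 = 2.0  # arbitrary test point
assert abs(G1G2(z0) - G1(z0) * G2(z0)) > 0.1   # the two expressions differ
```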
Pulse Transfer Function of Closed Loop Systems
A simple single loop system with a sampler in the forward path is shown in Figure below.
Figure : Block diagram of a closed loop system with a sampler in the forward path
The objective is to establish the input-output relationship. For the above system, the output of
the sampler is regarded as an input to the system, and the input to the sampler is regarded as
another output. Thus the input-output relations can be formulated as

C(s) = G(s)E*(s)
E(s) = R(s) − H(s)C(s) = R(s) − G(s)H(s)E*(s)

Taking the pulse transform on both sides of E(s),

E*(s) = R*(s) − GH*(s)E*(s)  ⟹  E*(s) = R*(s)/(1 + GH*(s))

Taking the pulse transform on both sides of C(s),

C*(s) = G*(s)E*(s) = G*(s)R*(s)/(1 + GH*(s)),  i.e.  C(z) = G(z)R(z)/(1 + GH(z))
Now, if we place the sampler in the feedback path, the block diagram will look like the
Figure 2.
Figure 2: Block diagram of a closed loop system with a sampler in the feedback path
The corresponding input-output relations can be written as:

E(s) = R(s) − H(s)C*(s)   ... (4)
C(s) = G(s)E(s) = G(s)R(s) − G(s)H(s)C*(s)   ... (5)

Taking the pulse transformation of equations (4) and (5),

C*(s) = GR*(s) − GH*(s)C*(s)  ⟹  C*(s) = GR*(s)/(1 + GH*(s))
We can no longer define the input-output transfer function of this system by either C(z)/R(z)
or C*(s)/R*(s). Since the input r(t) is not sampled, the sampled signal r*(t) does not exist.
The continuous-data output C(s) can be expressed in terms of the input as

C(s) = G(s)R(s) − G(s)H(s) GR*(s)/(1 + GH*(s))
Stability Analysis of closed loop system in z-plane
Similar to continuous-time systems, the stability of the following closed-loop system
can also be determined from the location of the closed-loop poles in the z-plane, which are
the roots of the characteristic equation

1 + GH(z) = 0

1. For the system to be stable, the closed-loop poles or the roots of the characteristic
equation must lie within the unit circle in the z-plane; otherwise the system is unstable.
2. If a simple pole lies at z = 1, the system becomes marginally stable. Similarly, if a pair
of complex conjugate poles lies on the |z| = 1 circle, the system is marginally stable.
Multiple poles at the same location on the unit circle make the system unstable.
Two stability tests can be applied directly to the characteristic equation without solving for
the roots.
Jury Stability test
Routh stability coupled with bi-linear transformation.
Jury Stability Test
Assume that the characteristic equation is as follows:

F(z) = a₀zⁿ + a₁zⁿ⁻¹ + ... + aₙ₋₁z + aₙ = 0, where a₀ > 0
Example : The characteristic equation is
Next we will construct the Jury Table.
Jury Table
The rest of the elements are also calculated in a similar fashion. For example, b₁ = −0.0756.
All criteria are satisfied. Thus the system is stable.
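Without constructing the Jury table, the stability criterion itself (all roots inside the unit circle) can be checked directly by computing the roots, which is a convenient way to verify a completed Jury test. A Python sketch with two illustrative polynomials:

```python
import numpy as np

def stable(coeffs):
    # Closed-loop poles are the roots of the characteristic polynomial in z;
    # for stability all must lie strictly inside the unit circle.
    return max(abs(r) for r in np.roots(coeffs)) < 1.0

# z^2 + z + 0.25 = (z + 0.5)^2: both roots at -0.5, inside the circle
assert stable([1, 1, 0.25])
# z^2 - 2z + 1 = (z - 1)^2: repeated pole on the circle -> not stable
assert not stable([1, -2, 1])
```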
MODEL QUESTIONS
Module-2
Short Questions each carrying Two marks.
1. Determine the maximum phase lead of the compensator
D(s) = (0.5s + 1)/(0.5s + 1)
2. State the various effects and limitations of a lag compensator. Draw a representative
sketch of a lag-lead compensator.
3. Derive Z transform of the following
X(1)=2; X(4)=-3; X(7)=8
and all other samples are zero. Define the stability of discrete time function.
4. Find the inverse Z-transform if X(z)=Z.
The figures in the right-hand margin indicate marks.
5. The unity feedback system has the open loop plant
G(s) = 1 / (s(s + 3)(s + 6))
Design a lag compensation to meet the following specifications:
(i) Step response settling time < 5s.
(ii) Step response overshoot < 15%
(iii) Steady state error to a unit ramp input < 10%. [10]
6. A discrete time system is described by the difference equation
y(k+2) + 5y(k+1) + 6y (k) = U (k)
y(0) = y(1) =0; T=1sec
(i) Determine a state model in canonical form.
(ii) Find the output y(k) for the input u(k) = 1 for k ≥ 0. [10]
7. Use Jury's test to show that the two roots of the digital system F(z) = z² + z + 0.25 = 0
are inside the unit circle. [3]
8. (a)What is the principal effect of (i) lag, (ii) lead compensation on a root locus. [3]
(b) A type-1 unity feedback system has an open-loop transfer function

G(s) = K / (s(s + 1)(0.2s + 1))

[Fig: lag compensator in cascade with G(s) in a unity feedback loop]

Design phase lag compensation for the system to achieve the following specifications:
velocity error constant K_v = 8, phase margin 40°. [13]
9 . A discrete-time system is described by the difference equation
y(k+2)+5y(k+1) +6y(k) =u(k)
y(0)=y(1)=0; T=1 sec
(i) Determine a state model in canonical form
(ii) Find the state transition matrix
(iii) For input u(k)=1 for k ≥ 0, find the output y(k) [5+5+6]
10. Determine the z-transform of
(i) X(s) = 1/(s(s + 2))
(ii) X(s) = 10(1 − e^{−s})/(s(s + 2))   [8]
11. Write short notes on [4×7]
(a) Feedback compensation
(b) stability analysis of sampled data control system
(c) R-C notch type a.c. lead network
(d) Hold circuits in sample data control
(e) Network compensation of a.c systems
(f) Z domain and s domain relationship
(g) Spectral factorisation
12. (a) Describe the effect of:
(i) Lag and
(ii) Lead compensation on a root locus [4]
(b) Design a suitable phase lag compensating network for a type-1 unity feedback
system having an open-loop transfer function
G(s) = K / (s(0.1s + 1)(0.2s + 1))
to meet the following specifications:
velocity error constant K_v = 30 sec⁻¹ and phase margin 40°. [12]
13. Consider the system
X(k + 1) = [ 2  1 ; 1  2 ] X(k) + [ 0 ; 1 ] u(k)
Y(k) = [ 1  1 ] X(k)
Find, if possible, a control sequence {u(0), u(1)} to drive the system from
X(0) = [ 1 ; 0 ] to X(2) = [ 1 ; 0 ]. [8]
14. Find the inverse z-transform of
(i) X(z) = 10z / ((z − 1)(z − 2))
(ii) X(z) = z(1 − e^{−T}) / ((z − 1)(z − e^{−T}))   [4+4]
15. Find Y(z) for the system in the figure below, if r(t) = 1(t), T = 1 sec,
G₁(s) = 1/(s + 1), G₂(s) = 1/s   [7]
16. Explain the relationship between s-plane & z-plane. [3]
17. How do you find out response between sampling instants? [4]
18. The open-loop transfer function of a servo mechanism is given by
=
+.
(+.)
[15]
Add series lag compensation to the servomechanism to give a gain margin ≥ 15 dB and a
phase margin ≥ 45°. Realise the compensator.
19.(a) Determine the state model of the system whose pulse transfer function is given by
G(z) = (4z³ − 12z² + 13z + 7) / ((z − 1)²(z − 2))   [8]
(b) Find the z transform [8]
i) 3t² + 3t
ii) 4t² + 10t + 6
20. (a) Derive the transfer function of a zero-order hold circuit. [4×3]
(b) State the specifications in the time domain and frequency domain used for the design of
continuous-time linear systems.
(c) Explain how a signal is reconstructed from the output of the sampler.
21. Find the z-transform of the following transfer functions. [8]
i) G(s) = 1/(s + a)²
ii) G(s) = (s + a)/((s + a)² + ω²)
22.(a) A discrete time system is described by the difference equation [7]
y(k+2) + 5y(k+1) + 6y(k) = u(k)
y(0) = y(1) = 0, u(k) = 1 for k ≥ 0
Find the output y(k).
(b) Find out the range of values of gain k for which the closed loop system shown in
figure below remains stable. [8]
23. Prove that ZOH is a low pass filter. [4]
24. Explain what is meant by aliasing in linear discrete data systems. [4]
25.(a) Clearly explain how the stability of a sampled-data control system is assessed by
Jury's stability test. [7]
(b) Check the stability of the linear discrete system having the characteristic equation:
z⁴ − 1.7z³ + 1.04z² − 0.268z + 0.024 = 0   [8]
26.(a) Determine the weighting sequence (impulse response) of the linear discrete system
described by
c(k) − c(k−1) = r(k)   [7]
(b) With a neat circuit diagram, explain the principle of operation of a sample and hold
device. [4]
(c) Explain the significance of Shannon's theorem in the sampling process. [4]
27. A linear control system is to be compensated by a compensating network having
D(s) = K(s + a)/(s + b)
The system is shown in the figure below.
Find K, a and b so that the roots of the characteristic equation are placed at s = −50 and
s = −5 ± j5. [9]
28. A unity feedback system has an open loop transfer function of [16]
G(s) = 4 / (s(2s + 1))
It is desired to obtain a phase margin of 40° without sacrificing the K_v of the system.
Design a suitable lag network and compute the value of network components assuming any
suitable impedance level.
29. (a) Find the z-transform of the following: [4+4]
i) y(t) = e^{−2t}
ii) G(s) = (s + a)/((s + a)² + ω²)
(b) For the system shown in the figure below, [8]
G(z) = K(z + 0.9) / ((z − 1)(z − 0.7))
Determine the range of K for stability.
MODULE-III
Introduction
A linear system designed to perform satisfactorily when excited by a standard test
signal will exhibit satisfactory behaviour under any circumstances. Furthermore, the
amplitude of the test signal is unimportant, since any change in input signal amplitude
results simply in a change of response scale with no change in the basic response
characteristics. The stability of a linear system is determined solely by the location of
the system poles and is entirely independent of whether or not the system is driven.
In contrast to the linear case, the response of nonlinear systems to a particular test
signal is no guide to their behaviour for other inputs, since the principle of superposition no
longer holds. In fact, the nonlinear system response may be highly sensitive to the input
amplitude. Here stability depends on the input and also on the initial state. Further,
nonlinear systems may exhibit limit cycles, which are self-sustained oscillations of fixed
frequency and amplitude.
Behaviour of Nonlinear Systems
A nonlinear system, when excited by a sinusoidal input, may generate several harmonics in
addition to the fundamental corresponding to the input frequency. The amplitude of the
fundamental is usually the largest, but the harmonics may be of significant amplitude in
many situations.
Another peculiar characteristic exhibited by nonlinear systems is called jump phenomenon.
Jump Resonance
Consider the spring-mass-damper system shown in Fig. 3.1(a) below. If the components
are assumed to be linear, the system equation with a sinusoidal forcing function is given by
Mẍ + Bẋ + Kx = F sin ωt   ... (3.1)
Fig.3.1 (a) A spring-mass-damper system (b)Spring Characteristics
The frequency response curve of this system is shown in Fig3.2.
Fig. 3.2 Frequency response curve of spring-mass-damper system
Fig. 3.3 (a) Jump resonance in nonlinear system (hard spring case);
(b) Jump resonance in nonlinear system (soft spring case).
Let us now assume that the restoring force of the spring is nonlinear, given by K₁x + K₂x³.
The nonlinear spring characteristic is shown in Fig. 3.1(b). Now the system equation
becomes
Mẍ + Bẋ + K₁x + K₂x³ = F sin ωt   ... (3.2)
The frequency response curve for the hard spring (K₂ > 0) is shown in Fig. 3.3(a).
For a hard spring, as the input frequency is gradually increased from zero, the measured
response follows the curve through the points A, B and C, but at C an increment in frequency
results in a discontinuous jump down to the point D, after which, with further increase in
frequency, the response curve follows through D and E. If the frequency is now decreased,
the response follows the curve E, D, F with a jump up to B from the point F, and then the
response curve moves towards A. This phenomenon, which is peculiar to nonlinear systems,
is known as jump resonance. For a soft spring, the jump phenomenon occurs as shown in
Fig. 3.3(b).
Methods of Analysis
Nonlinear systems are difficult to analyse, and arriving at general conclusions is tedious.
However, starting with the classical techniques for the solution of standard nonlinear
differential equations, several techniques have evolved which suit different types of
analysis. It should be emphasised that very often the conclusions arrived at are useful only
for the system under specified conditions and do not always lead to generalisations. The
commonly used methods are listed below.
Linearization Techniques:
In reality all systems are nonlinear, and linear systems are only approximations of the
nonlinear systems. In some cases linearization yields useful information, whereas in
some other cases the linearised model has to be modified when the operating point moves
from one region to another. Many techniques, such as the perturbation method, series
approximation techniques and quasi-linearization techniques, are used to linearise a
nonlinear system.
Phase Plane Analysis:
This method is applicable to second order linear or nonlinear systems for the study of the
nature of phase trajectories near the equilibrium points. The system behaviour is
qualitatively analysed along with design of system parameters so as to get the desired
response from the system. The periodic oscillations in nonlinear systems called limit cycle
can be identified with this method which helps in investigating the stability of the system.
Describing Function Analysis:
This method is based on the principle of harmonic linearization and applies to a certain
class of nonlinear systems with low-pass characteristics. It is useful for studying the
existence of limit cycles and for determining the amplitude, frequency and stability of
these limit cycles. Accuracy is better for higher-order systems, as they have better low-pass
characteristics.
Classification of Nonlinearities:
The nonlinearities are classified into
i) Inherent nonlinearities and
ii) Intentional nonlinearities.
The nonlinearities which are present in the components used in the system due to inherent
imperfections or properties of the system are known as inherent nonlinearities. Examples
are saturation in magnetic circuits, dead zone, backlash in gears etc.
cases introduction of nonlinearity may improve the performance of the system, make the
system more economical consuming less space and more reliable than the linear system
designed to achieve the same objective. Such nonlinearities introduced intentionally to
improve the system performance are known as intentional nonlinearities. Examples are
different types of relays which are very frequently used to perform various tasks.
Common Physical Non Linearities:
The common examples of physical nonlinearities are saturation, dead zone, coulomb
friction, stiction, backlash, different types of springs, different types of relays etc.
Saturation: This is the most common of all nonlinearities. All practical systems, when
driven by sufficiently large signals, exhibit the phenomenon of saturation due to limitations
of physical capabilities of their components. Saturation is a common phenomenon in
magnetic circuits and amplifiers.
Fig. 3.4 Piecewise linear approximation of saturation nonlinearity
Friction: Retarding frictional forces exist whenever mechanical surfaces come into sliding
contact. The predominant frictional force, called viscous friction, is proportional to the
relative velocity of the sliding surfaces; viscous friction is thus linear in nature. In addition
to viscous friction there exist two nonlinear frictions. One is coulomb friction, which
is a constant retarding force, and the other is stiction, which is the force required to initiate
motion. The force of stiction is always greater than that of coulomb friction since, due to the
interlocking of surface irregularities, more force is required to move an object from rest than
to maintain it in motion.
Fig. 3.5 Characteristics of various types of friction
Dead zone: Some systems do not respond to very small input signals. For a particular
range of input, the output is zero. This is called dead zone existing in a system. The input-
output curve is shown in figure.
Fig. 3.6 Dead-zone nonlinearity
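The piecewise-linear characteristics of saturation and dead zone are straightforward to express as functions. The short Python sketch below (limit = 1.0 and d = 0.2 are arbitrary parameters) mirrors the input-output curves of Figs. 3.4 and 3.6:

```python
def saturation(x, limit=1.0):
    # Output follows the input until it hits the physical limit
    return max(-limit, min(limit, x))

def dead_zone(x, d=0.2):
    # No response for |x| <= d; beyond that the output picks up linearly
    if abs(x) <= d:
        return 0.0
    return x - d if x > 0 else x + d

assert saturation(0.5) == 0.5 and saturation(3.0) == 1.0
assert dead_zone(0.1) == 0.0
assert abs(dead_zone(1.2) - 1.0) < 1e-9
assert abs(dead_zone(-1.2) + 1.0) < 1e-9
```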
Backlash: Another important nonlinearity commonly occurring in physical systems is
hysteresis in mechanical transmission such as gear trains and linkages. This nonlinearity is
somewhat different from magnetic hysteresis and is commonly referred to as backlash. In
servo systems, the gear backlash may cause sustained oscillations or chattering
phenomenon and the system may even turn unstable for large backlash.
Figure 3.7: (a) gear box having backlash (b) the tooth A of the driving gear located midway
between the teeth B₁, B₂ of the driven gear (c) the relationship between input and
output motions.
As the tooth A is driven clockwise from this position, no output motion takes place until
tooth A makes contact with tooth B1 of the driven gear after travelling a distance
x/2. This output motion corresponds to the segment mn of fig 3.7(c). After the contact is
made, the driven gear rotates counter-clockwise through the same angle as the drive gear, if
the gear ratio is assumed to be unity. This is illustrated by the line segment no. As the input
motion is reversed, the contact between teeth A and B1 is lost and the driven gear
immediately becomes stationary, based on the assumption that the load is friction controlled
with negligible inertia.
The output motion therefore ceases till tooth A has travelled a distance x in the reverse
direction, as shown in fig 3.7(c) by the segment op. After tooth A establishes contact
with tooth B2, the driven gear now moves in the clockwise direction as shown by segment
pq. As the input motion is again reversed, the driven gear is at standstill for the segment
qr and then follows the drive gear along rn.
Relay: A relay is a nonlinear power amplifier which can provide large power amplification
inexpensively and is therefore deliberately introduced in control systems. A relay
controlled system can be switched abruptly between several discrete states, which are
usually off, full forward and full reverse. Relay controlled systems find wide applications
in the control field. The characteristic of an ideal relay is as shown in figure. In practice a
relay has a definite amount of dead zone, as shown. This dead zone is caused by the fact
that the relay coil requires a finite amount of current to actuate the relay. Further, since a
larger coil current is needed to close the relay than the current at which the relay drops out,
the characteristic always exhibits hysteresis.
Figure 3.8: Relay Non Linearity (a) ON/OFF (b) ON/OFF with Hysteresis (c) ON/OFF
with Dead Zone
Multivariable Nonlinearity: Some nonlinearities such as the torque-speed characteristics
of a servomotor, transistor characteristics etc., are functions of more than one variable.
Such nonlinearities are called multivariable nonlinearities.
Phase Plane Analysis
Introduction
Phase plane analysis is one of the earliest techniques developed for the study of second
order nonlinear systems. It may be noted that in the state space formulation, the state
variables chosen are usually the output and its derivatives. The phase plane is thus a state
plane where the two state variables x1 and x2 are analysed; these may be the output
variable y and its derivative ẏ. The method was first introduced by Poincare, a French
mathematician. The method is used for obtaining graphically a solution of the following
two simultaneous equations of an autonomous system.
ẋ1 = f1(x1, x2)
ẋ2 = f2(x1, x2)
Here f1(x1, x2) and f2(x1, x2) are either linear or nonlinear functions of the state
variables x1 and x2 respectively. The state plane with coordinate axes x1 and x2 is called
the phase plane. In many cases, particularly in the phase variable representation of
systems, the equations take the form
ẋ1 = x2
ẋ2 = f(x1, x2)
The curve described by the state point (x1, x2) in the phase plane with time as running
parameter is called a phase trajectory. The plot of the state trajectories or phase trajectories
of the above equations thus gives an idea of the solution of the state as time t evolves,
without explicitly solving for the state. Phase plane analysis is particularly suited to
second order nonlinear systems with no input or constant inputs. It can be extended to
cover other inputs as well, such as ramp inputs, pulse inputs and impulse inputs.
Phase Portraits
From the fundamental theorem of uniqueness of solutions of the state equations or
differential equations, it can be seen that the solution of the state equation starting from an
initial state in the state space is unique. This will be true if f1(x1, x2) and f2(x1, x2) are
analytic. For such a system, consider the points in the state space at which the derivatives
of all the state variables are zero. These points are called singular points. These are in fact
equilibrium points of the system. If the system is placed at such a point, it will continue to
lie there if left undisturbed. A family of phase trajectories starting from different initial
states is called a phase portrait. As time t increases, the phase portrait graphically shows
how the system moves in the entire state plane from the initial states in the different
regions. Since the solutions from each of the initial conditions are unique, the phase
trajectories do not cross one another. If the system has nonlinear elements which are
piecewise linear, the complete state space can be divided into different regions and phase
plane trajectories constructed for each of the regions separately.
Analysis & Classification of Singular Points
Nodal Point: Consider the case where the eigen values λ1, λ2 are real, distinct and negative,
as shown in figure 3.9(a). In the transformed (z1, z2) coordinates, the equation of the phase
trajectory follows as
z2 = c z1^(λ2/λ1)
where c is an integration constant. The trajectories become a set of parabolic curves as shown
in figure 3.9(b) and the equilibrium point is called a node. In the original system of coordinates,
these trajectories appear to be skewed as shown in figure 3.9(c).
If the eigen values are both positive, the nature of the trajectories does not change, except
that the trajectories diverge out from the equilibrium point as both z1(t) and z2(t) are
increasing exponentially. The phase trajectories in the x1-x2 plane are as shown in figure 3.9
(d). This type of singularity is identified as a node, but it is an unstable node as the
trajectories diverge from the equilibrium point.
(c) Stable node in (x1, x2)-plane
(d) Unstable node in (x1, x2)-plane
Fig. 3.9
Saddle Point: The eigen values are real with opposite signs. The
corresponding phase portraits are shown in Fig 3.10. The origin in this case is a saddle point,
which is always unstable, one eigen value being positive.
Fig 3.10
Focus Point: Consider a system with complex conjugate eigen values. A plot for negative
values of the real part is a family of equiangular spirals. A transformation from (x1, x2) to
(y1, y2) coordinates has been carried out to present the trajectory in the form of a true spiral. The
origin which is a singular point in this case is called a stable focus. When the eigen values
are complex conjugate with positive real parts, the phase portrait consists of expanding
spirals as shown in figure and the singular point is an unstable focus. When transformed
into the x1-x2 plane, the phase portrait in the above two cases is essentially spiralling in
nature, except that the spirals are now somewhat twisted in shape.
Fig 3.11
Centre or Vortex Point:
Consider now the case of complex conjugate eigen values with zero real parts,
i.e., λ1, λ2 = ±jω
In the transformed (y1, y2) coordinates the trajectory slope is
dy2/dy1 = −ωy1/ωy2 = −y1/y2
for which y1 dy1 + y2 dy2 = 0
Integrating the above equation, we get
y1² + y2² = R²
which is the equation of a circle of radius R. The radius R can be evaluated from the initial
conditions. The trajectories are thus concentric circles in the y1-y2 plane and ellipses in the
x1-x2 plane as shown in figure. Such singular points, around which the state trajectories are
concentric circles or ellipses, are called a centre or vortex.
Fig.3.12 (a) Centre in (y1, y2)-plane (b) Centre in (x1, x2)-plane
Construction of Phase Trajectories:
Consider the homogeneous second order system with differential equation
d²x/dt² + 2ζωn (dx/dt) + ωn² x = 0
or, in dot notation,
ẍ + 2ζωn ẋ + ωn² x = 0
where ζ and ωn are the damping factor and undamped natural frequency of the
system. Defining the state variables as x = x1 and ẋ = x2, we get the state equations in the
state variable form as
ẋ1 = x2
ẋ2 = −2ζωn x2 − ωn² x1
These equations may then be solved for the phase variables x1 and x2. The time response plots of
x1, x2 for various values of damping with initial conditions can be plotted. When the
differential equations describing the dynamics of the system are nonlinear, it is in general not
possible to obtain a closed form solution of x1, x2. For example, if the spring force is
nonlinear, say (k1 x + k2 x³), the state equations take the form
ẋ1 = x2
ẋ2 = −2ζωn x2 − k1 x1 − k2 x1³
Solving these equations by integration is no longer an easy task. In such situations, a graphical
method known as the phase-plane method is found to be very helpful. The coordinate plane
with axes that correspond to the dependent variables x1 and x2 is called the phase-plane. The curve
described by the state point (x1, x2) in the phase-plane with respect to time is called a phase
trajectory. A phase trajectory can be easily constructed by graphical techniques.
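Before turning to the graphical techniques, it helps to see what such a trajectory looks like numerically. The sketch below (plain Python with a fixed-step RK4 integrator; the damping and spring coefficients are arbitrary illustrative values, not from the text) integrates the nonlinear-spring system above and records the (x1, x2) phase trajectory.

```python
# Numerical phase trajectory of the nonlinear spring system
#   x1' = x2
#   x2' = -2*zeta*wn*x2 - k1*x1 - k2*x1**3
# using a fixed-step 4th-order Runge-Kutta integrator.

def f(state, zeta=0.5, wn=1.0, k1=1.0, k2=0.5):
    x1, x2 = state
    return (x2, -2.0 * zeta * wn * x2 - k1 * x1 - k2 * x1**3)

def rk4_step(state, h):
    s1 = f(state)
    s2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, s1)))
    s3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, s2)))
    s4 = f(tuple(s + h * k for s, k in zip(state, s3)))
    return tuple(s + h / 6.0 * (a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, s1, s2, s3, s4))

def trajectory(x0, h=0.01, steps=3000):
    pts = [x0]
    for _ in range(steps):
        pts.append(rk4_step(pts[-1], h))
    return pts

pts = trajectory((1.0, 0.0))
# With positive damping the trajectory spirals in toward the origin.
print(pts[0], pts[-1])
```

Plotting the recorded points would reproduce the kind of spiral trajectory the phase-plane method constructs graphically.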
Isoclines Method:
Let the state equations for a nonlinear system be of the form
ẋ1 = f1(x1, x2)
ẋ2 = f2(x1, x2)
where both f1(x1, x2) and f2(x1, x2) are analytic.
From the above equations, the slope of the trajectory is given by
dx2/dx1 = f2(x1, x2) / f1(x1, x2) = M
Therefore, the locus of constant slope of the trajectory is given by
f2(x1, x2) = M f1(x1, x2)
The above equation gives the equation to the family of isoclines. For different values of M, the
slope of the trajectory, different isoclines can be drawn in the phase plane. Knowing the value of
M on a given isocline, it is easy to draw line segments on each of these isoclines.
Consider a simple linear system with state equations
ẋ1 = x2
ẋ2 = −x1 − x2
Dividing the above equations, we get the slope of the state trajectory in the x1-x2 plane as
dx2/dx1 = (−x1 − x2) / x2 = M
For a constant value of this slope, say M, we get
x2 = −(1 / (M + 1)) x1
which is a straight line in the x1-x2 plane. We can draw different lines in the x1-x2 plane for
different values of M; these lines are called isoclines. If we draw a sufficiently large number of
isoclines to cover the complete state space as shown, we can see how the state trajectories move
in the state plane. Different trajectories can be drawn from different initial conditions. A large
number of such trajectories together form a phase portrait. A few typical trajectories are
shown in figure 3.13 given below.
Fig. 3.13
The Procedure for construction of the phase trajectories can be summarised as below:
1. For the given nonlinear differential equation, define the state variables as x1 and x2 and
obtain the state equations as
ẋ1 = x2
ẋ2 = f(x1, x2)
2. Determine the equation to the isoclines as
dx2/dx1 = f(x1, x2) / x2 = M
3. For typical values of M, draw a large number of isoclines in x1-x2 plane
4. On each of the isoclines, draw small line segments with a slope M.
5. From an initial condition point, draw a trajectory following the line segments with slopes
M on each of the isoclines.
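The steps above can be mechanised. The sketch below (plain Python; a minimal illustration, not from the text) uses the worked linear example, where the M-isocline is the straight line x2 = −x1/(M+1), verifies that the trajectory slope (−x1 − x2)/x2 on that line really equals M, and builds the short line segments of steps 3 and 4.

```python
# Isocline construction for the linear system  x1' = x2,  x2' = -x1 - x2.
# On the isocline of slope M, x2 = -x1/(M+1); the trajectory slope
# dx2/dx1 = (-x1 - x2)/x2 there equals M.

def trajectory_slope(x1, x2):
    return (-x1 - x2) / x2

def isocline_x2(M, x1):
    """x2-coordinate of the M-isocline at abscissa x1 (a line through the origin)."""
    return -x1 / (M + 1.0)

# Build short line segments of slope M along each isocline (steps 3-4).
segments = []
for M in (-3.0, -0.5, 0.0, 1.0, 2.0):        # avoid the degenerate M = -1
    for x1 in (-1.0, -0.5, 0.5, 1.0):
        x2 = isocline_x2(M, x1)
        # verify the defining property of the isocline
        assert abs(trajectory_slope(x1, x2) - M) < 1e-12
        # a segment of slope M centred on (x1, x2), half-length 0.05 in x1
        segments.append(((x1 - 0.05, x2 - 0.05 * M), (x1 + 0.05, x2 + 0.05 * M)))

print(len(segments))  # one short segment per isocline sample point
```

Sketching these segments and then threading a curve through them from an initial condition is exactly step 5 of the procedure.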
Delta Method:
The delta method of constructing phase trajectories is applied to systems of the form
ẍ + f(x, ẋ, t) = 0
where f(x, ẋ, t) may be linear or nonlinear and may even be time varying, but must be
continuous and single valued.
With the help of this method, the phase trajectory for any system with step or ramp or any time
varying input can be conveniently drawn. The method results in considerable time saving
when a single or a few phase trajectories are required rather than a complete phase portrait.
While applying the delta method, the above equation is first converted to the form
ẍ + ωn²[x + δ(x, ẋ, t)] = 0
In general δ(x, ẋ, t) depends upon the variables x, ẋ and t, but for short intervals the changes in
these variables are negligible. Thus over a short interval, we have
ẍ + ωn²[x + δ] = 0, where δ is a constant.
Let us choose the state variables as x1 = x, x2 = ẋ/ωn, giving the state equations
ẋ1 = ωn x2
ẋ2 = −ωn(x1 + δ)
Therefore, the slope equation over a short interval is given by
dx2/dx1 = −(x1 + δ)/x2
With δ known at any point P on the trajectory and assumed constant for a short interval, we can
draw a short segment of the trajectory by using the trajectory slope dx2/dx1 given in the above
equation. A simple geometrical construction given below can be used for this purpose.
1. From the initial point, calculate the value of δ.
2. Draw a short arc segment through the initial point with (-δ, 0) as centre, thereby
determining a new point on the trajectory.
3. Repeat the process at the new point and continue.
Example: For the system described by the equation given below, construct the trajectory
starting at the initial point (1, 0) using the delta method.
ẍ + ẋ + x² = 0
Let x = x1 and ẋ = x2; then
ẋ1 = x2
ẋ2 = −x2 − x1²
The above equation can be rearranged (with ωn = 1) as
ẋ2 = −(x1 + x2 + x1² − x1)
so that
δ = x2 + x1² − x1
At the initial point δ is calculated as δ = 0 + 1 − 1 = 0. Therefore, the initial arc is centred at
point (0, 0). The mean value of the coordinates of the two ends of the arc is used to calculate
the next value of δ and the procedure is continued. By constructing the small arcs in this way
the complete trajectory will be obtained as shown in figure 3.14.
Fig.3.14
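The arc construction can also be traced numerically. The snippet below (plain Python; the step angle and step count are illustrative choices, not from the text) follows the trajectory of ẍ + ẋ + x² = 0 from (1, 0) by repeatedly computing δ = x2 + x1² − x1 and rotating the state point clockwise through a small angle about the arc centre (−δ, 0).

```python
import math

def delta(x1, x2):
    # delta = x2 + x1**2 - x1  for  x'' + x' + x**2 = 0  (wn = 1)
    return x2 + x1**2 - x1

def step(x1, x2, dtheta=0.01):
    d = delta(x1, x2)
    cx = -d                        # arc centre (-delta, 0)
    r1, r2 = x1 - cx, x2           # radius vector from the centre
    c, s = math.cos(dtheta), math.sin(dtheta)
    # rotate clockwise about the centre by dtheta
    return cx + r1 * c + r2 * s, -r1 * s + r2 * c

x1, x2 = 1.0, 0.0
path = [(x1, x2)]
for _ in range(300):
    x1, x2 = step(x1, x2)
    path.append((x1, x2))
print(path[-1])
```

The first δ is zero, matching the hand calculation, and the trajectory initially curves downward from (1, 0) exactly as the graphical construction does.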
Limit Cycles:
Limit cycles have a distinct geometric configuration in the phase plane portrait, namely, that
of an isolated closed path in the phase plane. A given system may have more than one limit
cycle. A limit cycle represents a steady state oscillation, to which or from which all
trajectories nearby will converge or diverge. In a nonlinear system, limit cycles describe the
amplitude and period of a self sustained oscillation. It should be pointed out that not all
closed curves in the phase plane are limit cycles. A phase-plane portrait of a conservative
system, in which there is no damping to dissipate energy, is a continuous family of closed
curves. Closed curves of this kind are not limit cycles because none of these curves are
isolated from one another. Such trajectories always occur as a continuous family, so that there
are closed curves in any neighborhoods of any particular closed curve. On the other hand,
limit cycles are periodic motions exhibited only by nonlinear non conservative systems.
As an example, let us consider the well known Van der Pol's differential equation
d²x/dt² − μ(1 − x²) dx/dt + x = 0
which describes physical situations in many nonlinear systems.
In terms of the state variables x = x1 and ẋ = x2, we obtain
ẋ1 = x2
ẋ2 = μ(1 − x1²) x2 − x1
The figure shows the phase trajectories of the system for μ > 0 and μ < 0. In case of μ > 0 we
observe that for large values of x1(0), the system response is damped and the amplitude of x1(t)
decreases till the system state enters the limit cycle as shown by the outer trajectory. On the other
hand, if initially x1(0) is small, the damping is negative, and hence the amplitude of x1(t)
increases till the system state enters the limit cycle as shown by the inner trajectory. When μ < 0,
the trajectories move in the opposite direction as shown in figure 3.15.
Fig.3.15 Limit cycle behavior of nonlinear system
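The convergence to the limit cycle for μ > 0 is easy to confirm numerically. The sketch below (plain Python RK4; μ = 1 and the step size are illustrative choices) starts from a small initial condition inside the limit cycle and shows the amplitude of x1 growing until it settles near the Van der Pol limit cycle, whose amplitude is close to 2.

```python
# Van der Pol oscillator:  x1' = x2,  x2' = mu*(1 - x1**2)*x2 - x1
MU = 1.0

def f(state):
    x1, x2 = state
    return (x2, MU * (1.0 - x1**2) * x2 - x1)

def rk4_step(state, h):
    s1 = f(state)
    s2 = f(tuple(s + 0.5*h*k for s, k in zip(state, s1)))
    s3 = f(tuple(s + h*0.5*k for s, k in zip(state, s2)))
    s4 = f(tuple(s + h*k for s, k in zip(state, s3)))
    return tuple(s + h/6.0*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, s1, s2, s3, s4))

state = (0.1, 0.0)            # small initial amplitude: inside the limit cycle
xs = []
for _ in range(20000):        # h = 0.005 -> 100 time units
    state = rk4_step(state, 0.005)
    xs.append(state[0])

amplitude = max(abs(x) for x in xs[-4000:])   # peak over the last ~20 time units
print(amplitude)
```

Starting instead from a large initial condition gives the same final amplitude, approached from outside: both behaviours of the stable limit cycle in figure 3.15.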
A limit cycle is called stable if trajectories near the limit cycle, originating from outside or inside,
converge to that limit cycle. In this case, the system exhibits a sustained oscillation with constant
amplitude. This is shown in figure (i). The inside of the limit cycle is an unstable region in the
sense that trajectories diverge to the limit cycle, and the outside is a stable region in the sense that
trajectories converge to the limit cycle.
A limit cycle is called an unstable one if trajectories near it diverge from this limit cycle. In this
case, an unstable region surrounds a stable region. If a trajectory starts within the stable region, it
converges to a singular point within the limit cycle. If a trajectory starts in the unstable region, it
diverges with time to infinity as shown in figure (ii). The inside of an unstable limit cycle is the
stable region, and the outside the unstable region.
Describing Function Method of Non Linear Control System
Describing function method is used for finding out the stability of a non linear system. Of
all the analytical methods developed over the years for non linear control systems, this
method is generally agreed upon as being the most practically useful. This method is
basically an approximate extension of frequency response methods, including the Nyquist
stability criterion, to non linear systems.
The describing function of a non linear element is defined to be the complex ratio of
the amplitude and phase angle of the fundamental harmonic component of the output to the
input sinusoid. It is also called the sinusoidal describing function. Mathematically,
N = (Y/X) ∠φ1
Where, N = describing function,
X = amplitude of input sinusoid,
Y = amplitude of fundamental harmonic component of output,
φ1 = phase shift of the fundamental harmonic component of output.
Let us discuss the basic concept of describing function of non linear control system.
Let us consider the below block diagram of a non linear system, where G1(s) and G2(s)
represent the linear elements and N represents the non linear element.
Let us assume that the input x to the non linear element is sinusoidal, i.e.,
x = X sin ωt
For this input, the output y of the non linear element will be a non-sinusoidal periodic
function that may be expressed in terms of a Fourier series as
y = Y0 + A1 cos ωt + B1 sin ωt + A2 cos 2ωt + B2 sin 2ωt + …
Most nonlinearities are odd symmetrical or odd half-wave symmetrical; the mean value Y0
for all such cases is zero and therefore the output will be
y = A1 cos ωt + B1 sin ωt + A2 cos 2ωt + B2 sin 2ωt + …
As G1(s) G2(s) has low pass characteristics, it can be assumed to a good degree of
approximation that all higher harmonics of y are filtered out in the process, and the input x to
the nonlinear element N is mainly contributed by the fundamental component of y, i.e. the first
harmonic. So in describing function analysis, we assume that only the fundamental
harmonic component of the output is significant, since the higher harmonics in the output of a
non linear system are often of smaller amplitude than the fundamental harmonic
component. Moreover, most control systems are low pass filters, with the result that the higher
harmonics are very much attenuated compared with the fundamental harmonic component.
Hence y1 need only be considered.
We can write y1(t) in the form
y1(t) = A1 cos ωt + B1 sin ωt = Y1 sin(ωt + φ1)
where, using phasors,
Y1 = √(A1² + B1²) and φ1 = tan⁻¹(A1/B1)
The coefficients A1 and B1 of the Fourier series are given by
A1 = (1/π) ∫ y cos ωt d(ωt), integrated from 0 to 2π
B1 = (1/π) ∫ y sin ωt d(ωt), integrated from 0 to 2π
From the definition of the describing function we have
N = (Y1/X) ∠φ1
Describing Function for Saturation Non Linearity
We have the characteristic curve for saturation as shown in the given figure3.16
Fig. 3.16. Characteristic Curve for Saturation Non Linearity.
Let us take the input function as
x = X sin ωt
Writing S for the saturation level and K for the slope of the linear region, the output from
the curve is
y(t) = KX sin ωt for 0 ≤ ωt ≤ β, and y(t) = KS for β ≤ ωt ≤ π − β, where β = sin⁻¹(S/X),
with corresponding negative segments over the second half cycle.
Let us first calculate the Fourier series constant A1. On substituting the value of the output
in the expression for A1 and integrating the function from 0 to 2π, we have the value of the
constant A1 as zero, since the nonlinearity is odd and single valued.
Similarly we can calculate the value of the Fourier constant B1 for the given output, and the
value of B1 can be calculated as
B1 = (2KX/π)[sin⁻¹(S/X) + (S/X)√(1 − (S/X)²)]
Since A1 = 0, the phase angle for the describing function is
φ1 = tan⁻¹(A1/B1) = 0
Thus the describing function for saturation is
N = B1/X = (2K/π)[sin⁻¹(S/X) + (S/X)√(1 − (S/X)²)] for X ≥ S, and N = K for X < S.
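A quick numerical cross-check (plain Python; S = saturation level, K = linear-region slope, with arbitrary test values): the fundamental Fourier coefficient B1 of the saturated sine wave should match the standard closed form (2KX/π)[sin⁻¹(S/X) + (S/X)√(1 − (S/X)²)].

```python
import math

K, S, X = 1.0, 0.5, 2.0   # slope, saturation level, input amplitude (X >= S)

def y(theta):
    """Output of the saturation element for input X*sin(theta)."""
    v = X * math.sin(theta)
    return K * max(-S, min(S, v))

# B1 = (1/pi) * integral over 0..2pi of y(theta)*sin(theta) dtheta  (rectangle rule)
N_PTS = 20000
h = 2.0 * math.pi / N_PTS
B1 = sum(y(i * h) * math.sin(i * h) for i in range(N_PTS)) * h / math.pi

ratio = S / X
B1_formula = (2.0 * K * X / math.pi) * (math.asin(ratio) + ratio * math.sqrt(1 - ratio**2))

print(B1, B1_formula)      # the two agree; the describing function is N = B1/X
```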
Describing Function for Ideal Relay
We have the characteristic curve for ideal relay as shown in the given figure 3.17.
Fig. 3.17. Characteristic Curve for Ideal Relay Non Linearity.
Let us take the input function as
x = X sin ωt
Writing M for the relay output level, the output from the curve is
y(t) = +M for 0 < ωt < π
y(t) = −M for π < ωt < 2π
The output periodic function has odd symmetry.
Let us first calculate the Fourier series constant A1. On substituting the value of the output
in the expression for A1 and integrating the function from 0 to 2π, we have the value of the
constant A1 as zero.
Similarly we can calculate the value of the Fourier constant B1 for the given output, and the
value of B1 can be calculated as
B1 = (1/π) ∫ y sin ωt d(ωt) over 0 to 2π = (2/π) ∫ M sin ωt d(ωt) over 0 to π = 4M/π
Since A1 = 0, the phase angle for the describing function is zero. Thus the describing
function for an ideal relay is
N = B1/X = 4M/(πX)
Describing Function for Real Relay (Relay with Dead Zone)
We have the characteristic curve for a real relay as shown in the given figure 3.18. If X is less
than the dead zone Δ, then the relay produces no output; the first harmonic component of the
Fourier series is of course zero and the describing function is also zero. If X > Δ, the relay
produces an output.
Fig. 3.18. Characteristic Curve for Real Relay Non Linearities.
Let us take the input function as
x = X sin ωt
With M again denoting the relay output level, the output from the curve is
y(t) = +M for α < ωt < π − α
y(t) = −M for π + α < ωt < 2π − α
y(t) = 0 elsewhere
where α = sin⁻¹(Δ/X). The output periodic function has odd symmetry.
Let us first calculate the Fourier series constant A1. On substituting the value of the output
in the expression for A1 and integrating the function from 0 to 2π, we have the value of the
constant A1 as zero.
Due to the symmetry of y, the coefficient B1 can be calculated as
B1 = (2/π) ∫ M sin ωt d(ωt) over α to π − α = (4M/π) cos α = (4M/π)√(1 − (Δ/X)²)
Therefore, the describing function is
N = B1/X = (4M/πX)√(1 − (Δ/X)²) for X ≥ Δ, and N = 0 for X < Δ.
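As with saturation, the relay results can be verified numerically (plain Python; the M, Δ and X values are arbitrary test choices). The fundamental coefficient B1 of the relay-with-dead-zone output should equal (4M/π)√(1 − (Δ/X)²), and setting the dead zone to zero should recover the ideal-relay value 4M/π.

```python
import math

M, DEAD, X = 2.0, 0.5, 1.5     # relay level, dead zone, input amplitude (X > DEAD)

def y(theta, dead):
    """Relay-with-dead-zone output for input X*sin(theta)."""
    v = X * math.sin(theta)
    if v > dead:
        return M
    if v < -dead:
        return -M
    return 0.0

def b1(dead, n=20000):
    # B1 = (1/pi) * integral over 0..2pi of y(theta)*sin(theta) dtheta
    h = 2.0 * math.pi / n
    return sum(y(i * h, dead) * math.sin(i * h) for i in range(n)) * h / math.pi

formula = (4.0 * M / math.pi) * math.sqrt(1.0 - (DEAD / X) ** 2)
print(b1(DEAD), formula)             # relay with dead zone

print(b1(0.0), 4.0 * M / math.pi)    # ideal relay as the DEAD = 0 special case
```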
Describing Function for Backlash Non Linearity
We have the characteristic curve for backlash as shown in the given figure 3.19.
Fig. 3.19. Characteristic Curve of Backlash Non Linearity.
Let us take the input function as
x = X sin ωt
and define the output from the characteristic curve of figure 3.19. Backlash is a multi-valued
nonlinearity: over one cycle the output follows different segments of the characteristic for
increasing and decreasing input.
Let us first calculate the Fourier series constant A1. On substituting the value of the output in
the expression for A1 and integrating the function from 0 to 2π, A1 is found to be non-zero:
because the output depends on the history of the input, the fundamental component of the
output lags the input.
Similarly, the Fourier constant B1 is obtained by substituting the output in the expression for
B1 and integrating the function from 0 to 2π.
With A1 and B1 known, the describing function of backlash follows from
N = (√(A1² + B1²)/X) ∠ tan⁻¹(A1/B1)
Liapunov’s Stability Analysis
Consider a dynamical system which satisfies
x= f(x, t); with initial condition
=
;  
(3.3)
We will assume that f(x, t) satisfies the standard conditions for the existence and uniqueness
of solutions. Such conditions are, for instance, that f(x, t) is Lipschitz continuous with respect
to x, uniformly in t, and piecewise continuous in t. A point
is an equilibrium point of
equation (3.3) if F(x*, t) 0.
Intuitively and somewhat crudely speaking, we say an equilibrium point is locally stable if all
solutions which start near x* (meaning that the initial conditions are in a neighborhood of x*)
remain near x* for all time.
The equilibrium point x* is said to be locally asymptotically stable if x* is locally stable and,
furthermore, all solutions starting near x* tend towards x* as t → ∞.
We say somewhat crude because the time-varying nature of equation (3.3) introduces all
kinds of additional subtleties. Nonetheless, it is intuitive that a pendulum has a locally stable
equilibrium point when the pendulum is hanging straight down and an unstable equilibrium
point when it is pointing straight up. If the pendulum is damped, the stable equilibrium point
is locally asymptotically stable. By shifting the origin of the system, we may assume that the
equilibrium point of interest occurs at x* = 0. If multiple equilibrium points exist, we will
need to study the stability of each by appropriately shifting the origin.
3.1 Stability in the sense of Lyapunov
The equilibrium point x* = 0 of (3.3) is stable (in the sense of Lyapunov) at t = t0 if for any
ε > 0 there exists a δ(t0, ε) > 0 such that
||x(t0)|| < δ  ⟹  ||x(t)|| < ε, for all t ≥ t0 . . . (3.4)
Lyapunov stability is a very mild requirement on equilibrium points. In particular, it does not
require that trajectories starting close to the origin tend to the origin asymptotically. Also,
stability is defined at a time instant t0. Uniform stability is a concept which guarantees that
the equilibrium point is not losing stability. We insist that for a uniformly stable equilibrium
point x*, δ in Definition 3.1 not be a function of t0, so that equation (3.4) may hold for all
t0. Asymptotic stability is made precise in the following definition:
3.2 Asymptotic stability
An equilibrium point x* = 0 of (3.3) is asymptotically stable at t = t0 if
1. x* = 0 is stable, and
2. x* = 0 is locally attractive; i.e., there exists δ(t0) such that
||x(t0)|| < δ  ⟹  lim (t → ∞) x(t) = 0 . . . (3.5)
As in the previous definition, asymptotic stability is defined at t0.
Uniform asymptotic stability requires:
1. x* = 0 is uniformly stable, and
2. x* = 0 is uniformly locally attractive; i.e., there exists δ independent of t0 for which
equation (3.5) holds. Further, it is required that the convergence in equation (3.5) is uniform.
Finally, we say that an equilibrium point is unstable if it is not stable. This is less of a
tautology than it sounds and the reader should be sure he or she can negate the definition of
stability in the sense of Lyapunov to get a definition of instability. In robotics, we are almost
always interested in uniformly asymptotically stable equilibria. If we wish to move the robot
to a point, we would like to actually converge to that point, not merely remain nearby. Figure
below illustrates the difference between stability in the sense of Lyapunov and asymptotic
stability.
Definitions 3.1 and 3.2 are local definitions; they describe the behavior of a system near an
equilibrium point. We say an equilibrium point x* is globally stable if it is stable for all initial
conditions x0. Global stability is very desirable, but in many applications it can be
difficult to achieve. We will concentrate on local stability theorems and indicate where it is
possible to extend the results to the global case. Notions of uniformity are only important for
time-varying systems. Thus, for time-invariant systems, stability implies uniform stability
and asymptotic stability implies uniform asymptotic stability.
Figure:3.20 Phase portraits for stable and unstable equilibrium points.
Basic theorem of Lyapunov
Let V(x, t) be a non-negative function with derivative V̇(x, t) along the trajectories of the system.
1. If V(x, t) is locally positive definite and V̇(x, t) ≤ 0 locally in x and for all t, then the
origin of the system is locally stable (in the sense of Lyapunov).
2. If V(x, t) is locally positive definite and decrescent, and V̇(x, t) ≤ 0 locally in x and for all
t, then the origin of the system is uniformly locally stable (in the sense of Lyapunov).
3. If V(x, t) is locally positive definite and decrescent, and −V̇(x, t) is locally positive
definite, then the origin of the system is uniformly locally asymptotically stable.
4. If V(x, t) is positive definite and decrescent, and −V̇(x, t) is positive definite, then the
origin of the system is globally uniformly asymptotically stable.
Theorem-1
Consider the system
ẋ = f(x); f(0) = 0
Suppose there exists a scalar function V(x) which for some real number ε > 0 satisfies the
following properties for all x in the region ||x|| ≤ ε:
(a) V(x) > 0; x ≠ 0, that is, V(x) is a positive definite scalar function.
(b) V(0) = 0
(c) V(x) has continuous partial derivatives with respect to all components of x.
(d) dV/dt ≤ 0 (i.e. dV/dt is a negative semi-definite scalar function).
Then the system is stable at the origin.
Theorem-2
If property (d) of Theorem-1 is replaced with (d) dV/dt < 0, x ≠ 0 (i.e. dV/dt is a negative
definite scalar function), then the system is asymptotically stable.
This is intuitively obvious: since the continuous function V > 0 except at x = 0 satisfies the
condition dV/dt < 0, we expect that x will eventually approach the origin. We shall avoid the
rigorous proof of this theorem.
Theorem-3
If all the conditions of Theorem-2 hold and in addition
V(x) → ∞ as ||x|| → ∞
then the system is asymptotically stable in-the-large at the origin.
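A candidate Liapunov function can be checked mechanically against these conditions. The sketch below (plain Python; the system and the choice V = x1² + x2² are illustrative assumptions, not from the text) verifies the conditions of Theorems 1-3 for a simple stable system: here dV/dt works out to −2(x1² + x2²), which is negative definite, and V → ∞ as ||x|| → ∞, so Theorem-3 gives asymptotic stability in-the-large.

```python
# Checking Theorems 1-3 for the example system
#   x1' = -x1 - x2,  x2' = x1 - x2
# with candidate Liapunov function V = x1**2 + x2**2.

def V(x1, x2):
    return x1**2 + x2**2

def Vdot(x1, x2):
    f1 = -x1 - x2
    f2 = x1 - x2
    # dV/dt = (dV/dx1)*f1 + (dV/dx2)*f2, which simplifies to -2*(x1**2 + x2**2)
    return 2.0 * x1 * f1 + 2.0 * x2 * f2

samples = [(1.0, 0.0), (-0.5, 2.0), (3.0, -4.0), (0.01, 0.01)]
ok = all(V(x1, x2) > 0 and Vdot(x1, x2) < 0 for x1, x2 in samples)
print(ok)   # True: V positive definite, dV/dt negative definite at the samples
```

Sampling, of course, only illustrates the conditions; the algebraic simplification of dV/dt is what establishes them for all x.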
Instability
It may be noted that instability in a nonlinear system can be established by direct recourse to
the instability theorem of the direct method. The basic instability theorem is presented below:
Theorem-4
Consider a system
ẋ = f(x); f(0) = 0
Suppose there exists a scalar function W(x) which, for some real number ε > 0, satisfies the
following properties for all x in the region ||x|| ≤ ε:
(a) W(x) > 0; x ≠ 0
(b) W(0) = 0
(c) W(x) has continuous partial derivatives with respect to all components of x.
(d) dW/dt > 0; x ≠ 0
Then the system is unstable at the origin.
Direct Method of Liapunov & the Linear System:
In case of linear systems, the direct method of Liapunov provides a simple approach to
stability analysis. It must be emphasized that, compared to the results presented earlier, no new
results are obtained by the use of the direct method for the stability analysis of linear systems.
However, the study of linear systems using the direct method is quite useful because it
extends our thinking to nonlinear systems.
Consider a linear autonomous system described by the state equation
Ẋ = AX . . . (3.6)
The linear system is asymptotically stable in-the-large at the origin if and only if, given any
symmetric positive definite matrix Q, there exists a symmetric positive definite matrix P
which is the unique solution of
AᵀP + PA = −Q . . . (3.7)
Proof
To prove the sufficiency of the result of the above theorem, let us assume that a symmetric
positive definite matrix P exists which is the unique solution of eqn. (3.7). Consider the scalar
function
V(X) = XᵀPX
The time derivative of V(X) is
V̇(X) = ẊᵀPX + XᵀPẊ
Using eqns. (3.6) and (3.7) we get
V̇(X) = Xᵀ(AᵀP + PA)X = −XᵀQX
Since Q is positive definite, V̇(X) is negative definite. A norm of x may be defined as
||x||_Q = (XᵀQX)^(1/2)
Then
V̇(X) = −||x||_Q²
The system is therefore asymptotically stable in-the-large at the origin.
In order to show that the result is also necessary, suppose that the system is asymptotically
stable while P is negative definite. Consider the scalar function
V(X) = −XᵀPX . . . (3.8)
Then V(X) > 0 and
V̇(X) = −ẊᵀPX − XᵀPẊ = XᵀQX > 0
There is a contradiction, since V(X) given by eqn. (3.8) satisfies the instability theorem.
Thus the conditions for the positive definiteness of P are necessary and sufficient for
asymptotic stability of the system of eqn. (3.6).
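For a concrete 2×2 case, the Lyapunov equation AᵀP + PA = −Q can be solved with hand-coded linear algebra (plain Python sketch; the matrix A is an arbitrary stable example, with eigenvalues −1 and −2, and Q = I). P turning out symmetric positive definite confirms asymptotic stability.

```python
# Solve A^T P + P A = -Q for a 2x2 system and check P > 0.
# A is an arbitrary stable example (eigenvalues -1, -2); Q = I.
A = [[0.0, 1.0],
     [-2.0, -3.0]]

# With P = [[p11, p12], [p12, p22]], writing out A^T P + P A = -I for this A
# gives three linear equations in (p11, p12, p22):
#   -4*p12              = -1
#   p11 - 3*p12 - 2*p22 =  0
#   2*p12 - 6*p22       = -1
p12 = 0.25
p22 = (2.0 * p12 + 1.0) / 6.0
p11 = 3.0 * p12 + 2.0 * p22

# Sylvester's criterion: the 2x2 symmetric P is positive definite
# iff p11 > 0 and det P > 0.
detP = p11 * p22 - p12 * p12
print(p11, p12, p22, detP)
```

Here P = [[1.25, 0.25], [0.25, 0.25]] with det P = 0.25 > 0, so the origin is asymptotically stable in-the-large, matching the eigenvalue check.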
Methods of constructing Liapunov functions for Non linear Systems
As has been said earlier, the Liapunov theorems give only sufficient conditions on system
stability, and furthermore there is no unique way of constructing a Liapunov function, except in
the case of linear systems where a Liapunov function can always be constructed and both
necessary and sufficient conditions established. Because of this drawback, a host of methods
have become available in the literature and many refinements have been suggested to enlarge the
region in which the system is found to be stable. Since this treatise is meant as a first
exposure of the student to the Liapunov direct method, only two of the relatively simpler
techniques of constructing a Liapunov function are advanced here.
Krasovskii's method
Consider a system
ẋ = f(x); f(0) = 0
Define a Liapunov function as
V = fᵀ(x) P f(x) . . . (3.9)
where P is a symmetric positive definite matrix.
Now
V̇ = ḟᵀPf + fᵀPḟ . . . (3.10)
ḟ = (∂f/∂x)(dx/dt) = Jf
where J = [∂fi/∂xj], i, j = 1, 2, …, n, is the n×n Jacobian matrix of f(x).
Substituting in eqn. (3.10), we have
V̇ = fᵀJᵀPf + fᵀPJf = fᵀ(JᵀP + PJ)f
Let
Q = JᵀP + PJ
Since V is positive definite, for the system to be asymptotically stable, Q should be negative
definite. If in addition V → ∞ as ||x|| → ∞, the system is asymptotically stable in-the-large.
POPOV CRITERION
MODEL QUESTIONS
Module-3
Short Questions each carrying Two marks.
1. Explain how jump phenomena can occur in a power frequency circuit. Extend this
concept to show that a ferro resonant circuit can be used to stabilize wide
fluctuations in supply voltage of a.c. mains in a CVT(constant voltage
transformer).
2. Explain various types of equilibrium points encountered in non-linear systems and
draw approximately the phase plane trajectories.
3. Bring out the differences between Liapunov's stability criterion and Popov's
stability criterion.
4. Explain what do you understand by limit cycle?
The figures in the right-hand margin indicate marks.
5. (a) Determine the describing function for the non-linear element described by
y = x³, where x = input and y = output of the non-linear element. [5]
(b) Draw the phase trajectory for the system described by the following differential
equation
d²x/dt² + 0.6 dx/dt + x = 0
with x(0) = 1 and (dx/dt)(0) = 0. [5]
6. Investigate the stability of the equilibrium state for the system governed by:
ẋ1 = −3x1 + x2
ẋ2 = x1 − x2 − x2³ [7]
7. Distinguish between the concepts of stability, asymptotic stability and global
stability. [3]
8. Write short notes on [3.5×6]
(a) Signal stabilisation
(b) Delta method of drawing phase trajectories
(c) Phase plane portrait
(d) Jump resonance in non-linear closed-loop systems
(e) Stable and unstable limit cycles
(f) Popov's stability criterion
9. (a) The origin is an equilibrium point for the pair of equations

\dot{x}_1 = a x_1 + b x_2
\dot{x}_2 = c x_1 + d x_2

Using Liapunov's theory, find sufficient conditions on a, b, c and d such that the
origin is asymptotically stable. [8]
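For the linear pair in question 9(a), the Liapunov analysis leads to the familiar conditions a + d < 0 and ad - bc > 0 (negative trace and positive determinant of the system matrix), which for a second-order linear system are also necessary. A minimal sketch of the check:

```python
# Asymptotic stability of the origin for
#   x1' = a x1 + b x2,  x2' = c x1 + d x2:
# trace < 0 and determinant > 0 of [[a, b], [c, d]].

def origin_asymptotically_stable(a, b, c, d):
    return (a + d) < 0 and (a * d - b * c) > 0

print(origin_asymptotically_stable(-1, 0, 0, -1))  # stable node: True
print(origin_asymptotically_stable(-1, 2, 2, -1))  # saddle point: False
print(origin_asymptotically_stable(0, 1, -1, 0))   # vortex (centre): False
```

Note the vortex case: it is stable in the sense of Liapunov but not asymptotically stable, which is why the strict inequality on the trace matters.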
(b) A nonlinear system is described by

\frac{d^2 x}{dt^2} + \sin x = 0.707

Draw the phase-plane trajectory when the initial conditions are x(0) = \pi/3 and
\dot{x}(0) = 0.
Use the phase-plane method. Compute x vs. t till t = 0.1 sec. [8]
10. Determine the amplitude and frequency of oscillation of the limit cycle of the system
shown in Figure below. Find the stability of the limit cycle oscillation. [16]
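Describing-function results such as the one asked for in question 5(a) can be verified numerically by extracting the fundamental component of the nonlinearity's output. For y = x^3 the describing function is N(X) = 3X^2/4 (the output X^3 \sin^3\omega t has fundamental amplitude 3X^3/4 and no phase shift). A sketch of the check:

```python
import math

# Numerical describing function of y = x**3: drive the nonlinearity
# with x = X sin(theta) and compute the fundamental Fourier coefficient
#   b1 = (1/pi) * integral_0^{2 pi} y(theta) sin(theta) dtheta,
# then N(X) = b1 / X (odd memoryless nonlinearity, so no phase term).

def describing_function_cubic(X, n=100000):
    b1 = 0.0
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        y = (X * math.sin(theta)) ** 3
        b1 += y * math.sin(theta)
    b1 *= (2.0 * math.pi / n) / math.pi  # Riemann sum, then 1/pi factor
    return b1 / X

print(describing_function_cubic(2.0))  # approx 3 * 2**2 / 4 = 3.0
```

The same numerical recipe applies to the relay, dead-zone and saturation nonlinearities that appear in the other questions; only the function inside the loop changes.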
11. Write short notes on Popov's stability criterion and its geometrical interpretation. [4]
12. Derive the expression for describing function of the following non-linearity as shown
in figure below. [14]
13. Describe Lyapunov's stability criterion. [3]
14. What do you mean by sign definiteness of a function? Check the positive definiteness
of

V(X) = x_1^2 + \frac{2 x_2^2}{1 + x_2^2} \qquad [4]
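Sign definiteness can be probed numerically as well as analytically. The sketch below assumes the function in question 14 is V(X) = x_1^2 + 2x_2^2/(1 + x_2^2) (the printed formula is partly garbled, so this form is a reconstruction): V(0) = 0, and V > 0 everywhere else since both terms are non-negative and vanish together only at the origin, so V is positive definite.

```python
# Positive definiteness of the (assumed) function
#   V(x) = x1**2 + 2*x2**2 / (1 + x2**2)

def V(x1, x2):
    return x1 ** 2 + 2.0 * x2 ** 2 / (1.0 + x2 ** 2)

# Sample a grid of states, excluding the origin itself.
samples = [(i / 7.0, j / 7.0) for i in range(-20, 21) for j in range(-20, 21)]
positive_definite = all(V(a, b) > 0 for a, b in samples if (a, b) != (0.0, 0.0))
print(V(0.0, 0.0), positive_definite)  # 0.0 True
```

Note that the second term is bounded above by 2, so V is positive definite but not radially unbounded along the x_2 axis; this distinction matters when V is used to conclude stability in the large.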
15. Distinguish between the concepts of stability, asymptotic stability & global stability.
[4]
16. (a) What are singular points in a phase plane? Explain the following types of
singularity with sketches: [9]
Stable node, unstable node, saddle point, stable focus, unstable focus, vortex.
(b) Obtain the describing function of N(x) in figure below. Derive the formula used.
[6]
17. (a) Evaluate the describing function of the non-linear element shown in the figure below.
[6]
(b) This non-linear element forms part of a closed-loop system shown in the figure below.
Making use of describing-function analysis, determine the frequency, amplitude and
stability of any possible self-oscillation. [10]
18. (a) Explain the method of drawing trajectories in the phase plane using: [10]
i) Lienard's construction
ii) Pell's method
(b) A second-order non-linear system is described by [6]

\ddot{x} + 25(1 + 0.1 x^2)\, x = 0

Using the delta method, obtain the first five points in the phase plane for the initial
condition x(0) = 1.8.