Theory of Robot Control
Control background

In this Appendix we present the background for the main control-theory tools used throughout the book, namely Lyapunov theory, singular perturbation theory, differential geometry, and input-output theory. We provide some motivation for the various concepts and also elaborate on some aspects of interest for the theory of robot control. No proofs of the various theorems and lemmas are given; the reader is referred to the cited literature.
A.1 Lyapunov theory
We will use a rather standard notation and terminology throughout the appendix. $\mathbb{R}_+$ will denote the set of nonnegative real numbers, and $\mathbb{R}^n$ will denote the usual $n$-dimensional vector space over $\mathbb{R}$ endowed with the Euclidean norm

$$\|x\| = \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2}.$$
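As a quick numerical illustration of the definition above, the Euclidean norm can be computed directly from the formula. This is a minimal sketch; the function name `euclidean_norm` is our own, not from the text:

```python
import math

def euclidean_norm(x):
    """Euclidean norm ||x|| = (sum_i |x_i|^2)^(1/2) of a vector x in R^n."""
    return math.sqrt(sum(abs(xi) ** 2 for xi in x))

print(euclidean_norm([3.0, 4.0]))  # → 5.0
```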
Let us consider a nonlinear dynamic system represented as

$$\dot{x} = f(x, t), \tag{A.1}$$

where $f$ is a nonlinear vector function and $x \in \mathbb{R}^n$ is the state vector.

A.1.1 Autonomous systems
The nonlinear system (A.1) is said to be autonomous (or time-invariant) if $f$ does not depend explicitly on time, i.e.,

$$\dot{x} = f(x); \tag{A.2}$$
otherwise, the system is called nonautonomous (or time-varying). In this section we briefly review Lyapunov theory results for autonomous systems; nonautonomous systems will be reviewed in the next section. Lyapunov theory is the fundamental tool for stability analysis of dynamic systems, such as the robot manipulators and mobile robots treated in the book. The basic stability concepts are summarized in the following definitions.

Definition A.1 (Equilibrium) A state $x^*$ is an equilibrium point of (A.2) if $f(x^*) = 0$.
□
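Definition A.1 can be checked numerically for a concrete system. As an illustrative example not taken from the text, consider an undamped pendulum written in first-order form; the sketch below verifies that $f(x^*) = 0$ at the expected equilibria $(0, 0)$ and $(\pi, 0)$:

```python
import math

def f(x):
    """Undamped pendulum in first-order form x = (theta, omega):
    theta' = omega, omega' = -sin(theta). (Illustrative system, our own choice.)"""
    theta, omega = x
    return (omega, -math.sin(theta))

# Both the hanging and the inverted configurations satisfy f(x*) = 0.
for x_star in [(0.0, 0.0), (math.pi, 0.0)]:
    assert all(abs(c) < 1e-12 for c in f(x_star))
```

Note that (A.2) can have several isolated equilibrium points, as here; stability is a property of each equilibrium individually, not of the system as a whole.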
Definition A.2 (Stability) The equilibrium point $x = 0$ is said to be stable if, for any $\rho > 0$, there exists $r > 0$ such that if $\|x(0)\| < r$, then $\|x(t)\| < \rho$ for all $t \geq 0$. Otherwise, the equilibrium point is unstable.
□
Definition A.3 (Asymptotic stability) An equilibrium point $x = 0$ is asymptotically stable if it is stable and if, in addition, there exists some $r > 0$ such that $\|x(0)\| < r$ implies that $x(t) \to 0$ as $t \to \infty$.
□
Definition A.4 (Marginal stability) An equilibrium point that is Lyapunov stable but not asymptotically stable is called marginally stable.
□
Definition A.5 (Exponential stability) An equilibrium point is exponentially stable if there exist two strictly positive numbers $\alpha$ and $\lambda$, independent of time and initial conditions, such that

$$\|x(t)\| \leq \alpha \, e^{-\lambda t} \, \|x(0)\| \qquad \forall t > 0 \tag{A.3}$$

in some ball around the origin.
□
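The bound (A.3) can be observed numerically on a simple system. As an illustrative sketch (the system $\dot{x} = -2x$ and the constants $\alpha = 1$, $\lambda = 2$ are our own choices, matching its exact solution $x(t) = x(0)e^{-2t}$), a forward-Euler simulation stays within the exponential envelope at every step:

```python
import math

# Scalar linear system x' = -2x; Definition A.5 holds with alpha = 1, lam = 2.
alpha, lam = 1.0, 2.0
x0 = 1.5
dt, steps = 1e-3, 5000          # forward-Euler integration over [0, 5]

x, t = x0, 0.0
for _ in range(steps):
    x += dt * (-2.0 * x)        # Euler step for x' = -2x
    t += dt
    # check the exponential bound ||x(t)|| <= alpha * exp(-lam*t) * ||x(0)||
    assert abs(x) <= alpha * math.exp(-lam * t) * abs(x0) + 1e-9
```

Here the Euler iterate decays slightly faster than the true solution (since $1 - 2\,\mathrm{d}t < e^{-2\,\mathrm{d}t}$), so the bound holds throughout; for a general system the constants $\alpha$ and $\lambda$ need not be so easy to read off.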
The above definitions correspond to local properties of the system around the equilibrium point. These stability concepts become global when the corresponding conditions are satisfied for any initial state.

Lyapunov linearization method

Assume that $f(x)$ in (A.2) is continuously differentiable and that $x = 0$ is an equilibrium point. Then, using a Taylor expansion, the system dynamics can be written as

$$\dot{x} = \left. \frac{\partial f}{\partial x} \right|_{x=0} x + o(x), \tag{A.4}$$
where $o(x)$ stands for higher-order terms in $x$. The linearization of the original nonlinear system at the equilibrium point is given by the linear system $\dot{x} = A x$, with $A = \left. \partial f / \partial x \right|_{x=0}$.
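The Jacobian $\partial f / \partial x$ at the equilibrium can be approximated by finite differences when an analytic expression is inconvenient. The sketch below (the damped-pendulum system and the `jacobian` helper are our own illustrative choices, not from the text) linearizes at $x = 0$ and inspects the trace and determinant, whose signs determine the eigenvalue signs for a 2x2 matrix:

```python
import math

def f(x):
    """Damped pendulum (illustrative): theta' = omega, omega' = -sin(theta) - 0.5*omega."""
    theta, omega = x
    return [omega, -math.sin(theta) - 0.5 * omega]

def jacobian(f, x0, eps=1e-6):
    """Forward finite-difference approximation of df/dx at x0."""
    n = len(x0)
    fx = f(x0)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x0)
        xp[j] += eps
        fp = f(xp)
        for i in range(n):
            J[i][j] = (fp[i] - fx[i]) / eps
    return J

A = jacobian(f, [0.0, 0.0])
# A ≈ [[0, 1], [-1, -0.5]]: trace < 0 and det > 0, so both eigenvalues
# have negative real part and x = 0 is locally asymptotically stable.
trace = A[0][0] + A[1][1]                    # ≈ -0.5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # ≈ 1.0
```

This is exactly the use made of (A.4): local stability of the nonlinear system is inferred from the eigenvalues of the linearization, provided none of them lies on the imaginary axis.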