19 Iterative Optimization
In this chapter, numerical optimization algorithms are presented that allow one to minimize cost functions even when they are not linear in the parameters.
19.1 Introduction

For the development of identification methods for parametric continuous-time models, the use of analog computers played an important role in the past. Their models were tunable, and for them the model adjustment techniques or model reference adaptive identification methods were developed. Nowadays, these models are no longer realized on analog computers, but rather as part of computer programs or in special software tools. In the following, numerical optimization algorithms are presented that allow one to adjust the parameters of a model such that the model best matches the recorded measurements.

So far, the parameter estimation methods have mainly been limited to models whose cost function is linear in the parameters. In the following, methods will be presented that can also deal with cost functions that are non-linear in the parameters. This gives great latitude in the design of the cost function and, for example, allows one to directly determine physical parameters of non-linear process models instead of, say, transfer function coefficients, in which physical parameters are frequently lumped together. Constraints can also be included, such as the stability of the resulting system or the requirement that certain physical parameters be positive.

Depending on the arrangement of the model, one has different ways of determining the error, as was shown in Fig. 1.8. The output error

e(s) = y(s) - y_M(s) = y(s) - (B_M(s) / A_M(s)) u(s)    (19.1.1)

leads to a parallel model, the equation error

e(s) = A_M(s) y(s) - B_M(s) u(s)    (19.1.2)

R. Isermann, M. Münchhof, Identification of Dynamic Systems, DOI 10.1007/978-3-540-78879-9_19, © Springer-Verlag Berlin Heidelberg 2011
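The distinction between these error definitions can be made concrete in discrete time. The following minimal sketch assumes a first-order model y_M(k) = -a·y_M(k-1) + b·u(k-1) with illustrative parameter values not taken from the text:

```python
# Output error vs. equation error for a first-order discrete-time model
#   y(k) = -a*y(k-1) + b*u(k-1)
# (a discrete-time stand-in for the s-domain expressions (19.1.1)/(19.1.2))

def simulate(a, b, u):
    """Parallel model: feeds back its OWN past output y_M(k-1)."""
    y = [0.0]
    for k in range(1, len(u)):
        y.append(-a * y[k - 1] + b * u[k - 1])
    return y

def output_error(a, b, u, y_meas):
    """e(k) = y(k) - y_M(k), with y_M from a free-running simulation."""
    return [ym - yh for ym, yh in zip(y_meas, simulate(a, b, u))]

def equation_error(a, b, u, y_meas):
    """e(k) = y(k) + a*y(k-1) - b*u(k-1), using MEASURED past outputs."""
    return [y_meas[k] + a * y_meas[k - 1] - b * u[k - 1]
            for k in range(1, len(u))]

a0, b0 = -0.8, 0.5            # "true" process parameters (assumed values)
u = [1.0] * 20                # step input
y = simulate(a0, b0, u)       # noise-free "measurements"

# Both errors vanish at the true parameters; with a wrong pole, the output
# error accumulates through the model's feedback loop, while the equation
# error depends only on one-step-ahead mismatches.
e_out = output_error(-0.6, b0, u, y)
e_eq = equation_error(-0.6, b0, u, y)
```

Note that the equation error is linear in the parameters a and b, which is what made the least squares methods of the earlier chapters applicable to it, whereas the output error is not: y_M(k-1) itself depends on a and b. This non-linearity is what the iterative methods of this chapter address.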
[Block diagram not reproduced: on the left, the parallel model, where the input u drives both the process and the model, and the error e = y - y_M between the process output and the model output is fed to the optimization algorithm; on the right, the series-parallel model, where the model additionally receives the measured process output y.]

Fig. 19.1. Model setups for iterative optimization
to a series-parallel model, and the input error

e(s) = (A_M(s) / B_M(s)) y(s) - u(s)    (19.1.3)

would lead to a series model or reciprocal model. Similar setups can be formulated in the time domain and for non-linear systems, see Fig. 19.1.

A big advantage of the series-parallel model is the fact that it cannot become unstable, as it does not contain a feedback loop. On the other hand, it cannot be guaranteed that the model obtained by the series-parallel setup can run as a stand-alone simulation model. In particular, for sample times that are very small compared to the process dynamics, the series-parallel model can suggest a model fidelity well above the true one, because the model then often collapses to simply y_M(k) ≈ y(k-1).

As a cost function, one can choose any even function of the error, e.g. f(e) = e² or f(e) = |e|. One can also think of combined cost functions that rate small errors differently than large errors.
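As a sketch of such an iterative optimization, the following minimizes the sum of squared output errors of the parallel model, a cost that is non-linear in the parameters, using a simple derivative-free pattern search. The model structure, input signal, and step sizes are illustrative assumptions, not a method prescribed by the text:

```python
# Derivative-free pattern search minimizing V(a, b) = sum_k e(k)^2, where
# e(k) = y(k) - y_M(k) is the output error of a free-running parallel model.
# V is NOT linear in (a, b) because y_M feeds back into itself.

def simulate(a, b, u):
    y = [0.0]
    for k in range(1, len(u)):
        y.append(-a * y[k - 1] + b * u[k - 1])
    return y

def cost(theta, u, y_meas):
    y_model = simulate(theta[0], theta[1], u)
    return sum((ym - yh) ** 2 for ym, yh in zip(y_meas, y_model))

def pattern_search(theta, u, y_meas, step=0.2, tol=1e-6):
    """Try +/- step along each parameter axis; halve step when stuck."""
    best = cost(theta, u, y_meas)
    while step > tol:
        improved = False
        for i in range(len(theta)):
            for d in (+step, -step):
                cand = list(theta)
                cand[i] += d
                c = cost(cand, u, y_meas)
                if c < best:
                    theta, best = cand, c
                    improved = True
        if not improved:
            step /= 2
    return theta, best

u = [1.0 if k % 7 < 4 else 0.0 for k in range(50)]   # exciting input (assumed)
y = simulate(-0.8, 0.5, u)                            # noise-free "measurements"
theta_hat, V = pattern_search([0.0, 0.0], u, y)       # recovers a ~ -0.8, b ~ 0.5
```

More refined algorithms of this kind (gradient-based and Newton-type searches) are the subject of the following sections; the sketch only illustrates the iterative structure of Fig. 19.1, in which an optimization algorithm repeatedly adjusts the model parameters to reduce the error.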