PROCESS SYSTEMS ENGINEERING

Iterative feedback tuning of uncertain state space systems

J. K. HuusomI,*; N. K. PoulsenII; S. B. JørgensenI

IDepartment of Chemical and Biochemical Engineering, Technical University of Denmark, DK-2800 Lyngby, Denmark. E-mail: jkh@kt.dtu.dk; sbj@kt.dtu.dk

IIDepartment of Informatics and Mathematical Modeling, Technical University of Denmark, DK-2800 Lyngby, Denmark. E-mail: nkp@imm.dtu.dk

ABSTRACT

Iterative Feedback Tuning is a purely data driven tuning algorithm for optimizing control parameters based on closed loop data. The algorithm is designed to produce an unbiased estimate of the performance cost function gradient for iteratively improving the control parameters to achieve optimal loop performance. This tuning method has been developed for systems based on a transfer function representation. This paper presents a state feedback control system with a state observer and its transfer function equivalent in terms of input output dynamics. It is shown how the parameters in the closed loop state space system can be tuned by Iterative Feedback Tuning utilizing this equivalent representation. A simulation example illustrates that the tuning converges to the known analytical solution for the feedback control gain and to the Kalman gain in the state observer. In case of parametric uncertainty, different choices of tuning parameters are investigated. It is shown that the data driven tuning method produces optimal performance for convex problems when it is the model parameter estimates in the observer that are tuned.

Keywords: Data Driven Tuning; Iterative Feedback Tuning; LQG Control; Model Uncertainty.

INTRODUCTION

The need for optimal process operation has rendered methods for optimization of control loop parameters an active research area. Much attention has been directed toward control oriented system identification, which implies model estimation from closed loop data (Schrama 1992, Hjalmarsson et al. 1994, Gevers 2002). Optimizing the parameters in a control loop is an iterative procedure, since the data from one experiment depend on the current controller and repeated iterations are necessary for the loop performance to converge to a minimum. Estimating a model from closed loop data requires special techniques (Ljung 1999) and several algorithms have been published which handle the iterative scheme of closed loop system identification and model based control design (Zang et al. 1995, Gevers et al. 2003, de Callafon 1998). An alternative is a direct data driven approach that tunes the control parameters without utilizing a model estimate.

Data driven tuning methods have mainly been reported for systems given in transfer function form. Examples are the Iterative Feedback Tuning method (Hjalmarsson et al. 1998) and, in recent years, the Correlation based Tuning presented in Karimi et al. (2004) and Virtual Reference Feedback Tuning presented in Campi et al. (2002). Controllers based on a state space description of the system model are mainly tuned based on an estimated process model. Hence, the potential advantages of using a direct tuning method are not exploited. Such advantages are that direct tuning is often computationally less demanding than model identification followed by model based control design, and that direct tuning methods can be used even when insufficient knowledge of the model structure limits the performance of a design based on the certainty equivalence principle.

This paper investigates the use of the direct tuning method, Iterative Feedback Tuning, for optimization of the feedback gain and the state observer gain for a control loop based on a state space system description. Based on the certainty equivalence principle, an analytical solution for the optimal values of these two gains exists. Since this solution relies on the model, the loop performance is sensitive to model errors and bias. The data driven tuning in this paper will be investigated both for systems with full process insight and for systems with parametric uncertainty. The prospect of simple tuning methods for control structures based on state space descriptions is highly interesting. The majority of advanced control strategies today are model based and most often rely on a state space description. Direct controller tuning may serve as an interesting alternative when fine tuning a control loop or when degrading loop performance is observed. This paper is organized as follows. First, a short introduction to the system and the control loop description is given, together with the optimal model based design. Then the data driven tuning method, Iterative Feedback Tuning, is presented and the state space formulation is analyzed in relation to the tuning method. An illustrative simulation example is given, dealing with full process knowledge and with parametric uncertainty. The final conclusions are drawn in the last section.

THE STATE SPACE CONTROL LOOP

Given the following linear, discrete time, single input/single output, and time-invariant system description:
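In standard notation, consistent with the symbols defined below, such a description reads

$$x_{t+1} = A x_t + B u_t + e^{p}_t, \qquad y_t = C x_t + e^{m}_t$$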

where xt represents the system states, ut is the manipulated variable and yt is the system output at time instant t ∈ Z. ePt represents process noise and emt is measurement noise. The cross correlation between ePt and emt will be assumed zero in this paper. It is desired to control this system using the state feedback law
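A standard form of this law, consistent with the gain definitions below, is

$$u_t = -L\,\hat{x}_{t|t-1} + M r_t$$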

where L is a constant feedback gain matrix and M is a controller gain matrix for the reference signal. Since the exact value of the states is not known, an observer is used to generate state estimates. This is based on measurements of the process output and the process model. The observer has the form of the predictive Kalman filter with the constant gain matrix K, assuming stationary conditions.
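Written with the model estimate $(\hat{A}, \hat{B}, \hat{C})$ used in the observer, a standard form of this predictive Kalman filter is

$$\hat{x}_{t+1|t} = \hat{A}\hat{x}_{t|t-1} + \hat{B} u_t + K\left(y_t - \hat{C}\hat{x}_{t|t-1}\right)$$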

The structure of the state space feedback loop with observer, consisting of the system, feedback law and observer equations above, is shown in Fig. 1. In order to have a static gain of one from the reference to the process output, the following requirement on M can be derived based on an assumption of full state information
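For the single input/single output case, a sketch of the standard steady state derivation gives

$$M = \left[C\left(I - A + BL\right)^{-1}B\right]^{-1}$$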


Introducing the state estimation error and assuming full process knowledge, the system can be represented by the set of Equations in , which provides a convenient description with a clear distinction between feedback control and state estimation dynamics (Åström 1970):
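In standard separation form, with the estimation error defined as $\tilde{x}_t = x_t - \hat{x}_{t|t-1}$ and full process knowledge ($\hat{A}=A$, etc.), these equations read

$$x_{t+1} = (A - BL)x_t + BL\,\tilde{x}_t + BM r_t + e^{p}_t, \qquad \tilde{x}_{t+1} = (A - KC)\tilde{x}_t + e^{p}_t - K e^{m}_t$$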

If the system is stabilizable and detectable, a set {L,K} exists which renders the system stable (Kwakernaak and Sivan 1972). Hence, if optimal values for the feedback and Kalman filter gains are used, stability is guaranteed. Computations of these optimal gains are shown in the following subsection.

Optimal Model Based Design

Optimal values for both the observer gain K and the feedback gain L exist and have known analytical solutions (Anderson and Moore 1989, Grewal and Andrews 1993). The optimal, stationary value for the gain matrix in the predictive Kalman filter can be evaluated based on the process model and information of the noise intensity by employing the certainty equivalence principle. The stationary condition is indicated by the ∞ subscript on the gain and the covariance matrices of the state prediction error.

The equation for the state prediction error variance matrix is an algebraic Riccati equation.
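A minimal sketch of this computation in Python is given below; the numerical values of the model and the noise covariances R1 (process) and R2 (measurement) are placeholders for illustration only.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Placeholder model and noise information (not the paper's values).
A = np.array([[0.95]]); C = np.array([[1.0]])
R1 = np.array([[0.1]]); R2 = np.array([[0.01]])

# P solves the filter algebraic Riccati equation
#   P = A P A' + R1 - A P C' (C P C' + R2)^{-1} C P A'
P = solve_discrete_are(A.T, C.T, R1, R2)

# Stationary gain of the predictive Kalman filter
K = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + R2)
```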

The optimal value for the controller gain depends on the optimization criterion. In this paper, the control design will minimize the value of a cost function for the loop performance. For a single input/single output system
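Up to a constant scaling, such a performance cost function takes the form

$$F(\rho) = \frac{1}{2N}\,E\!\left[\sum_{t=1}^{N}\left(y_t^{2} + \lambda u_t^{2}\right)\right]$$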

where λ determines the weighting between the penalty on the output and the control. For optimal tracking, the output is replaced by the tracking error in the cost function. The optimal Linear Quadratic Gaussian controller (LQG) produces an optimal feedback gain for the quadratic cost function
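A standard form of this criterion, written with the state weighting matrix QR that appears below, is

$$J = E\!\left[\sum_{t=1}^{N}\left(x_t^{T} Q_R\, x_t + \lambda u_t^{2}\right)\right]$$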

Using the linear system description in Equation with Gaussian noise and assuming that the horizon in the criterion approaches infinity produces the following stationary solution for the controller gain:
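In standard form this stationary solution reads

$$L_{\infty} = \left(B^{T}S_{\infty}B + \lambda\right)^{-1}B^{T}S_{\infty}A, \qquad S_{\infty} = A^{T}S_{\infty}A + Q_R - A^{T}S_{\infty}B\left(B^{T}S_{\infty}B + \lambda\right)^{-1}B^{T}S_{\infty}A$$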

This set of equations has the same form as those for the design of the predictive Kalman filter. It can be seen that the weights QR and λ in the cost function play the same role in these equations as the noise variances play in the filter equations. In the case QR = CᵀC, the quadratic cost function in the states is equivalent to the output cost function above.

ITERATIVE FEEDBACK TUNING

This data driven tuning method was introduced by Hjalmarsson et al. (1994) and further developed and refined by Hjalmarsson et al. (1998). An extensive overview of contributions and applications for this tuning method can be found in Gevers (2002) and Hjalmarsson (2002). The tuning method optimizes a set of control parameters, ρ, based on a performance cost function like Equation . The main idea is to use closed loop data to determine an unbiased estimate of the cost function gradient with respect to the control parameters and use that estimate in a gradient based search algorithm. Iterative Feedback Tuning is designed to tune lower level controllers that are linear in the control parameters as, e.g., PID controllers (Hjalmarsson 2002, Gevers 2002). It has been tested in practice for PID control loops in Hjalmarsson et al. (1998) and, more recently, for inventory control in Huusom et al. (2007).

The Iterative Feedback Tuning method works with a system description for a feedback loop with a two degree of freedom controller, C = {Cr, Cy}, as shown in Fig. 2 and Equation . The process model G is a discrete time transfer function and the noise vt is a zero mean, weakly stationary random signal.
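One way of writing this closed loop description, consistent with the definitions of S and T below, is

$$y_t = G C_r S\, r_t + S\, v_t, \qquad u_t = C_r S\, r_t - C_y S\, v_t, \qquad S = \left(1 + G C_y\right)^{-1}, \quad T = G C_y S = 1 - S$$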


where S and T are the sensitivity and the complementary sensitivity function, respectively. Based on a general cost function with a penalty on the tracking error, ỹt = yt - ydt, where ydt is the desired process output

and the system description in Equation , Hjalmarsson et al. (1998) showed that the cost function gradient with respect to the control parameters is
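Up to the scaling of the cost function, this gradient takes the form

$$\frac{\partial F}{\partial \rho} = \frac{1}{N}\,E\!\left[\sum_{t=1}^{N}\left(\tilde{y}_t\frac{\partial \tilde{y}_t}{\partial \rho} + \lambda\, u_t\frac{\partial u_t}{\partial \rho}\right)\right], \qquad \tilde{y}_t = y_t - y^{d}_t$$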

where E[∙] is the mathematical expectation. The derivative of the input and output are given by
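Since the desired output does not depend on ρ, $\partial\tilde{y}_t/\partial\rho = \partial y_t/\partial\rho$, and one way of writing these derivatives for the loop in Fig. 2 is

$$\frac{\partial y_t}{\partial \rho} = \frac{\partial C_r}{\partial \rho}\,G S\, r_t - \frac{\partial C_y}{\partial \rho}\,G S\, y_t, \qquad \frac{\partial u_t}{\partial \rho} = \frac{\partial C_r}{\partial \rho}\,S\, r_t - \frac{\partial C_y}{\partial \rho}\,S\, y_t$$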

Please note the difference between the reference, rt, and the desired closed loop response, ydt, when the loop is tuned for obtaining a smooth transition in, e.g., a step in the reference. When rt = 0, then ydt= 0 and the cost function in Equation is equal to Equation , which was used in the LQG design. The problem is then reduced to a disturbance rejection problem where the gradient expressions reduce to

Given the cost function gradient estimate, the updates of the control parameters in the optimization are performed by iterations in
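In standard form the update is

$$\rho_{i+1} = \rho_i - \gamma_i\, R_i^{-1}\,\widehat{\frac{\partial F}{\partial \rho}}\!\left(\rho_i\right)$$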

where γi is the step length and Ri is some positive definite matrix, preferably the Hessian estimate of the cost function
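A common choice, consistent with the Gauss-Newton approximation used in Iterative Feedback Tuning, is

$$R_i = \frac{1}{N}\sum_{t=1}^{N}\left[\widehat{\frac{\partial y_t}{\partial \rho}}\,\widehat{\frac{\partial y_t}{\partial \rho}}^{T} + \lambda\,\widehat{\frac{\partial u_t}{\partial \rho}}\,\widehat{\frac{\partial u_t}{\partial \rho}}^{T}\right]$$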

This optimization will converge despite the stochastic nature of the cost function gradient as a stochastic approximation method as long as an optimum exists, the estimate is unbiased and the following condition on the step size is fulfilled (Hjalmarsson et al. 1998, Robbins and Monro 1951).
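The classical conditions of Robbins and Monro (1951) on the step size sequence are

$$\sum_{i=1}^{\infty}\gamma_i = \infty, \qquad \sum_{i=1}^{\infty}\gamma_i^{2} < \infty$$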

This condition is fulfilled by having γi = a/i, where a is some constant. In Hildebrand et al. (2005), a quantitative analysis of the convergence of the Iterative Feedback Tuning algorithm was performed. When the number of data points, N, is sufficiently large, the variance of the gradient estimate becomes so small that it can be neglected, and a faster converging gradient scheme than the stochastic approximation may perform well, e.g., a Newton algorithm with a step size of 1. This is often the preferred strategy, since a poor rate of convergence implies many plant experiments.

The Tuning Algorithm

In order to form an estimate of the cost function gradient, measurements of the system's input and output and their derivatives with respect to the control parameters are needed. The following three closed loop experiments are performed on the system, where the superscripts refer to the experiment number.
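In standard Iterative Feedback Tuning notation, the reference signals used in the three experiments are

$$r^{1}_t = r_t, \qquad r^{2}_t = r_t - y^{1}_t, \qquad r^{3}_t = r_t$$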

where rt is the reference signal during normal operation. The sequence of input/output data from these experiments, (yj; uj), j ∈ {1,2,3}, will be utilized as
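One common way of forming the gradient signals from these data sets is

$$\widehat{\frac{\partial y_t}{\partial \rho}} = \frac{1}{C_r}\left[\frac{\partial C_r}{\partial \rho}\,y^{3}_t - \frac{\partial C_y}{\partial \rho}\left(y^{3}_t - y^{2}_t\right)\right], \qquad \widehat{\frac{\partial u_t}{\partial \rho}} = \frac{1}{C_r}\left[\frac{\partial C_r}{\partial \rho}\,u^{3}_t - \frac{\partial C_y}{\partial \rho}\left(u^{3}_t - u^{2}_t\right)\right]$$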

It can be seen from these equations that only the noise in the last two experiments contributes as a nuisance, since these signals contribute to the variance of the gradient estimates. The noise in the first experiment, in contrast, contributes to the analytical part of the gradients from Equation . When the tuning algorithm is used for disturbance rejection, i.e., rt=0, the third experiment is redundant. The tuning algorithm can be summarized as:

1) Collect (yj;uj) j ∈ {1,2,3} from the three closed loop experiments with the controller C(ρi);

2) Evaluate the gradient of the cost function ∂F(ρi)/∂ρ, the Ri matrix and update the control parameters to ρi+1;

3) Evaluate the performance F(ρi+1) and repeat with i:=i+1 if the desired performance tolerance is not achieved.
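As an illustration of step 2, the following is a minimal Python sketch of one gradient evaluation for the disturbance rejection case (rt = 0), assuming a single scalar tuning parameter ρ. The filter (num, den) stands for (1/Cy)·(∂Cy/∂ρ) written in the shift operator; forming it from a given controller parameterization is left to the user, and all names below are illustrative only.

```python
import numpy as np
from scipy.signal import lfilter

def ift_gradient(y1, u1, y2, u2, num, den, lam):
    """Gradient estimate of F = 1/(2N) sum(y^2 + lam*u^2) from two experiments.

    Experiment 1: normal closed loop operation (r = 0)      -> y1, u1
    Experiment 2: same loop driven with reference r2 = -y1  -> y2, u2
    Filtering the experiment 2 data through (1/Cy)*(dCy/drho) gives
    unbiased estimates of dy/drho and du/drho (Hjalmarsson et al., 1998).
    """
    N = len(y1)
    dy = lfilter(num, den, y2)     # estimate of dy/drho
    du = lfilter(num, den, u2)     # estimate of du/drho
    return (np.dot(y1, dy) + lam * np.dot(u1, du)) / N
```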

TRANSFORMING THE STATE SPACE FORMULATION

The restrictions which the Iterative Feedback Tuning method sets on the control strategy are that the controller, and its partial derivatives with respect to the control parameters, can be reformulated in transfer function form. It is required that the filters in the gradient expressions are proper and stable. If the derivative of the controller is unstable, it is required to include filters in the performance cost function to compensate and ensure a bounded output from the filtering (Hjalmarsson et al. 1998). Using the system description, the system model estimate and the observer based feedback law, a transfer function description of the system and the feedback connection can be produced by elimination of the states. The conversion from the discrete time state space description to the equivalent discrete time transfer function form for the true system is given by
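In terms of the shift operator defined below, a standard form of this conversion is

$$y_t = G(q)\,u_t + v_t, \qquad G(q) = C\left(qI - A\right)^{-1}B, \qquad v_t = C\left(qI - A\right)^{-1}e^{p}_t + e^{m}_t$$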

where q is the one step ahead shift operator. In general: Ψt+i = qiΨt. The transfer function for the feedback connection is
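Eliminating the observer state from the feedback law and the observer (written with the model estimate and the convention $u_t = C_r r_t - C_y y_t$), a standard derivation gives

$$C_y(q) = L\left(qI - \hat{A} + \hat{B}L + K\hat{C}\right)^{-1}K$$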

The controller, Cr, which represents the transfer from the reference to the control signal, can be derived from Equations and . The controller will include the dynamics of the observer loop.
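Under the same convention, a sketch of this derivation gives

$$C_r(q) = M\left[1 - L\left(qI - \hat{A} + \hat{B}L + K\hat{C}\right)^{-1}\hat{B}\right]$$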

In the special case of a first order system, i.e., a scalar state, Cr simplifies to
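With scalar model parameters $\hat{a}, \hat{b}, \hat{c}$ and scalar gains L, K and M, this reduces to

$$C_r(q) = M\,\frac{q - \hat{a} + K\hat{c}}{q - \hat{a} + \hat{b}L + K\hat{c}}$$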

The interconnection between the plant and the controller transfer functions is depicted in Fig. 2. The transfer function description given in this section and used by the Iterative Feedback Tuning method only includes additive noise on the output, in contrast to the state space description, which provides a clear distinction between process and measurement noise. From the equations above, the following identity can be derived, which renders the two descriptions identical in terms of input output dynamics.
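Consistent with the transfer function form of the system given above, this identity is

$$v_t = C\left(qI - A\right)^{-1}e^{p}_t + e^{m}_t$$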

Tuning Potentials

From the transformation of the state space system into a transfer function form, it is seen that a controller can be derived that makes it possible to tune the control parameters using the Iterative Feedback Tuning method. The control parameters can be both the feedback and/or the observer gains, or it could also be the parameters in the model estimate since these also are an intrinsic part of the controller. When full process knowledge is available, it does not make sense to find the optimal feedback and observer gain by data driven tuning. The purpose of analyzing this scenario in this contribution is merely to show that the results from the tuning are consistent with the well known analytical results. This is illustrated in part 1 of the simulation example in the following section and was shown in Huusom et al. (2009a).

When full process knowledge is not available, e.g., there is uncertainty in the estimated parameters or incomplete information about the noise intensity, then the values for the feedback and observer gains will be affected and so will the achieved closed loop performance. This performance deterioration is a consequence of the gains being evaluated based on the certainty equivalence principle. When the only error is related to the information about the noise intensities, it is straightforward to tune the gain in the Kalman filter, using the data driven approach. Hence, Iterative Feedback Tuning provides an alternative method to tune a Kalman filter to that of direct estimation of noise intensities (Åkesson et al. 2008). If there are errors in the parameters of the system model description as indicated by Equation , the system cannot be represented by the set of Equations in . The correct representation for this case would be

where

The case with uncertain model parameters was treated in Huusom et al. (2009b,c) and results concerning a different tuning strategy will be presented in part 2 of the simulation example in the following section.

Iterative Feedback Tuning can handle multivariable systems, but an approximation in the algorithm is necessary if the number of required experiments should not grow with the number of input/output pairings (Hjalmarsson et al. 1998, Hjalmarsson 2002). No restriction has been imposed by the derivation in this paper regarding the system dimension as long as all controller parameters subject to the tuning are stacked in the vector ρ. In practice, tuning of multivariable systems may suffer from not having sufficiently rich data, which may lead to ill conditioning of the Hessian estimate R in Equation (16).

SIMULATION EXAMPLE PART 1 – FULL PROCESS KNOWLEDGE

In order to illustrate the potential of using the Iterative Feedback Tuning method on a discrete time, state space system with observer and state feedback, the following first order system is investigated.

This system is characterized by fairly slow dynamics and a static gain of one from the input to the output. The sample time for this system is 1 second. The system is too simple to have any industrial relevance, but the objective of this example is to show the principles of the tuning method and demonstrate its ability to converge to optimality; the general method described in this paper applies to higher order linear models as well. The system will be implemented using the structure of Equation , where full knowledge of the process and the noise is assumed. For this system, the feedback gain will be tuned for tracking and noise rejection. Furthermore, both the feedback gain and the observer gain are tuned simultaneously for the noise rejection problem. The performance cost function used for the tracking problem has λ=0, which produces the minimum variance controller. For the noise rejection problem, λ ∈ {0; 0.001}.
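The following Python sketch illustrates how such a first order closed loop with observer and state feedback can be simulated and its performance cost evaluated. The parameters, gains and noise levels are hypothetical placeholders chosen so that the static gain c·b/(1 - a) equals one; they are not the values used in the paper.

```python
import numpy as np

# Hypothetical first order model and gains (illustration only).
a, b, c = 0.95, 0.05, 1.0
L, K = 5.0, 0.5                      # some stabilizing feedback and observer gains
M = (1.0 - a + b*L) / (c*b)          # unit static gain from reference to output

rng = np.random.default_rng(0)
N = 3600                             # number of samples (1 s sample time)
r = np.zeros(N)                      # disturbance rejection: r_t = 0
x, xhat = 0.0, 0.0
y_log, u_log = np.zeros(N), np.zeros(N)

for t in range(N):
    y = c*x + 0.01*rng.standard_normal()        # measured output
    u = -L*xhat + M*r[t]                         # state feedback law
    xhat = a*xhat + b*u + K*(y - c*xhat)         # predictive Kalman filter update
    x = a*x + b*u + 0.1*rng.standard_normal()    # true process update
    y_log[t], u_log[t] = y, u

lam = 0.001
F = 0.5*np.mean(y_log**2 + lam*u_log**2)         # loop performance estimate
print(F)
```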

Tuning for Set Point Tracking

Initially the noise is characterized by ept=0.025 and emt=0.01 which are used with the process model to calculate the optimal Kalman filter gain by Equation . It is desired to find the optimal feedback gain, which will make the step response of the closed loop system resemble that of a first order system with a settling time of 10 seconds. A time horizon of 20 seconds will be used in the cost function. Hence, this closed loop system will have a two degree of freedom controller, where Cy is given by and Cr by . In the tuning, the control structure is treated as a two degree of freedom controller. The optimal controller gain for this tracking problem has been determined numerically to be Lopt=11.959. For two different initial values of the feedback gain, L0={5;20}, 50 iterations in the data driven tuning were performed and the trajectories for the feedback gain and the performance cost function are shown in Fig. 3. It can be seen in the figures that the tuning converges in very few iterations to a level around the optimal value of the feedback gain and, hence, to the expected value for the loop performance. The fluctuations in the estimate of the feedback gain after it approaches the optimal value are caused by the stochastic element in the gradient estimate from the process data. The noise realization changes between iterations in the data driven tuning method, hence the amplitude of these oscillations can be decreased by using longer data sequences for the gradient estimate.


Tuning for Disturbance Rejection

When tuning for disturbance rejection, the process noise level has been increased so that ept=1 and the time horizon used in the performance cost function N is extended to one hour. Since this is a disturbance rejection problem, only a one degree of freedom controller has been used in the tuning, which means that only experiments one and two in the Iterative Feedback Tuning algorithm are required in each iteration and only the gradient ∂Cy/∂L needs to be evaluated. Initially, the optimal Kalman filter gain is used and only the feedback gain is tuned, both for the minimum variance control problem and for λ=0.001, Fig. 4. The optimal feedback gain, L (λ=0), is very close to a limit that would make the controller Cy unstable. The optimal value for the feedback gain is calculated with Equation . A constraint is implemented in the control parameter update Equation , which will decrease the step length γi from 1 in case it is predicted that Li+1 > Lmax. Lmax produces a controller with a pole on the stability limit. Results from simultaneous tuning of both gains with λ=0.001 are shown in Fig. 5. It is seen in both cases that the tuning is able to converge to the level of the optimal values of the gains. The rate of convergence is not quite as fast as for the tracking problem. This is to be expected, since the step response experiment perturbed the system more than the noise does in the disturbance rejection case.



SIMULATION EXAMPLE PART 2 – PARAMETRIC UNCERTAINTY

In this section, the potential of using the Iterative Feedback Tuning method for systems with parametric uncertainty is illustrated using the process model from the previous section. The noise is characterized by ept=0.1 and emt=0.01. Only the optimization criterion with λ = 0.001 will be used in this section. In the following two subsections, different choices of tuning parameters are used for the data driven tuning. In the case when only the noise variance is unknown and all other parameters in the observer are correct, only the Kalman filter gain needs to be tuned. The parameters used to calculate the feedback gain and the other parameters in the observer are correct and the term Δ in Equation will be zero. Optimal performance is therefore achieved if the direct tuning converges to the value K, which is the optimal observer gain based on full process insight and noise characteristics. In case the b parameter is wrongly estimated, the feedback gain L will be affected and Δ, which is part of the state estimation error, will no longer be zero. Hence, the certainty equivalence feedback gain is not L, which is based on the true system parameters. This is verified by Fig. 6, which shows the performance cost as a function of the feedback gain when the b parameter is erroneous. Fig. 6 shows that the optimal value for the feedback gain, when this erroneous b is used, is approximately 21, while L is approximately 23. The certainty equivalence design for the feedback gain is approximately 17, which is evidently not optimal for this system. In case either of the parameters a or c in the model estimate is erroneous, this will affect both the values of the feedback and Kalman gains and the state estimate.


Compensation strategy

The idea behind the compensation strategy is to use the certainty equivalence design as an initial design for the controller and the observer. The loop performance is then gradually improved by direct tuning of these gains in order to achieve the best possible performance given the model that is available. Fig. 7 shows the result of direct tuning for two scenarios: first, only the noise intensity is unknown, which implies tuning of K, and secondly, the value of the b parameter is wrong, which implies tuning of L. Initially, equal values of 1 were used for the two noise variances in the calculation of K0. The estimate for b used to find L0 was twice the true value. Fig. 7 shows that the tuning converges in a few iterations. It is also observed that the feedback gain does in fact converge to a different value than L, as indicated in Fig. 6. In the case where errors occur in either the c or the a parameter, the certainty equivalence design of both gains will be affected. Hence, it is necessary to tune both gains simultaneously. This is performed and the results are given in Fig. 8. The results were produced by using c=0.9 and a=0.9, respectively, in the model. The results show that the two gains converge to values different from those calculated based on full process knowledge, since the erroneous parameters are used in the observer. It is not clear from the figures that the performance is improved through the iterations. Evaluation of the cost function using a long simulation time in order to improve statistics provides more convincing results. It is seen that the cost function is F(L0,K0)=1.5075 and F(L0,K0)=1.5422, respectively, when the parameter c or a is wrong. In both cases, the performance cost converges to 1.5049, which is the same as the optimal value given full process information. Hence, for this case, the direct tuning completely compensates for the error in the model parameters and produces a loop with optimal performance. This may not be the case in general.



Adaptation Strategy

The adaptation strategy employed in this section differs from the compensation strategy. First, the model parameter estimate is tuned and then the feedback and Kalman gains are updated to ensure optimality by applying certainty equivalence in each iteration. The model parameters enter the transfer functions of the controllers just as the two gains do, as clearly seen in Equations and . In the following, three experiments are performed where one of the model parameters a, b or c is wrong. By direct tuning of the parameter in question, with a subsequent update of the feedback and Kalman gains, the closed loop performance is optimized and will converge to optimality if the parameter estimate converges to the true value of the system parameter. Figs. 9, 10 and 11 show the results from 15 iterations of the tuning, when the erroneous parameter is either too large or too small. All experiments converge to the optimal solution in 5-10 iterations, which is very good when tuning is conducted for the disturbance rejection case. This method also allows tuning of all the model parameters simultaneously, just as both gains were tuned simultaneously in the previous subsection.




The cost function gradient estimate used in a Newton scheme by the Iterative Feedback Tuning method employs a transfer function description of the state space control loop in Fig. 1, as seen previously. This estimate is constructed by filtering closed loop input/output data through a filter that contains the gradient of the controller with respect to the tuning parameters, see Equations and . Since the feedback and feed-forward controllers in the transfer function description of the state space control loop are functions of both the model estimate and the gain matrices, the partial derivatives of the optimal gains with respect to the tuning parameters are needed. These have been obtained by a first order forward difference approximation in the results presented here.
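The following is a minimal Python sketch of such a forward difference approximation for the first order case, here differentiating the certainty equivalence gains with respect to the b parameter. Function names and the step size db are illustrative assumptions, and λ is assumed strictly positive so that the Riccati solvers are well posed.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def ce_gains(a, b, c, r1, r2, qr, lam):
    """Certainty equivalence LQ feedback gain and predictive Kalman gain
    for a scalar (first order) model with noise variances r1, r2."""
    A, B, C = np.array([[a]]), np.array([[b]]), np.array([[c]])
    # Control Riccati equation and feedback gain
    S = solve_discrete_are(A, B, np.array([[qr]]), np.array([[lam]]))
    Lgain = np.linalg.solve(B.T @ S @ B + lam, B.T @ S @ A)
    # Filter Riccati equation and predictive Kalman gain
    P = solve_discrete_are(A.T, C.T, np.array([[r1]]), np.array([[r2]]))
    Kgain = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + r2)
    return Lgain.item(), Kgain.item()

def gain_derivatives_wrt_b(a, b, c, r1, r2, qr, lam, db=1e-6):
    """First order forward difference of (L, K) with respect to b."""
    L0, K0 = ce_gains(a, b, c, r1, r2, qr, lam)
    L1, K1 = ce_gains(a, b + db, c, r1, r2, qr, lam)
    return (L1 - L0) / db, (K1 - K0) / db
```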

The results obtained with the adaptation strategy are by far superior to those in the previous subsection, where the gains were tuned while the erroneous parameters in the state estimator were kept constant, i.e., the compensation strategy. The increased complexity of tuning the system parameters and adapting the gains appears to be rewarded, since convergence of the model estimate to the true system leads to optimal closed loop performance.

CONCLUSIONS

Equivalent forms for a closed loop control system have been found for a state space system with observer and state feedback, and for a transfer function system representation, respectively. These equivalent forms mean that a transfer function description for the feedback controller in the closed loop state space system has been derived. Hence, it is shown how the data driven controller tuning method, Iterative Feedback Tuning, is applicable also for state space control systems.

In simulation studies, it is demonstrated that the tuning method converges to known analytical solutions for the feedback gain and the Kalman filter gain in the state observer when the underlying system is known. Furthermore, a study of tuning a system with parametric uncertainty in the model parameters has shown that direct tuning of the feedback and Kalman gains will improve the closed loop performance compared to using the certainty equivalence design. This choice of tuning parameters is labeled the compensation strategy and will, in general, not lead to optimal performance due to the erroneous parameter estimates used in the state estimator. A more promising strategy, labeled the adaptation strategy, tunes the model parameter estimates and readjusts the feedback and observer gains to maintain certainty equivalence in each iteration. This approach will converge to optimal loop performance if the model parameters converge to the system parameters. Hence, the adaptation strategy is superior to the compensation strategy.

ACKNOWLEDGMENTS

The first author gratefully acknowledges the Danish Council for Independent Research, Technology and Production Sciences (FTP) for funding through grant no. 274-08-0059.

(Submitted: December 22, 2009; Revised: May 5, 2010; Accepted: July 28, 2010)

  • Anderson, B. D. O. and Moore, J. B., Optimal Control: Linear Quadratic Methods. Prentice-Hall (1989).
  • de Callafon, R. A., Feedback oriented identification for enhanced and robust control - a fractional approach applied to a wafer stage. Ph.D. Thesis, Delft University of Technology, The Netherlands (1998).
  • Campi, M. C., Lecchini, A. and Savaresi, S. M., Virtual reference feedback tuning: A direct method for the design of feedback controllers. Automatica, 38, (8), pp. 1337-1346 (2002).
  • Gevers, M., A decade of progress in iterative process control design: from theory to practice. Journal of Process Control, 12, (4), pp. 519-531 (2002).
  • Gevers, M., Bombois, X., Codrons, B., Scorletti, G. and Anderson, B. D. O., Model validation for control and controller validation in a prediction error identification framework - Part I: Theory. Automatica, 39, pp. 403-415 (2003).
  • Grewal, M. S. and Andrews, A. P., Kalman Filtering: Theory and Practice. Prentice Hall (1993).
  • Hildebrand, R., Lecchini, A., Solari, G. and Gevers, M., Asymptotic accuracy of iterative feedback tuning. IEEE Transactions on Automatic Control, 50, (8), pp. 1182-1185 (2005).
  • Hjalmarsson, H., Iterative feedback tuning - an overview. International Journal of Adaptive Control and Signal Processing, 16, pp. 373-395 (2002).
  • Hjalmarsson, H., Gevers, M., De Bruyne, F. and Leblond, J., Identification for control: Closing the loop gives more accurate controllers. Proceedings of the 33rd IEEE Conference on Decision and Control, pp. 4150-4155 (1994).
  • Hjalmarsson, H., Gevers, M., Gunnarsson, S. and Lequin, O., Iterative feedback tuning: Theory and applications. IEEE Control Systems Magazine, 18, (4), pp. 26-41 (1998).
  • Hjalmarsson, H., Gunnarsson, S. and Gevers, M., A convergent iterative restricted complexity control design scheme. Proceedings of the 33rd IEEE Conference on Decision and Control, vol. 2, pp. 1735-1740 (1994).
  • Huusom, J. K., Poulsen, N. K. and Jørgensen, S. B., Data driven tuning of state space control loops with observers. Proceedings of the 10th European Control Conference ECC'09, pp. 1961-1966 (2009a).
  • Huusom, J. K., Poulsen, N. K. and Jørgensen, S. B., Data driven tuning of state space control loops with unknown state information and model uncertainty. Proceedings of the 19th European Symposium on Computer Aided Process Engineering ESCAPE19, pp. 441-446 (2009b).
  • Huusom, J. K., Poulsen, N. K. and Jørgensen, S. B., Iterative feedback tuning of state space control loops with observers given model uncertainty. Proceedings of the 10th International Symposium on Process Systems Engineering PSE'09, pp. 1359-1364 (2009c).
  • Huusom, J. K., Santacoloma, P. A., Poulsen, N. K. and Jørgensen, S. B., Data driven tuning of inventory controllers. Proceedings of the 46th IEEE Conference on Decision and Control, pp. 4191-4196 (2007).
  • Karimi, A., Mišković, L. and Bonvin, D., Iterative correlation-based controller tuning. International Journal of Adaptive Control and Signal Processing, 18, (8), pp. 645-664 (2004).
  • Kwakernaak, H. and Sivan, R., Linear Optimal Control Systems. John Wiley & Sons (1972).
  • Ljung, L., System Identification: Theory for the User. 2nd Ed., Prentice Hall (1999).
  • Robbins, H. and Monro, S., A stochastic approximation method. Annals of Mathematical Statistics, 22, (3), pp. 400-407 (1951).
  • Schrama, R. J. P., Accurate identification for control: The necessity of an iterative scheme. IEEE Transactions on Automatic Control, 37, (7), pp. 991-994 (1992).
  • Zang, Z., Bitmead, R. R. and Gevers, M., Iterative weighted least-squares identification and weighted LQG control design. Automatica, 31, (11), pp. 1577-1594 (1995).
  • Åkesson, B. M., Jørgensen, J. B., Poulsen, N. K. and Jørgensen, S. B., A generalized autocovariance least-squares method for Kalman filter tuning. Journal of Process Control, 18, pp. 769-779 (2008).
  • Åström, K. J., Introduction to Stochastic Control Theory. Academic Press (1970).
  • * To whom correspondence should be addressed. This is an extended version of the manuscript presented at the PSE 2009 - 10th International Symposium on Process Systems Engineering, 2009, Salvador, Brazil, and published in Computer Aided Chemical Engineering, vol. 27, pp. 1773-1778.