Simple implementation of optimal control for process systems
The main focus of this thesis is to find simple ways of implementing optimal operation of process plants. The work is in the spirit of "self-optimizing control", which can be summarized as [Skogestad, 2000b]: "The goal is to find a self-optimizing control structure where acceptable operation under all conditions is achieved with constant setpoints for the controlled variables. More generally, the idea is to use the model off-line to find properties of the optimal solution suited for (simple, model-free) on-line implementation."

In the first part of the thesis, the problem of static output feedback is addressed. This is one of the open problems in control [Syrmos et al., 1997], and we derive a novel approximation to this problem by using links to self-optimizing control. The approximation can be used to calculate multiple-input multiple-output proportional-integral-derivative (MIMO-PID) controllers, which can be of great practical interest.

We further extend parts of the theory of self-optimizing control to cover changes in the active set. This is done by using results from explicit model predictive control (MPC), and the results are exact for a quadratic approximation around the optimum. By using an ammonia production plant as an example, we show that the results may also be applied to more general processes, and that the method is particularly interesting for cases where the set of active constraints is expected to change frequently.

Thereafter we develop a mathematical framework for analyzing the performance loss when "speedups" are applied to an MPC formulation. Such speedups include model reduction, move blocking, shortening the controller horizon, or changing the sample time of the internal model in the MPC. By applying the method to a model of a distillation column, we find that the so-called "delta-move blocking" has a good performance-to-speed ratio.
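To illustrate the move-blocking idea mentioned above: the full sequence of future inputs over the MPC horizon is parameterized by a smaller number of decision variables, each held constant over a block of steps. The following is a minimal numpy sketch of this parameterization (the function name, horizon, and block pattern are illustrative, not taken from the thesis, and this shows plain move blocking rather than the delta-move variant).

```python
import numpy as np

def blocking_matrix(horizon, blocks):
    """Build T such that u = T @ u_blocked maps a reduced set of
    moves to the full input horizon. Each entry of `blocks` is the
    number of consecutive steps that hold the same input value."""
    assert sum(blocks) == horizon, "blocks must cover the horizon"
    T = np.zeros((horizon, len(blocks)))
    row = 0
    for j, b in enumerate(blocks):
        T[row:row + b, j] = 1.0  # hold move j for b steps
        row += b
    return T

# A 10-step horizon compressed into 3 decision variables.
T = blocking_matrix(10, [1, 3, 6])
print(T.shape)  # (10, 3)
```

Substituting this parameterization into the MPC quadratic program shrinks the number of decision variables from the horizon length to the number of blocks, which is the source of the speedup that the performance-loss framework then quantifies.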
We then use the same mathematical program to prove stability of simple control schemes by calculating the maximum distance to a robust controller; if the distance is within the robustness margin of the robust controller, then the simple controller is proven to be stable. Several "simple controllers" can be analyzed in this scheme, for example partial enumeration of an explicit MPC and the linear quadratic regulator with saturation.

Finally, in the appendices of the thesis, we give mathematical links between the problem of self-optimizing control and explicit MPC, and we give some means of simplifying the implementation of explicit MPC. In addition, we give some extra information regarding the static output feedback problem.
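The distance-to-robust-controller argument can be sketched in a few lines for the special case of static gain controllers. This is an illustrative simplification only: the thesis works with a mathematical program computing a maximum distance, whereas here the distance is a single matrix norm, and the robustness margin is assumed to have been computed beforehand (e.g. from a small-gain argument). All names and numbers below are hypothetical.

```python
import numpy as np

def is_certified_stable(K_simple, K_robust, margin):
    """Certify a simple static controller by the distance argument:
    if the gain difference stays within the robust controller's
    margin, stability of the simple loop follows from the robustness
    of the robust loop. `margin` is an assumed, precomputed bound."""
    distance = np.linalg.norm(K_simple - K_robust, ord=2)
    return bool(distance <= margin)

# Hypothetical robust and simplified gains for a 2x2 plant.
K_robust = np.array([[1.0, 0.2], [0.0, 0.8]])
K_simple = np.array([[0.9, 0.2], [0.0, 0.7]])
print(is_certified_stable(K_simple, K_robust, margin=0.3))  # True
```

The same check fails if the margin is tightened below the actual distance (here 0.1), which is the sense in which the certificate is sufficient but not necessary: a simple controller outside the margin may still be stable, but this test cannot prove it.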