dc.contributor.advisor: Skogestad, Sigurd
dc.contributor.advisor: Gros, Nicolas
dc.contributor.author: Adhau, Saket
dc.date.accessioned: 2024-02-13T12:53:27Z
dc.date.available: 2024-02-13T12:53:27Z
dc.date.issued: 2024
dc.identifier.isbn: 978-82-326-7685-9
dc.identifier.issn: 2703-8084
dc.identifier.uri: https://hdl.handle.net/11250/3117305
dc.description.abstract: This thesis presents a comprehensive investigation into the integration of machine learning techniques to enhance the performance and applicability of Model Predictive Control (MPC). The main focus is on addressing challenging aspects of MPC through innovative methodologies, contributing to the advancement of control strategies in complex systems. The initial focus is on solving Nonlinear Model Predictive Control (NMPC) online, which is a computationally demanding task. To mitigate this challenge and improve the real-time feasibility of NMPC, a novel supervised learning framework is proposed. The framework injects explicit constraint knowledge into neural networks, integrating insights from the Karush-Kuhn-Tucker (KKT) conditions through logarithmic barrier functions in the loss function. This approach effectively approximates the underlying optimization problem, providing faster solutions while maintaining a fine balance between optimality and constraint satisfaction. Next, the thesis introduces a reinforcement-learning-based Economic Nonlinear MPC (ENMPC) scheme to improve closed-loop performance even when the system model is inaccurate. This scheme employs comprehensive data-based tuning on the real system, targeting performance improvements during both transient and steady-state operation. Additionally, the integration of a Real-Time Optimization (RTO) layer aids the data-based tuning of the optimal control policy. Furthermore, to address a key limitation of Reinforcement Learning (RL), namely the time and computational effort required to learn the optimal control policy, the thesis proposes a method to approximate the action-value function from the optimal solution of the value function using nonlinear programming sensitivities. This approach significantly reduces computational effort while maintaining closed-loop performance comparable to conventional methods.
Overall, this thesis contributes to the advancement of Model Predictive Control through the development of efficient supervised learning approximations, integration with reinforcement learning techniques, and data-based tuning strategies. The proposed methods open new avenues for enhancing control performance and overcoming computational challenges in real-world control applications.
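The constraint-aware supervised loss described in the abstract can be illustrated with a minimal sketch. This is not the thesis's actual formulation; it assumes a simple box constraint on the control input and a hypothetical helper `barrier_loss`, combining an imitation term (distance to the NMPC-optimal action) with logarithmic barrier terms that penalize predictions approaching the constraint boundary, in the spirit of an interior-point treatment of the KKT conditions:

```python
import numpy as np

def barrier_loss(u_pred, u_opt, u_min, u_max, mu=0.1):
    """Illustrative supervised loss with a log-barrier on box constraints.

    Hypothetical sketch: penalizes deviation of the network output u_pred
    from the NMPC-optimal action u_opt, while barrier terms keep u_pred
    strictly inside [u_min, u_max].
    """
    u_pred = np.asarray(u_pred, dtype=float)
    u_opt = np.asarray(u_opt, dtype=float)

    tracking = np.mean((u_pred - u_opt) ** 2)   # imitation (tracking) term

    slack_lo = u_pred - u_min                   # positive iff feasible
    slack_hi = u_max - u_pred
    if np.any(slack_lo <= 0) or np.any(slack_hi <= 0):
        return np.inf                           # infeasible prediction

    # Barrier grows without bound as u_pred approaches either bound.
    barrier = -mu * np.sum(np.log(slack_lo) + np.log(slack_hi))
    return tracking + barrier
```

In a training loop, this scalar would replace a plain mean-squared-error objective, so the network is steered toward the optimizer's solution while being discouraged from emitting constraint-violating controls.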
dc.language.iso: eng
dc.publisher: NTNU
dc.relation.ispartofseries: Doctoral theses at NTNU;2024:42
dc.title: Data-Driven Control Strategies: From MPC to Reinforcement Learning
dc.type: Doctoral thesis
dc.subject.nsi: VDP::Matematikk og Naturvitenskap: 400
dc.description.localcode: Fulltext not available