In the field of control engineering, a global controller for a nonlinear system often comprises several local controllers. Each local controller is responsible for one region of the operating conditions and ensures stability within that region. The system is driven to the reference state by switching between these local controllers. Finding appropriate switching boundaries can, however, be a complicated task, and it is crucial for guaranteeing satisfactory performance.
Recent work connecting machine learning to control engineering has shown promising results. In particular, reinforcement learning (RL), a branch of machine learning, can be used to find appropriate switching boundaries autonomously. In this work, control switching with RL is tested on a pendulum system, where the aim is to use the Q-learning algorithm to find a switching policy that swings up and balances the pendulum at the unstable equilibrium. Traditional steps such as linearization around operating points remain unchanged, but deciding when to switch between controllers is left to the RL agent. Two controller sets, comprising different swing-up and balancing regulators, were tested.
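The setup described above can be illustrated with a minimal sketch: two fixed local controllers (an energy-pumping swing-up regulator and a linear balancing regulator), and a tabular Q-learning agent whose only action is choosing which controller to apply in each discretized state. All parameters below (pendulum model, controller gains, discretization, reward) are assumed for illustration and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Pendulum model (illustrative parameters, not from the thesis) ---
g, l, m, dt = 9.81, 1.0, 1.0, 0.02
E_ref = m * g * l                        # energy at the upright equilibrium

def step(theta, omega, u):
    """One Euler step of the pendulum; theta = 0 is hanging down."""
    omega = omega + dt * (-(g / l) * np.sin(theta) + u / (m * l**2))
    theta = (theta + dt * omega) % (2 * np.pi)
    return theta, omega

# --- Two fixed local controllers; the RL agent only chooses between them ---
def swing_up(theta, omega):
    """Energy-pumping swing-up regulator (assumed gains and saturation)."""
    E = 0.5 * m * l**2 * omega**2 - m * g * l * np.cos(theta)
    return float(np.clip(1.5 * (E_ref - E) * np.sign(omega + 1e-8), -2.0, 2.0))

def balance(theta, omega):
    """Linear state feedback around the upright equilibrium (assumed gains)."""
    err = theta - np.pi                  # theta in [0, 2*pi), so err in [-pi, pi)
    return float(np.clip(-20.0 * err - 4.0 * omega, -5.0, 5.0))

controllers = [swing_up, balance]

# --- Tabular Q-learning over a discretized state space ---
n_th, n_om = 15, 15                      # coarse discretization of (theta, omega)
Q = np.zeros((n_th, n_om, 2))            # one Q-value per state and controller

def idx(theta, omega):
    i = int(theta / (2 * np.pi) * n_th) % n_th
    j = int(np.clip((omega + 8.0) / 16.0 * n_om, 0, n_om - 1))
    return i, j

alpha, gamma, eps = 0.2, 0.99, 0.2       # learning rate, discount, exploration
for episode in range(300):
    theta, omega = 0.0, 0.0              # start hanging down at rest
    for t in range(400):
        i, j = idx(theta, omega)
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[i, j]))
        theta, omega = step(theta, omega, controllers[a](theta, omega))
        # Reward: penalize distance from upright and large angular velocity
        err = theta - np.pi
        r = -(err**2 + 0.1 * omega**2)
        i2, j2 = idx(theta, omega)
        Q[i, j, a] += alpha * (r + gamma * Q[i2, j2].max() - Q[i, j, a])

# The greedy policy over the table encodes the learned switching boundaries:
# 0 = swing-up regulator, 1 = balancing regulator, per discretized state.
policy = np.argmax(Q, axis=2)
```

The key design point is that the agent never computes torques itself: the action space is just the index of the active regulator, so the learned policy is exactly a set of switching boundaries over the discretized state space.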
The results showed that the RL agent found a policy that successfully brought the pendulum to the reference state, yielding a controller choice that depends on the system's operating conditions. In the cases where the discretization of the state space was relatively fine, non-optimal switches between the regulators occurred for both controller sets. In the coarse discretization cases, however, these switches did not occur.