Learning an AUV docking maneuver with a convolutional neural network
Abstract
This paper proposes and implements a convolutional neural network (CNN) that maps images from a camera to an error signal used to guide and control an autonomous underwater vehicle (AUV) into the entrance of a docking station. An external positioning system, synchronized with the vehicle, is used to collect a dataset of camera images matched with the vehicle's position and orientation. A guidance map converts these positions into desired directions that lead the vehicle to the docking station, and the network is trained to estimate, for each frame, the error between the desired direction and the vehicle's orientation. After training, the CNN estimates this error without the external positioning system, yielding an end-to-end solution from image to control signal.
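As a rough illustration of the image-to-error mapping described above, the sketch below shows a small convolutional regressor in PyTorch. It is a minimal sketch under stated assumptions only: the paper's actual network architecture, input resolution, and error parameterization are not given in this abstract, so the layer sizes, the 2-D error output, and the names DockingErrorCNN, frames, and target_error are all hypothetical.

    # Minimal sketch (assumptions: PyTorch, 3x128x128 camera frames, and a
    # 2-D error output such as horizontal/vertical angular error toward the
    # desired docking direction; none of these details come from the abstract).
    import torch
    import torch.nn as nn

    class DockingErrorCNN(nn.Module):
        def __init__(self, error_dim: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, error_dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, 3, H, W) camera frames -> (batch, error_dim) error signal
            return self.head(self.features(x).flatten(1))

    # Supervised target: the difference between the desired direction (from the
    # guidance map, via the external positioning system during data collection)
    # and the vehicle's measured orientation, regressed with an L2 loss.
    model = DockingErrorCNN()
    frames = torch.randn(8, 3, 128, 128)   # placeholder batch of camera images
    target_error = torch.randn(8, 2)       # placeholder supervision labels
    loss = nn.functional.mse_loss(model(frames), target_error)
    loss.backward()

At deployment time only the forward pass is needed: the predicted error would be fed to the vehicle's guidance and control loop in place of the externally measured error.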