End-to-end learning and sensor fusion with deep convolutional networks for steering an off-road unmanned ground vehicle
Current autonomous driving policies based on deep learning are mostly learned from images of roads in rural and urban areas. In this master's thesis, more unconventional settings are considered, such as off-road and forest terrain. Specifically, the goal is to train an off-road vehicle to make autonomous steering predictions based on input from a single camera and a LiDAR scanner. To achieve this, we propose a fusion model comprising two convolutional networks and a fully-connected network. The convolutional networks are trained on images and LiDAR data, respectively, whereas the fully-connected network is trained on the combined features from both. Our experimental results show that fusing image and LiDAR information yields more accurate steering predictions on our dataset than considering each data source separately. Results also show that training our networks on LiDAR and images individually produces similar root mean squared error (RMSE), and that better generalization is achieved by increasing the number of LiDAR features used for training. As a secondary task, we propose a proof-of-concept verification model for steering trustworthiness. This model utilizes segmented images from a separately trained segmentation network and, using projective geometry, determines whether the path generated from a given steering angle is valid. Combining this model with the fusion network above, steering angle predictions can be accepted or discarded online. Experiments on a small test set show promising results, but additional experimentation is needed to confirm validity.
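The late-fusion architecture described above can be sketched in a few lines. This is a minimal illustrative sketch, not the thesis implementation: the feature dimensions, weight initialization, and the use of random vectors as stand-ins for the outputs of the two (separately trained) convolutional branches are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """Fully-connected layer with ReLU activation."""
    return np.maximum(0.0, x @ w + b)

# Hypothetical feature sizes: each convolutional branch is assumed to
# emit a fixed-length feature vector (dimensions chosen for illustration).
IMG_FEAT, LIDAR_FEAT, HIDDEN = 64, 32, 16

# Stand-ins for the outputs of the image and LiDAR conv branches.
img_features = rng.standard_normal(IMG_FEAT)
lidar_features = rng.standard_normal(LIDAR_FEAT)

# Late fusion: concatenate the branch features, then let a small
# fully-connected head regress a single steering angle.
w1 = rng.standard_normal((IMG_FEAT + LIDAR_FEAT, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
w2 = rng.standard_normal((HIDDEN, 1)) * 0.1
b2 = np.zeros(1)

fused = np.concatenate([img_features, lidar_features])
steering_angle = float(dense(fused, w1, b1) @ w2 + b2)

# RMSE, the metric used to compare the individual and fused models:
def rmse(pred, target):
    pred, target = np.asarray(pred), np.asarray(target)
    return float(np.sqrt(np.mean((pred - target) ** 2)))
```

In the thesis, the fully-connected head would be trained on real branch features; the sketch only shows how the two feature vectors are combined into one steering prediction.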