Fusion between camera and lidar for autonomous surface vehicles
The development of autonomous surface vessels (ASVs) has seen great progress in recent years, and such vessels are on the verge of becoming a reality. Sensing the environment reliably is a key element in making a ship fully autonomous. The sensors needed for full autonomy exist today, but the challenge remains to find the optimal way to combine them. An ASV operating in an urban environment may need different exteroceptive sensors than a vessel operating at sea. Optical cameras and lidars (light detection and ranging) are suitable candidates for close-range sensing of the environment. Different sensors have different strengths and weaknesses, and in order to build a coherent world image on which e.g. sense-and-avoid decisions can be based, information from the different sensors needs to be fused and included in the state estimation of surrounding vessels. This is done using a target tracking system.

In this thesis, a target tracking framework based on the JIPDA filter is implemented and tested on real data gathered during a series of experiments in a harbour environment. Measurement models for both the lidar and the camera are formulated, and the sensors are geometrically calibrated and integrated with the navigation system onboard the ReVolt model ship. The data from the lidar are clustered using a slightly modified version of the DBSCAN algorithm. Sensor measurements from a number of scenarios with two maneuvering targets are recorded, and the targets are tracked with the JIPDA filter using the lidar as the primary sensor. The results show that wakes behind the targets lead to many false tracks in the close vicinity of the ReVolt model ship, and that the detection probability of targets at range is reduced due to the spread of the laser beams. The presence of wakes did not, however, lead to track loss for the true targets.
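The clustering step can be illustrated with a minimal, from-scratch sketch of plain DBSCAN applied to 2D points, as one might use on a projected lidar scan. This is not the modified variant used in the thesis; the point set, distance metric, and the parameters `eps` and `min_pts` are illustrative assumptions only.

```python
import math

def dbscan(points, eps, min_pts):
    """Cluster 2D points with plain DBSCAN.

    points: list of (x, y) tuples (e.g. projected lidar returns).
    Returns one label per point: -1 for noise, 0..k-1 for cluster ids.
    """
    n = len(points)
    labels = [None] * n  # None marks an unvisited point

    def neighbours(i):
        # Indices of all points within eps of point i (includes i itself)
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if math.hypot(xi - xj, yi - yj) <= eps]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1  # provisional noise; a cluster may still claim it
            continue
        labels[i] = cluster  # i is a core point: start a new cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: joined, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_seeds = neighbours(j)
            if len(j_seeds) >= min_pts:  # j is also a core point: expand
                queue.extend(j_seeds)
        cluster += 1
    return labels
```

With two dense groups of points and one isolated return, the sketch yields two clusters and flags the isolated return as noise; in the lidar setting, each cluster would then be condensed into a single measurement for the tracker.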
At ranges where the targets were steadily detected, they were successfully tracked with few false tracks. It was also found that tracks can be lost due to occlusions, where one target blocks the other from being detected. The implemented Faster R-CNN detector showed limited range: detections beyond 20 meters were few and far between. At close ranges, however, it shows potential for mitigating false tracks caused by wakes or clutter, and could be used to aid track formation and confirmation.