
dc.contributor.advisor: Schaathun, Hans Georg
dc.contributor.advisor: Bye, Robin Trulssen
dc.contributor.author: Jahangir, Muhammad Saad
dc.date.accessioned: 2022-08-24T17:19:26Z
dc.date.available: 2022-08-24T17:19:26Z
dc.date.issued: 2022
dc.identifier: no.ntnu:inspera:113806793:64491307
dc.identifier.uri: https://hdl.handle.net/11250/3013349
dc.description: Full text not available
dc.description.abstract: Autonomous vehicles use a combination of sensors for accurate perception, mapping, and localization in the environment. Each sensor records information in its own coordinate system, so the sensors must be calibrated before their data can be fused. Extrinsic calibration determines the relative pose (position and orientation) of each sensor with respect to a reference. It is performed during vehicle assembly in the factory and, in some cases, online while the vehicle is mapping the environment, to ensure that the calibration remains consistent. Conventional calibration methods place specially designed targets in the environment that can be detected by all the sensors. Because such targets are not naturally available in urban environments, there have been many recent efforts toward automatic calibration, which requires no special targets and instead extracts features from the environment to build correspondences between the data from different sensors. In LiDAR-LiDAR calibration, extracting the same features from the environment without the aid of external sensors can be difficult due to the lack of structure in unorganized, sparse point cloud data. The task becomes even more challenging when the sensors are mounted on different sides of the vehicle, which leaves fewer regions of overlap between the point clouds. In addition, large vehicles such as trucks introduce further obstruction between the sensors, and the distance between the sensors can be large, so each sensor captures a different part of the environment. This thesis proposes a pipeline of techniques that performs automatic extrinsic calibration under these challenging conditions by extracting planar features available in any harbor environment, without manual intervention.
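The planar-feature extraction that the abstract relies on (RANSAC plane segmentation is named below) can be illustrated with a minimal sketch. This is a toy numpy implementation, not the thesis code; the function name, parameters, and defaults are illustrative assumptions.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.05, rng=None):
    """Fit a plane n·x + d = 0 to an (N, 3) point cloud with RANSAC.

    Returns (normal, d, inlier_mask). A toy sketch of the kind of plane
    segmentation used for feature extraction; not the author's code.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        # Plane normal from the cross product of two edge vectors.
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers

# Synthetic ground plane (z ≈ 0) plus scattered clutter points.
rng = np.random.default_rng(0)
ground = np.c_[rng.uniform(-5, 5, (300, 2)), rng.normal(0, 0.01, 300)]
clutter = rng.uniform(-5, 5, (60, 3))
n, d, mask = ransac_plane(np.vstack([ground, clutter]), rng=1)
print(abs(n[2]))   # close to 1: the recovered normal is near the z-axis
```

In a real harbor scene the same fit would be run after clustering (e.g. DBSCAN) so that each candidate cluster, such as a container face, yields its own plane.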
The proposed method extracts the points on the ground plane and on a pair of orthogonal planes that can be found on any shipping container in the environment. The points on the extracted planes are then used to align the two point clouds and perform a rough calibration using geometrical techniques. The method needs no information about the positions and orientations of the containers; the common planes in both point clouds are extracted automatically by a pipeline of clustering and segmentation algorithms such as DBSCAN and RANSAC. The result of the rough calibration is then used as the initial transformation in point cloud registration algorithms such as ICP and NDT, which refine the calibration by matching the data at the point level. Instead of raw point clouds, only the features extracted during rough calibration are used as input to the refinement step; this removes outliers and yields better calibration results. The proposed method is validated scene by scene on simulated data, with accuracy tested in different scenarios built in the Gazebo simulator. The mean position error in all scenes is under 0.06 m, and the mean rotation error is under 0.002 radians. These results are good, considering that the precision of the LiDAR sensor itself can be as coarse as 0.05 m.
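The core geometric step behind both the rough calibration and each ICP iteration is the least-squares rigid transform between corresponding points, computed via SVD (the Kabsch/Umeyama solution). The sketch below is illustrative, assuming known correspondences; full ICP would re-estimate the pairing each iteration.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst via SVD.

    src and dst are (N, 3) arrays of corresponding points. This is the
    closed-form alignment step inside ICP-style registration; a sketch,
    not the thesis implementation.
    """
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known pose from noiseless correspondences.
rng = np.random.default_rng(0)
src = rng.uniform(-1, 1, (50, 3))
theta = 0.3                                    # yaw of 0.3 rad
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

In the pipeline above, the correspondences would come from the extracted ground and container planes rather than from raw points, which is what keeps outliers out of the refinement.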
dc.language: eng
dc.publisher: NTNU
dc.title: Targetless Multi-LiDAR Calibration for Autonomous Trucks in Harbor Environment
dc.type: Master thesis

