
dc.contributor.authorBavirisetti, Durga Prasad
dc.contributor.authorBrobak, Eskild
dc.contributor.authorEspen, Peder Hanch-Hansen
dc.contributor.authorKiss, Gabriel Hanssen
dc.contributor.authorLindseth, Frank
dc.date.accessioned2024-04-11T12:14:20Z
dc.date.available2024-04-11T12:14:20Z
dc.date.created2024-03-19T13:56:48Z
dc.date.issued2023
dc.identifier.issn1892-0713
dc.identifier.urihttps://hdl.handle.net/11250/3126103
dc.description.abstractObject detection, which gives cars the ability to perceive their environment, has drawn increasing attention. For good performance, object detection algorithms often need huge datasets, which are frequently labeled manually — a procedure that is expensive and time-consuming. A simulated environment, by contrast, gives complete control over all parameters and allows automated image annotation. Carla, an open-source project created specifically for the study of autonomous driving, is one such simulator. This study examines whether object detection models that recognize real traffic objects can be trained using automatically annotated simulator data from Carla. The experiments demonstrate that fine-tuning a model trained on Carla data with some real data is promising. The YOLOv5 model trained from pre-trained Carla weights improved on all performance metrics compared to one trained exclusively on 2000 KITTI images. While it did not reach the performance of the 6000-image KITTI model, the improvements were substantial: the mAP0.5:0.95 score increased by approximately 10%, with the largest gain in the Pedestrian class. Furthermore, it is demonstrated that a substantial performance boost can be achieved by training a base model on Carla data and fine-tuning it with a smaller portion of the KITTI dataset. Carla LiDAR images also show potential for reducing the number of real images required while maintaining respectable model performance. Our code is available at: https://tinyurl.com/3fdjd9xb.en_US
dc.language.isoengen_US
dc.publisherBibsys Open Journal Systemsen_US
dc.relation.urihttps://www.ntnu.no/ojs/index.php/nikt/article/view/5665
dc.rightsNavngivelse 4.0 Internasjonal (Attribution 4.0 International)
dc.rights.urihttp://creativecommons.org/licenses/by/4.0/deed.no
dc.titleSimulated RGB and LiDAR Image based Training of Object Detection Models in the Context of Autonomous Drivingen_US
dc.title.alternativeSimulated RGB and LiDAR Image based Training of Object Detection Models in the Context of Autonomous Drivingen_US
dc.typePeer revieweden_US
dc.typeJournal articleen_US
dc.description.versionpublishedVersionen_US
dc.rights.holderCopyright © 2023 Norsk IKT-konferanse for forskning og utdanningen_US
dc.source.volume1en_US
dc.source.journalNIKT: Norsk IKT-konferanse for forskning og utdanningen_US
dc.identifier.cristin2255785
cristin.ispublishedtrue
cristin.fulltextoriginal
cristin.qualitycode1


