dc.contributor.author: Brekke, Åsmund Halse
dc.contributor.author: Vatsendvik, Fredrik
dc.contributor.author: Lindseth, Frank
dc.date.accessioned: 2020-09-08T14:50:19Z
dc.date.available: 2020-09-08T14:50:19Z
dc.date.created: 2020-03-05T18:04:35Z
dc.date.issued: 2019
dc.identifier.citation: Communications in Computer and Information Science. 2019, 1056 CCIS, 102-113.
dc.identifier.issn: 1865-0929
dc.identifier.uri: https://hdl.handle.net/11250/2676950
dc.description.abstract: The need for simulated data in autonomous driving applications has become increasingly important, both for validating pretrained models and for training new ones. For these models to generalize to real-world applications, it is critical that the underlying dataset contains a variety of driving scenarios and that simulated sensor readings closely mimic real-world sensors. We present the Carla Automated Dataset Extraction Tool (CADET), a novel tool for generating training data from the CARLA simulator for use in autonomous driving research. The tool exports high-quality, synchronized LIDAR and camera data with object annotations, and offers configuration options to accurately reflect a real-life sensor array. Furthermore, we use this tool to generate a dataset of 10 000 samples and use it to train the 3D object detection network AVOD-FPN, with fine-tuning on the KITTI dataset, in order to evaluate the potential for effective pretraining. We also present two novel, easily modified LIDAR feature map configurations in Bird’s Eye View for use with AVOD-FPN. These configurations are tested on the KITTI and CADET datasets to evaluate their performance as well as the usability of the simulated dataset for pretraining. Although insufficient to fully replace real-world data, and generally unable to exceed the performance of systems trained exclusively on real data, our results indicate that simulated data can considerably reduce the amount of training on real data required to achieve satisfactory accuracy.
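The abstract refers to LIDAR feature maps in Bird’s Eye View (BEV) as used by AVOD-FPN. As an illustration only, and not the paper’s actual configurations, a common way to build such maps is to discretize the point cloud onto a 2D grid with height, density, and intensity channels; the function name, parameter defaults, and channel design below are hypothetical:

```python
import numpy as np

def lidar_to_bev(points, x_range=(0.0, 70.0), y_range=(-35.0, 35.0),
                 resolution=0.1, max_height=2.5):
    """Project a LIDAR point cloud (N x 4: x, y, z, intensity) onto a
    Bird's Eye View grid with height, density, and intensity channels.

    Illustrative sketch only; ranges and channels are assumptions, not
    the configurations evaluated in the paper.
    """
    x, y, z, intensity = points[:, 0], points[:, 1], points[:, 2], points[:, 3]

    # Keep only points inside the region of interest.
    mask = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]))
    x, y, z, intensity = x[mask], y[mask], z[mask], intensity[mask]

    # Discretize point coordinates into grid cells.
    cols = ((x - x_range[0]) / resolution).astype(np.int64)
    rows = ((y - y_range[0]) / resolution).astype(np.int64)

    # round() guards against float error (e.g. 70 / 0.1 == 699.999...).
    h = int(round((y_range[1] - y_range[0]) / resolution))
    w = int(round((x_range[1] - x_range[0]) / resolution))
    bev = np.zeros((h, w, 3), dtype=np.float32)

    # Channel 0: maximum height per cell, clipped and normalized.
    np.maximum.at(bev[:, :, 0], (rows, cols),
                  np.clip(z, 0.0, max_height) / max_height)

    # Channel 1: point density per cell, log-normalized.
    counts = np.zeros((h, w), dtype=np.float32)
    np.add.at(counts, (rows, cols), 1.0)
    bev[:, :, 1] = np.minimum(1.0, np.log1p(counts) / np.log(64.0))

    # Channel 2: mean reflectance intensity per cell.
    total = np.zeros((h, w), dtype=np.float32)
    np.add.at(total, (rows, cols), intensity)
    bev[:, :, 2] = np.divide(total, counts,
                             out=np.zeros_like(total), where=counts > 0)
    return bev
```

The resulting `(H, W, 3)` tensor can be fed to a 2D convolutional backbone like any image, which is what makes BEV encodings attractive for networks such as AVOD-FPN.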
dc.language.iso: eng
dc.publisher: Springer Verlag
dc.title: Multimodal 3D object detection from simulated pretraining
dc.type: Peer reviewed
dc.type: Journal article
dc.description.version: acceptedVersion
dc.source.pagenumber: 102-113
dc.source.volume: 1056 CCIS
dc.source.journal: Communications in Computer and Information Science
dc.identifier.doi: 10.1007/978-3-030-35664-4_10
dc.identifier.cristin: 1799987
dc.description.localcode: This is a post-peer-review, pre-copyedit version of an article. The final authenticated version is available online at: http://dx.doi.org/10.1007/978-3-030-35664-4_10
cristin.ispublished: true
cristin.fulltext: postprint
cristin.qualitycode: 1

