Show simple item record

dc.contributor.author: Akhtar, Malik Javed
dc.contributor.author: Mahum, Rabbia
dc.contributor.author: Butt, Faisal Shafique
dc.contributor.author: Amin, Rashid
dc.contributor.author: El-Sherbeeny, Ahmed M.
dc.contributor.author: Lee, Seongkwan Mark
dc.contributor.author: Shaikh, Sarang
dc.date.accessioned: 2023-01-24T07:32:21Z
dc.date.available: 2023-01-24T07:32:21Z
dc.date.created: 2022-11-28T12:58:56Z
dc.date.issued: 2022
dc.identifier.citation: Electronics. 2022, 11 (21). (en_US)
dc.identifier.issn: 2079-9292
dc.identifier.uri: https://hdl.handle.net/11250/3045612
dc.description.abstract: Object recognition is the technique of locating various objects in images or videos. Numerous algorithms exist for object recognition, such as R-CNN, Fast R-CNN, Faster R-CNN, HOG, R-FCN, SSD, SSP-net, SVM, CNN, and YOLO, based on machine learning and deep learning techniques. Although these models have been employed for various object detection applications, tiny object detection still suffers from low precision. It is essential to develop a lightweight and robust model that can detect tiny objects with high precision. In this study, we propose an enhanced YOLOv2 (You Only Look Once version 2) algorithm for object detection, i.e., vehicle detection and recognition in surveillance videos. We modified the base network of YOLOv2, replacing it with DenseNet and reducing the number of parameters. Our improved model employs DenseNet-201 for feature extraction, which extracts the most representative features from the images. Moreover, the proposed model is more compact due to the dense architecture of the base network. We chose DenseNet-201 as the base network because of the direct connections among all layers, which help carry valuable information from the very first layer to the final layer. A dataset gathered from Kaggle and KITTI was used for training the proposed model, and we cross-validated its performance on the MS COCO and Pascal VOC datasets. To assess the efficacy of the proposed model, we performed extensive experiments, which demonstrate that our algorithm outperforms existing vehicle detection approaches, achieving an average precision of 97.51%. (en_US)
dc.language.iso: eng (en_US)
dc.publisher: MDPI (en_US)
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: A Robust Framework for Object Detection in a Traffic Surveillance System (en_US)
dc.title.alternative: A Robust Framework for Object Detection in a Traffic Surveillance System (en_US)
dc.type: Peer reviewed (en_US)
dc.type: Journal article (en_US)
dc.description.version: publishedVersion (en_US)
dc.source.pagenumber: 0 (en_US)
dc.source.volume: 11 (en_US)
dc.source.journal: Electronics (en_US)
dc.source.issue: 21 (en_US)
dc.identifier.doi: 10.3390/electronics11213425
dc.identifier.cristin: 2082577
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1
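The abstract above credits DenseNet-201's direct connections among all layers for carrying information from the very first layer to the final layer while keeping the model compact. As a minimal illustration (not the authors' code), the channel bookkeeping of the standard DenseNet-201 configuration, i.e., four dense blocks of 6, 12, 48, and 32 layers with growth rate 32, where each layer concatenates its output onto all earlier feature maps, can be sketched as:

```python
# Channel counts through DenseNet-201's dense blocks. Each layer in a
# block appends growth_rate new channels to the concatenation of all
# earlier feature maps; a transition layer halves the channels between
# blocks. Configuration values are the standard DenseNet-201 ones, not
# taken from this record.

def dense_block_channels(c_in: int, num_layers: int, growth_rate: int = 32) -> int:
    """Channels leaving a dense block: the input plus one growth_rate-wide
    feature map contributed by every layer in the block."""
    return c_in + num_layers * growth_rate

channels = 64  # channels after DenseNet's initial 7x7 convolution
block_sizes = [6, 12, 48, 32]  # DenseNet-201's four dense blocks
for i, layers in enumerate(block_sizes):
    channels = dense_block_channels(channels, layers)
    if i < len(block_sizes) - 1:
        channels //= 2  # transition layer halves the channel count

print(channels)  # 1920 channels leave the final dense block
```

Because every layer's output stays in the running concatenation, features computed in the earliest layers remain directly available at the final layer, which is the property the abstract highlights when motivating DenseNet-201 as the backbone.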


Associated file(s)


This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International