Show simple item record

dc.contributor.author: Irfan, Muhammad
dc.contributor.author: Muhammad, Khan
dc.contributor.author: Sajjad, Muhammad
dc.contributor.author: Malik, Khalid
dc.contributor.author: Alaya Cheikh, Faouzi
dc.contributor.author: Rodrigues, Joel J.P.C.
dc.contributor.author: Albuquerque, Victor Hugo C. de
dc.date.accessioned: 2022-10-05T06:52:17Z
dc.date.available: 2022-10-05T06:52:17Z
dc.date.created: 2021-10-28T12:35:03Z
dc.date.issued: 2021
dc.identifier.issn: 2327-4662
dc.identifier.uri: https://hdl.handle.net/11250/3023854
dc.description.abstract: The industrial demand for immersive videos in virtual reality/augmented reality applications is growing rapidly, as the video stream lets the user view an object of interest with the illusion of “being there”. However, in Industry 4.0, streaming such huge videos over the network consumes a tremendous amount of bandwidth, while users are only interested in specific regions of the immersive videos. Furthermore, to deliver fully engaging videos while minimizing bandwidth consumption, automatic selection of the user’s Region of Interest in a 360° video is very challenging because of subjectivity and differences in contentment. To tackle these challenges, we employ two efficient convolutional neural networks for salient object detection and memorability computation in a unified framework to find the most prominent portion of a 360° video. The proposed system is four-fold: preprocessing, intelligent visual interest predictor, final viewport selection, and virtual camera steerer. First, an input 360° video frame is split into three Fields of View (FoVs), each with a viewing angle of 120°. Next, each FoV is passed to the object detection and memorability prediction models for visual interestingness computation. The FoV containing the most salient and memorable objects is then supplied as the viewport. Finally, a virtual camera steerer is designed using enriched salient features from YOLO and LSTM that are forwarded to dense optical flow to follow the salient object inside the immersive video. Performance evaluation of the proposed system on our own data collected from various websites as well as on public datasets indicates its effectiveness for diverse categories of 360° videos and its role in minimizing bandwidth usage, making it suitable for Industry 4.0 applications. [en_US]
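The paper's preprocessing code is not part of this record; as a rough sketch of the first step the abstract describes (splitting an equirectangular 360° frame into three FoVs of 120° each), assuming simple horizontal slicing of the frame width, the function name and frame dimensions below are illustrative:

```python
import numpy as np

def split_into_fovs(frame: np.ndarray, n_fovs: int = 3) -> list:
    """Split an equirectangular 360-degree frame into n_fovs horizontal
    slices; each slice covers 360/n_fovs degrees of viewing angle
    (120 degrees for the default n_fovs=3)."""
    h, w = frame.shape[:2]
    step = w // n_fovs
    return [frame[:, i * step:(i + 1) * step] for i in range(n_fovs)]

# A dummy 360-degree frame: height 960, width 1920, 3 colour channels.
frame = np.zeros((960, 1920, 3), dtype=np.uint8)
fovs = split_into_fovs(frame)
# Each FoV keeps the full frame height and one third of the width.
```

Each resulting FoV would then be fed to the detection and memorability models described in the abstract.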
dc.language.iso: eng [en_US]
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_US]
dc.title: Deepview: Deep Learning based Users Field of View Selection in 360° Videos for Industrial Environments [en_US]
dc.type: Peer reviewed [en_US]
dc.type: Journal article [en_US]
dc.description.version: acceptedVersion [en_US]
dc.rights.holder: © IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. [en_US]
dc.source.journal: IEEE Internet of Things Journal [en_US]
dc.identifier.doi: 10.1109/JIOT.2021.3118003
dc.identifier.cristin: 1949250
cristin.ispublished: true
cristin.fulltext: postprint
cristin.qualitycode: 2


Associated file(s)


This item appears in the following collection(s)
