Deepview: Deep Learning based Users Field of View Selection in 360° Videos for Industrial Environments
Peer reviewed, Journal article
The industrial demand for immersive videos in virtual reality/augmented reality applications is growing rapidly, as the video stream gives the user the choice of viewing an object of interest with the illusion of “being there”. However, in Industry 4.0, streaming such large videos over the network consumes a tremendous amount of bandwidth, even though users are interested only in specific regions of the immersive video. Furthermore, to deliver an engaging viewing experience while minimizing bandwidth consumption, the automatic selection of the user’s Region of Interest in a 360° video is very challenging because of subjectivity and differences in viewer preference. To tackle these challenges, we employ two efficient convolutional neural networks for salient object detection and memorability computation in a unified framework to find the most prominent portion of a 360° video. The proposed system has four stages: preprocessing, an intelligent visual interest predictor, final viewport selection, and a virtual camera steerer. First, an input 360° video frame is split into three Fields of View (FoVs), each with a viewing angle of 120°. Next, each FoV is passed to the object detection and memorability prediction models for visual interestingness computation. The FoV containing the most salient and memorable objects is then supplied as the viewport. Finally, a virtual camera steerer is designed using enriched salient features from YOLO and an LSTM, which are forwarded to dense optical flow to follow the salient object inside the immersive video. Performance evaluation of the proposed system on our own data collected from various websites, as well as on public datasets, demonstrates its effectiveness for diverse categories of 360° videos and its ability to minimize bandwidth usage, making it suitable for Industry 4.0 applications.
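The preprocessing step described in the abstract, splitting a 360° frame into three 120° FoVs, can be sketched as a horizontal partition of the equirectangular frame, since 360° of longitude maps linearly onto the image width. The helper below is a hypothetical illustration (the function name and the three-way equal split are assumptions, not the paper's exact implementation):

```python
import numpy as np

def split_into_fovs(frame, n_fovs=3):
    """Split an equirectangular 360-degree frame into n_fovs horizontal
    Field-of-View crops. With n_fovs=3, each crop spans 360/3 = 120
    degrees of longitude (hypothetical sketch of the paper's
    preprocessing stage)."""
    h, w = frame.shape[:2]
    step = w // n_fovs
    return [frame[:, i * step:(i + 1) * step] for i in range(n_fovs)]

# Toy "frame": 4 rows x 6 columns standing in for an equirectangular image.
frame = np.arange(24).reshape(4, 6)
fovs = split_into_fovs(frame)
# Three crops, each 4 rows x 2 columns; in the proposed system each crop
# would then be scored for saliency and memorability.
```

Each crop would then be passed independently to the saliency and memorability networks, and the highest-scoring crop becomes the viewport.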