Multi-feature-based crowd video modeling for visual event detection
Peer reviewed, Journal article
Published version
Date
2020
Original version
http://dx.doi.org/10.1007/s00530-020-00652-x
Abstract
We propose a novel method for modeling crowd video dynamics by adopting a two-stream convolutional architecture that incorporates spatial and temporal networks. The proposed method copes with the key challenge of capturing complementary information: appearance from still frames and motion between frames. A motion flow field is obtained from the video through dense optical flow. We demonstrate that the proposed method, trained on multi-frame dense optical flow, achieves a significant improvement in performance despite limited training data. We train and evaluate the method on a benchmark crowd video dataset. Experimental results show that it outperforms five reference methods, chosen because they are the most relevant to our work.
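The two-stream pipeline outlined in the abstract can be sketched as follows. This is an illustrative sketch only: the number of event classes, the frame sizes, the number of stacked flow fields, and the equal-weight late fusion are all assumptions for the example, not details taken from the paper, and the random arrays stand in for real dense optical-flow output (e.g. from Farnebäck's algorithm).

```python
import numpy as np

def stack_flow(flow_fields):
    """Stack the horizontal/vertical components of L consecutive flow
    fields into a 2L-channel input for the temporal stream.
    flow_fields: list of (H, W, 2) arrays (illustrative stand-ins for
    dense optical flow between consecutive frames)."""
    return np.concatenate([f.transpose(2, 0, 1) for f in flow_fields], axis=0)

def softmax(scores):
    """Numerically stable softmax over per-class scores."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def late_fusion(spatial_scores, temporal_scores, w=0.5):
    """Fuse the two streams by averaging their per-class softmax outputs
    (equal weighting is an assumption for this sketch)."""
    return w * softmax(spatial_scores) + (1 - w) * softmax(temporal_scores)

# Toy example: 3 event classes, 5 flow fields over 4x4 frames.
rng = np.random.default_rng(0)
flows = [rng.random((4, 4, 2)) for _ in range(5)]
temporal_input = stack_flow(flows)          # shape (10, 4, 4): 2 channels per pair
fused = late_fusion(np.array([1.0, 0.2, -0.5]),   # hypothetical spatial-stream scores
                    np.array([0.8, 0.1, 0.3]))    # hypothetical temporal-stream scores
pred = int(np.argmax(fused))                # predicted event class
```

In an actual two-stream network, `spatial_scores` would come from a CNN over a still RGB frame and `temporal_scores` from a CNN over the stacked multi-frame flow volume; only the fusion arithmetic is shown here.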