dc.contributor.author: Ullah, Habib
dc.contributor.author: Islam, Ihtesham Ul
dc.contributor.author: Ullah, Mohib
dc.contributor.author: Afaq, Muhammad
dc.contributor.author: Khan, Sultan Daud
dc.contributor.author: Iqbal, Javed
dc.date.accessioned: 2021-09-13T07:51:52Z
dc.date.available: 2021-09-13T07:51:52Z
dc.date.created: 2021-01-20T22:36:11Z
dc.date.issued: 2020
dc.identifier.issn: 0942-4962
dc.identifier.uri: https://hdl.handle.net/11250/2775364
dc.description.abstract: We propose a novel method for modeling crowd video dynamics by adopting a two-stream convolutional architecture that incorporates spatial and temporal networks. Our proposed method copes with the key challenge of capturing complementary information: appearance from still frames and motion between frames. In our method, a motion flow field is obtained from the video through dense optical flow. We demonstrate that the proposed method, trained on multi-frame dense optical flow, achieves a significant improvement in performance despite limited training data. We train and evaluate our method on a benchmark crowd video dataset. The experimental results show that it outperforms five reference methods, chosen because they are the most relevant to our work. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Springer [en_US]
dc.title: Multi-feature-based crowd video modeling for visual event detection [en_US]
dc.type: Peer reviewed [en_US]
dc.type: Journal article [en_US]
dc.description.version: publishedVersion [en_US]
dc.source.journal: Multimedia Systems [en_US]
dc.identifier.doi: http://dx.doi.org/10.1007/s00530-020-00652-x
dc.identifier.cristin: 1876121
dc.description.localcode: This version of the article will not be available due to copyright restrictions (c) 2020 by Springer [en_US]
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1
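
Note: The abstract above describes a two-stream architecture fed with stacked multi-frame dense optical flow. The sketch below is a rough illustration of those two ideas only, not the paper's implementation: the choice of Farneback flow, the 5-frame flow stack, all layer sizes, the 224x224 input resolution, fusion by concatenation, and the class count n_events are assumptions.

import cv2
import numpy as np
from tensorflow.keras import layers, models

def optical_flow_stack(frames, n_flow=5):
    """Stack horizontal and vertical dense flow from consecutive frames."""
    channels = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:n_flow + 1]:
        nxt = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Farneback dense optical flow: one (dx, dy) displacement per pixel.
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        channels.extend([flow[..., 0], flow[..., 1]])
        prev = nxt
    return np.stack(channels, axis=-1)  # shape: H x W x 2*n_flow

def stream(input_shape):
    """One convolutional stream; returns its input and pooled features."""
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return inp, x

n_events = 4  # placeholder: number of visual event classes in the dataset

# Spatial stream sees a single RGB frame (appearance); the temporal stream
# sees the stacked flow fields (motion), here 2 channels x 5 flow frames.
rgb_in, rgb_feat = stream((224, 224, 3))
flow_in, flow_feat = stream((224, 224, 10))
fused = layers.concatenate([rgb_feat, flow_feat])
out = layers.Dense(n_events, activation="softmax")(fused)
model = models.Model(inputs=[rgb_in, flow_in], outputs=out)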

