
dc.contributor.author: Li, Zuhe
dc.contributor.author: Huang, Zhenwei
dc.contributor.author: Pan, Yushan
dc.contributor.author: Yu, Jun
dc.contributor.author: Liu, Weihua
dc.contributor.author: Chen, Haoran
dc.contributor.author: Luo, Yiming
dc.contributor.author: Wu, Di
dc.contributor.author: Wang, Hao
dc.date.accessioned: 2024-06-10T08:00:22Z
dc.date.available: 2024-06-10T08:00:22Z
dc.date.created: 2024-06-05T13:43:53Z
dc.date.issued: 2024
dc.identifier.citation: Expert Systems With Applications. 2024, 252. [en_US]
dc.identifier.issn: 0957-4174
dc.identifier.uri: https://hdl.handle.net/11250/3133224
dc.description.abstract: Multimodal sentiment analysis aims to extract sentiment cues from various modalities, such as textual, acoustic, and visual data, and integrate them to determine the inherent sentiment polarity of the data. Despite significant achievements in multimodal sentiment analysis, challenges persist in suppressing noise features in modal representations, bridging the substantial gaps in sentiment information among modal representations, and exploring contextual information that expresses different sentiments across modalities. To tackle these challenges, our paper proposes a new Multimodal Sentiment Analysis (MSA) framework. First, we introduce the Hierarchical Denoising Representation Disentanglement module (HDRD), which employs hierarchical disentanglement techniques to extract both common and private sentiment information while eliminating interference noise from modal representations. Furthermore, to address the uneven distribution of sentiment information among modalities, our Inter-Modal Representation Enhancement module (IMRE) enhances non-textual representations by extracting related sentiment information from textual representations. Next, we introduce a new interaction mechanism, the Dual-Channel Cross-Modal Context Interaction module (DCCMCI), which not only mines correlated contextual sentiment information within modalities but also explores positively and negatively correlated contextual sentiment information between modalities. Extensive experiments on two benchmark datasets, MOSI and MOSEI, indicate that the proposed method achieves state-of-the-art performance. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Elsevier [en_US]
dc.title: Hierarchical denoising representation disentanglement and dual-channel cross-modal-context interaction for multimodal sentiment analysis [en_US]
dc.title.alternative: Hierarchical denoising representation disentanglement and dual-channel cross-modal-context interaction for multimodal sentiment analysis [en_US]
dc.type: Peer reviewed [en_US]
dc.type: Journal article [en_US]
dc.description.version: publishedVersion [en_US]
dc.rights.holder: © Copyright 2024 Elsevier [en_US]
dc.source.pagenumber: 0 [en_US]
dc.source.volume: 252 [en_US]
dc.source.journal: Expert Systems With Applications [en_US]
dc.identifier.doi: 10.1016/j.eswa.2024.124236
dc.identifier.cristin: 2273749
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 2
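
The abstract above describes an inter-modal enhancement step (IMRE) in which textual representations enrich non-textual ones. Below is a minimal, hypothetical sketch of that general idea using standard cross-attention in PyTorch; the class name, dimensions, and residual design are illustrative assumptions, not the authors' implementation.

# Illustrative sketch of text-guided enhancement of a non-textual modality.
# Everything here (names, sizes, residual scheme) is an assumption for
# demonstration; it is not code from the paper.
import torch
import torch.nn as nn

class CrossModalEnhancer(nn.Module):
    """Enhance a non-textual sequence with sentiment cues drawn from text."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        # Queries come from the non-textual modality; keys/values from text,
        # so the text acts as a sentiment reference for the other modality.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, nontext: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # nontext: (batch, seq_nt, dim); text: (batch, seq_t, dim)
        enhanced, _ = self.attn(query=nontext, key=text, value=text)
        # Residual connection preserves the original modality's signal.
        return self.norm(nontext + enhanced)

# Toy usage: enhance frame-level acoustic features with token-level text features.
audio = torch.randn(2, 50, 128)
text = torch.randn(2, 20, 128)
enhanced_audio = CrossModalEnhancer()(audio, text)
print(enhanced_audio.shape)  # torch.Size([2, 50, 128])

Routing queries from the non-textual side means the text is never overwritten, only consulted, which matches the abstract's framing of text as the richer sentiment source.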

