Advancing Fake News Detection: Hybrid Deep Learning With FastText and Explainable AI
Hashmi, Ehtesham; Yildirim-Yayilgan, Sule; Yamin, Muhammad Mudassar; Ali, Subhan; Abomhara, Mohamed Ali Saleh
Journal article, Peer reviewed
Published version
Permanent link
https://hdl.handle.net/11250/3125410
Publication date
2024
Original version
10.1109/ACCESS.2024.3381038
Abstract
The widespread propagation of misinformation on social media platforms is a significant concern, prompting substantial efforts within the research community to develop robust detection solutions. Individuals often place unwavering trust in social networks without discerning the origin and authenticity of the information disseminated through these platforms. The identification of media-rich fake news therefore requires an approach that adeptly leverages multimedia elements and effectively enhances detection accuracy. The ever-changing nature of cyberspace underscores the need for measures that can effectively resist the spread of media-rich fake news while protecting the integrity of information systems. This study introduces a robust approach for fake news detection, utilizing three publicly available datasets: WELFake, FakeNewsNet, and FakeNewsPrediction. We integrated FastText word embeddings with various machine learning and deep learning methods, further refining these algorithms with regularization and hyperparameter optimization to mitigate overfitting and promote model generalization. Notably, a hybrid model combining Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), enriched with FastText embeddings, surpassed the other techniques in classification performance, achieving accuracy and F1-scores of 0.99, 0.97, and 0.99 on WELFake, FakeNewsNet, and FakeNewsPrediction, respectively. Additionally, we employed state-of-the-art transformer-based models such as BERT, XLNet, and RoBERTa, enhancing them through hyperparameter adjustment. These transformer models surpass traditional RNN-based frameworks in managing syntactic nuances, thereby aiding semantic interpretation. In the concluding phase, explainable AI modeling was employed using Local Interpretable Model-Agnostic Explanations (LIME) and Latent Dirichlet Allocation (LDA) to gain deeper insight into the model's decision-making process.
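
To make the modelling pipeline concrete, the following is a minimal Python sketch of the kind of hybrid CNN-LSTM classifier over pretrained FastText embeddings that the abstract describes, assuming Keras/TensorFlow and the pretrained cc.en.300 FastText vectors. The vocabulary cap, layer sizes, dropout rate, and toy corpus are illustrative assumptions, not the configuration reported in the paper.

import numpy as np
import fasttext
from tensorflow.keras import initializers, layers, models
from tensorflow.keras.preprocessing.text import Tokenizer

VOCAB_SIZE = 50_000  # assumed vocabulary cap, not the paper's value
EMBED_DIM = 300      # dimensionality of the pretrained cc.en.300 vectors

# Toy corpus standing in for the news articles in the three datasets.
texts = ["scientists publish peer reviewed vaccine study",
         "celebrity endorses miracle weight loss cure"]
tokenizer = Tokenizer(num_words=VOCAB_SIZE)
tokenizer.fit_on_texts(texts)

# Map each vocabulary word to its pretrained FastText vector
# (assumes the cc.en.300.bin model file has already been downloaded).
ft = fasttext.load_model("cc.en.300.bin")
embedding_matrix = np.zeros((VOCAB_SIZE, EMBED_DIM))
for word, idx in tokenizer.word_index.items():
    if idx < VOCAB_SIZE:
        embedding_matrix[idx] = ft.get_word_vector(word)

# Hybrid architecture: Conv1D extracts local n-gram features, LSTM models
# longer-range dependencies, and dropout provides the regularization the
# abstract mentions for mitigating overfitting.
model = models.Sequential([
    layers.Input(shape=(None,), dtype="int32"),
    layers.Embedding(VOCAB_SIZE, EMBED_DIM,
                     embeddings_initializer=initializers.Constant(embedding_matrix),
                     trainable=False),
    layers.Conv1D(128, 5, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary real-vs-fake output
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])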
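
The transformer baselines (BERT, XLNet, RoBERTa) can be fine-tuned along the lines of the sketch below, which uses the Hugging Face Transformers Trainer with RoBERTa. The learning rate, batch size, epoch count, output directory, and toy dataset are illustrative assumptions, not the tuned hyperparameters from the study.

from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base",
                                                           num_labels=2)

# Toy stand-in for a tokenized WELFake-style split; 0 = real, 1 = fake.
toy = Dataset.from_dict({
    "text": ["scientists publish peer reviewed vaccine study",
             "celebrity endorses miracle weight loss cure"],
    "label": [0, 1],
})
ds = toy.map(lambda batch: tokenizer(batch["text"], truncation=True,
                                     padding="max_length", max_length=128),
             batched=True)

# Illustrative hyperparameters; the paper tunes these per model and dataset.
args = TrainingArguments(
    output_dir="roberta-fakenews",   # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=3,
    weight_decay=0.01,               # regularization against overfitting
)

trainer = Trainer(model=model, args=args, train_dataset=ds, eval_dataset=ds)
trainer.train()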
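
For the explainability step, LIME produces per-word attributions for individual predictions. A minimal sketch follows, wrapping a simple TF-IDF plus logistic-regression pipeline as a stand-in for the trained model; the toy corpus and class names are illustrative, not the paper's setup.

from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus; 0 = real, 1 = fake. A stand-in, not the paper's data.
texts = ["scientists publish peer reviewed vaccine study",
         "government agency releases quarterly economic report",
         "celebrity endorses miracle weight loss cure",
         "aliens secretly control the election results"]
labels = [0, 0, 1, 1]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Explain one prediction: LIME perturbs the text and fits a local
# surrogate model to estimate each word's contribution.
explainer = LimeTextExplainer(class_names=["real", "fake"])
exp = explainer.explain_instance(texts[2], clf.predict_proba, num_features=5)
print(exp.as_list())  # (word, weight) pairs pushing toward "fake" or "real"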
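
Latent Dirichlet Allocation complements LIME by surfacing the latent topics in the corpus. A minimal scikit-learn sketch follows; the topic count and toy documents are illustrative assumptions, since the abstract does not specify the LDA configuration.

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["election fraud claims spread on social media",
        "vaccine study published in peer reviewed journal",
        "celebrity endorses miracle weight loss cure"]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)  # assumed topic count
lda.fit(X)

# Print the dominant terms for each latent topic.
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {k}: {top_terms}")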