Electronics, Vol. 13, Pages 2958: Vae-Clip: Unveiling Deception through Cross-Modal Models and Multi-Feature Integration in Multi-Modal Fake News Detection


Electronics doi: 10.3390/electronics13152958

Authors: Yufeng Zhou, Aiping Pang, Guang Yu

With the development of internet technology, fake news has become increasingly multi-modal. Current news detection methods cannot fully extract cross-modal semantic information and ignore the rumor properties of fake news, making it difficult to achieve good results. To address the problem of accurately identifying multi-modal fake news, we propose the Vae-Clip multi-modal fake news detection model. The model uses the pre-trained Clip model to jointly extract semantic features from image and text information, with the text serving as the supervisory signal, solving the problem of semantic interaction across modalities. Moreover, considering the rumor attributes of fake news, we propose fusing semantic features with rumor style features via multi-feature fusion to improve the generalization performance of the model. We use a variational autoencoder to extract rumor style features and combine the semantic and rumor features with an attention mechanism to detect fake news. Extensive experiments were conducted on four datasets composed primarily of Weibo and Twitter data, and the results show that the proposed model can accurately identify fake news and is suitable for news detection in complex scenarios, with the highest accuracy reaching 96.3%.
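The fusion step described in the abstract, where semantic features (from the Clip encoder) and rumor style features (from a VAE encoder) are combined with an attention mechanism, can be sketched roughly as follows. This is a minimal illustrative sketch only: the dimensions, the single-query scoring, and all names (`attention_fuse`, `w_query`) are assumptions, not the authors' actual architecture.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(semantic, rumor, w_query):
    """Blend two feature vectors with attention weights.

    Scores each feature vector against a (hypothetical) learned query,
    normalizes the scores with softmax, and returns the weighted sum.
    """
    feats = np.stack([semantic, rumor])   # (2, d): one row per modality branch
    scores = feats @ w_query              # (2,): relevance score per branch
    weights = softmax(scores)             # attention weights, sum to 1
    return weights @ feats                # (d,): fused representation

rng = np.random.default_rng(0)
d = 8                                     # assumed feature dimension
semantic = rng.standard_normal(d)         # stand-in for Clip semantic features
rumor = rng.standard_normal(d)            # stand-in for VAE rumor-style latent
w_query = rng.standard_normal(d)          # stand-in for learned parameters

fused = attention_fuse(semantic, rumor, w_query)
print(fused.shape)                        # (8,)
```

In a real model the fused vector would feed a classifier head, and `w_query` would be trained end to end; here random vectors merely demonstrate the shape bookkeeping of the fusion.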
