Saealal, Muhammad Salihin and Ibrahim, Mohd Zamri and Shapiai, Mohd Ibrahim and Fadilah, Norasyikin (2023) In-the-wild deepfake detection using adaptable CNN models with visual class activation mapping for improved accuracy. In: 5th International Conference on Computer Communication and the Internet, ICCCI 2023, 23 June 2023 through 25 June 2023, Fujisawa.
Abstract
Deepfake technology has become increasingly sophisticated in recent years, making the detection of fake images and videos challenging. This paper investigates the performance of adaptable convolutional neural network (CNN) models for detecting Deepfakes. The in-the-wild OpenForensics dataset was used to evaluate four CNN models (DenseNet121, ResNet18, SqueezeNet, and VGG11) across different batch sizes and with various performance metrics. Results show that the adapted VGG11 model with a batch size of 32 achieved the highest accuracy of 94.46% in detecting Deepfakes, outperforming the other models; DenseNet121 was the second-best performer, achieving an accuracy of 93.89% at the same batch size. Grad-CAM techniques are used to visualize the decision-making process within the models, aiding understanding of the Deepfake classification process. These findings provide valuable insights into the performance of different deep learning models and can guide the selection of an appropriate model for a specific application.
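For readers wanting a concrete starting point, the sketch below illustrates the general recipe the abstract describes: adapt a pretrained torchvision VGG11 by replacing its 1000-class ImageNet head with a two-class (real/fake) head, and compute a Grad-CAM heatmap over the last convolutional layer. This is a minimal illustration, not the authors' code; the chosen layer, the class index used for "fake", and the input preprocessing are assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): adapted VGG11 + bare-bones Grad-CAM.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Adapt a pretrained VGG11: swap the 1000-class head for a 2-class (real/fake) head.
model = models.vgg11(weights=models.VGG11_Weights.IMAGENET1K_V1)
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 2)
model.eval()  # gradients still flow; we only disable dropout/batch-norm updates

def grad_cam(model, image, target_class, conv_layer):
    """Return a Grad-CAM heatmap for `target_class` w.r.t. `conv_layer`."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["a"] = output

    def bwd_hook(_, __, grad_output):
        gradients["g"] = grad_output[0]

    h1 = conv_layer.register_forward_hook(fwd_hook)
    h2 = conv_layer.register_full_backward_hook(bwd_hook)

    logits = model(image)                 # image: (1, 3, H, W)
    model.zero_grad()
    logits[0, target_class].backward()    # gradient of the target-class score

    h1.remove(); h2.remove()

    # Global-average-pool the gradients to get per-channel weights,
    # then form a ReLU-ed weighted sum of the activation maps.
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()

# Example: heatmap for the assumed "fake" class (index 1) over VGG11's last conv layer.
x = torch.randn(1, 3, 224, 224)           # stand-in for a preprocessed face crop
heatmap = grad_cam(model, x, target_class=1, conv_layer=model.features[-3])
```

The same head replacement and hook-based Grad-CAM apply to the other backbones in the study (DenseNet121, ResNet18, SqueezeNet); only the classifier attribute and the chosen convolutional layer differ per architecture.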
| Item Type: | Conference or Workshop Item (Paper) |
| --- | --- |
| Uncontrolled Keywords: | Deepfake, Deep learning, Convolution neural network, Batch size, Grad-CAM visualization |
| Divisions: | Faculty Of Electrical Technology And Engineering |
| Depositing User: | Anis Suraya Nordin |
| Date Deposited: | 17 Oct 2024 12:22 |
| Last Modified: | 17 Oct 2024 12:22 |
| URI: | http://eprints.utem.edu.my/id/eprint/28039 |