Document Type
Original Study
Keywords
Computer Engineering
Abstract
Deep fake videos threaten information integrity: advanced deep learning techniques are used to manipulate visual content, for example by altering or replacing a person's likeness, so that the result is hard to distinguish from genuine footage. In this paper, we propose a hybrid approach to deep fake video detection that combines spatial and temporal features. A pre-trained VGG16 network extracts deep spatial features from individual frames; global average pooling is then applied at both the spatial and the temporal level to convert variable-length video sequences into fixed-dimensional representations. A fully connected classifier with dropout regularization classifies each video as real or fake. Evaluated on the DFDC dataset, our method achieves 95% accuracy and an AUC of 0.81. By leveraging transfer learning, the framework reduces the need for large amounts of training data while remaining computationally efficient, making it suitable for large-scale deep fake detection.
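The two-stage pooling described in the abstract (spatial, then temporal) is what lets variable-length videos map to a fixed-size input for the classifier. A minimal NumPy sketch of that pooling stage is shown below; the 7x7x512 map shape matches VGG16's final convolutional block for 224x224 inputs, but the function name, frame counts, and random data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def video_embedding(frame_maps):
    """Collapse a variable-length list of per-frame feature maps
    into one fixed-dimensional video descriptor.

    frame_maps: list of (H, W, C) arrays, one per frame
    returns: (C,) vector
    """
    # Spatial global average pooling: each (H, W, C) map -> (C,) vector
    per_frame = np.stack([fm.mean(axis=(0, 1)) for fm in frame_maps])
    # Temporal global average pooling: average over frames -> (C,) vector
    return per_frame.mean(axis=0)

rng = np.random.default_rng(0)
# Two videos of different lengths, each frame a 7x7x512 VGG16-style map
short_video = [rng.standard_normal((7, 7, 512)) for _ in range(12)]
long_video = [rng.standard_normal((7, 7, 512)) for _ in range(30)]

# Both yield the same fixed 512-dimensional representation
assert video_embedding(short_video).shape == (512,)
assert video_embedding(long_video).shape == (512,)
```

Because both pooling steps are simple means, the descriptor's dimensionality depends only on the channel count of the backbone, which is what makes a single dense classifier applicable to clips of any length.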
How to Cite This Article
Abdulsahib, Muna Ghazi (2025) "Video Deep Fake Detection Based on Spatiotemporal Analysis," Iraqi Journal of Computers, Communications, Control and Systems Engineering: Vol. 25, Iss. 2, Article 3.
DOI: 10.33103/uot.ijccce.25.2.3
Available at: https://ijccce.researchcommons.org/journal/vol25/iss2/3