News & Update Submission Deadline: 11:59 PM (AoE), June 15, 2022

Special Session on

Challenging Deepfake through Explainable AI

Deepfake technology is now in wide use around the world, and the volume of deepfake content is likely to grow rapidly as new applications make the technology accessible to the masses. Indeed, the field is evolving so quickly that deepfake content can now be generated with little or no human supervision. Depending on the purpose, deepfakes can be extremely useful or extremely dangerous. They have numerous positive applications in entertainment, education, healthcare, and other fields, particularly for modelling and predicting behaviour. But as a tool for identity theft, extortion, sexual exploitation, reputational damage, ridicule, intimidation, and harassment, deepfakes can cause significant harm. The potential for abuse is growing exponentially as digital distribution platforms become more widely accessible and the tools for creating deepfakes become cheap, user-friendly, and mainstream. Reports of misrepresentation and deception could undermine trust in digital platforms and services and raise general levels of fear and suspicion within society. Now is the time to prepare ourselves for the challenges posed by deepfakes.
Because the most prominent uses of this technology have been malicious, major companies such as Facebook, Amazon, and Microsoft launched the Deepfake Detection Challenge to spur the development of detection technology. Even so, we remain far from a fool-proof defence against deepfake attacks, and advanced technological solutions will be needed to fight the spread of deepfakes over the long term. It is time to develop agile methods and bring many different factors to bear as deepfake threats become more sophisticated and more widely available. Since deepfakes are built on AI, we can also look to AI for solutions to harmful deepfake applications. The most successful approaches to deepfake detection are deep learning methods that rely on convolutional neural networks as the backbone. Deepfake generation models leave traces of their convolution operations in the image; although the forgeries are almost indistinguishable to the eye, these traces can be detected by deep learning methods. Given a sufficient amount of training data, supervised deep learning methods can learn to detect the convolutional traces left in deepfake images. Considering the less-than-perfect accuracy of deepfake detection and its widening target range, interpretability has become a critical consideration. The aim of this special session is to encourage the development of “Explainable AI based Solutions to Combat Deepfakes”.
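The CNN-based pipeline described above can be sketched in miniature. The following is an illustrative sketch only — the kernels, weights, and input below are random placeholders, not a trained detector — showing the structural skeleton most deep learning detectors build on: convolution, ReLU activation, global average pooling, and a logistic output.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def detector_score(img, kernels, weights, bias):
    """conv -> ReLU -> global average pool -> logistic 'fake' probability."""
    feats = np.array([np.maximum(conv2d(img, k), 0.0).mean() for k in kernels])
    return 1.0 / (1.0 + np.exp(-(feats @ weights + bias)))

rng = np.random.default_rng(0)
img = rng.random((16, 16))                                 # stand-in for a face crop
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]  # untrained filters
weights = rng.standard_normal(4)
bias = 0.0
score = detector_score(img, kernels, weights, bias)        # a probability in (0, 1)
```

In a real detector the filters and weights are learned from labelled real/fake data, and many such layers are stacked so that the learned filters respond to the convolutional traces left by the generation model.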
To summarize, deepfake detection has become an issue of the utmost importance worldwide, especially in view of its impact on society. As deepfake technology has already found its place in society, the construction of interpretable, easily explainable models is essential to restore human trust. This special session will address the growing demand for explainable AI techniques to combat deepfakes. It is expected to publish 10-12 high-quality research papers focused on explainable AI based techniques to challenge deepfakes, from an anticipated 40-50 submissions.


Topics of interest include, but are not limited to:

  1. Deep Learning Techniques to Analyse and Detect Deepfakes
    1. Deep CNNs in the Detection of Deepfake
    2. Generative Adversarial Networks for Deepfake Detection
    3. Spatial, Spectral, and Temporal techniques for Deepfake Detection
    4. Deep Reinforcement Learning in Deepfake Detection
    5. Deepfake Detection Techniques under Adversarial Attacks
    6. Ensemble of Deep Learning Models to generalize Deepfake Detection Capability
  2. Explainable AI to Combat Deepfake
    1. Combatting Deepfake through Visual Interpretability
    2. Combatting Deepfake through Audio/Voice Interpretability
    3. Rationalizing Neural Predictions for Deepfake Detection
    4. Attention-based Explainable Deepfake Detection
    5. AI Methods for Learning Semantic Association to Combat Deepfake
    6. Guided Backpropagation for Visualizing Features Learned by CNNs
    7. Use of popular explainable AI methods, such as Deep Taylor, Integrated Gradients, and Layer-wise Relevance Propagation (LRP) for Deepfake Detection
    8. Local Interpretable Model-Agnostic Explanations (LIME) models for Deepfake Detection
  3. Blockchain Technology and Deepfake
    1. Distributed Ledgers and Consensus Methods to prevent Deepfake
    2. Blockchain and Smart Contracts to Combat Deepfakes
    3. Blockchain Security and Privacy Threats through Deepfake
  4. Hybrid Approaches to Combat Deepfakes
    1. Human Individual Characteristics for Detecting Deepfake Attacks
    2. New Datasets for AI Synthesized Media Detection
    3. Social Impacts of Deepfake
    4. Resilience against the use of Deepfakes during Social Interaction
    5. Probable New Dimensions of Deepfake
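As a concrete illustration of one method named in topic 2.7, the sketch below approximates Integrated Gradients for a toy logistic model. The model, its weights, and the inputs are hypothetical placeholders chosen only to show the attribution mechanics; a real deepfake detector would apply the same idea to a deep network, typically via a library such as Captum.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "detector": f(x) = sigmoid(w . x), with an analytic gradient.
w = np.array([1.5, -2.0, 0.5])        # hypothetical learned weights

def f(x):
    return sigmoid(w @ x)

def grad_f(x):
    s = f(x)
    return s * (1.0 - s) * w

def integrated_gradients(x, baseline, steps=200):
    """Midpoint-rule approximation of the IG path integral from baseline to x."""
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = sum(grad_f(baseline + a * (x - baseline)) for a in alphas) / steps
    return (x - baseline) * avg_grad

x = np.array([0.8, 0.3, -0.5])        # hypothetical input features
baseline = np.zeros_like(x)           # all-zero reference input
attr = integrated_gradients(x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

The completeness check at the end is what makes attributions of this kind interpretable: each feature's score is its share of the change in the detector's output relative to the baseline, which is exactly the kind of per-region evidence an explainable deepfake detector needs to surface.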