Project Details
Description
Forged and deceptive images and videos that not only appear real to human eyes but also fool existing computer programs can now be generated by advanced artificial intelligence techniques, colloquially called 'deepfake' techniques. Malicious parties can use these techniques to swap a victim's face into compromising or fictional scenes and damage that person's reputation. Deepfake techniques may be exploited to create false news, influence election campaigns, create chaos in financial markets, fool the public with fabricated disaster scenes, or inflame public violence and increase conflict between nations. The objective of this project is to design an intelligent deepfake detector capable of assessing the integrity of digital visual content, automatically detecting falsified images or videos in real time, and preventing them from spreading. The success of the proposed research will benefit society by providing a more trustworthy and healthy environment for billions of social network users and by ensuring the authenticity of visual content for digital forensics.
The project team consists of two researchers with complementary expertise in image processing and cybersecurity. The project will significantly advance the state of the art in falsified visual content detection. The uniqueness of the proposed system is its ability to self-learn and self-evolve over time so that it can capture altered and deceptive visual content generated by currently unknown deepfake algorithms. The proposed self-evolving mechanisms will allow the deepfake detector to quickly adapt to new types of forged images or videos from only a small number of samples, overcoming the sample-scarcity limitation of existing data-hungry learning algorithms. The proposed defensive mechanisms will ensure the robustness of the deepfake detector and prevent it from misclassifying camouflaged or obscured forged visual content as genuine. The project will thus address both false content detection and unresolved adversarial attacks on machine learning models. The proposed lifelong learning mechanism will enable the deepfake detector to leverage accumulated knowledge to improve itself over time.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
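The abstract describes the few-shot adaptation goal only at a high level. As a purely illustrative sketch, not the project's actual method, the following Python/PyTorch snippet shows the shape of the problem: a small binary real/fake classifier is adapted to a previously unseen manipulation type by fine-tuning on just a handful of labeled samples. The model architecture, feature dimensions, and hyperparameters here are all hypothetical assumptions.

```python
# Hypothetical sketch of few-shot adaptation for a deepfake detector.
# The architecture, data, and hyperparameters are illustrative only;
# this is NOT the method proposed in the award.
import torch
import torch.nn as nn

# Tiny stand-in classifier over precomputed image features (real=0, fake=1).
detector = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

def fine_tune(model, feats, labels, steps=20, lr=1e-3):
    """Adapt the detector to a new manipulation type from a few samples."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(feats), labels)
        loss.backward()
        opt.step()
    return model

# Suppose only 8 labeled examples of a new deepfake method are available
# (random tensors here, standing in for extracted image features).
few_shot_feats = torch.randn(8, 128)
few_shot_labels = torch.tensor([0, 1, 0, 1, 1, 0, 1, 1])
fine_tune(detector, few_shot_feats, few_shot_labels)
```

The "self-evolving" and lifelong-learning goals in the abstract would require meta-learning or continual-learning machinery well beyond this sketch, but the underlying task, adapting from a handful of samples of an unseen forgery type, is the same.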
| Status | Active |
|---|---|
| Effective start/end date | 1/10/20 → 30/9/24 |
| Links | https://www.nsf.gov/awardsearch/showAward?AWD_ID=2027114 |
Funding
- National Science Foundation: US$627,811.00
ASJC Scopus Subject Areas
- Law
- Computer Networks and Communications