It Is Not My Responsibility: Failures in Preventing Malicious Deepfakes
Research Paper
orcid.org/0009-0007-1096-9551
The rapid evolution of artificial intelligence (AI) brings many new possibilities, as well as many potential harms when no safety measures are in place. One example is deepfakes: fake visual and/or audio content generated by AI to portray a targeted person performing actions they never did. Deepfakes have many dangerous applications, including pornography, misinformation, and fraud, and, as of now, little action has been taken against them. The growing presence of malicious deepfakes and the lack of protection leave people vulnerable and render victims helpless. As a first step toward restricting the harms of malicious deepfakes, we need to understand what led to their current prevalence. A closer investigation uncovers the actor network behind malicious deepfakes and shows how the failures of various actors enabled their development. Specifically, the government, the developers of deepfake software, and the deepfake creators each play a crucial role by not shouldering their share of the responsibility, allowing the increasing misuse of deepfakes for malicious activities. In this paper, we examine in detail how the failures of different actors to act against malicious deepfakes led to their proliferation and how the situation can be improved. However, more research is needed to find effective measures and prepare society for all that deepfakes, and AI in general, can bring.
Deepfakes, Pornography, Hughes Award 2024 Finalist
All rights reserved (no additional license for public reuse)
English
University of Virginia
May 2024
School of Engineering and Applied Science
Bachelor of Science in Computer Science
STS Advisor: Caitlin Wylie