Governing Harms from Deepfakes in Crisis Situations
Comparing Legal and Regulatory Frameworks of G7 Countries and the EU
Keywords:
risk mitigation, AI legislation, natural disasters, violent conflict, AI regulation

Abstract
This study analyses deepfake-related initiatives of the Group of Seven (G7) countries (Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States), the United Nations, and the European Union from a comparative perspective to examine whether, and in what ways, AI-generated inaccurate content produced in times of crisis, such as natural disasters, is regulated across these jurisdictions. Using the Social Amplification of Risk Framework (SARF), a theory that explains how risk perceptions and communication create ripple effects, we demonstrate why the potentially detrimental risks posed by deepfakes deployed to destabilise societies in times of crisis should be accounted for in national and global initiatives to regulate AI-generated content. We collected and thematically analysed documents using a qualitative open coding approach. The findings demonstrate that while the existing and proposed country-specific laws and regulations reviewed offer useful principles, they were not designed to address the kinds of digital harms arising from the use of deepfakes in crises such as natural disasters. Global initiatives shared the same limitation: despite encouraging responsible innovation and digital transparency, they did not address the harms associated with deepfake use in disaster scenarios. Overall, these initiatives fail to provide concrete strategies for crisis management or for mitigating harm from deepfakes deployed to mislead the public during natural disasters or to initiate or escalate violent conflict. Based on the analysis, the article offers implications and recommendations for policymakers and for future studies.
