Ofcom Publishes Paper on Mitigating Deepfake Harms

Deepfakes, created using AI to generate or manipulate audio-visual content, have become a significant concern due to their potential to misrepresent individuals and events. Ofcom’s latest discussion paper, published on 23 July 2024, examines the increasing prevalence of deepfakes and explores both the harms they cause and strategies to mitigate them.

Context and Impact: Deepfakes are increasingly sophisticated thanks to generative AI tools, enabling even those with modest technical skills to create convincing fakes. These fakes can cause severe harm by humiliating individuals, facilitating scams, or spreading disinformation. High-profile incidents, such as the sexually explicit deepfake images of Taylor Swift and the fake audio clip of London Mayor Sadiq Khan, highlight the risks. Ofcom’s poll found that 43% of respondents aged 16+ and 50% of respondents aged 8-15 believed they had encountered a deepfake in the previous six months.

Types of Deepfakes:

  1. Demeaning Deepfakes: These are used to humiliate or abuse individuals, often depicting non-consensual sexual acts. Victims, predominantly women, suffer severe emotional and reputational damage.
  2. Defrauding Deepfakes: These facilitate scams by misrepresenting someone’s identity, leading to financial losses. Examples include fake advertisements and romance scams.
  3. Disinforming Deepfakes: These aim to spread false information to influence public opinion on political or societal issues. Notable instances include fake videos or audio clips of political figures.

Mitigation Measures:

  1. Prevention: Model developers can filter harmful content from training datasets and block prompts that generate harmful outputs. Red teaming exercises can assess and mitigate risks before models are deployed.
  2. Embedding: Techniques like watermarking, metadata, and labeling can indicate the synthetic nature of content. Initiatives like the Coalition for Content Provenance and Authenticity (C2PA) help standardize these efforts.
  3. Detection: Forensic techniques and machine learning classifiers can identify deepfakes by analysing inconsistencies in audio-visual content. Hash-matching databases can also help identify and track known deepfakes by comparing new uploads against fingerprints of previously flagged content.
  4. Enforcement: Online platforms must enforce clear rules about synthetic content and take action against users who create or share harmful deepfakes.
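To make the hash-matching idea above concrete, here is a minimal sketch in Python. It is purely illustrative and not Ofcom's or any platform's actual system: real deployments use robust perceptual hashes (e.g. PhotoDNA-style fingerprints) over decoded media, whereas this toy uses a simplified "average hash" on an 8x8 grid of grayscale values, then compares candidates to a database of known hashes by Hamming distance.

```python
def average_hash(pixels, size=8):
    """Compute a 64-bit average hash from a 2D grid of grayscale values.

    `pixels` is a list of rows of ints (0-255). We downscale by block
    averaging to a size x size grid, then set each bit to 1 if the cell
    is brighter than the grid's mean. Near-duplicate images (re-encoded,
    lightly edited) tend to produce nearby hashes.
    """
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // size, w // size
    cells = []
    for i in range(size):
        for j in range(size):
            block = [pixels[y][x]
                     for y in range(i * bh, (i + 1) * bh)
                     for x in range(j * bw, (j + 1) * bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    bits = 0
    for c in cells:
        bits = (bits << 1) | (1 if c > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits between two hashes; small = likely match."""
    return bin(h1 ^ h2).count("1")

# Toy 16x16 "image": a bright square on a dark background.
img = [[200 if 4 <= y < 12 and 4 <= x < 12 else 20
        for x in range(16)] for y in range(16)]
# A lightly altered copy, standing in for a re-encoded re-upload.
copy = [row[:] for row in img]
copy[5][5] = 190

known_hashes = {average_hash(img)}  # hypothetical database of known deepfakes
candidate = average_hash(copy)
match = any(hamming_distance(candidate, k) <= 5 for k in known_hashes)
```

The Hamming-distance threshold (5 here) is a tunable trade-off between catching altered re-uploads and avoiding false matches; production systems pick it empirically and pair hashing with the classifier-based detection mentioned above.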

Deepfake technology continues to evolve, posing challenges to individuals, institutions, and society. Ofcom’s discussion paper underscores the need for a multi-faceted approach involving prevention, embedding, detection, and enforcement to mitigate the harms caused by deepfakes.

Read it here.

♻️ Share this if you found it useful.
💥 Follow me on LinkedIn for updates and discussions on privacy, digital and AI education.
📍 Subscribe to my newsletter for weekly updates and insights – subscribers get an integrated view of the week and more information than on the blog.
