The Year of Deepfakes

Written by: Suher Adi

Once an afterthought in the broader policy debates around AI, deepfakes have taken center stage. SIIA has been leading industry on this topic for the last few years, working with partners to ensure deepfakes are appropriately managed and their harms to society minimized.

Industry has been working to meet the challenges of deepfake content and to move policies ahead of current standards, protecting users and the integrity of their platforms. Meta recently announced a policy to detect and label AI-generated images posted by users across all of its platforms, and it will incorporate visible and invisible watermarks in images generated with Meta's AI tools. Google has announced that its Gemini AI models will incorporate digital watermarks and restrict the ability to generate images of named people. Google has also joined the Coalition for Content Provenance and Authenticity (C2PA), serving on its steering committee to enhance industry collaboration on this issue.

The most recent industry collaboration on deepfakes was announced a few days ago, when companies including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, Snap, TikTok, and X signed a pledge to combat the use of deceptive artificial intelligence (AI) in this year's elections. About 20 companies have joined the pledge, taking important steps to ensure that the technology is not used by bad actors to undermine democracy. The pledge extends beyond the United States, focusing on securing elections in the more than 40 countries set to hold them throughout the year.

These steps are all necessary to ensure companies do everything within their power to minimize the consequences of deepfake content on their platforms. However, stopping the harms of deepfakes will require agility and collaboration beyond the private sector.

Multiple pieces of legislation addressing the impacts of deepfakes have been introduced in Congress. H.R. 5808, the Preventing Deep Fake Scams Act, is a bipartisan bill focused on preventing AI deepfake content from being used to scam consumers and those vulnerable to identity theft and fraud in the financial sector. The DEEPFAKES Accountability Act (H.R. 5586) would require any deepfake content depicting a person to include a disclosure using a digital watermark.

Two bills have been linked to AI's use in pop culture. H.R. 6943, the No AI FRAUD Act, was introduced in January of this year to protect individuals' voices and likenesses in the AI age, effectively making the use of AI-generated voice content illegal if created without consent. H.R. 3106, the Preventing Deepfakes of Intimate Images Act, regained traction after explicit AI-generated images of Taylor Swift surfaced online. The bill would provide an avenue for civil cases against individuals who post deepfake images of others without their consent, including images of children.

In addition to congressional action, the White House and federal agencies have developed guidance to address deepfake content. The White House Executive Order on Artificial Intelligence, issued in October 2023, directed the Department of Commerce to develop guidance on watermarking and labeling AI-generated content. The National Security Agency (NSA) and other federal agencies have issued guidance on “synthetic media content known as deepfakes.” The Federal Election Commission (FEC) has also issued a notice of its intent to develop rules around “deliberately deceptive artificial intelligence campaign advertisements.”

Industry, Congress, and federal agencies can work together to develop policies and strategies that protect consumers and users of online platforms from fraud. Our democracy and society at large are better off with these collaborative policy efforts.

To watch the event held with Adobe and the Center for Democracy and Technology, click here: https://www.youtube.com/watch?v=D1Y_5JhUhCs.
