Google will mandate disclosure of digitally altered election ads

Under Google's updated political content policy, advertisers must actively check a box in their campaign settings indicating that an ad contains altered or synthetic content.

Specifically, for ads appearing in feeds and Shorts on mobile devices, and in in-stream formats on computers and televisions, Google will automatically generate an in-ad disclosure to alert viewers.

For other formats, such as display ads, advertisers must themselves include a prominent disclosure that users can easily notice.

Google says the disclosure language will be tailored to the context of each ad, so the message remains clear across platforms and formats.

The update comes amid growing concerns over the misuse of generative AI, which can create highly realistic but fabricated content, including deepfakes and manipulated videos.

During India's recent general election, for example, AI-generated deepfake videos surfaced that falsely showed Bollywood actors criticizing Prime Minister Narendra Modi in an attempt to sway voters.

Beyond elections, AI-enabled voice cloning and other forms of digital alteration pose similar risks, underscoring the need for proactive safeguards on online discourse.

OpenAI, a prominent organization in AI research, has reported disrupting several covert influence operations that sought to exploit its AI models for deceptive activities online.