
Facebook and Instagram will require political advertisers to disclose AI deepfakes

By Brian Fung, CNN

Washington (CNN) — Meta, the parent company of Instagram and Facebook, will require political advertisers around the world to disclose any use of artificial intelligence in their ads, the company said Wednesday, as part of a broader move to limit so-called “deepfakes” and other digitally altered misleading content.

The rule is set to take effect next year, the company added, ahead of the 2024 US election and other elections around the world.

The policy covers any political or social issue advertisement on Facebook or Instagram that uses digital tools to create images of people that do not exist; that distort the true nature of an event as it actually occurred; or that make a person appear to say or do things that they did not, according to a company blog post.

Minor uses of AI that “are inconsequential or immaterial to the claim, assertion, or issue” in the ad, such as image cropping or color correction, won’t be subject to the disclosure rule.

The announcement comes a day after Meta said it would restrict political advertisers from using the company’s own AI advertising tools that can generate backgrounds, suggest marketing text or supply music to accompany videos.

On Tuesday, Microsoft made a similar move, announcing a tool that can apply a “watermark” to campaign content to assure viewers it is authentic; the company said the tool will be provided for free to political campaigns starting in the spring.

“These credentials become part of the content’s history and travel with it, creating a permanent record and context wherever it’s published,” wrote Microsoft President Brad Smith in a blog post. “When a user encounters an image or video that contains Content Credentials, they can learn about its creator and origin by clicking on an embedded pin that reveals the asset’s history.”

The crackdown on politicians’ use of AI in ads reflects widespread warnings from civil society groups and policymakers about the potential risks to democracy of letting AI-generated content loose in political discourse. Many have said artificial intelligence could supercharge disinformation from foreign and domestic actors, a threat they say could be exacerbated by recent cuts to content moderation teams across the industry.

It also highlights a rare move by Meta to regulate political speech. The platform has long received blowback for allowing politicians to lie in their campaign ads, and for exempting politicians’ speech from third-party fact-checking. In the past, Mark Zuckerberg, the company’s CEO, has argued that politicians should be given the leeway to make false claims and that viewers and voters should decide how to hold them accountable.

But the decisions to force Meta’s political advertisers to disclose their use of AI, and to restrict Meta’s own AI tools from being used in political ads, suggest there may be limits to how far Zuckerberg is willing to let politicians roam with new technology.

“If we determine that an advertiser doesn’t disclose as required,” Meta said in its blog post Wednesday, “we will reject the ad and repeated failure to disclose may result in penalties against the advertiser.”

The-CNN-Wire
™ & © 2023 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
