Facebook owner Meta is barring political advertisers from using its new generative AI advertising products, a company spokesperson said Monday, denying campaigns access to tools that lawmakers have warned could accelerate the spread of election misinformation.
Meta has not publicly announced the decision in any update to its advertising standards, which prohibit ads with content debunked by the company’s fact-checking partners but appear to have no specific rules for AI.
The policy comes a month after Meta – the world’s second-largest digital advertising platform – announced it was starting to expand advertisers’ access to AI-powered advertising tools that can instantly create backgrounds, image adjustments and ad copy variations in response to simple text prompts.
The tools were initially available only to a small group of advertisers in the spring. They are on track to roll out to all advertisers worldwide next year, the company said at the time.
Meta and other tech companies have been rapidly rolling out generative AI advertising products and virtual assistants in recent months, responding to the frenzy surrounding the debut last year of OpenAI’s ChatGPT chatbot, which can provide human-like written answers to questions and other prompts.
The companies have so far released little information about the safety guardrails they will impose on these systems, making Meta’s decision on political ads one of the industry’s most significant AI policy choices to date.
Alphabet’s Google, the largest digital advertising company, last week announced the launch of a similar image-generating AI advertising tool. It plans to keep politics out of the product by blocking a list of “political keywords” from being used as prompts, a Google spokesperson told Reuters.
Google has also scheduled a policy update in mid-November to require election-related ads to include a notice if they contain “synthetic content that inauthentically depicts real or realistic-looking people or events.”
Snapchat owner Snap and TikTok block political ads, while X, formerly known as Twitter, has yet to launch a generative AI advertising tool.
Nick Clegg, Meta’s head of policy, said last month that the use of generative AI in political advertising was “clearly an area where we need to update our regulations”.
Ahead of a recent summit on AI safety in the UK, he warned that both governments and tech companies must prepare for the possibility that the technology could be used to interfere with the 2024 elections, and called for special attention to election-related content that moves “from one platform to another.”
Clegg previously told Reuters that Meta blocked its virtual assistant, Meta AI, from creating photorealistic images of public figures. This summer, Meta committed to developing a system for “watermarking” AI-generated content.
Meta prohibits misleading AI-generated video in all content, including organic, unpaid posts, with an exception for parody or satire.
The company’s independent Oversight Board said last month it would examine the wisdom of that approach, taking up a case involving a doctored video of US President Joe Biden, which Meta said it had left up because it was not generated by AI. (Reporting by Katie Paul in San Francisco, editing by Kenneth Li and Matthew Lewis)