OpenAI estimates that ChatGPT rejected more than 250,000 requests to generate images of the 2024 U.S. presidential candidates in the lead-up to Election Day, the company said in a blog post on Friday.
The rejections included image-generation requests involving President-elect Donald Trump, Vice President Kamala Harris, President Joe Biden, Minnesota Gov. Tim Walz and Vice President-elect JD Vance, OpenAI said.
The rise of generative artificial intelligence has led to concerns about how misinformation created using the technology could affect the numerous elections taking place around the world in 2024.
The number of deepfakes has increased 900% year over year, according to data from Clarity, a machine learning firm. Some of those deepfakes were videos created or paid for by Russians seeking to disrupt the U.S. elections, U.S. intelligence officials say.
In a 54-page October report, OpenAI said it had disrupted "more than 20 operations and deceptive networks from around the world that attempted to use our models." The threats ranged from AI-generated website articles to social media posts by fake accounts, the company wrote. None of the election-related operations were able to attract "viral engagement," the report noted.
In its Friday blog, OpenAI said it hadn't seen any evidence that covert operations aiming to influence the outcome of the U.S. election using the company's products were able to successfully go viral or build "sustained audiences."
Lawmakers have been particularly concerned about misinformation in the age of generative AI, which took off in late 2022 with the launch of ChatGPT. Large language models are still new and routinely spit out inaccurate and unreliable information.
"Voters categorically should not look to AI chatbots for information about voting or the election — there are far too many concerns about accuracy and completeness," Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, told CNBC last week.