• By Mayukh Debnath
  • Sat, 01 Jun 2024 09:24 AM (IST)
  • Source:JND

Lok Sabha Election 2024: OpenAI, the creator of ChatGPT, has said it acted within 24 hours to disrupt deceptive uses of its artificial intelligence (AI) models in a covert influence operation targeting the ongoing Lok Sabha elections in India. The AI research organisation said the intervention prevented the operation from meaningfully increasing its audience. In a report published on its website, OpenAI said STOIC, a political marketing and business intelligence firm based in Israel, generated some content pertaining to the Indian elections.

The AI-generated social media content, as per OpenAI's report, was directed against the BJP. "In May, the network began generating comments that focused on India, criticized the ruling BJP party and praised the opposition Congress party," it said. "In May, we disrupted some activity focused on the Indian elections less than 24 hours after it began." OpenAI said it banned a cluster of accounts operated from Israel that were being used to generate and edit content for an influence operation that spanned X, Facebook, Instagram, websites, and YouTube.

"This operation targeted audiences in Canada, the United States and Israel with content in English and Hebrew. In early May, it began targeting audiences in India with English-language content." Reacting to the report, Union Minister of Sate for Electronics & Technology Rajeev Chandrasekhar said, "It is absolutely clear and obvious that @BJP4India was and is the target of influence operations, misinformation and foreign interference, being done by and/or on behalf of some Indian political parties."

"This is very dangerous threat to our democracy. It is clear vested interests in India and outside are clearly driving this and needs to be deeply scrutinized/investigated and exposed. My view at this point is that these platforms could have released this much earlier, and not so late when elections are ending," he added. OpenAI said its disruptive operation was "part of a broader strategy to meet our goal of safe AI deployment".

"In the last three months, we have disrupted five covert IO that sought to use our models in support of deceptive activity across the internet. As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services," it said in the report.

Elaborating on its operation against STOIC's alleged influence campaign, OpenAI said, "We nicknamed this operation Zero Zeno, for the founder of the stoic school of philosophy. The people behind Zero Zeno used our models to generate articles and comments that were then posted across multiple platforms, notably Instagram, Facebook, X, and websites associated with this operation."

OpenAI said it also uncovered and disrupted several other influence operations. These covert campaigns focused on a wide range of global issues, including Russia's invasion of Ukraine, the armed conflict in Gaza, politics in Europe and the US, and criticism of China's government by Chinese dissidents and foreign governments.

(With inputs from agencies)
