- By Ashish Singh
- Thu, 10 Oct 2024 01:04 PM (IST)
- Source: Reuters
OpenAI, the creator of ChatGPT, said in a report on Wednesday that it has observed several attempts to create phoney content using its AI models, including long-form articles and social media comments, with the intention of influencing elections.
According to the startup, cybercriminals are increasingly employing AI tools such as ChatGPT to support their harmful operations, from creating and debugging malware to producing bogus content for websites and social media platforms.
The company said it has blocked more than 20 such operations so far this year. In August, it blocked a cluster of ChatGPT accounts that were being used to write articles on topics related to the US elections.
In July, it also banned several Rwandan accounts that were being used to post comments about the country's elections on the social networking platform X. According to OpenAI, none of the operations aimed at influencing election results worldwide achieved viral engagement or a sustained audience.
Concern over the creation and dissemination of election-related fake news via AI tools and social media platforms is growing, particularly as the US prepares for its presidential election.
According to the US Department of Homeland Security, the country faces a growing threat from China, Russia, and Iran attempting to sway the outcome of the November 5 elections, including through the use of AI to spread false or divisive material.
OpenAI this week completed a $6.6 billion investment round, solidifying its status as one of the most valuable private corporations in the world. Since its November 2022 launch, ChatGPT has amassed 250 million active weekly users.