- By Ashish Singh
- Wed, 28 Feb 2024 03:00 PM (IST)
- Source:REUTERS
OpenAI has alleged that the New York Times "hacked" the chatbot ChatGPT and other AI systems to generate misleading evidence for its lawsuit. The Times prompted the technology to replicate its content, according to a Monday filing made by OpenAI in Manhattan federal court, by using "deceptive prompts that blatantly violate OpenAI's terms of use."
According to OpenAI, the allegations in the Times' complaint do not meet the publication's famously rigorous journalistic standards. "The truth, which will come out in the course of this case, is that the Times paid someone to hack OpenAI's products." OpenAI did not identify the "hired gun" it claims the Times employed to manipulate its systems, nor did it accuse the publication of violating any anti-hacking laws.
The Times's lawyer, Ian Crosby, said in a statement on Tuesday that "what OpenAI bizarrely mischaracterizes as 'hacking' is simply using OpenAI's products to look for evidence that they stole and reproduced The Times's copyrighted work."
In December, the Times filed a lawsuit against OpenAI and Microsoft, the company's biggest investor, alleging that they had improperly used millions of its stories to train chatbots to answer user queries.
Copyright holders, including the Times, music publishers, writers' associations and visual artists, are suing tech companies over the alleged use of their works to train AI systems.
Tech giants contend that the lawsuits endanger the expansion of this potentially multitrillion-dollar industry and that their AI algorithms lawfully use copyrighted material.
Courts have not yet addressed the crucial question of whether AI training qualifies as fair use under copyright law. So far, judges have dismissed some infringement claims over the output of generative AI systems for lack of evidence that the AI-generated content resembles copyrighted works.
In its complaint, The New York Times listed multiple incidents wherein, when asked, chatbots from OpenAI and Microsoft provided readers with nearly exact snippets of its articles. It claimed that Microsoft and OpenAI were attempting to "free-ride on the Times's massive investment in its journalism" by developing an alternative to the print publication.
It took the Times "tens of thousands of attempts to generate the highly anomalous results," according to OpenAI's filing.
"In the ordinary course, one cannot use ChatGPT to serve up Times articles at will," noted OpenAI. According to OpenAI's brief, it and other AI companies would ultimately prevail in their legal battles over the fair-use issue.
"The Times cannot prevent AI models from acquiring knowledge about facts, any more than another news organisation can prevent the Times itself from re-reporting stories it had no role in investigating," stated OpenAI.