- By Vikas Yadav
- Sun, 28 May 2023 07:56 PM (IST)
- Source: JND
ChatGPT, arguably the world's favourite AI innovation these days, tricked a lawyer into submitting citations of made-up cases in a lawsuit against Colombian airline Avianca. Lawyer Steven A Schwartz, appearing on behalf of Roberto Mata, a man who sued the airline over an injury caused by a serving cart, admitted that he had used OpenAI's tool for research, IANS reported, citing The New York Times.
The legal team cited these cases to argue why Mata's case should be allowed to proceed. Once the opposing counsel flagged them as fake, US District Court Judge Kevin Castel confirmed six of them as "bogus" and demanded an explanation from Schwartz's legal team, according to BBC.
"Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," Judge Castel wrote.
According to the lawyer, he did not ask the language model whether it was lying; when he did inquire about the cases, however, the chatbot insisted they were real and attributed them to "reputable" databases in which they could not be found.
Mashable India listed all six cases in a report: Varghese v. China Southern Airlines, Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines.
Schwartz said he was "unaware of the possibility that its content could be false." The lawyer, who has nearly three decades of experience in the field, "greatly regrets" using the AI model to "supplement" his legal research and says he will do so in future only with absolute caution and validation.
In another instance, the language model falsely accused a law professor of sexual harassment, citing a non-existent Washington Post article to support the claim. Jonathan Turley, a law professor who had never taught at the university named in the response, was one of five examples ChatGPT listed; of those, three of the accusations in that response were baseless.