Amid the mounting excitement around AI, concerns have surfaced that ChatGPT could displace people from their jobs. One New York attorney is now facing an unexpected, accelerated version of that fear.
The cause, however, is not what one might expect. According to a report in The New York Times, attorney Steven Schwartz of the law firm Levidow, Levidow & Oberman turned to OpenAI’s chatbot for help crafting a legal brief. The results were, predictably, disastrous.
Schwartz’s firm is representing Roberto Mata, who is suing the airline Avianca over injuries he says he sustained on a trip to John F. Kennedy International Airport in New York City. When the airline recently asked the court to dismiss the case, Mata’s lawyers responded with a 10-page brief arguing that the suit should proceed. The brief cited more than half a dozen court decisions, including “Varghese v. China Southern Airlines,” “Martinez v. Delta Airlines,” and “Miller v. United Airlines.”
To the surprise and dismay of everyone involved, no one could locate any of the decisions cited in the brief. The reason for this perplexing situation? ChatGPT had invented every one of them.
A regrettable oversight indeed.
In an affidavit filed on Thursday, Schwartz explained that he had used the chatbot to “supplement” his research for the case and said he had been unaware that its output could be inaccurate. To demonstrate his diligence, he included screenshots showing that he had asked the program whether the cited cases were legitimate; ChatGPT assured him they could be found in reputable legal databases such as Westlaw and LexisNexis.
Schwartz has expressed deep regret over using ChatGPT and says he will never again rely on it without absolute verification of its authenticity. Whether he will get another chance to draft a legal brief remains to be seen.
Calling the situation an “unprecedented circumstance,” the presiding judge has scheduled a hearing for June 8 to consider potential sanctions against Schwartz.
ChatGPT’s take
When presented with this news article, ChatGPT offered its own perspective on the matter:
“After reviewing the news article, it is evident that the utilization of ChatGPT in this particular legal case has resulted in unfavorable and disastrous outcomes. The incident highlights the potential risks associated with relying solely on AI language models for legal research and drafting.
The inaccurate information generated by ChatGPT led to the inclusion of non-existent court decisions in the legal brief, causing confusion and potentially jeopardizing the case.
This highlights the importance of exercising caution and obtaining unequivocal verification of the authenticity and accuracy of AI-generated content in the legal field. The scheduled hearing to consider potential sanctions reflects the seriousness with which the court is addressing this unprecedented circumstance.”
Thanks for the warning, ChatGPT!