Lawyers are not obsolete… yet

Since it burst onto the scene in November 2022, ChatGPT has been stealing headlines and shaking up multiple industries. The legal industry has not been immune, with some commentators even suggesting that ChatGPT could eventually replace human lawyers – a prospect that sends a shiver down the spine of lawyers the world over.

However, a lawyer in the United States is now facing sanctions after using ChatGPT for legal research, highlighting the dangers of overreliance on generative AI and ChatGPT’s shortcomings – including its propensity to invent facts.

Background

Levidow, Levidow & Oberman, a law firm in New York, was acting for a client suing the airline Avianca for negligence over a personal injury he allegedly sustained from a metal serving cart.

In their submissions to the United States District Court, the firm relied on several seemingly relevant cases that had been researched by Steven A. Schwartz, an attorney at the firm with over 30 years’ experience. According to Mr Schwartz, he had used ChatGPT to find and cite the cases, and had even asked the program to confirm that the cases were real – to which ChatGPT said yes.

Unfortunately for Mr Schwartz, six of the cases relied on were completely made up by ChatGPT. In the words of Manhattan federal Judge P. Kevin Castel, “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”[1]

Mr Schwartz has now been ordered to show cause why he ought not be sanctioned for citing non-existent cases and submitting copies of non-existent judicial opinions. It is a humiliating and potentially career-limiting outcome for a highly experienced attorney.

Commentary

ChatGPT generates realistic responses by predicting which fragments of text are likely to follow others, based on a statistical model that has ingested billions of examples of text drawn from across the internet. In Mr Schwartz’s case, the program appears to have discerned the framework of a written legal argument, but populated it with names and facts drawn from an array of real cases, all jumbled into the wrong order.
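For the technically curious, the toy Python sketch below illustrates the point. It is a minimal bigram model (nothing like ChatGPT’s actual scale or architecture, and every party name and ‘case’ in it is invented purely for illustration), but it shows the same statistical mechanism: a generator that only knows which words tend to follow which can stitch together fluent sentences describing authorities that never existed.

```python
# Toy illustration only: a bigram model of next-word prediction.
# All "cases" below are fictional, invented for this example.
import random
from collections import defaultdict

corpus = (
    "smith v jones 2001 held that the carrier was liable . "
    "brown v avianca 2015 held that the claim was time barred . "
    "jones v delta 2009 held that the carrier was negligent ."
).split()

# Build the model: for each word, record the words observed after it.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

# Generate text by repeatedly sampling a statistically plausible next word.
word, output = "smith", ["smith"]
for _ in range(12):
    word = random.choice(model[word])
    output.append(word)

# May print e.g. "smith v jones 2001 held that the claim was time barred" -
# fluent and plausible, yet describing a holding that never existed.
print(" ".join(output))
```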

Here at Cornwalls, we decided to run some tests of our own, posing ChatGPT some relatively niche legal research questions and asking it to provide the relevant Australian authorities, if any existed. Interestingly, not one of the cases put forward by ChatGPT existed: each consisted of made-up party names and court citations. When told that a case did not exist, the system would admit its error, but then suggest another, equally non-existent, case.

Mr Schwartz’s case and our own experiments with ChatGPT serve as a useful reminder that whilst ChatGPT may produce results which look reliable, generative AI is certainly fallible and should not be relied upon uncritically, particularly in respect of legal advice.

Whilst there is no denying that ChatGPT can provide a useful starting point, it is, at this point in time, no substitute for spending a few hours trawling through Westlaw. Lawyers can breathe a sigh of relief that their jobs are safe… for now.

[1] Mata v Avianca, Inc, No 22-cv-1461 (PKC) (SDNY, 2023).

Queries

For further information regarding this article, please contact the authors or any member of our Corporate & Commercial team.

Disclaimer

This information and the contents of this publication, current as at the date of publication, are general in nature, intended to assist Cornwalls’ clients, prospective clients and stakeholders, and are for reference purposes only. They do not constitute legal or financial advice. If you are concerned about any topic covered, we recommend that you seek your own specific legal and financial advice before taking any action.