
Lawyers Are Getting in Trouble for Using AI

Featured Partner Contributor

When you hire a lawyer, you expect a competent human professional to take on your case, draw on their vast body of knowledge, and use their skills of argument and persuasion to build a compelling case on your behalf. You don’t expect them to use a glorified search engine to concoct an argument for them without fact-checking it.

But unfortunately, multiple lawyers have resorted to this high-tech but lazy trick. ChatGPT, the phenomenal AI-based language model, has taken the world by storm – but some people in the legal profession have overestimated its abilities and used it as a crutch for their most important work.

Why are lawyers getting in trouble for using AI? And what can they do about it?

AI Isn’t a Bad Thing

First, let’s make an important point: AI isn’t a bad thing. ChatGPT is a truly remarkable tool, and if you haven’t used it yet, we highly recommend that you do. It can put together coherent paragraphs of information, rivaling human writers in fluency, and it can be a huge help for certain types of tasks. When used responsibly, it can save hours of time, facilitate better brainstorming, and help you produce a more polished finished product.

Already, there are law firm marketing agencies and other third parties rising to the occasion to help lawyers and law firms navigate the increasingly complex world of AI. When you truly understand its strengths and weaknesses, and you’re able to implement it effectively in your business, it can be an economic and operational advantage.

Unfortunately, many lawyers and law firms are not using AI responsibly.

Recent Lawyer Mistakes

In New York, a lawyer used an AI chatbot (ChatGPT) to conduct legal research for a case in which a man was suing an airline over a personal injury. The lawyer assembled a list of past cases to cite as precedent – but the opposing legal team couldn’t find several of the cases that were referenced, and the research turned out to be fabricated.

The judge in the case, Judge Castel, wrote, “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.” The lawyer’s defense was that he didn’t know that the content provided by the chatbot could be incorrect. He is now facing potential sanctions.

This is hardly the first time a lawyer has made a mistake by over-relying on ChatGPT or other chatbots – and it certainly won’t be the last.

The Problems With (Current) AI

AI is truly impressive. But it has its limits.

These are just some of the factors holding current AI back:

  •       Limited access to information. People sometimes treat ChatGPT as if it’s a search engine that freely and regularly crawls the web. This is far from the case: the tool was trained on a snapshot of online content, and it doesn’t have live access to everything currently on the web. If you’re trying to use it to better understand something that happened recently, you’re setting yourself up for failure.
  •       Inaccuracies and falsehoods. While many of the things that ChatGPT says are accurate, it’s also capable of generating inaccuracies and falsehoods. As we’ve seen, it will willingly make up fake cases and present them as real; it can also misstate facts, present flawed arguments, and even contradict itself.
  •       Misleading verification. Here’s an idea: when ChatGPT presents something to you, simply ask whether the statement is true or based in reality. If it’s fake, the tool will tell you, right? Wrong. People have already gotten in trouble for following this methodology. ChatGPT will often insist that what it says is true, even when it’s completely invented.
  •       Repetitive, predictable nature. Though it’s a comparatively minor issue, AI tools are essentially prediction engines; they produce content that’s highly repetitive and predictable. Accordingly, they can’t match the best human writers in terms of coherence or fluidity.
  •       No real “understanding.” ChatGPT and rival AI tools don’t truly “understand” what you’re saying; they just seem like they do. AI can’t solve your problems for you because it doesn’t even know what they are; it’s like a sophisticated parrot, mirroring whatever it reads.
  •       Lack of personality. Good writing and good legal arguments have personality – but AI can’t replicate this.

How to Use AI Responsibly

So how can lawyers and law firms use AI responsibly, with these current problems in mind?

It all boils down to one important concept: using AI as a supplement, rather than a replacement. In other words, you can use AI to start brainstorming, but you need to come up with the ideas yourself. You can use AI for research, but you need to verify the information you receive. You can use AI to generate swaths of text, but it’s your job to edit and polish them. You can even use AI to review your work, but it needs an additional pass from a skilled human.

Hopefully, as we collectively gain a better understanding of what AI can and can’t do, lawyers and law firms will begin using AI responsibly. Relying exclusively on AI to do knowledge work is a mistake, but completely discarding or avoiding this technology is also a mistake. In time, we’re likely to find the right balance – if we commit to working toward it.

Members of the editorial and news staff of the Daily Caller were not involved in the creation of this content.