Why Attorneys Must Use AI Carefully

When generalized AI is used for legal work, it tends to "hallucinate"---to cite cases that do not actually exist.  In some situations, it can even produce "copies" of these nonexistent cases.  When an AI-drafted brief citing false cases is submitted to a judge, the result is usually sanctions.  An award of sanctions catches the attention of the media, generating substantial negative publicity for the sanctioned lawyers.  Damien Charlotin maintains a list of cases worldwide in which courts have addressed attorneys' misuse of AI.  As of this writing, the list contains 539 cases.

For sample opinions sanctioning attorneys for improper use of AI, see the following:

Kruse v. Karlen, Mo. Ct. App., https://www.lawnext.com/wp-content/uploads/2024/02/Opinion_ED111172.pdf

Smith v. Farwell, Mass. Superior Court, https://www.lawnext.com/wp-content/uploads/2024/02/12-007-24.pdf

Ader v. Ader, N.Y. Supreme Court, https://caselaw.findlaw.com/court/ny-supreme-court/117805834.html


The attorney in Ader was sanctioned twice: once for submitting false citations in a brief on the merits, and again for submitting false citations in the very memorandum he filed to defend against sanctions for the first brief.

The above cases are not easy reading.  But ignoring these cases will not make their holdings go away.  Improper use of generalized AI can result in sanctions, negative publicity, and even loss of employment.


At one point, attorneys caught using AI improperly could argue that they were unaware of the risks.  But the problems with use of AI have been widely discussed in the legal community, and judges are now holding that ignorance of these problems is no excuse.  The Ader court concisely summarized the developing law on this point:

"Use of AI is not the problem per se. The problem arises when attorneys abdicate their responsibility to ensure their factual and legal representations to the Court—even if originally sourced from AI—are accurate. . . . When attorneys fail to check their work—whether AI-generated or not—they prejudice their clients and do a disservice to the Court and the profession. In sum, counsel's duty of candor to the Court cannot be delegated to a software program."

Ader v. Ader, 2025 NY Slip Op 51563(U), 2025 N.Y. Misc. LEXIS 7848, at *10-11 (emphasis added).


As rulemakers have become aware of the problems with AI, they are starting to require that any brief include a certificate, signed under oath, stating that a human being has reviewed all AI sources used in the brief.  And state bar associations are starting to impose similar ethical requirements barring unsupervised use of AI.  See, e.g., 2024 N.C. Formal Ethics Opinion 1, https://www.ncbar.gov/for-lawyers/ethics/adopted-opinions/2024-formal-ethics-opinion-1/; Pennsylvania Joint Formal Opinion 2024-200, https://www.lawnext.com/wp-content/uploads/2024/06/Joint-Formal-Opinion-2024-200.pdf.


Review by a real person takes time and effort.  Suppose you ask generalized AI to write a brief that would take you six hours to write yourself.  Assume it takes you one hour to create an appropriate prompt---not a simple task!---and to review the AI-generated brief.  You are likely to find that much of the cited authority is either hallucinated or misconstrued.  You have wasted a valuable hour, and you still have a six-hour brief to write.  AI is not the simple "push a button, get a perfect brief" product that its backers claim it is.

By contrast, with NLRG, you get real human attorneys preparing your research products without undue reliance on AI.  We cite only real cases, and we provide copies of those cases on request.  If you have questions or concerns, the attorney who wrote your product is just a phone call away.  And while our billing rates vary, they are almost always materially less than the rate you would charge your client to do the same work.


Another serious problem with generalized AI is confidentiality.  There is a saying in the computer industry: "If you are not paying for the product, you are the product."  Creators of generalized AI products offer no confidentiality guarantees.  To the contrary, many may well be using the materials you type into their products to train their AI, and some may be using your materials for advertising.  Nothing prevents material entered into generalized AI from being subpoenaed, or even from being released to the general public.  Read the user agreements of generalized AI products carefully.  You may not like what you find.

And confidentiality may not be a problem you are allowed to ignore.  Some authority suggests there is an ethical duty to investigate the confidentiality practices of generalized AI before using it---so that failure to investigate would not be a defense if use of AI results in a breach of confidentiality.  

With NLRG, on the other hand, you are working with a real human attorney.  That attorney falls under the umbrella of your work-product privilege.  Communications with an attorney at NLRG are no different from communications with an associate working in your office.  We are working as part of your legal team, and our communications with you are fully protected.


Still another problem with AI products is that they are not easy to use.  Volumes are being written on how to prompt AI properly.  When answering a question, AI products are quite sensitive to what they perceive the desired answer to be.  In fact, a recent Washington Post study found that ChatGPT is almost ten times more likely to begin an answer with "yes" than with "no."  NLRG's research attorneys are certainly aware of your preferred answer, but when you ask for an objective response, you will get one---even if the law on the question is not what you would like it to be.


The bottom line is that lawyers cannot consult AI, push a button, and get a 100% accurate answer to any question.  Lawyers are ethically required to investigate the confidentiality practices of any AI service before signing up.  Lawyers are required to use careful judgment in prompting AI.  And most important, lawyers are required to review AI output for substantive accuracy and false citations.  To rely on AI output without human review is to run a substantial risk of being sanctioned.  And given the extensive public awareness of hallucinated citations, judges are showing very little sympathy for lawyers who submit false citations to the court.

These problems are easily avoided by using NLRG.  Our research is done only by genuine human attorneys.  We do not, as a general rule, use AI, although we are exploring the use of a legal-specific AI system offered by a leading computerized legal research platform.  When we do use legal-specific AI, we carefully review the output before relying upon it in any product, just as we do with any other legal research platform.  We stand behind the substantive content of any authority we cite.  Using NLRG is safer than the hasty and uncritical use of AI, and it results in a better final product.