La Grada
Goodbye to blind trust in AI—a Maryland court orders mandatory training for a lawyer after using ChatGPT in a divorce case

by Victoria Flores
November 20, 2025
in News

A recent Maryland case shows how easily things can go wrong when lawyers use AI without reviewing its results first. In a divorce and custody case, a family attorney filed court documents that were prepared with ChatGPT’s assistance. Everything appeared normal at first glance. However, the judge discovered that many of the legal references in the paperwork were wrong or fabricated.

The court issued a strong warning with a clear message after the discovery: AI can be helpful, but attorneys remain entirely accountable for their work. In the end, the lawyer was ordered to complete “legal education courses on the ethical use of AI” and to change the way his firm reviews court documents.

How the lawyer got into trouble with AI

The case, which involved a lawyer representing a mother in a custody battle, made its way to the Maryland Court of Appeals. In his written arguments, the attorney included several legal citations, references to previous court decisions meant to support his position. These citations are essential in legal writing because judges rely on them to understand how previous cases relate to the current one.

However, after closer examination, the court discovered that many of the citations “turned out either to be non-existent or to contradict the arguments presented.”

The court’s ruling states that the lawyer “relied on a law clerk who used ChatGPT to generate and edit the brief” and “was not involved directly in the research of the offending citations.” Neither the attorney nor the law clerk, who was not a lawyer, thoroughly examined the cases that the AI generated.

Artificial intelligence (AI) tools can sometimes invent sources or “hallucinate” information that sounds authentic but isn’t. That kind of error is unacceptable in a setting as serious as a courtroom, particularly when someone’s rights and family life are at stake.

What the judge said about competence and responsibility

In her court opinion, Judge Kathryn Grill Graeff made it quite clear why this conduct was improper. “It is unquestionably improper for an attorney to submit a brief with fake cases generated by AI,” she wrote. The problem wasn’t that the lawyer used technology, but that he didn’t perform the fundamental, essential work of verification.

“[C]ounsel admitted that he did not read the cases cited. Instead, he relied on his law clerk, a non-lawyer, who also clearly did not read the cases, which were fictitious,” she added. “In our view, this does not satisfy the requirement of competent representation. A competent attorney reads the legal authority cited in court pleadings to make sure that they stand for the proposition for which they are cited.”

The court then stated that attorneys can’t simply copy what AI produces, nor depend on someone who isn’t a lawyer to do all the checking. A lawyer must understand and verify everything in a document before signing it.

Tools like ChatGPT cannot replace professional judgment. They can help with drafting, summarizing, or suggesting ideas, but they cannot bear responsibility; that remains a job for humans.

Consequences and lessons

The court ordered the attorney to complete “legal education courses on the ethical use of AI” and required his firm to establish formal verification protocols to ensure that citations and authorities are checked. Finally, the court referred the case to the Attorney Grievance Commission, which may impose additional sanctions.

Looking closer, nobody is asking lawyers to stop using AI. But if you do use it, be careful, verify constantly, and treat it only as an assistant.

© 2025 La Grada