A Maryland court has ordered a family law attorney to undergo training after he used ChatGPT to prepare legal documents in a divorce and custody case. The decision highlights the risks of using artificial intelligence without proper oversight: the attorney's reliance on AI produced numerous incorrect legal references, prompting the court to issue a stern warning about the responsibilities of legal professionals.
In the case, which centered on custody arrangements between a mother and father, the attorney submitted a brief containing legal citations intended to support his arguments. Such citations are crucial in legal proceedings, as judges rely on them to assess how prior cases bear on the matter at hand. Upon review, however, the Appellate Court of Maryland found that many of the citations were either fictitious or misrepresented the law.
The ruling indicated that the attorney depended on a law clerk who had employed ChatGPT to generate and edit the brief. The court noted that neither the attorney nor the clerk properly vetted the citations. This oversight is particularly concerning in a legal context, where accuracy is paramount, especially when the stakes involve family rights and wellbeing.
In her opinion, Judge Kathryn Grill Graeff articulated the gravity of the situation. She stated, “It is unquestionably improper for an attorney to submit a brief with fake cases generated by AI.” The judge emphasized that the failure lay not in the use of technology but in the attorney’s lack of due diligence. According to her, “A competent attorney reads the legal authority cited in court pleadings to make sure that they stand for the proposition for which they are cited.”
The court’s decision requires the attorney to complete continuing legal education courses on the ethical use of AI and directs his firm to implement formal verification protocols for checking the accuracy of legal citations. The ruling underscores that while AI tools can assist with drafting and summarizing, they cannot replace the responsibility and professional judgment of legal practitioners.
As a final step, the case has been referred to the Attorney Grievance Commission, where further disciplinary measures may be considered. The court’s actions reflect a broader trend in the legal profession, which is grappling with the implications of integrating AI into practice. The message is clear: the legal community must embrace technology responsibly, ensuring that human oversight remains a critical component in legal work.
This incident serves as a reminder that while AI can be a valuable assistant, it is essential for attorneys to verify and understand the information they present in court. Legal professionals must remain vigilant, ensuring that they maintain the integrity of their work and protect the rights of those they represent.






































