Artificial Intelligence (AI) seems to be the new crypto in that one can’t open a browser or email without seeing a take on how it will change the world. Now that crypto has ceded its spot in the headlines, the challenge is to provide insights about AI that have not already been offered. In this post, I’ll try to do just that by discussing how AI may affect the future of civil litigation.

Definition

First, let’s define what AI is in the context of this discussion and how it works.

Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. The traditional test of whether something is AI centers on whether it can learn or adapt independently.

What distinguishes an AI system from a standard algorithm, such as an Excel spreadsheet or a search engine, is that a standard algorithm produces one correct output for any given input. In an AI system, when a user inputs a directive with a fixed goal, the system runs that directive through its algorithm and then iteratively modifies that algorithm, so that the output for any given input changes as the system improves its performance in achieving the goal.

The most noteworthy AI system right now, ChatGPT, is what is called a large language model, whose primary use is answering user queries ranging from Internet-style searches to essay-length answers to complex questions, though that barely scratches the surface of its capabilities. In the legal world, many law firms and third-party vendors now use AI for discovery in complex litigation, and new applications appear every day, including data analysis and legal research assistance through summarizing or analyzing cases. AI will become controversial as humans transfer work to the algorithms and courts have to decide how much human oversight is reasonable under malpractice or due diligence standards.

AI and Legal Pleadings

In this post, I will explore the implications of giving ChatGPT or another AI a set of facts or raw evidence and asking it to draft a legal pleading. While this has been proposed in a criminal setting, drawing the predictable response from the bar that it usurps the practice of law, different issues may arise in a civil litigation setting.

The essence of legal drafting is taking familiar forms and adapting them to the specific facts and law of a case or controversy. AI may be able to do so more quickly (saving clients money), but an attorney must claim agency over any final document, certifying that, at the very least, the attorney has reviewed it and can be held responsible for its contents. This test is no different from the one applied when an attorney uses a template or prior pleading, applies the evidence, and accepts Microsoft Word’s grammar and spelling suggestions. No confidential information leaves the attorney’s possession until filing, and the attorney bears ultimate responsibility for the contents of the filing.

In the case of a pro se litigant, the same standards apply, with a much murkier area surrounding whether the drafter or provider of the form has practiced law. For example, producing a fill-in-the-blank template, such as a commercial lease form or a standard will, is not practicing law, whereas drafting a bespoke legal document for use by another person or party likely is.

I would expect the bar to conclude that the generation of legal documents by an AI system for use by a non-lawyer constitutes practicing law. Yet no one would allege that Microsoft Word practices law by suggesting edits to a legal document being revised by a non-lawyer. Should Microsoft use an AI system to enhance its editing function so that it makes suggestions about legal terms or offers legal analysis, however, it could easily cross into engaging in something resembling the practice of law. Even a generally available AI search function that guides pro se litigants to forms and laws and analyzes their case or rights could be found to have crossed that line in a way that a Google search with a non-AI algorithm likely does not.

Another complex question relates to attorney-client privilege: disclosing facts to an artificial intelligence system that is not in a contractual relationship with counsel could waive privilege. Because open AI systems like ChatGPT incorporate the data they receive into the data set they use for answering queries and then iteratively improve their output with that data, using an open system to draft filings could waive privilege as to the data entered. Another intriguing scenario would be counsel on both sides of a litigation entering their data into the same open system for drafting or analysis; in effect, the same AI system could end up arguing with itself or drafting competing documents. Even if legal-specific AI systems are developed to address the privilege issues, any system that multiple parties could use runs the risk of a conflict of interest within its system, where it could functionally argue both sides of a case.

Conclusion

Of course, once a single AI system has all the facts and the law in a given case and comes up with all of the arguments, it is a small step to that system coming to its own conclusions and generating its rulings, but that is a dystopian idea for another day…


Gordon Wikle is an attorney at Venn Law Group with more than 14 years of experience serving as an assistant district attorney with the State of North Carolina. He focuses on commercial litigation and enjoys analyzing problems and finding creative solutions that are in the best interest of his clients. Navigating difficult situations and resolving business disputes are areas where he excels. Gordon earned his J.D. from Duke University School of Law and has his B.A. in Economics from Vanderbilt University.