
Law Firms: Considerations When Utilizing Generative AI

The use of Artificial Intelligence (AI) by law firms has recently become a hot topic of discussion, yet definitive answers regarding the appropriateness of its use remain elusive. While the technology evolves quickly and the conversation may change over time, it’s important to consider the potential issues surrounding generative AI now. Discussions with various law firms indicate that many are working to develop internal policies on the use of AI. Some have shared that they’re trying to get ahead of the issue even though many lawyers are likely already utilizing AI; if so, policies are needed in relatively short order.

 

82% of surveyed law firms believe that ChatGPT and generative AI can be applied to legal work, while only 51% indicated it should be applied.7

WHAT IS AI AND HOW IS IT MOST LIKELY TO BE USED IN THE LEGAL FIELD?

AI is the simulation of human intelligence processes by machines, especially computer systems. In general, AI works by ingesting large volumes of labeled training data, analyzing the data for patterns and correlations, and then using those patterns to predict future states. This is how a chatbot that is fed examples of text can learn to produce life-like exchanges with people. New, rapidly improving generative AI techniques are capable of creating realistic text, images, music, and other media.1
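To make that description concrete, below is a minimal, illustrative sketch in Python using the scikit-learn library: a model ingests a handful of labeled text examples, learns which word patterns correlate with which label, and predicts a label for text it has never seen. The tiny dataset and labels are invented for illustration; real generative systems are trained on vastly larger corpora with far more sophisticated models.

```python
# A minimal sketch of pattern learning from labeled data (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labeled training data: each example pairs a snippet of text with a category.
texts = [
    "The parties agree to binding arbitration.",
    "Tenant shall pay rent on the first of each month.",
    "Happy birthday! Hope you have a great day.",
    "Let's grab lunch next week.",
]
labels = ["legal", "legal", "casual", "casual"]

# The pipeline converts text into numeric features (word-frequency patterns)
# and fits a classifier that learns which patterns correlate with each label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The trained model then predicts a label for text it has never seen.
print(model.predict(["The lessee shall maintain the premises."]))
# Expected to lean toward 'legal' given the overlapping vocabulary ("shall").
```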

When it comes to the legal field, at the most basic level, AI is most likely to be utilized in the context of legal research and legal writing. According to OpenAI’s chatbot, ChatGPT, a number of other, more legal-field-focused platforms are available, such as LawGeex, Kira Systems, Ross Intelligence, Casetext, and Lexis Answers. Cursory research indicates that all of these platforms allow a user to pose a question; the platform then generates a written legal research memorandum, an administrative or government form, or a brief. LawGeex and Kira Systems also appear to provide contract analysis.

The first apparent hurdle is that LawGeex, for one, can “read and analyze legal documents, identify key issues, and provide suggestions for improving your legal research memorandum.” And, as noted above, both LawGeex and Kira Systems can provide “contract analysis.” So, it appears that some law-focused AI platforms would require the lawyer to upload legal documents, including draft contracts or other client information, to perform certain tasks. This, of course, raises issues of client confidentiality.

WHAT POTENTIAL ISSUES ARISE WHEN USING GENERATIVE AI PLATFORMS IN THE LEGAL FIELD?

As a starting point, firms should emphasize to all of their lawyers that maintaining client confidentiality is paramount and that disclosing client information to individuals or entities outside of the firm (including uploading client information to any outside platform) is strictly prohibited. ABA Model Rule 1.6 states that, with limited exceptions, “a lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent.”2 Protecting confidentiality is a foundational principle of the legal field, but it’s worth emphasizing that AI platforms are public in nature and can use the information provided to them for future non-firm work.

But what about using an AI platform for legal research on generic topics? Some lawyers view these AI platforms as nothing more than aggregators of information otherwise available in a typical Google search. On one level, that comparison has some merit, but there is a big difference between the two: generative AI platforms are language-based models that produce a written product, whereas a Google search only provides a list of links deemed relevant to the query.

The work product produced by an AI platform can be extremely useful or extremely problematic, depending on how it’s used. If an attorney uses AI as a starting point for actual legal research, to frame arguments and approaches, it can be very useful. But the lawyer must take what AI generates and independently confirm the information provided. When ChatGPT was asked why a lawyer should not use AI platforms for legal research, it produced a very good answer: “One concern is the accuracy and reliability of the information produced by AI algorithms. AI platforms may make mistakes or draw incorrect conclusions, which could have serious consequences for legal cases.”

There are also concerns about how AI handles misinformation that appears on the internet. While efforts have been made to minimize the use of misinformation in AI-generated responses, there is a risk that AI will pick up misinformation and repeat it. AI systems can also go further and actually generate misinformation of their own, a phenomenon called “hallucinations.”3 That, of course, should give everyone pause.

It’s critical that lawyers recognize the current limitations of AI and verify all information independently. Again, AI can be a valuable starting point for legal research, and it can identify issues the lawyer may not have initially considered. However, things can go wrong quickly if a lawyer takes an AI-generated response at face value and simply adopts the work product as his or her own. Doing so would arguably run afoul of ABA Model Rule 5.3, which governs nonlawyer assistance. That rule requires, in part, that a lawyer having “direct supervisory authority over the nonlawyer shall make reasonable efforts to ensure that the person’s (emphasis added) conduct is compatible with the professional obligations of the lawyer.”4 It is arguable that this provision extends to legal software and AI-driven legal programs.

Most lawyers who have read a brief containing citations will know that some cases cited as supporting a given position actually offer no support; somewhere along the way, the lawyer simply picked up a citation and never actually read the case. On one level, that’s not really any different from failing to independently verify what AI generates. However, while having a case distinguished or shown to be inapposite to the point being made is not good, having a position shown to be totally off base is something else entirely. Again - independent verification is of utmost importance.

34% of recently surveyed firms were considering the use of generative AI for legal operations, and 6% of responding firms have banned unauthorized usage outright.7

It’s vital that firms confirm that what they’re presenting is the work product of the lawyer submitting the brief or memorandum. Platforms designed to detect AI-generated work have already been developed; known as AI detectors or AI discriminators, they are used to determine whether work was actually written by a person. These detectors work by examining patterns and characteristics of the text to reach a conclusion about whether it was likely generated by AI or written by a human. This is one possible solution firms can employ to police their own work product.
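As a rough illustration of what “examining patterns and characteristics” can mean, the Python sketch below measures one signal sometimes cited in this context: variation in sentence length, or “burstiness.” The feature choice, the crude tokenization, and the interpretation are assumptions made purely for illustration; production detectors combine many such signals in trained models, and even they produce false positives and negatives, so any detector is best treated as a screening aid rather than proof.

```python
# A toy illustration of one "characteristic" an AI detector might examine.
# Human writing often varies sentence length more than machine text, so a
# very uniform text is (weak) evidence of AI generation. Illustrative only.
import statistics

def sentence_length_variation(text: str) -> float:
    """Return the standard deviation of sentence lengths, in words."""
    # Crude sentence splitting, adequate for a demonstration.
    normalized = text.replace("?", ".").replace("!", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return float("inf")  # too little text to judge
    return statistics.stdev(lengths)

sample = ("The court granted the motion. The parties filed briefs. "
          "The judge issued an order. The case was closed.")
print(f"sentence-length variation: {sentence_length_variation(sample):.2f}")
# A low value flags unusually uniform prose; a real detector would combine
# many signals in a trained classifier rather than rely on a single cutoff.
```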

It’s also wise to consider what would happen if a firm submitted AI-generated work that was later found to be incorrect. The most egregious scenario is likely one in which a lawyer used an AI-generated document (putting aside, for now, the issues surrounding uploading client information) without reviewing it prior to submission. This scenario presents a number of potential problems, including:

  1. Most lawyers’ professional liability policies cover individuals for claims resulting from acts, errors, or omissions arising out of their capacity as lawyers. The definition of those insured by a policy is generally all-encompassing but, at this time, does not extend to an AI platform. A firm would likely try to argue that the issue is one of failure to supervise, but that begs the question: who was being supervised?
  2. Model Rule 2.1 requires in part that “in representing a client, a lawyer shall exercise independent professional judgment and render candid advice.”5 That independence would seem to run counter to simply adopting positions from outside nonlawyers, including AI.
  3. Clients expect that a lawyer will be performing the work assigned to the firm. Has the firm advised the client that it will be submitting AI-generated work product without review by a lawyer? ABA Model Rule 1.6 also recommends that practitioners “consider incorporating a digital information and AI software disclosure statement in their engagement letters.”2
  4. Can a lawyer ethically submit AI-generated work, again without review? ABA Model Rule 1.1 states in part that “a lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness, and preparation necessary for the representation.”6 It’s tough to satisfy that rule by relying solely on work produced by an AI platform.
  5. Is the lawyer who submits AI-generated work, again without review, somehow involved in the unauthorized practice of law?

BOTTOM LINE

Lawyers have long been trained to think critically through complex issues presented by their clients. They discuss issues with peers, revise or modify their views, and draft documents memorializing their conclusions. If lawyers rely too heavily on AI, how will those skill sets be learned or refined? At this point at least, AI isn’t going to hold meetings with clients, attend depositions, or argue a case in court. A lawyer must be able to convey complex legal theories and react to opposing positions in real time. How will that happen if the lawyer hasn’t been properly trained?

At this point, there is definitely a place for AI in the practice of law - utilized properly as a starting point in shaping arguments and positions, it can be very useful and efficient. But ultimately, the work product and the advice provided to the client must reflect the experience and expertise of the lawyer assigned to the matter. Whatever level of adoption ChatGPT and other generative AI platforms eventually reach among law firms and throughout the legal field, this technology has the potential to change the industry. As the industry evolves, so will risks and insurance needs. CRC Group is home to insurance brokers with the product knowledge and market access to ensure your clients can protect themselves as their coverage needs change.

CONTRIBUTORS

  • Dennis Mullins is the President of Huntersure, an MGA within Starwind that provides professional liability products for law firms of all sizes.
  • Jason White is the Managing Director & National Practice Leader for CRC Group’s Executive Professional Practice Group.

ABOUT HUNTERSURE

Huntersure LLC provides professional liability products for law firms of all sizes on behalf of a variety of carriers and specializes in small firms on an admitted basis in 45 states. It also provides professional liability products for a broad base of other professionals, including Accountants, Management Consultants, Miscellaneous Professionals, Technical Consultants, Architects and Engineers, and Allied Healthcare.

END NOTES

  1. Artificial Intelligence (AI), TechTarget. https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence
  2. Rule 1.6: Confidentiality of Information, American Bar Association. https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_6_confidentiality_of_information/
  3. What Makes A.I. Chatbots Go Wrong?, The New York Times, March 29, 2023. https://www.nytimes.com/2023/03/29/technology/ai-chatbots-hallucinations.html
  4. Rule 5.3: Responsibilities Regarding Nonlawyer Assistance, American Bar Association. https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_5_3_responsibilities_regarding_nonlawyer_assistant/
  5. Rule 2.1: Advisor, American Bar Association. https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_2_1_advisor/
  6. Rule 1.1: Competence, American Bar Association. https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_1_competence/
  7. New Report on ChatGPT & Generative AI in Law Firms Shows Opportunities Abound Even as Concerns Persist, Thomson Reuters, April 17, 2023. https://www.thomsonreuters.com/en-us/posts/technology/chatgpt-generative-ai-law-firms-2023/#:~:text=Attitudes%20are%20evolving%20around%20this,using%20generative%20AI%20right%20now