  April 14, 2023

    Can a Chatbot Assist with Legal Practice? Embrace the Potential of Generative Artificial Intelligence with Caution

    Since OpenAI released ChatGPT in November 2022, a wave of astonishment, intrigue, and concern has followed as the technology advances. The legal profession will ultimately benefit from its potential to scale efficiency and improve finished products, but it’s unlikely that robots will soon replace lawyers, says David S. Blinka.

    David S. Blinka


    “What hath God wrought?”

    These words were delivered when Samuel Morse opened the first telegraph line in Washington, D.C., on May 24, 1844. Morse was keenly aware that this innovation in communication would transform the structure of American society.

    Are we on the brink of a similar transformation with the advent of generative artificial intelligence (AI) chatbots?

    When OpenAI released “ChatGPT” to the public last November, it sent reverberations through the tech industry, spurring giants like Microsoft, Google, and Meta to race to release their own models.1

    Chatbots are a form of generative AI with the capacity to create text responses. OpenAI’s latest model, GPT-4, released in March, passed the Uniform Bar Exam with a score in the 90th percentile (the previous version scored in the 10th percentile).2

    Still, while ChatGPT has some legal proficiency and tantalizing potential, the technology is not a replacement for customary methods of legal research and writing.

    What is an AI Chatbot?

    When asked, GPT-4 described itself as an advanced AI language model, trained on vast amounts of text data from the internet, such as books, articles, and websites.3

    It performs various natural language processing tasks, such as summarizing information, answering questions, and responding conversationally to queries. It also undergoes “fine-tuning,” meaning that human reviewers help “align the AI’s behavior with human values” to ensure its output is safer and more useful. It warns, however, that because its training data has a cutoff date of 2021, its responses “may not always be accurate or up-to-date.”

    How Might Chatbots Be Useful to Lawyers?

    The possibilities for using chatbots to support legal work are far-ranging. Though reliability and quality remain works in progress, GPT can produce letters, pleadings, and summaries of any written content you provide it.4

    David S. Blinka, U.W. 2012, is a shareholder in the Madison office of Habush Habush & Rottier, S.C., where he litigates various personal injury claims.

    Because the chatbot is so versatile and user-friendly, lawyers and their employees should proceed with the utmost caution. Ethical rules, particularly client confidentiality, as well as any protective order governing the use of litigation materials, must be considered before providing information to an AI program, where it will end up on that company’s servers and be used to further train the model.5

    The quality of its output depends significantly on your prompts. For example, if you forget to add a fact or to specify a jurisdiction in your query, the chatbot will regenerate its response within seconds once you supply the further instruction. GPT advises that the ideal input should be one to three sentences or up to 50 words, because a longer query may result in less accurate or relevant responses. In other words, “complex questions” should be broken down into “multiple smaller, more focused queries.”
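
    The same advice applies if you (or a vendor) reach the model programmatically rather than through the chat window. The snippet below is a minimal sketch only, assuming the openai Python package (the pre-1.0 interface) and an API key stored in an environment variable; the model name and the sample question are illustrative, not a recommendation for any particular query.

        import os
        import openai  # assumes the openai Python package, pre-1.0 interface; later versions use a client object

        openai.api_key = os.getenv("OPENAI_API_KEY")  # assumes the key is kept in an environment variable

        # Keep the query short and focused, stating the jurisdiction and key facts up front,
        # rather than burying a complex, multi-part question in a single prompt.
        prompt = (
            "Under Wisconsin law, is a certified weather report admissible at trial? "
            "Assume it is offered to prove rainfall on the date of the incident."
        )

        response = openai.ChatCompletion.create(
            model="gpt-4",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )

        print(response.choices[0].message.content)

    A forgotten fact or jurisdictional limit can simply be added in a follow-up message, which mirrors how the chat interface regenerates its response after further instruction.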

    Like a lawyer, GPT often responds to legal questions by advising that the answer depends on the facts of the case or – wisely – that you should consult an attorney.

    Having only recently been granted access to Google’s Bard, I found that it operates similarly to GPT, though less reliably for some legal questions, as explained below.

    Can GPT Help Draft Pleadings?

    I presented a typical personal injury complaint to GPT-4 and asked it to generate an answer.

    Given that chatbots are proficient at identifying language patterns, I was not surprised that the answer arrived in standard format, numbered to match each allegation to which it responded. GPT-4 included the expected formulaic responses, admitting certain allegations, denying others, or stating that “it lacks knowledge and information sufficient to form a belief as to the truth of the allegations.”

    Though impressive, the output has a flaw: the model cannot understand why some allegations should be admitted and others denied. GPT-4 failed to admit, for example, that a defendant was insured under the company’s liability policy. How would it know unless you take the time to educate it? At that point, you might as well draft the answer yourself.

    When I requested affirmative defenses, it produced four, including failure to state a claim and the plaintiff’s comparative fault. GPT-4, however, failed to include any jurisdictional defect and did not identify other affirmative defenses listed under Wis. Stat. section 802.06, which attorneys often perfunctorily include.

    Do Chatbots Know the Rules of Evidence?

    A chatbot’s ability to answer an evidentiary question correctly will, as with most other questions, vary widely with the issue’s complexity.

    To keep it simple, I asked GPT-4 to explain whether a weather report is admissible at trial under Wisconsin law. After it produced a generic response, I asked GPT to explain the “criteria” it referenced and to cite applicable legal authority in Wisconsin. Although it advised that the Wisconsin rules of evidence govern admissibility, it cited the federal rules (901, authentication, and 803, hearsay exceptions) without explaining how those concepts allow for admission. It did not include Wisconsin statutes, and it recommended consulting with a qualified attorney.

    Google’s chatbot, Bard, produced a problematic response to the same question. It responded that the Wisconsin Supreme Court held that weather reports are admissible into evidence and cited two cases, from 1910 and 1935, with North Western Reporter citations. I then verified that neither case exists.

    Neither bot suggested asking a court to take judicial notice.

    In sum, I would not entrust GPT to draft motions in limine.

    Can Chatbots Interpret Insurance Contracts?

    I asked GPT whether an insured could recover additional funds under their underinsured motorist (UIM) coverage if they had already collected a sum greater than the UIM coverage limit.

    I supplied GPT with a sample UIM policy and gave it several facts to assume. I was impressed that GPT produced a short yet cogent analysis (if you can ascribe this human behavior to it), which identified the applicable policy definition and then applied the facts to the policy language before concluding that the insured would not likely be able to recover from their UIM policy.

    What Are Other Uses for Chatbots?

    Because it was trained on vast data sets, GPT has a strong command of the structure of language. It can tell stories and write poetry and songs, even in the style of a popular musician or author. Trial lawyers will undoubtedly consider how it can help tell a compelling story when formulating a closing argument.

    Indeed, one should expect that future AI models will have the capacity to process daily trial transcripts and assist with summarizing key trial issues, perhaps useful to plug into PowerPoint slides.

    Legal service providers and firms will implement this technology in various products so that practitioners can further automate legal processes – producing template documents, including “form” correspondence or pleadings, as well as analyzing and summarizing large volumes of records, all tailored to one’s specific practice areas.6

    Another strength of the AI chatbot is its ability to rapidly acquaint users with new subject matter. If you are preparing for a deposition that requires immersion in a certain trade or concept, such as an accounting method, why not chat with GPT to develop a foundation and formulate new questions or ideas in conjunction with your typical preparation habits?

    Conclusion: What AI Cannot or Should Not Replace

    While generative AI has seemingly limitless potential, it’s still a nascent technology. It can build websites, serve as an interactive foreign-language tutor, interpret images to aid people with low vision, and more.

    Yet it remains difficult to predict how, or to what extent, these programs will transform a given profession or society more broadly.

    The buzz surrounding ChatGPT and other AI models warrants reflection on the hallmark traits of a lawyer: the ability to exercise sound judgment, apply logical reasoning, and think creatively. In law school, we are taught to be intellectually nimble and fair-minded, and to argue for a side and then take a contrary position so that perspective and experience can inform judgment.

    AI cannot and should not replace the deliberative nature of legal practice. I choose, therefore, to think of chatbots, like GPT, as fun tools that can, for some tasks better than others, supplement proven and reliable methods for practicing law.

    And just because a robot passed a bar exam does not mean it should practice law.7

    This article was originally published on the State Bar of Wisconsin’s Litigation Section Blog. Visit the State Bar sections or the Litigation Section webpages to learn more about the benefits of section membership.

    Endnotes

    1 As of the date of writing this article, I have been using OpenAI’s ChatGPT, primarily its GPT-4 model since it was released. On a more limited basis, I have experimented with Google’s Bard, to which I was recently granted access.

    2 Debra Cassens Weiss, “Latest Version of ChatGPT aces bar exam with score nearing 90th percentile,” ABA Journal, March 16, 2023.

    3 OpenAI’s website offers a useful overview of ChatGPT, explaining that it interacts with users in a “conversational way” so that it can answer “followup questions, admit mistakes, challenge incorrect premises, and reject inappropriate requests.”

    4 For a thoughtful overview of ChatGPT and examples of legal content it can generate, see Christopher Shattuck, “ChatGPT Artificial Intelligence: Will it Replace Lawyers and Legal Staff?” Wisconsin Lawyer, February 2023.

    5 For a more complete overview of ethical considerations involving the use of ChatGPT, see Aviva Kaiser, “Ethical Obligations When Using ChatGPT,” Wisconsin Lawyer, February 2023. Employees must also be very careful with their employer’s confidential data and records: see Mack DeGeurin, “Oops: Samsung Employees Leaked Confidential Data to ChatGPT,” Gizmodo, April 6, 2023.

    6 See Chris Stokel-Walker, “Generative AI is Coming For the Lawyers,” Wired, Feb. 21, 2023, which explains one large law firm’s adoption of generative AI.

    7 This author did not have to pass a bar exam due to Wisconsin’s diploma privilege.





    Litigation Section Blog is published by the State Bar of Wisconsin; blog posts are written by section members. To contribute to this blog, contact Matthew Lein and Heather L. Nelson and review Author Submission Guidelines. Learn more about the Litigation Section or become a member.

    Disclaimer: Views presented in blog posts are those of the blog post authors, not necessarily those of the Section or the State Bar of Wisconsin. Due to the rapidly changing nature of law and our reliance on information provided by outside sources, the State Bar of Wisconsin makes no warranty or guarantee concerning the accuracy or completeness of this content.


