The “home cooking,” or DIY, approach to business documents is nothing new. There have always been people who have chosen to draft important documents without the help of an attorney, often to their detriment.
The recent rise of AI tools that generate content in moments has led many business owners and entrepreneurs to use them to create contracts, corporate documents, and even complex legal strategies without the involvement of attorneys.
Why AI-Drafted Agreements Often Fall Short
While AI tools typically provide disclaimers and direct users to seek legal advice, the reality is that many people assume that what AI generates will work for them simply because it looks good and sounds reasonable.
As a result, these users may place more confidence in AI than they perhaps should. AI-generated contracts can lack key legal provisions such as choice of law, dispute resolution, or integration clauses. They may feature contradictory payment or termination provisions. Sometimes they incorporate terms that are void under the Uniform Commercial Code or other state statutes. They may omit necessary contingencies or triggers. They may default to generalized boilerplate and inadvertently create ambiguity.
Business owners may thus find out too late that an agreement that looked polished was in fact inadequate for their needs.
One Risk: ‘AI Slop’
They are not alone, of course. Lawyers have become increasingly aware of AI “hallucinations,” in which software generates citations to fake cases or statutes, or attaches a fictionalized quote to a real statute, case, or legal treatise. “AI slop” is often defined as low-quality, unverified AI output that appears complete but is in fact inaccurate, incomplete, or contradictory.
Numerous instances of attorneys being censured or sanctioned for including unverified legal research in briefs have garnered widespread attention, but the issue filters down into other areas as well. Neither lawyers nor their clients should make assumptions about AI content without additional review and verification. As the American Bar Association has indicated, attorneys have ethical obligations to verify AI output against relevant sources, and they face real-world consequences when they fail to do so. Clients who “do it themselves” run risks as well.
Why AI ‘Hallucinates’
Part of this may be inherent in AI systems themselves. Research into the hallucination phenomenon suggests that it occurs primarily because of how AI language models are designed and optimized. As OpenAI discussed in a recent research publication, large language models learn to predict the most likely next word in a response based on the body of text used in their “training.”
Ironically, because the systems have been designed to provide answers rather than admit a lack of knowledge, they often skew toward predicting, or perhaps assuming, a more definitive response. And because they remain reactive, responding to whatever a user asks, the phrase “garbage in, garbage out” still applies: stronger, well-constructed queries will generally produce more effective content.
However, research from a year ago indicated that hallucinations occurred in roughly 1 in 6 legal research queries and that AI providers’ claims of accuracy were often overstated. Empirical evidence suggests the problem of “AI slop” is ongoing.
6 Tips to Minimize Risks
Practically speaking, what can a business lawyer do to minimize the risks associated with client-generated AI documents? Here are six ideas:
1. Educate your clients. Discuss with clients the practical uses and potential limitations of AI. It is good at boilerplate and generalized terms. It is effective for summarizing data and providing suggestions. But if it is used to draft a contract, the draft should be reviewed for key clauses that might be missing or misapplied; ambiguous contracts increase litigation risk. Moreover, nothing a client tells AI is covered by attorney-client privilege, which means the client’s prompts and responses (i.e., their chat history) may be subject to discovery if there is a dispute.
2. Ask about AI use early and often. Ask new and existing clients whether they used AI tools or templates to draft any agreement that is the subject of discussion (or dispute). Request copies of any AI-generated drafts, along with any chat history reflecting the prompts used to generate the document. This can help you understand what the client may rely on as evidence of their intent or understanding of the agreement (even if that understanding is misplaced), and it can help you assess litigation risk.
3. Create review protocols. Develop a checklist of common issues and errors. In transactional documents, these may include provisions on governing law, payment terms, termination, liability, risk of loss, jurisdiction, and enforceability, along with any provisions needed for specialized issues such as IP, data privacy, or bankruptcy. In corporate governance documents, the critical provisions to review are likely those on voting, fiduciary duties, and transfer restrictions. Financing agreements should be scrutinized to confirm there are no improper collateral descriptions or other issues that might affect perfection or priority.
4. Consider contractual disclaimers in engagement agreements. With increased reliance on AI, attorneys have been advised to develop policies on AI usage and to disclose those policies (and their own use of AI) to clients. The corollary may be to make clear to clients that the firm is not responsible for the accuracy of documents clients prepare or generate using AI without attorney involvement, and that clients are responsible for disclosing any relevant information about their use of AI. Also consider advising clients not to consult AI for a “second opinion” on an attorney’s advice, since any disclosures they make in those exchanges will, again, not be protected by attorney-client privilege.
5. Keep a record of the cleanup. Track the flaws you identify in client-supplied AI drafts. Doing so helps you maintain your review protocols and gives you concrete reference points for clients and younger attorneys. It may also help in any later dispute.
6. Stay engaged. AI models continue to change and develop, so track emerging developments. Incorporate AI tools that include necessary confidentiality protections into your practice. Properly used, AI can increase efficiency and productivity, but its output must always be reviewed, vetted, and verified.
Careful Review Is Required
DIY contracts have historically posed challenges because they often fail to reflect the parties’ intent or lack critical components, and that often leads to litigation. Similar considerations arise with documents a client generates using generative AI, or when an attorney accepts AI output without substantive review.
These agreements may be more than adequate on many issues but falter on critical points. It remains important for attorneys to recognize their ongoing role, and their value, in making certain that agreements adequately address a client’s particularized needs. The cost of fixing or fighting over an agreement after it is executed is usually far greater than the cost of an attorney’s initial participation would have been.
William E. Wallo, University of Oklahoma 1992, is a shareholder with Bakke Norman, S.C., Eau Claire, where he focuses on business transactions, commercial litigation, and bankruptcy and insolvency.

This article was originally published on the State Bar of Wisconsin’s Business Law Blog. Visit the State Bar sections or the Business Law Section webpages to learn more about the benefits of section membership.