Providers, patients and regulators alike are assessing and questioning the appropriate use of artificial intelligence (AI) technologies in the health care space. One such technology is ambient listening tools, which passively capture physician-patient clinical conversations in real time and generate draft clinical notes by transcribing and summarizing these encounters using trained AI algorithms.
So, can providers record their patient appointments? The short answer: generally, yes. And are they doing so? Likely. If not, with major electronic health record (EHR) vendors integrating ambient listening and AI documentation capabilities directly into their platforms, most providers probably will be in the near future.
Stephane P. Fabus, Marquette 2012, is a shareholder with Hall, Render, Killian, Heath & Lyman, PC in Milwaukee, where she focuses her practice on assisting health care clients in a wide range of areas.
Zachary T. Renier, U.W. 2025, is an associate with Hall, Render, Killian, Heath & Lyman, PC in Milwaukee. He focuses on health information technology and advising clients on privacy and data security requirements.
Rapid adoption and implementation of AI “scribes” is being driven by the promise of easing documentation burdens, reducing clinician burnout, improving the accuracy and quality of documentation, and increasing face-to-face time between physicians and their patients – benefits for both provider and patient satisfaction that early studies appear to confirm.[1]
However, once the recording begins, so do a host of legal questions. This article uses the example of the AI scribe to discuss several common considerations regarding the use of AI in health care that could apply generally to other AI technologies.
While the short answer to whether a provider can record their patient encounter is generally yes, the details matter, and often the question is more about whether a provider “should” do so and within what parameters.
Evaluating the use of AI scribes and other AI technologies requires assessing a patchwork of state and federal laws, including recording laws, data privacy requirements, emerging AI and biometric privacy statutes, and practical implementation and educational steps for those both using and overseeing these tools. Health care organizations must navigate this dense regulatory landscape carefully before layering ambient listening tools and other AI technologies into clinical workflows.
Recording Clinical Encounters
The permissibility of capturing audio from clinical encounters is primarily governed by state wiretapping and eavesdropping laws, which can vary from state to state.
The federal Electronic Communications Privacy Act establishes a one-party consent baseline, meaning only one participant in a conversation needs to consent to a recording. While many states have aligned with the one-party standard, other jurisdictions require that all parties to the conversation consent to the recording. Wisconsin, for example, adopted the federal, one-party consent rule.[2] Its neighboring state of Illinois, however, opted to require that all parties to the communication consent.[3]
The method of communication also matters. Some states treat in-person conversations differently from phone, digital or electronic interactions, creating more nuance for health systems that are operating across state lines.
Notice and Consent
A provider in a one-party state like Wisconsin may legally consent to the recording of the conversation and record a patient interaction. They are not legally required to obtain the patient’s consent. However, organizational transparency and patient expectations often matter for an entity reliant on the provider-patient relationship. Because AI scribes are still relatively new, patients may not realize that their doctor’s office is recording their conversation with their provider.
Further, because of the often intimate and sensitive nature of those conversations, patients may be displeased to learn they were recorded without notice, the ability to ask questions, or an opportunity to object – whether or not they were legally entitled to any of those things.
Therefore, notice of recording and, perhaps, even notice of the use of AI technologies is becoming more common in health care, and could become a future legal standard. Notice can occur through exam and waiting room signage, notices of privacy practices or registration forms, disclosures in patient portals, verbal explanations prior to enabling the technology at the beginning of a visit, or using audible or visible indicators to signal when an AI scribe is active.
Notice gives patients and providers the opportunity to discuss the benefits, address any concerns and understand the security protocols in place to balance patient privacy with the accuracy and time-saving benefits of these tools. The rationale is simple: transparency maintains trust between patients and their health care providers, which supports patient satisfaction. Moreover, these processes can usually be implemented in predictable and brief manners that do not disrupt the visit.
Telehealth Challenges
Telehealth creates another layer of practical challenge, as the patient may be physically located in an all-party consent state even if the clinician is not. Additionally, for health systems with operations in multiple states, it can be difficult to implement different policies, procedures, or electronic system operations from state to state. Therefore, health systems may choose the more conservative consent approach to ensure legal compliance while streamlining processes.
In addition to providing clear notice, these health systems obtain affirmative consent from all parties before recording provider-patient interactions, regardless of jurisdiction. This may be verbal consent documented during the encounter, language included in a consent-to-treat form, or a specific consent form that the patient is asked to sign.
When a Patient Refuses Consent
Providers face another challenge if a patient refuses to consent to the recording or otherwise refuses the use of an AI scribe during their encounter. While at present many tools give the provider the ability to disable use of the tool for that particular encounter, this may not always be the case.
As this technology continues to expand and become more ingrained in health care systems and tools, disabling it may no longer be an option, or declining to use it could become a standard-of-care concern for the provider. Whether patient consent must be obtained before recording will ultimately be determined by existing laws or new legislation.
However, there is another important question to address in tandem: What is the provider’s obligation to provide care or continue the relationship where the patient refuses to consent to use of a tool that may have become the standard of care? The answer to this question may still be subject to debate.
The Biometric Privacy Layer: Voiceprints
Depending on system architecture, some AI scribes may collect and store “voiceprints” – digital measurements of an individual’s vocal characteristics made by a system for the purpose of identifying that speaker. Biometric privacy laws often include voiceprints within their scope, so such collection by an AI scribe could trigger various obligations and prohibitions depending on the jurisdiction.
While Wisconsin does not currently have a biometric privacy law, its neighbor to the south does. Illinois’ Biometric Information Privacy Act (BIPA) requires written notice and consent and a publicly available retention policy, and it prohibits the sale or unpermitted disclosure of voiceprints and other biometric identifiers.
As a threshold issue, health care providers implementing these tools (or the lawyers representing them) must determine whether the contemplated AI scribe even generates a voiceprint. Courts interpreting BIPA have stated that it is this hallmark of identifying or verifying the identity of an individual that makes voice data a “voiceprint.”[4] Therefore, the limited capture of generic voice data, i.e., without the intent or functionality to identify individuals, does not meet this threshold.
Various AI transcription tools operate in a manner that would not constitute the collection of a voiceprint even though they record a voice. Speech may instead be identified via metadata (i.e., account identity) and transcribed to text, and the tool then queries the transcript using speaker attribution to provide its response.
Rather than constituting voiceprint analysis, this is merely a metadata-driven and text-analytics workflow. AI scribes using large language models may not identify the speaker by voice but rather by contextual clues in the conversation to determine who is talking. Utilizing role-based speaker classification, the AI scribe will transcribe conversations between clinicians, patients, and other participants (e.g., spouses, family members, etc.) to automate and draft clinical notes for review and approval.
Where these tools cannot rely on account identities for speaker attribution, they instead infer general roles – such as clinician versus patient – based on conversation patterns and linguistic cues. In all these cases, the AI scribe does not identify “who” is speaking beyond these general role-based labels and therefore operates without creating or storing biometric data.
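To make the distinction concrete, the role-based attribution workflow described above can be sketched as a purely text-driven process. The sketch below is a hypothetical illustration – the role cues are invented for this example and do not reflect any vendor’s actual implementation – but it shows the key point: labels come from the wording of the transcript, and no acoustic features or voiceprints are created or stored.

```python
# Illustrative role-based speaker attribution: each utterance is labeled
# "clinician" or "patient" from linguistic cues in the transcript text
# alone. No voice data is analyzed, so no biometric identifier is made.

CLINICIAN_CUES = ("let's check", "your blood pressure", "i'm prescribing", "any allergies")
PATIENT_CUES = ("it hurts", "i've been feeling", "my symptoms")

def attribute_role(utterance: str) -> str:
    """Assign a generic role label based on wording, not voice identity."""
    text = utterance.lower()
    if any(cue in text for cue in CLINICIAN_CUES):
        return "clinician"
    if any(cue in text for cue in PATIENT_CUES):
        return "patient"
    return "unknown"

transcript = [
    "I've been feeling dizzy in the mornings.",
    "Let's check your blood pressure before we go further.",
]
labeled = [(attribute_role(u), u) for u in transcript]
```

Because the function never sees audio, there is nothing resembling the “hallmark of identifying” an individual that courts have looked to when deciding whether voice data is a voiceprint.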
Other states, including Connecticut, Florida, Texas, and Washington, have their own biometric data frameworks. However, many of these frameworks exempt protected health information (PHI) or information collected for treatment, payment, or operations under the Health Insurance Portability and Accountability Act (HIPAA) from their definitions of “biometric data” or otherwise include exemptions for HIPAA covered entities and business associates.
Therefore, based on the current state of biometric privacy law, it is unlikely that a health care provider’s use, collection, and/or storage of such voiceprints in the course of a traditional physician-patient clinical encounter for treatment purposes will trigger the obligations or penalties under these state frameworks. However, as AI advances and states legislate in an attempt to regulate it, this is another area that could be a consideration.
Notwithstanding the various exemptions and underlying technological nuances, there has been a strong appetite for litigation in this space. The key to avoiding such litigation, again, appears to be transparency with all parties involved – physicians, patients, family members or friends, and employees – to help defuse common litigation triggers, such as the distrust and anger that may accompany learning that sensitive health information was recorded during a private conversation.
Additional Privacy and Security Considerations
When an AI scribe records provider-patient communication; synthesizes, assesses, and processes the interaction through an algorithm based on its training; and then converts the resultant data into draft clinical documentation for the provider to review, it is receiving, creating, and potentially transmitting and maintaining PHI for or on behalf of HIPAA covered entities.
Therefore, understanding the type of solution and vendor is also important to ensure compliance with HIPAA. Cloud-based solutions where the process above occurs in a vendor-hosted environment will require a business associate agreement (BAA). However, an on-premises solution that is loaded onto the covered entity’s own system and behind its firewalls may not require a BAA. Understanding how and where data will flow and be processed is another important aspect of assessing legally compliant use of AI.
Deployment architecture adds another layer of complexity. Most AI scribes are cloud-based given the computational demands of real-time audio processing. They also often integrate and connect with the health care provider’s own information technology systems, such as the EHR, to more efficiently allow draft documentation to appear where the provider can review and finalize it. This can create risk to data both within the cloud where the data is processed and within the system where the data ultimately ends up and is stored.
Other relevant considerations to assessing overall risk may include whether subcontractors are involved, how encryption and key management are structured, whether data is stored or accessed outside the United States, and whether system logs contain PHI and, if so, how long they are retained.
Therefore, it is imperative that organizations have a clear understanding of the tool’s architecture and the safeguards in place to protect against security threats. Health care organizations may also want to obtain necessary assurances from vendors with respect to data security or request copies of any third-party security certifications.
Data retention practices should align with the health care organization’s operational and compliance needs. Understanding where data, including PHI, resides, how long it is retained, and how access is governed is important to both regulatory compliance and good data stewardship.
To help minimize regulatory and litigation exposure, organizations may prefer AI scribe tools that favor shorter retention periods that are only long enough for audio to be converted to text before being automatically deleted. If audio is retained longer – for example, for quality assurance or dispute resolution purposes – those retention practices should be transparent, time-limited, and supported by appropriate access controls and audit logs.
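A short-retention pipeline of the kind described can be sketched as follows. This is a hypothetical illustration, not any specific vendor’s design: raw audio is held only transiently and purged as soon as transcription succeeds, so the text record – not the recording – is what persists.

```python
# Illustrative short-retention pipeline: audio is held only long enough
# to be converted to text, then deleted automatically. The store and
# transcription function here are stand-ins, not a real vendor system.

store = {"audio": {}, "transcripts": {}}

def ingest(encounter_id: str, audio: bytes, transcribe) -> None:
    """Transcribe an encounter, then purge the raw audio per policy."""
    store["audio"][encounter_id] = audio          # transient hold only
    store["transcripts"][encounter_id] = transcribe(audio)
    del store["audio"][encounter_id]              # purge once text exists

def fake_transcribe(audio: bytes) -> str:
    """Placeholder for a real speech-to-text step."""
    return audio.decode("utf-8")

ingest("enc-1", b"patient reports mild headache", fake_transcribe)
```

If audio must be kept longer – say, for quality assurance – the deletion step would be replaced with a time-limited hold, with access controls and audit logging around it.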
Organizations should also be aware that draft notes and intermediate “derivative data” may also contain PHI, and therefore are potential areas of increased risk.
Can the Information Be Used Elsewhere?
Finally, a major consideration for health care organizations evaluating AI tools is whether the vendor may use, retain, or analyze such information to train or refine its current models or develop new ones.
HIPAA puts limits on how PHI may be used and disclosed, including generally requiring patient authorization before PHI is sold. It also requires covered entities to notify patients of all the ways that their PHI may be used and disclosed through the notice of privacy practices.
However, HIPAA permits PHI to be de-identified (by covered entities or their business associates) and provides that appropriately de-identified health information is no longer protected under HIPAA. Therefore, vendors will often request broad rights to be able to de-identify and use de-identified information for such purposes, something a covered entity may have to include in its notice of privacy practices.
Additionally, covered entities may want to place limits on vendor ability to both de-identify data and use and disclose the resultant de-identified information for a variety of reasons. This could include keeping data usage in line with the provider’s mission, vision, or values, based on initiatives related to patient transparency and satisfaction, concerns about legal compliance, or because they understand the value of the resultant data and want to protect themselves financially.
Limits on AI Decision-making
The regulatory landscape surrounding AI continues to evolve as regulators try to determine when to regulate AI and how much regulation strikes the appropriate balance between technological advancement and other key interests like professional judgment and patient privacy. Certain AI clinical decision support tools may qualify as “devices” subject to registration with and oversight by the U.S. Food and Drug Administration.
Several states are also moving toward direct oversight of AI systems. For example, Colorado’s recently enacted AI law, scheduled to take effect in 2026, imposes obligations on developers and deployers of “high-risk AI systems” that make or significantly influence “consequential decisions,” including decisions about health care services.
While purely documentation-focused AI scribes would generally fall outside this category, because the clinician – not the AI – determines diagnosis and treatment, as AI tools expand and take on additional functionality, the lines may begin to blur. If an AI scribe’s output feeds into downstream workflows – such as automated triage, care routing, or clinical decision support – such tools may be characterized as contributing to consequential decision-making. Even where clinicians maintain ultimate authority, heavy reliance on AI-generated summaries or structured outputs may trigger obligations under emerging state laws.
Challenges Ahead
This regulatory balancing act appears to be focused on supporting the development and use of technology that can help providers more efficiently, accurately, and effectively treat patients, while still requiring that health care providers remain responsible for their use of the tool and their own professional judgment.
Providers must stay aware of the limitations and shortfalls of the technology itself and implement processes to account for and minimize the resulting errors. An AI scribe may “hallucinate” (i.e., generate plausible-sounding but fabricated content) as it tries to fill gaps between what it actually heard and what it believes it needs to create the draft clinical documentation.
For example, if audio was missed or garbled while the provider was taking a patient’s vitals, the AI might insert invented numbers in line with what it would commonly expect for heart rate or blood pressure.
Identifying situations where data may more frequently be less reliable and implementing a process to address the risk (such as requiring a provider to annotate actual vital numbers in real time rather than relying on AI summaries later) are part of good AI governance processes.
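One such governance process – flagging implausible values in an AI-drafted note for human verification – can be automated with a simple check. The sketch below is a hypothetical guardrail, not a clinical product; the ranges and note format are invented for illustration.

```python
# Illustrative guardrail: flag vitals in an AI-drafted note that fall
# outside broad plausibility ranges so a clinician verifies them before
# signing. Ranges are illustrative only, not clinical guidance.
import re

PLAUSIBLE = {
    "heart_rate": (30, 220),   # beats per minute
    "systolic_bp": (60, 260),  # mm Hg
}

def flag_implausible_vitals(draft_note: str) -> list:
    """Return warnings for vitals outside the plausibility ranges."""
    flags = []
    hr = re.search(r"heart rate[:\s]+(\d+)", draft_note, re.IGNORECASE)
    if hr and not PLAUSIBLE["heart_rate"][0] <= int(hr.group(1)) <= PLAUSIBLE["heart_rate"][1]:
        flags.append(f"heart rate {hr.group(1)} outside plausible range")
    bp = re.search(r"blood pressure[:\s]+(\d+)/\d+", draft_note, re.IGNORECASE)
    if bp and not PLAUSIBLE["systolic_bp"][0] <= int(bp.group(1)) <= PLAUSIBLE["systolic_bp"][1]:
        flags.append(f"systolic {bp.group(1)} outside plausible range")
    return flags
```

A flagged note would be routed back to the provider for correction rather than filed – the automated check supplements, but never replaces, clinician review.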
Similarly, if AI-generated documentation leads to improper or inaccurate billing and coding, it will still be the provider’s responsibility to pay back a false claim. Therefore, ensuring that AI is monitored for error, bias, and risk as well as taking steps to address any findings both within the technology and through external checks and balances will continue to be key to the compliant and ethical use of AI in health care.
Note: AI was used in the initial drafting of this blog post. However, given the limitations of AI, it was heavily reviewed and revised by the authors prior to submission.
Endnotes
[1] See, e.g., Aaron A. Tierney, et al., “Ambient Artificial Intelligence Scribes: Learnings After 1 Year and Over 2.5 Million Uses,” 6 NEJM Catalyst Innovations in Care Delivery, no. 5, 2025 (highlighting time savings in documentation of more than 15,000 hours for users compared to nonusers over 1 year of use); Cheryl D. Stults, et al., “Evaluation of an Ambient Artificial Intelligence Documentation Platform for Clinicians,” 8 JAMA Network Open, no. 6, 2025 (finding improved clinician well-being and improved connection with patients with decreased time spent in notes).
[2] See Wis. Stat. § 968.31(2)(c).
[3] See 740 Ill. Comp. Stat. 5/14-2(a).
[4] See, e.g., Zellmer v. Meta Platforms, Inc., 104 F.4th 1117, 1124 (9th Cir. 2024) (“The unifying theme behind each term here is that each identifies a person.”).