Wait! Don’t Inadvertently Give Generative AI Your Privileged and Confidential Information

By: Carl A. Misitano

Published: October 20, 2025

As generative artificial intelligence (AI) continues to evolve at an explosive rate, it is important to periodically step back and consider the risks these tools pose. Such pauses are essential to identifying emerging threats that may not have been apparent at the outset. Attorneys are inundated with warnings about the unfettered use of generative AI, and it has become routine to learn of new incidents in which an attorney was sanctioned for filing documents created by generative AI that cited case law or facts that were incorrect, nonsensical, or fabricated. These fictitious outputs from a generative AI platform are referred to as “hallucinations.” Cognizant of these risks, Steptoe & Johnson PLLC has implemented policies, trained our attorneys, and invested in tools to protect against the perils associated with generative AI. The development of our internal security measures has also exposed an external vulnerability: clients may not be equally prepared for, or secured against, the risks of generative AI.

The convenience of generative AI platforms makes the appeal of these tools understandable, but this convenience also magnifies the risks. Let’s imagine a scenario where a client receives a draft of a brief from their attorneys. Faced with a short deadline to review and provide feedback on the draft, a client might turn to generative AI for assistance. By simply uploading the draft to a generative AI platform and inputting a few prompts, the client saves hours of work time and responds to the attorneys with an updated version of the draft, a series of insightful questions, suggestions about additional arguments, and even some case law it believes will bolster the arguments. These benefits, and their accessibility, camouflage the dangers of what just occurred.

First, by uploading the document to a generative AI platform, the client may have disclosed privileged and confidential information and jeopardized the attorney-client privilege. The attorney-client privilege protects confidential communications between lawyers and their clients and shields deeply sensitive communications from discovery. Generally, the privilege attaches when a communication is made in confidence between an attorney and a client for the purpose of giving legal advice. By uploading the document to a generative AI platform, the client has invited an unknown third party into the attorney-client relationship. Because the privilege requires that certain safeguards be maintained, this act could be treated as a waiver, opening the communication up to discovery on the theory that the client was indifferent to its secrecy. Furthermore, many generative AI systems learn from the input they receive from users. Depending on the tool being used and the security features in place, uploading the draft could place it in the public domain, and any comments, notes, or proposals in the draft may be added to the generative AI’s database and used to generate new content for other users.

A secondary risk is that the feedback and edits created by generative AI may introduce false or fabricated information into the draft. As discussed above, generative AI hallucinations of case law and even facts are common and can expose attorneys to embarrassment and sanctions. Clients should be wary of any edits made by generative AI tools.

Sometimes, the risks associated with generative AI arise not from the direct actions of a client but from tools embedded in software that clients routinely use. Meeting software, specifically the transcription and recording services embedded in it, is a common culprit. These tools can be helpful by quickly providing transcriptions of meetings for future reference; however, generative AI transcription software can generate inaccurate or incomplete transcripts, and, more importantly, these tools present significant security, privacy, and attorney-client privilege risks if they are not properly configured and secured. Allowing generative AI software to observe and transcribe a meeting could threaten the attorney-client privilege by exposing communications to a third party. Attorneys and clients should approach these tools with caution. When a meeting begins and a notification about the software’s transcription services appears, the parties should opt out unless they are certain that adequate security measures are in place. Additionally, if a client requires a meeting transcript, it is good practice for attorneys to ask to review the transcript for accuracy and completeness.

Notwithstanding these risks, generative AI is an extremely useful tool, and taking steps to avoid it altogether is not a realistic or profitable approach. At Steptoe & Johnson, we have implemented numerous internal security measures that allow for the safe and effective use of generative AI tools to help better serve our clients. Similar measures can be implemented on the client side of the relationship to help expand the scope of protection. Additionally, educating clients and their employees on these risks helps develop a strong culture of safe generative AI use.

Steptoe & Johnson’s attorneys are here to answer your questions and help you safely navigate the dynamic legal landscape concerning the use of generative AI. Please contact the author of this alert if you need counsel on the use of generative AI.
