The growing adoption of AI in the legal field is hardly surprising: its ability to streamline processes, enhance efficiency, and accelerate workflows contributes to firm profitability, more vacation time, and less stress. Tools that began as aids for school assignments or basic essays have evolved into highly advanced systems capable of handling complex tasks. Recent news of AI applications such as ChatGPT passing the bar exam (reportedly in the 90th percentile) highlights AI's immense potential and transformative power. This advancement signals a future where AI can drive significant, large-scale change across the legal industry.
Given these remarkable advancements, it is no surprise that lawyers are not the only ones affected by the changing landscape. Clients are eager to leverage these tools to address their own legal challenges. Navigating the legal system and deciphering complex legal jargon can be overwhelming, often leaving clients struggling to fully grasp what their lawyer is telling them. The allure of AI, which can seem to simplify and clarify these matters with impressive accuracy, is understandably tempting. It can offer clients a sense of control and understanding over their legal issues: it can rewrite jargon in plain English, and it can translate between languages in a multilingual household.
But despite its benefits, using AI in legal contexts is not without risk. Below, we detail some of the main risks clients may be taking on without their lawyer's awareness:
The core of the attorney-client relationship is the attorney-client privilege, the doctrine that legally protects conversations between lawyers and their clients. Clients may inadvertently forfeit this protection when they upload legal documents, or reveal privileged information, to AI platforms such as ChatGPT. These platforms are not bound by the same confidentiality rules as attorneys, potentially exposing sensitive information to mishandling or disclosure. AI systems often handle user data in complex ways that are not fully transparent to users. While established platforms may claim to protect privacy, they might store information temporarily or long-term, increasing the chances of unauthorized access, cyber-attacks, or unintended leaks. For instance, the New Jersey Task Force on Artificial Intelligence, assembled under the New Jersey Bar Association, states that "[p]ublic AI tools generally unsuitable for sensitive or private information include, but are not limited to, ChatGPT, Gemini and Claude." While this guidance is directed at lawyers, the underlying logic holds for clients as well: public AI tools, by design, do not protect the information entered into them in ways that meet the requirements of the profession.
AI platforms often use user inputs to improve their models. While some services, such as ChatGPT, may claim they do not use certain data for training without permission, the specifics of data management and processing are often unclear. Users might not realize that by using AI tools, they are agreeing to terms that allow unexpected uses of their data, including their prompts, the AI's outputs, and any files they upload to the system. Some platforms gather information to improve model accuracy or to generate usage statistics, potentially compromising confidentiality. If an AI provider uses a client's data, that data could become part of the platform's broader training material, potentially making it accessible to future users in some form. This could lead to unintended exposure of confidential legal strategies, personal details, or sensitive business information.
Legal documents use specific, technical language that requires expert interpretation; the law is full of terms of art that carry meanings unfamiliar to everyday readers. While AI can process general language, a public AI tool may or may not have a sound understanding of legal concepts, statutes, and court decisions. More important, a non-lawyer reading an AI output may misinterpret those terms of art for lack of legal knowledge. An AI summary of a legal text may omit crucial details or offer an inaccurate interpretation; even an accurate summary may carry nuances that are lost on the reader. This can lead clients to misunderstand legal materials and take potentially harmful actions. For instance, when AI is used to decode a complex contract, important clauses might be overlooked or misunderstood. If an AI portrays an agreement as less restrictive than it truly is, a client could unknowingly breach the contract, risking legal trouble.
The sections above highlight the importance of understanding how clients are using AI tools, even if you or your firm is not. It may be unrealistic to prevent clients from using AI entirely, because AI is now deeply integrated into daily life. Instead, clients must be guided on how to use AI cautiously, particularly when handling sensitive information. Advising them not to upload confidential data is essential, but doing so also requires helping them understand what qualifies as confidential. As lawyers, it is vital to educate clients on distinguishing between confidential and non-confidential information to ensure the proper use of AI tools.
Of course, some issues go beyond confidentiality. Certain AI companies' user agreements can leave clients without ownership of their own intellectual property. Many AI tools, including ChatGPT, have terms of service that may grant the platform broad rights over content users upload or generate, potentially compromising ownership of proprietary information. In some cases, users may unknowingly grant the AI provider a license to use, modify, or store their data, jeopardizing their control over intellectual property or trade secrets. AI-generated content can also create ambiguity around ownership, especially when the AI draws on prior data to generate derivative works, opening up the risk of IP infringement or third-party claims.
Businesses relying on trade secrets face a related risk: uploading confidential information to AI platforms could undermine their ability to protect those secrets at all. Clients must be vigilant in reviewing AI user agreements, paying close attention to data-use, IP-ownership, and data-retention clauses. For intellectual property matters, it is essential to consult legal professionals and establish internal policies on AI use to safeguard sensitive information and keep legal protections intact.
Consequently, lawyers should guide clients in using AI, explaining when and how to leverage these technologies appropriately within the legal context. This approach lets clients benefit from AI's efficiencies while still relying on the irreplaceable human insight and experience of legal professionals. AI should supplement, not replace, the legal process and legal advice: while AI tools can be useful for certain tasks, they cannot substitute for the expertise, nuance, and judgment an attorney brings to complex legal issues.