What Small Firms Can Learn from Judge Kevin Newsom

Sep 23, 2024
Small Firm | Legal Tech

A federal appellate judge recently and openly used artificial intelligence (AI) to assist in his legal analysis, specifically to determine the ordinary meaning of a term at issue in a case. Hon. Kevin C. Newsom of the United States Court of Appeals for the Eleventh Circuit used generative AI to help clarify the ordinary meaning of the word "landscaping" in an insurance coverage dispute. In a concurring opinion, Judge Newsom explained how and where he brought AI into the process, highlighting both its benefits and its risks. His approach offers a clear roadmap for incorporating AI tools into legal practice without compromising professional standards. For small law firms, it demonstrates that a thoughtful, bite-sized approach to AI adoption can streamline operations and enhance client service while preserving the integrity of the legal profession.

Case Background

In Snell v. United Specialty Insurance Co., the central issue was whether the installation of an in-ground trampoline qualified as "landscaping" under the terms of an insurance policy. The dispute arose after the insurance company denied coverage for injuries sustained on the trampoline, contending that the installation work did not count as landscaping within the meaning of the policy. After consulting traditional legal sources, such as dictionaries, Judge Newsom found the definitions lacking in clarity.

In his concurring opinion, Judge Newsom explored whether artificial intelligence, specifically large language models (LLMs) like ChatGPT and Google Bard, could assist in interpreting the ordinary meaning of the term "landscaping." His goal was to gain insight into how the term is commonly used in everyday language. He reasoned that these models might capture context and patterns of usage that conventional resources miss, making them a useful supplement to his legal analysis.

The Application of AI to Legal Analysis

Initial Skepticism and Exploration

Judge Newsom approached the use of AI in legal reasoning with initial skepticism, acknowledging that it might seem "absurd" to some. His curiosity and openness to new tools, however, led him to explore whether AI could help interpret language for legal purposes. After consulting traditional sources like dictionaries, he found them insufficient to resolve whether installing an in-ground trampoline could fall within the ordinary meaning of "landscaping." To dig further, he experimented with generative AI tools, specifically ChatGPT and Google Bard, submitting two key prompts: "What is the ordinary meaning of 'landscaping'?" and "Is installing an in-ground trampoline 'landscaping'?"
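For readers who want to try the same exercise, below is a minimal sketch of how a practitioner could pose those two prompts to a general-purpose LLM programmatically. It assumes API access through the OpenAI Python SDK and uses an example model name; Judge Newsom used the public chat interfaces, not an API, so this is purely illustrative.

```python
# Minimal sketch: posing the two "landscaping" prompts to an LLM via the
# OpenAI Python SDK. Illustrative only; the judge used the public chat
# interfaces, and the model name below is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "What is the ordinary meaning of 'landscaping'?",
    "Is installing an in-ground trampoline 'landscaping'?",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o",  # example model; substitute whatever your firm has vetted
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\n{response.choices[0].message.content}\n")
```

Running the same prompts against more than one model, as Judge Newsom did, is a simple way to see whether the answers converge before giving them any weight.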

Responses from AI Tools

Both AI models, ChatGPT and Google Bard, returned responses that aligned with Judge Newsom's own understanding of the term. Each suggested that installing an in-ground trampoline could reasonably be considered landscaping because it involves modifying the land. The responses confirmed that reading in-ground trampolines into the broader definition of landscaping was not far-fetched, reinforcing his legal reasoning and demonstrating that generative AI can usefully supplement traditional research by reflecting how language is commonly used.

Benefits Identified by Judge Newsom

Through his exploration, Judge Newsom identified several key benefits of using generative AI in legal analysis:

  • Reflects Common Speech: AI models, trained on vast amounts of real-world language data, capture how people typically use language. This capacity aligns well with the foundational legal principle of "ordinary meaning," which seeks to interpret statutory language as it would be understood by a reasonable person.
  • Contextual Understanding: Unlike a static dictionary definition, advanced AI models can analyze and understand the context in which a term is used, making them powerful tools for legal interpretation where nuance and subtlety are crucial.
  • Accessibility: These AI tools are not only readily available to judges and lawyers but also to the general public. This makes them a highly accessible resource alongside traditional legal tools, particularly useful for smaller firms with limited research capabilities.
  • Transparency: Because the exact prompts and responses can be disclosed and scrutinized, research conducted with AI tools can be more transparent than a judge's or lawyer's unstated intuition about how a word is used, and it supplies context that a static dictionary definition cannot.

Potential Pitfalls and Cautions from Judge Newsom

Judge Newsom was also careful to acknowledge the potential pitfalls of using AI in legal contexts:

  • Hallucinations: One major concern is that AI models can sometimes generate text that appears factual but is, in reality, incorrect or misleading. This phenomenon, known as hallucination, necessitates thorough human oversight to ensure accuracy.
  • Bias: AI models are trained on data that may not fully represent all demographics, which can lead to biased interpretations. Language use by underrepresented populations, for example, may not be adequately captured by AI, resulting in skewed or incomplete analysis.

Strategic Use

Judge Newsom highlighted the importance of using AI as a complementary tool rather than as a sole source of information. AI tools can support legal reasoning, but their output should always be cross-checked against traditional research methods. He also flagged the risk that users could manipulate prompts and outputs to fit a preferred outcome, an ethical challenge legal professionals must navigate. Careful consideration and clear guidelines are therefore essential so that AI, used strategically, enhances rather than distorts legal reasoning.

Practical Lessons for Small Law Firms

Judge Newsom's use of AI offers important lessons for small law firms considering whether advanced technologies like artificial intelligence can create a competitive advantage, reduce the cost of delivering legal services, and improve client outcomes. His experience demonstrates that AI can be a powerful tool for enhancing the effectiveness of legal practice without compromising professional standards.

One clear takeaway from Judge Newsom's experience is that AI is a logical fit when the common usage of language is a critical issue. In cases where the meaning of an everyday term is in dispute, AI tools can supply context-based evidence of how that term is actually used, helping a small firm represent clients more effectively. That evidence can also challenge the traditional sources opposing counsel relies on, giving your firm a strategic edge. By leveraging AI to offer a deeper understanding of language in a legal context, small law firms can present arguments that are both more aligned with modern usage and more persuasive.

For small law firms, especially when facing opponents who employ a strategy of overwhelming you with document-heavy productions, AI can be an invaluable resource. Legal AI assistants can quickly sift through large volumes of documents to identify critical evidence, cutting through the noise and helping you focus on the most important elements of a case. This not only allows you to feel confident that you've reviewed everything produced, but it also helps you develop a more informed and strategic approach for your client. With the ability to review vast amounts of data efficiently, small firms can confidently stand toe-to-toe with larger opponents, delivering superior legal strategies and outcomes for their clients.
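To make the idea of "cutting through the noise" concrete, here is an illustrative sketch of ranking produced documents by semantic similarity to an issue statement using the open-source sentence-transformers library. The file names and issue text are hypothetical, and this is not how any particular legal AI product works internally; it simply shows the general pattern of surfacing the most relevant material first.

```python
# Illustrative sketch: rank produced documents by semantic similarity to an
# issue statement so reviewers start with the most relevant material.
# File names and text are hypothetical.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

issue = "Did the contractor's in-ground trampoline installation count as landscaping?"

documents = {
    "invoice_2019.txt": "Invoice for retaining wall, sod, and in-ground trampoline pit excavation.",
    "policy_excerpt.txt": "Coverage applies to landscaping operations performed by the insured.",
    "email_chain.txt": "Scheduling note about the office holiday party.",
}

issue_vec = model.encode(issue, convert_to_tensor=True)
doc_vecs = model.encode(list(documents.values()), convert_to_tensor=True)
scores = util.cos_sim(issue_vec, doc_vecs)[0]

# Highest-scoring documents are the likeliest to matter for the issue.
ranked = sorted(zip(documents, scores.tolist()), key=lambda x: x[1], reverse=True)
for name, score in ranked:
    print(f"{score:.2f}  {name}")
```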

Addressing Concerns about AI in Legal Practice

While AI offers significant advantages to legal practice, particularly for small firms, it's essential to address concerns related to confidentiality, privacy, and the accuracy of AI outputs. Judge Kevin Newsom's experiment with AI reflects a careful approach to these issues, providing guidance for lawyers who are considering incorporating AI into their workflows.

Confidentiality and Privacy

One of the primary concerns with using AI in legal practice is maintaining client confidentiality and ensuring privacy. In Judge Newsom's case, his use of publicly available AI models like ChatGPT and Google Bard did not involve sensitive or confidential data, making it a safe experiment. However, for law firms, the stakes are higher when handling client information. It's critical to use legal-specific AI tools that are designed with confidentiality and privacy in mind. These tools must comply with strict data protection protocols to ensure that sensitive information is never compromised.

Best practices for integrating AI into a law firm's workflow include ensuring that AI providers have robust security measures in place, such as end-to-end encryption and secure data storage. Many AI tools tailored for legal practice also include strict confidentiality protections, enabling lawyers to benefit from advanced technologies without violating client trust or ethical responsibilities. By prioritizing tools that meet these standards, firms can confidently use AI to enhance their legal work without risking data security.

Accuracy and Bias

Bias is another important consideration when using AI in legal practice. AI models can produce skewed interpretations if they are trained on data that doesn't fully represent all demographics or legal contexts. To mitigate this, small-firm lawyers should look for AI solutions that are purpose-built for legal work and allow them to upload their own documents for analysis. By supplying your own documents as context, these tools can ground the AI's understanding in the specifics of your case, producing more accurate and relevant outputs.
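The sketch below shows the basic idea of grounding an answer in your own material: the question is answered only from a supplied excerpt rather than from the model's general training data. It assumes API access via the OpenAI Python SDK; the excerpt, question, and model name are hypothetical, and production legal tools layer retrieval, citation tracking, and confidentiality safeguards on top of this simple pattern.

```python
# Minimal sketch: grounding a question in an excerpt from your own documents
# so the answer reflects the case record. Excerpt, question, and model name
# are hypothetical examples.
from openai import OpenAI

client = OpenAI()

excerpt = (
    "Exhibit 4, p. 2: 'Contractor agrees to perform landscaping services, "
    "including grading, sod installation, and placement of outdoor fixtures.'"
)
question = "Does the contract language cover installing an in-ground trampoline?"

response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[
        {"role": "system", "content": "Answer using only the provided excerpt, and quote the language you rely on."},
        {"role": "user", "content": f"Excerpt:\n{excerpt}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```

Asking the model to quote the language it relies on makes the next step, human verification against the source document, much faster.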

Human review remains critical, but some tools are designed to make that verification easier. For example, David AI from 2nd Chair doesn't just give you answers; it shows exactly where the information comes from within your documents, so you can verify every insight with confidence. Unlike tools that simply provide a link or a reference point, David AI highlights the exact location in the document, letting you quickly check both the accuracy and the context of every response. Tools like David AI are built to reduce the time it takes to confirm that an AI-generated answer is accurate.

Conclusion

A growing number of judges are not only using AI but also openly discussing how it has assisted their analysis. This trend is a clear signal to small law firms that the time to begin integrating these tools is now, in order to stay aligned with how the law is being adjudicated. By incorporating AI into their workflows, small firms can confidently stand toe-to-toe with larger opponents while maintaining the highest standards of professionalism and client care.

You can try David AI by 2nd Chair for free—no credit card required. Want to learn more about how David helps small law firms? Click to explore the benefits tailored to your practice.