Evaluating Legal AI Tools

Jun 7, 2024
Legal Tech · AI Tools

Introduction

Artificial intelligence provides the opportunity to evaluate, sort, and retrieve data in ways that weren't possible before. But for all its promise, there are risks involved in using AI, particularly in data-sensitive fields like law. To harness the power of these tools, it's important to understand their capabilities and what safeguards exist. Even in light of well-publicized cases of misuse in the legal industry, the potential of these tools is too great to ignore. It's therefore important that legal professionals understand how to evaluate these tools for use in their own practices. Products like 2nd Chair's David offer legal-specific solutions that can improve the accuracy and verifiability of AI outputs. Here are some questions that can clarify which solutions are best.

Questions to Ask

How does this tool align with my desired use case?

AI tools can have very different goals. Some, like ChatGPT, are generative AI tools: the goal of the model is to create something new. This can lead to "hallucinations," instances where the AI presents something it has made up as fact. That doesn't necessarily mean the software is broken; fabricating information is a known characteristic of generative AI.

In contrast, analytical AI is built to sort, evaluate, and review existing data. It can process huge amounts of information and extract useful items quickly, surfacing data that would have taken a human hours or days to find. The implications for discovery and due diligence are immense.

Some tools combine generative and analytical AI to create something that can analyze data and present it in an understandable format, such as narrative reports. These tools can be incredibly powerful, but they must be built with strict safeguards in place. For fields that work with sensitive information, like law or health care, general tools won't suffice; industry-specific tools provide the appropriate blend of utility and security. When you're considering an AI tool, it's important to understand what that tool was created to do and whether that goal aligns with your particular use case.

David was created for use in the legal community. It combines the best of generative and analytical AI models to produce outputs that are accurate, in-depth, and verifiable. David shows you exactly where it has pulled data from, giving you the ability to double-check and confirm accuracy at any point.

What data security and privacy measures has this tool put in place?

Not all AI tools offer the same level of security. AI that is intended to extract class notes from a lecture will have very different security needs from one that handles protected personal information like health records or Social Security numbers. Tools used in legal practice need to have the highest level of data security and privacy protections, including built-in failsafes that serve as a safety net in the event of a failure in primary security methods. Some questions to ask about these tools include:

  • How secure is the data storage method? Is that method designed to handle sensitive information?
  • What is the data retention policy?
  • Who has access to each individual user's data?
  • Most AI services contract with outside vendors for portions of their service. What are those vendors' data retention policies?
  • Are inputs used to further refine the AI model?
  • How is user behavior inside the tool being tracked and stored?
  • How is user behavior inside the tool being used for any additional AI purposes?
  • To what degree is data anonymized? To what degree is it kept confidential?

At 2nd Chair, we take our responsibility to safeguard your data very seriously. Here's a quick overview of the data security methods we have in place; if you'd like additional information, please check out our Trust Center.

How accurate is this tool? Does it hallucinate?

Hallucinations are a major issue for anyone who relies on generative AI models to perform legal research. They have caused real problems, including for members of the legal community who believed generative AI to be a research tool rather than a creative one. When choosing an AI solution, be certain you understand the specific goal of the tool. If you work with a general generative model, chances are good that hallucinations will creep in, and even if you're aware of this, it's almost impossible to know what is true and what is fiction without fact-checking every statement the AI makes. Consider how much accuracy matters in the particular work you are doing.

The problem is that most large-scale AI models cannot tell you exactly where they located the information they used. Although some models will provide you with web links, you still have to comb through the source page to find the information you're looking for.

David is designed to make it simple to know where information is coming from: it links directly to the place in your uploaded documents that it is pulling information from, which makes fact-checking easier and lowers the incidence of hallucinations. For any sentence the AI generates that draws on files you provided, David will show you the document and the page, paragraph, or sentence it used to create its answer. It may also synthesize from multiple files, or from multiple locations in a single file, to produce your answer. When accuracy matters, you can explore each of these underlying references quickly and easily.
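For readers curious about what "citation-backed" means under the hood, here is a minimal sketch of the general technique: every retrieved passage carries a pointer back to the document and paragraph it came from, so the answer can always be verified. This is an illustration only, not David's actual implementation; the file names, the naive keyword scoring, and the helper names are all hypothetical.

```python
# Minimal sketch of citation-backed retrieval: each result carries a
# reference to the source document and paragraph so it can be verified.
# Illustrative only; not David's implementation. Names are hypothetical.

from dataclasses import dataclass

@dataclass
class Citation:
    document: str   # which uploaded file the passage came from
    paragraph: int  # where in that file it appears
    text: str       # the passage itself, so it can be double-checked

def retrieve_with_citations(question: str, documents: dict[str, str],
                            top_k: int = 2) -> list[Citation]:
    """Rank paragraphs by naive keyword overlap with the question."""
    query_terms = set(question.lower().split())
    scored = []
    for name, body in documents.items():
        for i, para in enumerate(body.split("\n\n"), start=1):
            overlap = len(query_terms & set(para.lower().split()))
            if overlap:
                scored.append((overlap, Citation(name, i, para)))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [cite for _, cite in scored[:top_k]]

# Hypothetical uploaded files, keyed by filename.
docs = {
    "lease_agreement.txt": "The tenant shall pay rent monthly.\n\n"
                           "Either party may terminate with 60 days notice.",
    "deposition_smith.txt": "Q: When did you sign the lease?\n\n"
                            "A: I signed the lease agreement in March 2021.",
}

for cite in retrieve_with_citations("When was the lease signed?", docs):
    print(f"{cite.document}, paragraph {cite.paragraph}: {cite.text}")
```

Production systems use far more sophisticated ranking than keyword overlap, but the design principle is the same: no answer without a traceable source.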

Does the tool understand legal-specific language, or is it geared more for a general audience?

Although generative AI can take context into account, it typically defaults to the standard meaning of terms unless told otherwise (for example, by instructing it to use 'relief' in a legal sense rather than a general one). It takes a lot of time and energy to keep reminding an AI model that you are discussing matters in a legal sense rather than a general one. However, models can be fine-tuned for specific use cases: fine-tuning primes them with specialized information so that their output reflects domain-specific meaning. If you're using an AI tool in a legal setting, it makes sense to use one that is fine-tuned for that use case. We built David for the express purpose of working with documents in a legal context. Instead of continually fighting to remind a general AI model of the environment you're working in, you can work with a tool designed for the legal field.
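For the technically inclined, here is what that constant reminding looks like in practice with a general-purpose chat model: every session must be primed with a system prompt that re-establishes the legal context. This sketch uses the OpenAI chat API as a stand-in for any general model; the model name and prompt wording are examples, not recommendations.

```python
# Sketch of the workaround a general-purpose model requires: a system
# prompt re-establishing the legal context must accompany every session.
# Model name and prompt wording are illustrative examples.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LEGAL_CONTEXT = (
    "You are assisting a licensed attorney. Interpret terms of art in "
    "their legal sense: 'relief' means a court-ordered remedy, "
    "'consideration' means the bargained-for exchange in a contract, "
    "and so on. Flag any answer you cannot support with a source."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any general chat model behaves similarly
    messages=[
        # This context must be resent with every new conversation.
        {"role": "system", "content": LEGAL_CONTEXT},
        {"role": "user",
         "content": "What relief is available for breach of a commercial lease?"},
    ],
)
print(response.choices[0].message.content)
```

A legal-specific tool builds that context in, so there is no prompt to forget and no drift back to everyday meanings mid-conversation.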

Don't Rely on a General AI Product

Artificial intelligence is a leap forward in our ability to analyze and process data. However, major AI models are not created for use in specific fields. Especially in a highly regulated field like law, where practitioners are held to high ethical standards, these mainstream products are insufficient. It's not a matter of if they will provide incorrect information; it's a matter of when. Don't put your reputation at risk by using a product that wasn't made for the legal field. We'd love to talk with you about how David can bring the time savings and data-retrieval accuracy of AI into your legal practice. Contact us today to set up a live demo and let us show you how David can revolutionize the practice of law.