What's in a name?

Dec 17, 2024
Builders' Blog

Historically, assistants were flesh-and-blood humans. As such, they did flesh-and-blood things. Classically, the low-level intern fetched coffee. Personal or executive assistants might handle physical tasks too: taking phone calls, making photocopies, or (at least as movies would have us believe) picking up gifts for the lovers and family members whom we forget about while working. At a higher level, more sophisticated assistants might manage a person's private calendar, ghostwrite emails, and make judgment-driven decisions about who matters enough to even get onto the calendar.

Delegative vs. Collaborative Assistance: How AI Gets the Job Done

AI legal assistants demand conceptual clarity. The "assistant" heuristic usefully conveys that this category of software has many capabilities. Like a (good) human assistant, an AI legal assistant can help with a collection of tasks. Some tasks the AI may be able to handle entirely through delegation. For others, the software serves as a collaborative partner, taking on portions of the work without completing the task outright. Let me provide a brief example of each:

  • A firm provided records of its cases to ChatGPT and asked it to draft copy for its website briefly describing the firm's best wins. The output was sufficient on the first pass. In this example, the tool completed the task entirely, operating like a fully competent assistant.
  • An immigration lawyer's summer intern used the 2nd Chair legal AI assistant to surface over 100 administrative decisions that were not indexed in Westlaw or LexisNexis. The AI drafted summaries of those decisions, but the intern didn't treat the summaries as dispositive. He used the decisions' internal citation structure to verify that the summaries were accurate and explored key facts relevant to his firm's work. Here the AI acted as a collaborating assistant that required managerial oversight.

In both cases, the AI operated as an assistant, much as a human assistant might take on work. But digital assistants and human assistants are also dissimilar. Your AI assistant can't bring you coffee. Your human assistant can't process information at thousands of words per second. Calling a legal AI tool an "assistant" can also constrain our usage. If we imagine the tool to have only "assistant" capabilities, that belief can limit our behavior with the tool to the behaviors we would expect of an assistant. In other words, if you think of the software as capable of doing only assistant things, you may only ever try to use it for assistant things. In doing so, you might miss a wealth of opportunities.

Bridging the Gap Between AI & Human Interaction

From a builder's perspective, naming the thing in the first place is hard. "Artificial intelligence" as a title risks anthropomorphizing a non-human entity. Further, assistants were formerly exclusive to the human domain; calling software an AI assistant moves it conceptually closer to humanness. Some examples ground this seemingly esoteric topic in daily user behavior. A person is more likely to interact with an AI that has a human name. Some research suggests that being "nice" when interacting with an AI can yield better results (though earlier research disagrees). And the original decisions to give digital assistants women's names call on the ugly legacy of women serving as assistants to men (to be reductive about the article cited), a legacy stretching back to the 1950s. Our team struggled extensively with the decision to name our AI tool David; it remained nameless for several months.

There is more than just naming that complicates the human-machine relationship. Our users have developed parasocial relationships with our AI, David. Previous research suggests that users can develop friendship-like behavior toward AI. Our users have humanized the tool to the point of using "he" instead of "it" to describe it. This was likely our own doing, by giving it the human name David. But our users also project a "personality" onto David: they describe him as sassy, funny, and rude. This outcome, that David is a sassy, funny assistant, is the end result of a series of imaginations and decisions. The esoteric decision to call the software an AI assistant, the choice to give it a human name like David, and users' own inclination to think of David as an assistant have all come together to create a theory of a product and a theory of use for that product.

Humanizing Your AI Legal Assistant: Naming David

Even the name David has a history; we have two stories here. One is that we are a little, David-sized company taking on some Goliaths in our industry. The second? Among major movie depictions of androids and general artificial intelligence, the majority carry non-human names, and fictional androids and AI shift toward mostly human names only after 2010. David, however, is a recurring name across films: first in the science-fiction horror film Screamers, and later in Steven Spielberg's turn-of-the-century movie A.I. Artificial Intelligence, where David is a boyish android, the first AI designed for (or mimicking) human love. Lastly, David is a recurring android character and villain in Ridley Scott's Alien franchise, appearing in both Prometheus and Alien: Covenant. In fact, David is one of the most recurring human names for AI across movies, video games, and TV.

Looking Forward: What's to Come for David?

I don't yet know whether David being humanized and thought of as an assistant is good or bad. At 2nd Chair, we hope to soon expand David's capabilities beyond textual documents (financial records, medical files, judicial opinions, and anything else lawyers touch) into multimodality, such that David can "see" video and images or "listen" to audio files.

Building these capabilities will be fast and easy, but doing so risks folding additional modes of human perception into the anthropomorphic AI assistant. I worry about the way people are inclined to interact differently with male-sounding and female-sounding computer voices based on gender stereotypes. I worry about the way people direct abuse toward chatbots, and that this abuse largely consists of men harassing "women" bots. I'm not worried that people will be mean to our David tool; it's not a human, it can take it. I do worry about the ways that 2nd Chair, and the industry more generally, think about, talk about, and so build tools. In thinking about, talking about, and ultimately building "assistants," I wonder whether we are creating paradigms that then manifest in the tools we build and reaffirm how we think about the tools in the first place.

This piece is just to get your brain going. It's an invitation to think about your legal AI tools with wonder and excitement. It's an invitation to play and experiment with how legal AI assistants are (dis)similar to human assistants, while encouraging caution and care to avoid promulgating stereotypes. And it's a peek through the window at the great care some companies put into their products and the degree of research synthesized into a product's creation.