Vol. 60 No. 6

Trial Magazine

Theme Article

Terms to Know

Explore AI technology in more depth with this glossary of common terms and concepts.

June 2024

Chatbot: This type of computer program, found on many law firm websites, simulates conversation with users through voice or text interactions. While not all chatbots use AI, these programs increasingly rely on generative AI and conversational AI techniques such as natural language processing to produce more sophisticated responses. The most evolved versions are known as “virtual agents” and can take further action based on user requests. (https://tinyurl.com/mv4cz4h7; https://tinyurl.com/n76v33dw)
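
To make the distinction concrete, here is a minimal sketch of a chatbot that does not use AI at all: it simply matches keywords against canned replies. The keywords and responses are hypothetical; a modern law firm chatbot would layer natural language processing or generative AI on top of logic like this.

```python
# Minimal rule-based chatbot (no AI involved): it matches keywords in the
# user's message against canned replies. Keywords and replies are placeholders.
RESPONSES = {
    "appointment": "I can help schedule a consultation. What day works for you?",
    "fees": "Initial consultations are free; an attorney will explain our fee structure.",
    "hours": "Our office is open Monday through Friday, 9 a.m. to 5 p.m.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "I'm not sure I understand. Could you rephrase that?"

print(reply("What are your hours?"))  # prints the office-hours reply
```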

Data mining: Also known as knowledge discovery in data (KDD), data mining entails organizing and filtering large datasets to uncover patterns. Data can be organized via rules-based methods for finding associations, neural network processing, decision trees, or other algorithms. Patterns in data are then used to identify reasons behind successes and failures, predict future outcomes, and recommend courses of action. (https://tinyurl.com/3f9kxw5p; https://tinyurl.com/ms3h9prj)
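
As a toy illustration of the association-finding approach, the sketch below counts which pairs of attributes appear together most often across a handful of hypothetical case records; real data mining tools apply the same idea at far larger scale.

```python
# Toy association mining: count which pairs of attributes co-occur most often
# across records. The case records below are hypothetical placeholders.
from collections import Counter
from itertools import combinations

records = [
    {"rear-end collision", "soft-tissue injury", "settled"},
    {"rear-end collision", "soft-tissue injury", "tried"},
    {"slip and fall", "fracture", "settled"},
    {"rear-end collision", "fracture", "settled"},
]

pair_counts = Counter()
for record in records:
    for pair in combinations(sorted(record), 2):
        pair_counts[pair] += 1

# The most frequent pairs point to patterns worth investigating further.
for pair, count in pair_counts.most_common(3):
    print(pair, count)
```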

Deepfakes: Images, videos, or audio that have been altered or manipulated by AI to show a person doing or saying something that they did not actually do or say. (https://tinyurl.com/2ypf5r2t)

Fine-tuning: In this process, a large language model that has already been trained is given many additional examples to improve its performance on particular tasks or domains. Google’s Med-PaLM 2, for example, was created by fine-tuning Google’s general large language model on specific inputs from public medical databases. (https://tinyurl.com/pata2ber; https://tinyurl.com/42hrftkw)
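
The sketch below shows the idea in miniature, assuming scikit-learn and using a simple linear classifier rather than a real large language model: a model trained on general data is trained further on a small, domain-specific dataset, so its existing weights are adjusted rather than learned from scratch. All data here is randomly generated for illustration.

```python
# Fine-tuning in miniature: keep training an already-trained model on a small,
# task-specific dataset so its existing weights are adjusted, not relearned.
# (A real LLM fine-tune follows this pattern at vastly larger scale.)
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# "Pre-training": fit on a large, general-purpose dataset (randomly generated here).
X_general = rng.normal(size=(1000, 5))
y_general = (X_general[:, 0] > 0).astype(int)
model = SGDClassifier(random_state=0)
model.partial_fit(X_general, y_general, classes=[0, 1])

# "Fine-tuning": a few extra passes over a small domain-specific dataset,
# starting from the weights learned above.
X_domain = rng.normal(size=(50, 5)) + 1.0
y_domain = (X_domain[:, 1] > 1.0).astype(int)
for _ in range(5):
    model.partial_fit(X_domain, y_domain)

print(model.score(X_domain, y_domain))  # accuracy on the domain-specific examples
```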

Generative AI: This type of AI goes beyond making predictions about a specific dataset—it is trained to create new content. It uses vast amounts of training data to recognize patterns and then generate its own text, images, video, and more. (https://tinyurl.com/58xm97am; https://tinyurl.com/murca8z9)
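
As a rough analogy for “learn the patterns, then generate new content,” the sketch below builds a table of which word follows which from a few lines of placeholder text and then samples new text from that table. Real generative AI systems use neural networks trained on vastly more data, but the learn-then-generate loop is the same in spirit.

```python
# Toy "generative" model: learn which word tends to follow which (a bigram
# table) from placeholder training text, then generate new text by sampling.
import random
from collections import defaultdict

training_text = (
    "the expert testified that the device failed "
    "the device failed because the design was defective "
    "the expert testified that the design was defective"
)

words = training_text.split()
bigrams = defaultdict(list)
for current_word, next_word in zip(words, words[1:]):
    bigrams[current_word].append(next_word)

random.seed(1)
word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(bigrams.get(word, words))  # pick a likely next word
    output.append(word)

print(" ".join(output))  # new text that mimics the training text's patterns
```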

Deep learning: Modeled after the human brain’s processes, this is a type of machine learning that uses multilayered neural networks, known as deep neural networks, to refine and optimize decisions. It is used in products ranging from voice-activated TV remotes to self-driving cars. (https://tinyurl.com/2vmxxsne)

Guardrails: To promote the responsible and ethical use of AI, these safeguards are intended to prevent AI from creating harm. Guardrails may be technical (controls baked into the AI system); policy-based (such as best practices for ethical use or security frameworks); or legal (laws and regulations dictating how AI can be used or prescribing limits on AI). (https://tinyurl.com/5ym7mzs3)

Hallucinations/“Hallucitations”: AI hallucinations refer to large language model responses that are nonsensical or inaccurate because the model perceives patterns that do not actually exist. “Hallucitations,” a term coined by University of Southern California Professor Kate Crawford, refers specifically to sources or citations that are “made up” by AI. (https://tinyurl.com/28c85me4; https://tinyurl.com/2swhasuk)

Prompt: The natural language text or “input” that the user enters into a generative AI application to generate a result or “output.” (https://tinyurl.com/3c72w3ca)

F-score: This metric is used to evaluate how well an algorithm classifies data into two categories (such as “yes-no”). The F-score combines the algorithm’s precision (how many of the values it classifies as “yes” are actually “yes”) and its recall (how many of the dataset’s actual “yes” values it classifies as “yes”). An F-score of one is perfect, while an F-score of zero indicates the algorithm has failed. (https://tinyurl.com/yrfk7anh)
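
A short worked example, using made-up predictions and the standard F1 formula (the harmonic mean of precision and recall):

```python
# F-score from scratch for a made-up yes/no classification.
actual    = ["yes", "yes", "yes", "no", "no", "no", "yes", "no"]
predicted = ["yes", "no",  "yes", "no", "yes", "no", "yes", "no"]

true_pos  = sum(a == "yes" and p == "yes" for a, p in zip(actual, predicted))
false_pos = sum(a == "no" and p == "yes" for a, p in zip(actual, predicted))
false_neg = sum(a == "yes" and p == "no" for a, p in zip(actual, predicted))

precision = true_pos / (true_pos + false_pos)   # 3 of 4 predicted "yes" are correct
recall    = true_pos / (true_pos + false_neg)   # 3 of 4 actual "yes" were found
f_score   = 2 * precision * recall / (precision + recall)
print(precision, recall, f_score)               # 0.75 0.75 0.75
```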

Retrieval-augmented generation: This process “grounds” the responses generated by large language models in external, verifiable facts to supplement their training data. The goal is to reduce the incidence of hallucinations by providing the model with current and reliable information on which to base its responses. (https://tinyurl.com/bdhvrhkc)
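
A minimal sketch of the retrieve-then-generate pattern, using simple word-overlap scoring in place of a real search index and stopping short of the actual model call; the documents and question are invented.

```python
# Retrieval-augmented generation, minimally: score documents against the user's
# question, then place the best matches into the prompt so the model answers
# from supplied facts rather than memory alone. Documents are invented.
documents = [
    "Deposition of Dr. Smith taken March 3, 2024; opined the fracture was traumatic.",
    "Police report: rear-end collision on I-40, defendant cited for following too closely.",
    "Plaintiff's medical bills total $48,300 through February 2024.",
]

def overlap_score(question: str, doc: str) -> int:
    return len(set(question.lower().split()) & set(doc.lower().split()))

question = "What do the medical bills total?"
top_docs = sorted(documents, key=lambda d: overlap_score(question, d), reverse=True)[:2]

prompt = (
    "Answer using only the context below.\n\n"
    "Context:\n" + "\n".join(top_docs) +
    "\n\nQuestion: " + question
)
print(prompt)  # this grounded prompt would then be sent to a large language model
```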

Foundation models: These machine-learning models are trained on large amounts and varied types of data and can be fine-tuned for a specific application. They form the basis of many generative AI systems, such as large language models. (https://tinyurl.com/5cduccth)

Large language models: This class of foundation models—which includes OpenAI’s GPT-3 and GPT-4, Meta’s Llama and RoBERTa, and Google’s PaLM and BERT models—is trained on a large amount of data applicable to a variety of use cases and tasks. Large language models can understand and generate text, infer from context, translate text, summarize text, and more. (https://tinyurl.com/mvs92xvn)

Machine learning: Machine learning uses algorithms and data to “learn,” much the way humans do. It involves applying statistical methods to train algorithms to classify and uncover patterns in data and improve accuracy. Machine learning includes supervised learning (labeled datasets are fed into a model for training purposes); unsupervised learning (algorithms analyze unlabeled datasets to identify patterns); and reinforcement learning (an end goal is provided, and the model, through a trial-and-error approach, develops a sequence of actions). (https://tinyurl.com/3b7feabt; https://tinyurl.com/3kps73tx)
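
A minimal supervised-learning sketch, assuming scikit-learn is installed; the data points and labels are invented for illustration.

```python
# Supervised learning: train on labeled examples, then predict labels for new data.
from sklearn.linear_model import LogisticRegression

# Each row is [hours of physical therapy per week, weeks since injury]; the
# labels are invented for illustration.
X_train = [[2, 1], [10, 8], [3, 2], [12, 10], [1, 1], [9, 7]]
y_train = ["not recovered", "recovered", "not recovered",
           "recovered", "not recovered", "recovered"]

model = LogisticRegression()
model.fit(X_train, y_train)              # the model "learns" from the labeled data
print(model.predict([[11, 9], [2, 2]]))  # likely ['recovered', 'not recovered']
```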

Neural network: This type of machine-learning program is modeled on the human brain and includes programs such as Google’s search algorithm. It uses layers of nodes (or artificial neurons) to mimic the way neurons in a human brain recognize, process, and categorize data—but in a fraction of the time. Neural networks can recognize patterns and “learn” and become more accurate over time. (https://tinyurl.com/8drx3vvh)
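
The sketch below passes a single input through a tiny two-layer network to show the node-and-weight structure; the weights are random, so the output is meaningless until training adjusts them.

```python
# Forward pass through a tiny neural network: each layer multiplies its inputs
# by weights, adds a bias, and applies a nonlinearity. The weights are random,
# so the output is meaningless until training adjusts them.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 3.0])                  # one input with three features

W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # layer 1: 3 inputs -> 4 hidden nodes
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # layer 2: 4 hidden nodes -> 1 output

hidden = np.maximum(0, x @ W1 + b1)             # ReLU activation at the hidden layer
output = 1 / (1 + np.exp(-(hidden @ W2 + b2)))  # sigmoid squashes the output to 0..1
print(output)
```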

Natural language processing: The goal of natural language processing is for computers—through tools such as speech recognition and natural language recognition—to read and understand human language and generate text and speech. It is used in applications like chatbots, translation apps, and summarization tools. (https://tinyurl.com/292z6cpa)

Weights: These are used in neural networks to signify how important a variable is. Variables with larger weights carry more influence over the network’s decision-making. Weights can be adjusted to increase the accuracy of an AI system. (https://tinyurl.com/ywutzteu; https://tinyurl.com/8drx3vvh)
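
A small illustration of the same point, with arbitrary numbers: turning up one variable’s weight changes how much that variable sways the output.

```python
# The same inputs produce a different result when one weight is turned up,
# because that variable now carries more influence. Numbers are arbitrary.
import numpy as np

inputs = np.array([1.0, 0.5, 2.0])
weights_before = np.array([0.1, 0.3, 0.2])
weights_after  = np.array([0.9, 0.3, 0.2])  # first variable now weighted heavily

print(inputs @ weights_before)  # about 0.65
print(inputs @ weights_after)   # about 1.45; the first variable now dominates
```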

Predictive analytics: This technology takes a statistical look at current and historical data patterns and makes predictions about future outcomes and performance. Three techniques include decision trees; regression (identifying a formula to represent the relationship between all data inputs); and neural networks. (https://tinyurl.com/5d92bz77)
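
A minimal regression sketch, fitting a straight line to invented historical figures and projecting the next year’s value:

```python
# Predictive analytics via regression: fit a formula to historical data, then
# use it to project a future value. The yearly filing counts are invented.
import numpy as np

years   = np.array([2019, 2020, 2021, 2022, 2023])
filings = np.array([120, 135, 150, 170, 185])

slope, intercept = np.polyfit(years, filings, deg=1)  # fit a straight line
predicted_2024 = slope * 2024 + intercept
print(round(predicted_2024))                          # projected filings for 2024
```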

Turing test: This test determines whether a machine demonstrates human intelligence. Conceived in 1950 by mathematician Alan Turing, it tests whether a computer program can trick a human into thinking it is also human. (https://tinyurl.com/6nxt6er6)

Traditional AI: Also known as “narrow AI” or “weak AI,” this type of artificial intelligence is limited to training models and algorithms to perform specific tasks by learning from data and then making decisions or predictions based on that data. It cannot generate new content. Examples of this type of AI include computer chess, voice assistants such as Siri or Alexa, and internet search engines. (https://tinyurl.com/5634cd58; https://tinyurl.com/586jckwp)

Prompt engineering: This process involves refining prompts to guide generative AI systems toward better outputs. By entering more detailed instructions, the user prompts the system to create higher quality and more relevant responses; several different techniques are available for achieving this. (https://tinyurl.com/3s3cj9sb; https://tinyurl.com/yuvz2zm2)
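
As an illustration, the two prompts below ask for the same output; only the second spells out a role, source material, format, and constraints. Both prompts are hypothetical.

```python
# Two ways to ask for the same output; the second applies basic prompt
# engineering (role, context, format, constraints). Both prompts are hypothetical.
basic_prompt = "Summarize this deposition."

engineered_prompt = (
    "You are a paralegal assisting a plaintiff's attorney.\n"
    "Summarize the attached deposition transcript in no more than 300 words.\n"
    "Group the summary under three headings: key admissions, inconsistencies, "
    "and topics for follow-up.\n"
    "For each key admission, quote the transcript with page and line numbers.\n"
    "If information is missing from the transcript, say so rather than guessing."
)
print(engineered_prompt)
```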


EDUCATING JUDGES ON AI

The National Civil Justice Institute’s (NCJI) 32nd annual Judges Forum will address “AI and the Courts” on July 20 in Nashville during AAJ’s Annual Convention. The program will focus on the basics of AI in the courtroom, ChatGPT for judges, evidentiary concerns, ethical considerations, and the future of AI in the law. The program will be livestreamed for state trial judges. For more information, contact info@ncji.org.


AAJ RESOURCES

AAJ Education webinars

Science & Technology Section (justice.org/sections)

Artificial Intelligence Litigation Group (justice.org/litigationgroups)

Electronic Discovery Litigation Group (justice.org/litigationgroups)

“Electronic Discovery” Litigation Packet (justice.org/litigationpackets)