10 AI Terms Every Legal Team Should Know
Turn Buzzwords into Building Blocks
Executive Summary
Artificial Intelligence (AI) is transforming how legal departments operate. From contract drafting to risk forecasting, AI now shapes how legal work gets done. However, many legal teams feel overwhelmed by the terminology, unsure of what is real, what is hype, and what is actionable.
This whitepaper breaks down ten foundational AI concepts that every general counsel and law firm leader must understand. With definitions, examples, and legal-specific use cases, our goal is to demystify the language of AI and equip legal professionals to make informed, confident decisions.
Whether you’re procuring AI tools, overseeing governance, or advising the board, this guide will help you lead with clarity.
1. Large Language Model (LLM)
Definition: A Large Language Model is a type of AI trained on massive volumes of text to understand, generate, and predict human language. Examples include OpenAI’s GPT-4 (the model behind ChatGPT) and Anthropic’s Claude.
How It Works: Trained using transformer architecture (see below), LLMs process billions of parameters and can answer questions, summarize documents, translate language, or generate human-like text.
Transformer architecture is a type of deep learning architecture designed to process and generate sequences (especially language) using a mechanism called self-attention. Self-attention lets the model weigh the importance of each word in a sentence relative to all the others, giving it better context awareness than earlier architectures such as recurrent neural networks (RNNs) and long short-term memory networks (LSTMs).
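For readers curious about the mechanics, here is a minimal sketch of self-attention in Python. The three toy word vectors are invented for illustration, and real transformers learn separate query, key, and value projections rather than reusing the word vectors directly:

```python
import math

# Minimal self-attention over 3 toy word vectors (2 dimensions each).
words = {"the": [1.0, 0.0], "contract": [0.0, 1.0], "terminates": [0.2, 0.9]}
vectors = list(words.values())

def softmax(xs):
    """Turn raw similarity scores into weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys_values):
    # Weight every word by its similarity (dot product) to the query...
    weights = softmax([sum(q * k for q, k in zip(query, kv)) for kv in keys_values])
    # ...then blend the word vectors using those weights.
    return [sum(w * kv[i] for w, kv in zip(weights, keys_values))
            for i in range(len(query))]

# "contract" attends mostly to itself and to "terminates" (a similar vector),
# and only weakly to "the".
print(attend(words["contract"], vectors))
```

The output vector leans toward the "contract"/"terminates" direction, showing how attention blends context rather than treating each word in isolation.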
Why It Matters for Legal:
Powers tools like legal chatbots, contract summarizers, and knowledge search engines
Reduces research time by up to 90% in some use cases (McKinsey, 2023)
Use Case: A legal team uses GPT-4 to automatically draft first-pass NDAs and summarize 50-page service contracts into five bullet points.
Important Note: Sam Altman, CEO of OpenAI, has observed that current LLMs are strongest at coding and basic reasoning tasks such as conversation. He anticipates that the next big breakthrough will come from far more capable systems, sometimes called Artificial Superintelligence, which he expects to significantly advance mathematics and open new branches of science.
2. Retrieval-Augmented Generation (RAG)
Definition: A framework that enhances LLMs by letting them “retrieve” relevant, factual documents and generate answers based on that specific information.
How It Works: Combines vector search (finding relevant text chunks) with generation (producing a coherent response).
Why It Matters for Legal:
Allows AI to generate answers grounded in your firm’s own data (like policies, contracts, or precedents)
Reduces hallucination risk and improves compliance confidence
Use Case: A GC uses a RAG-powered chatbot to ask, “What’s our current retention policy for terminated employees?” and receives an answer sourced from their HR policies.
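A minimal sketch of the retrieve-then-generate loop, with simple keyword overlap standing in for real vector search (the policy texts, the scoring, and the answer template are all invented for illustration):

```python
# Toy RAG loop: retrieve the most relevant policy, then build an answer from it.
# Real systems use vector search over embeddings; keyword overlap stands in here.

POLICIES = {
    "retention": "Records for terminated employees are retained for 7 years.",
    "leave": "Employees accrue 20 days of annual leave per year.",
}

def retrieve(question: str) -> str:
    """Return the policy whose words overlap most with the question."""
    q_words = set(question.lower().split())
    return max(POLICIES.values(),
               key=lambda text: len(q_words & set(text.lower().split())))

def answer(question: str) -> str:
    """'Generate' a grounded answer by citing the retrieved source text."""
    return f"Per policy: {retrieve(question)}"

print(answer("What is our retention policy for terminated employees?"))
```

Because the answer is built from retrieved source text, it can be traced back to the underlying document, which is exactly the compliance benefit RAG offers.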
3. Prompt Engineering
Definition: The practice of crafting effective prompts (questions or instructions) to guide the behavior of an AI model.
How It Works: Prompts influence what information is retrieved, how tone is shaped, and the structure of the response.
Why It Matters for Legal:
The quality of AI outputs depends on how you ask
Lawyers can control tone, structure, and risk exposure through well-engineered prompts
Prompt engineering is everything when working with an LLM. The best prompts are structured, specific, and context-aware. For legal professionals wanting clarity, compliance, and consistency, we recommend the GOLDEN framework:
G — Goal
What do you want the model to do?
“Summarize this contract in plain English…”
O — Output Format
How should the response be structured?
“…in a 5-bullet executive summary with key risks highlighted.”
L — Legal Context
What’s the setting, audience, or legal issue?
“…for a general counsel reviewing a SaaS agreement under Australian law.”
D — Data Provided
Include source material or specify where to find it.
“Use the contract text pasted below.”
E — Examples
Show what good looks like.
“Example of desired output: [Insert format or sample].”
N — Nuance & Constraints
Add tone, exclusions, or compliance needs.
“Avoid speculative language. Do not include boilerplate summaries.”
Sample Legal Prompt Using the GOLDEN Framework:
Prompt:
”You are an experienced legal analyst. Summarize the following contract in plain English, focusing on key obligations, risk clauses, and renewal terms. Provide the output as a 5-bullet executive summary suitable for a general counsel. The contract is under New South Wales law. Use only the text provided. Do not add interpretation or legal advice. Keep it under 300 words.”
+ Insert Contract
Tip: Maintain a prompt library for everyday legal tasks (NDAs, IP analysis, litigation summaries).
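A prompt library can be as simple as a template function. The sketch below assembles a GOLDEN-structured prompt from its six parts (the helper name and field labels are invented for illustration):

```python
# Hypothetical helper that assembles a GOLDEN-structured prompt from its six parts.

def golden_prompt(goal, output_format, legal_context, data, examples, nuance):
    parts = [
        f"Goal: {goal}",
        f"Output format: {output_format}",
        f"Legal context: {legal_context}",
        f"Data: {data}",
        f"Examples: {examples}",
        f"Nuance & constraints: {nuance}",
    ]
    return "\n".join(parts)

prompt = golden_prompt(
    goal="Summarize this contract in plain English.",
    output_format="A 5-bullet executive summary with key risks highlighted.",
    legal_context="General counsel reviewing a SaaS agreement under Australian law.",
    data="Use only the contract text pasted below.",
    examples="Example of desired output: [insert sample].",
    nuance="Avoid speculative language; no boilerplate summaries.",
)
print(prompt)
```

Storing prompts as parameterized templates like this keeps everyday tasks (NDAs, IP analysis, litigation summaries) consistent across the team.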
4. Hallucination
Definition: When an AI model generates confident but incorrect or fabricated information.
How It Works: LLMs predict the most probable next word (not necessarily the factually correct one). This can lead to errors or invented references.
Why It Matters for Legal:
AI hallucination could introduce serious risks in contracts, advice, or compliance interpretations
Legal teams must verify AI-generated outputs
Use Case: A legal assistant uses AI to summarize case law and cites a judgment that doesn’t exist. This is a risky use of AI: the prompt set no controls, and the output was relied on without verification.
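The mechanism behind hallucination can be sketched in a few lines. A toy model trained on two sentences always picks the most frequent next word, which is plausible regardless of whether it is true (the corpus and helper are invented for illustration):

```python
from collections import Counter, defaultdict

# Toy next-word predictor: counts which word follows which in a tiny corpus.
corpus = "the court held the claim failed . the court held the appeal failed ."
bigrams = defaultdict(Counter)
words = corpus.split()
for a, b in zip(words, words[1:]):
    bigrams[a][b] += 1

def next_word(word: str) -> str:
    """Return the statistically most common continuation, true or not."""
    return bigrams[word].most_common(1)[0][0]

print(next_word("court"))
```

The model outputs "held" after "court" because that is the most probable continuation, not because it checked any fact. Scaled up billions of times, this is why LLMs can produce confident but fabricated citations.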
5. Embeddings
Definition: A way of converting text into numerical vectors so that AI can “understand” semantic meaning and find similar concepts.
How It Works: Embeddings power similarity search in RAG systems and classification tools. Semantically similar texts tend to cluster near one another in vector space.
Why It Matters for Legal:
Enables fast, accurate document search based on meaning (not keywords)
Helps AI match a user’s query to relevant precedents or clauses
Use Case: A law firm’s internal AI tool uses embeddings to recommend the closest precedent clauses based on a new contract’s structure. This goes beyond the old practice of pressing “Ctrl + F” to hunt for exact wording matches.
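Here is a minimal sketch of similarity search over embeddings. The 3-dimensional vectors are invented for illustration (real models produce hundreds or thousands of dimensions), but the cosine-similarity comparison is the standard technique:

```python
import math

# Toy "embeddings": similar clauses get similar vectors (values invented).
CLAUSES = {
    "termination for convenience": [0.9, 0.1, 0.0],
    "termination for cause":       [0.8, 0.2, 0.1],
    "payment terms":               [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Similarity of two vectors: 1.0 means same direction, 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def closest_clause(query_vec):
    """Return the clause whose embedding is most similar to the query."""
    return max(CLAUSES, key=lambda name: cosine(query_vec, CLAUSES[name]))

# A query embedding near the "termination" cluster finds termination clauses,
# even though no keywords were matched.
print(closest_clause([0.85, 0.15, 0.05]))
```

The search matches on meaning (vector direction), which is why embedding search finds "termination for cause" even when the query uses entirely different wording.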
6. Fine-Tuning
Definition: The process of training an existing LLM on additional, domain-specific data to specialize its outputs.
How It Works: Rather than training from scratch, you take a base model (like GPT-3.5) and update its weights using curated legal datasets.
Why It Matters for Legal:
Improves relevance, tone, and compliance with firm style
Ensures better handling of legal-specific terms and structures
Use Case: A firm fine-tunes an LLM to answer employment law questions based on Australian legislation and previous firm memos.
“Fine-tuning allows you to inject organizational DNA into the model.” (Deloitte AI Practice)
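For teams evaluating fine-tuning, training data is typically prepared as JSON Lines: one example per line. The sketch below uses the chat-style format several providers (including OpenAI) accept; the question, answer, and system instruction are invented for illustration:

```python
import json

# One fine-tuning training example in chat format (content invented).
example = {
    "messages": [
        {"role": "system",
         "content": "You answer Australian employment law questions in the firm's house style."},
        {"role": "user",
         "content": "What notice period applies after 3 years of service?"},
        {"role": "assistant",
         "content": "Under the Fair Work Act 2009 (Cth), the minimum notice period is..."},
    ]
}

# Each line of the training file is one such JSON object.
line = json.dumps(example)
print(line[:60])
```

A fine-tuning dataset is simply hundreds or thousands of these lines, drawn from curated firm memos and vetted answers, which is how the "organizational DNA" gets injected.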
7. Token
Definition: A token is a chunk of text that AI models process. A token can be a whole word or just part of a word.
How It Works: LLMs process inputs and outputs in tokens. GPT-4 Turbo, for example, can handle up to 128,000 tokens (roughly 300 pages).
Why It Matters for Legal:
Impacts how much content an AI can process in one prompt
Controls cost: most AI tools charge per token processed
Use Case: A GC uploads a 100-page contract and uses an AI tool with a 32,000-token context window to ask targeted questions without splitting the document.
Important Note: Until recently, ChatGPT often answered incorrectly when asked, “How many ‘r’s are in the word strawberry?” Because the model sees tokens rather than individual letters, it could not reliably count the characters inside a word.
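A rough rule of thumb is about four characters of English text per token; exact counts require the model's own tokenizer (e.g. OpenAI's tiktoken library). A sketch of the estimate, with an invented sample clause:

```python
# Rough token estimate: ~4 characters per token for English text.
# Real counts come from the model's own tokenizer; this is only a planning aid.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

clause = "The Supplier shall indemnify the Customer against all claims."
contract = clause * 500  # stand-in for a long contract
tokens = estimate_tokens(contract)

print(tokens, "estimated tokens")
print("Fits in a 32,000-token window:", tokens <= 32_000)
```

Estimates like this help a GC judge whether a document fits a tool's context window in one pass, and roughly what processing it will cost.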
8. Guardrails
Definition: Controls implemented to ensure AI operates within safe, legal, and ethical boundaries.
How It Works: This may include content filters, role-based access, red-teaming, or approval workflows.
Why It Matters for Legal:
Prevents misuse (e.g., asking AI for advice on illegal actions)
Supports compliance with internal policies and client confidentiality
Use Case: A law firm’s internal GPT tool prevents users from generating content about ongoing litigation unless they have role-based permissions.
Tip: Every AI use case in legal should include built-in review and approval steps.
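A minimal sketch of one kind of guardrail, role-based topic blocking, applied before a prompt ever reaches the model (the roles, topics, and matter names are invented for illustration):

```python
# Role-based guardrail: block prompts about restricted topics unless the
# user holds a permitted role. Runs before the prompt is sent to the model.

RESTRICTED_TOPICS = {
    "ongoing litigation": {"partner", "litigation_team"},
}

def allowed(user_roles: set, prompt: str) -> bool:
    """Return True only if the user may ask about every topic the prompt touches."""
    for topic, permitted_roles in RESTRICTED_TOPICS.items():
        if topic in prompt.lower() and not (user_roles & permitted_roles):
            return False
    return True

print(allowed({"paralegal"}, "Summarize our ongoing litigation with Acme"))  # blocked
print(allowed({"partner"}, "Summarize our ongoing litigation with Acme"))    # allowed
```

In production this check would sit alongside content filters and approval workflows, but the principle is the same: policy is enforced in code, not left to user discipline.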
9. Generative AI
Definition: A broad class of AI tools that can create new content based on learned patterns. Generative AI is an umbrella term that encompasses ChatGPT and other AI models used in creative design, short films, and more.
How It Works: Uses deep learning models trained on massive datasets to generate outputs from scratch.
Why It Matters for Legal:
Enables contract generation, clause rewriting, policy drafting, and even litigation summaries
Accelerates time-to-draft while standardizing quality
Use Case: An in-house legal team utilizes GenAI to draft the initial version of internal policies, drawing on prior versions, industry standards, and regulatory guidance.
“74% of legal departments say generative AI will impact their workflows in the next 12 months.” (PwC, 2024)
10. Explainability
Definition: The ability to understand and articulate how an AI model reached its conclusion.
How It Works: More common in traditional machine learning than in deep learning; efforts in GenAI include citation models, confidence scores, and retrieval trails. Think of this as the sequence of logic used by AI in reaching its conclusion/output.
Why It Matters for Legal:
Essential for auditability, defensibility, and trust
Required in some jurisdictions for AI governance and compliance
Use Case: A bank’s legal team utilizes an AI risk classifier with built-in explainability. It highlights which keywords or patterns triggered each classification.
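The kind of explainability in that use case can be sketched simply: a toy risk classifier that reports which keywords triggered each classification (the keywords, labels, and clause are invented for illustration):

```python
# Toy explainable classifier: every output comes with the triggers behind it.

RISK_KEYWORDS = {"indemnify", "penalty", "unlimited liability"}

def classify(clause: str):
    """Return (label, triggers): the decision plus the evidence for it."""
    triggers = sorted(k for k in RISK_KEYWORDS if k in clause.lower())
    label = "high-risk" if triggers else "low-risk"
    return label, triggers

label, triggers = classify("The Supplier shall indemnify the Customer without limit.")
print(label, triggers)
```

Because the evidence travels with the decision, a reviewer can audit and defend each classification, which is exactly what regulators increasingly expect of AI-assisted processes.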
Conclusion: Get Confident in AI Because It’s Not Going Anywhere
Understanding these ten AI concepts is foundational to leading legal departments through transformation. As a general counsel or partner, your responsibility is not to become a data scientist but to know what questions to ask, what risks to manage, and what opportunities to pursue.
At Everingham Legal, we help legal teams translate AI potential into operational reality.
Let’s turn insight into impact.