A Lawyer’s Guide to AI Terminology
What every lawyer, general counsel, and legal leader needs to understand about the language of AI.
Why This Guide Exists
Operating in an AI-enabled world requires a shared vocabulary, because you cannot govern what you cannot name, supervise what you cannot describe, or advise clients on something you cannot explain clearly.
This guide is structured in seven parts, each addressing a distinct dimension of AI as it applies to legal practice:
1. Foundations
2. Practitioner skills
3. Risk and quality
4. Infrastructure
5. Strategy and governance
6. Legal-specific applications
7. Emerging concepts
Every term is defined in plain English. Every definition is grounded in what it means for lawyers doing legal work.
Read it as a reference, not a cover-to-cover text. Return to it when a vendor uses a term you want to understand properly. Use it to brief leadership, orient teams, or stress-test an AI governance policy. The goal is the same throughout: to give legal professionals the vocabulary to engage with AI thoughtfully.
Part 1: The Foundations — What AI Actually Is
AI (Artificial Intelligence)
The umbrella term for computer systems designed to perform tasks that typically require human reasoning, such as reading, summarising, drafting, classifying, and decision-making. For lawyers, AI isn't one product. It's a category of technology that ranges from basic automation to systems capable of nuanced legal analysis. Understanding this distinction matters: not everything branded as 'AI' offers the same capabilities or entails the same risks.
Machine Learning (ML)
The engine underneath most AI. Instead of following handwritten rules, a machine learning system learns patterns from large volumes of data and applies them to new situations. In a legal context, AI tools learn to identify relevant clauses, flag anomalies in contracts, or predict litigation outcomes by processing thousands of prior examples rather than following a rigid script.
Large Language Model (LLM)
The specific type of AI behind tools like ChatGPT, Claude, Gemini, and Microsoft Copilot. LLMs are trained on vast amounts of text (books, websites, legal documents, academic papers) and learn to generate human-like language. When a lawyer types a question and receives a coherent, structured answer, an LLM is doing the work. Understanding what LLMs are good at (drafting, summarising, explaining) and where they fail (precise citation, real-time information) is essential for any legal professional deploying them.
Generative AI
AI that creates new content (text, images, audio, code) rather than simply classifying or retrieving existing information. Most AI tools currently entering law firms and legal departments are generative AI tools. They can draft contracts, summarise discovery, write submissions, and respond to queries. The word 'generative' signals both capability and risk: the output is created, not retrieved, which means it can be plausible-sounding but factually incorrect.
Deep Learning
A subset of machine learning that uses layered networks of algorithms (loosely inspired by the structure of the human brain) to detect patterns in complex, unstructured data. Deep learning enables AI to understand language, recognise images, and process audio. For legal professionals, this is why modern AI can read and interpret dense legal text rather than merely searching for keywords.
Neural Network
The architecture underlying deep learning. A neural network is a system of interconnected nodes that process information in layers, adjusting as they go based on feedback. The output of one layer becomes the input of the next. For lawyers, the practical implication is that neural networks don't follow logical rules; they infer. This makes them powerful but also opaque, which has significant implications for professional responsibility and explainability.
Algorithm
A set of rules or instructions that a computer follows to solve a problem or reach an outcome. Every AI tool runs on algorithms (but not all algorithms are AI). A billing system that applies a fixed hourly rate uses an algorithm. An AI that predicts which matters are likely to settle uses a far more complex, self-adjusting one. Understanding the difference helps legal leaders ask the right questions when evaluating technology vendors.
Training Data
The material an AI model learns from. For legal AI tools, this might include millions of contracts, court judgments, regulatory filings, or legal research documents. The quality, recency, and representativeness of this data directly determine what the model can and cannot do. A contract review tool trained only on US commercial agreements will perform poorly on Australian construction contracts. Asking vendors 'what was this model trained on?' is one of the most useful questions in due diligence.
Dataset
The organised collection of information used to train, validate, or test an AI model. In legal technology, datasets might include labelled contract clauses, annotated judgments, or redlined documents. The composition of a dataset shapes what the AI learns. If a dataset overrepresents one jurisdiction, industry, or document type, the model will reflect that bias.
Model
The trained AI system, after it has processed the data. When a vendor says their tool uses a 'proprietary model,' they mean they've trained an AI on specific data for a specific purpose. A model is the product of the training process. Legal professionals procuring AI tools should understand whether they're using a general-purpose model (like GPT-4) with a legal layer built on top, or a model purpose-trained on legal material.
Token
The unit that LLMs use to process text. Tokens are roughly equivalent to word fragments. For example, 'contract' might be one token, while 'indemnification' might be three. LLMs don't read sentences the way humans do; they process sequences of tokens. Token limits determine how much text a model can consider at once (its 'context window'). This has practical implications: if your contract is 80,000 words and the model's context window is 32,000 tokens, it cannot analyse the whole document in a single pass.
Context Window
The maximum amount of text (measured in tokens) that an AI model can process at once. Think of it as the model's working memory. A model with a 200,000-token context window can hold the equivalent of a substantial commercial agreement in 'mind' simultaneously. A model with a 4,000-token limit cannot. For legal applications involving long documents, the context window is not a minor technical detail — it directly determines what the tool can actually do.
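The arithmetic behind the token and context-window entries above can be sketched in a few lines. This is an illustrative sketch only: the rule of thumb that one token covers about 0.75 English words is an assumption used here for rough estimates, not a real tokenizer.

```python
# Illustrative sketch, not a real tokenizer: assumes roughly 0.75 words per token.

def estimated_tokens(word_count: int, words_per_token: float = 0.75) -> int:
    """Rough token estimate from a word count (heuristic only)."""
    return round(word_count / words_per_token)

def fits_in_context(word_count: int, context_window: int) -> bool:
    """Can a document of this length plausibly fit in a single pass?"""
    return estimated_tokens(word_count) <= context_window

# The 80,000-word contract from the text, against two context windows:
print(estimated_tokens(80_000))           # roughly 106,667 tokens
print(fits_in_context(80_000, 32_000))    # False: the document must be chunked
print(fits_in_context(80_000, 200_000))   # True: it fits in one pass
```

The practical takeaway is unchanged from the entries above: before relying on a tool to analyse a long agreement in one pass, check that the document's estimated token count actually fits the model's window.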
Inference
The process of an AI model applying what it has learned to new inputs. When you submit a question or document to an AI tool, the model runs inference: it draws on its trained parameters to generate a response. Lawyers don't need to understand the mechanics, but they should understand the implication: inference is probabilistic. The model produces the most statistically likely response, not a verified, sourced, authoritative one.
Part 2: Working with AI — The Practitioner's Vocabulary
Prompt
The instruction or question you give an AI system. Prompting is to AI what a brief is to a junior associate. The quality of the output depends heavily on the quality of the input. A vague prompt produces a generic response. A well-structured prompt with context, constraints, and a clear objective produces something significantly more useful. For legal professionals, developing prompting discipline is a core skill, not a peripheral one.
Prompt Engineering
The practice of designing effective prompts to get high-quality, reliable outputs from AI systems. This is a communication skill. In a legal context, prompt engineering involves specifying jurisdiction, document type, desired format, tone, and constraints. A lawyer who can write clear, precise prompts will outperform one who cannot, regardless of the underlying AI tool.
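One way to make prompting discipline concrete is to encode it as a reusable template. The sketch below is hypothetical: the field names (task, jurisdiction, document type, and so on) are illustrative choices, not a standard, but they show how a structured prompt differs from a vague one.

```python
# Hypothetical sketch: a reusable template that forces the prompt to carry
# context, constraints, and a clear objective. Field names are illustrative.

def build_prompt(task: str, jurisdiction: str, doc_type: str,
                 output_format: str, constraints: list[str]) -> str:
    lines = [
        f"Task: {task}",
        f"Jurisdiction: {jurisdiction}",
        f"Document type: {doc_type}",
        f"Required output format: {output_format}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    task="Summarise the termination provisions",
    jurisdiction="New South Wales, Australia",
    doc_type="Commercial lease",
    output_format="Numbered list, plain English",
    constraints=["Cite clause numbers", "Flag anything unusual for review"],
)
print(prompt)
```

The same discipline applies whether the prompt is typed by hand or assembled by a workflow tool: every element the template forces you to state is an element the model no longer has to guess.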
Fine-Tuning
The process of taking a pre-trained AI model and training it further on a smaller, domain-specific dataset to improve performance on particular tasks. A law firm might fine-tune a general LLM on its own precedent library to produce outputs that better match its style and standards. Fine-tuning is distinct from simply prompting. It changes the model itself, not just the input to it.
Retrieval Augmented Generation (RAG)
A technique that improves AI accuracy by allowing a model to search and retrieve relevant documents before generating a response. Instead of relying solely on its training data, a RAG-enabled system can pull from a specific corpus (such as a firm's precedent library, a client's contract archive, or a regulatory database) before answering. For legal applications, RAG is one of the most important architectural decisions: it is what separates a tool that makes things up from one that grounds its outputs in verified sources.
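The RAG pattern described above can be illustrated with a deliberately simplified sketch: retrieve the most relevant documents from a known corpus first, then hand only those to the model as context. The scoring below is crude word overlap; production systems use vector embeddings and semantic search, but the retrieve-then-generate shape is the same.

```python
# Toy illustration of the RAG pattern. Real systems use embedding-based
# similarity search; plain word overlap stands in for it here.

def score(query: str, document: str) -> int:
    """Count query words that appear in the document (a crude relevance proxy)."""
    return len(set(query.lower().split()) & set(document.lower().split()))

def retrieve(query: str, corpus: dict[str, str], top_k: int = 2) -> list[str]:
    """Return the names of the top_k best-matching documents."""
    ranked = sorted(corpus, key=lambda name: score(query, corpus[name]), reverse=True)
    return ranked[:top_k]

corpus = {
    "nda_template.txt": "confidentiality obligations survive termination of this agreement",
    "lease_precedent.txt": "the tenant shall pay rent monthly in advance",
    "services_msa.txt": "limitation of liability and indemnification provisions",
}
context_docs = retrieve("what confidentiality obligations survive termination", corpus)
print(context_docs)  # the NDA template ranks first
# The generation step would then prepend these retrieved documents to the
# prompt, so the answer is grounded in the firm's own material rather than
# in whatever the model happens to remember from training.
```

The architectural point for procurement is visible even in the toy version: the quality of a RAG system's answers is bounded by the quality and coverage of the corpus it retrieves from.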
AI Agent
An AI system capable of taking multi-step actions autonomously. An AI agent will not just answer a question; it will also plan and execute a sequence of tasks. A legal AI agent might receive an instruction to 'review this NDA, identify deviations from our standard positions, and produce a redline with tracked comments.' It completes each step independently. As agents become more capable, the questions of oversight, accountability, and professional responsibility become significantly more complex.
Agentic AI
A term describing AI systems that operate with greater autonomy, make sequential decisions, and take real-world actions (often without requiring human input at each step). In legal practice, agentic AI might manage entire contract workflows: extracting data, populating templates, routing for approval, and filing documents. The efficiency gains are substantial. So is the need for clear human oversight protocols.
Chatbot
An AI interface designed to interact with users through natural language conversation. In legal contexts, chatbots can handle client intake queries, answer FAQs about legal processes, or guide users through document completion. Not all chatbots are equal. Some follow rigid scripts, others use LLMs and can handle nuanced questions. The sophistication of the underlying model determines what a chatbot can reliably do.
API (Application Programming Interface)
The technical connection that allows one software system to communicate with another. When a law firm integrates an AI tool into its document management system or practice management platform, it is using an API. For legal technology leaders, understanding APIs is less about code and more about integration: what data flows where, who has access, and the security implications.
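At the data level, an API call is simply structured information sent from one system to another, which is why the governance questions are about what data flows where. The sketch below is hypothetical: the field names and the task value are invented for illustration, not any vendor's actual API.

```python
# Hypothetical sketch of the data an integration might send to an AI vendor.
# Field names are invented; the point is that due diligence means reviewing
# exactly this kind of payload: what is sent, and under what settings.
import json

request_payload = {
    "document_id": "contract-2024-0117",   # a reference, not the file itself
    "task": "clause_extraction",
    "options": {"jurisdiction": "AU", "redact_personal_data": True},
}
body = json.dumps(request_payload)
print(body)
# A real integration would send this body to the vendor's endpoint over HTTPS.
```

Asking a vendor to walk through the actual request and response payloads their API uses is a practical way to turn abstract questions about data flows into concrete ones.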
Zero-Shot Learning
The ability of an AI model to handle tasks it was never explicitly trained on, relying instead on its general understanding of language and context. When a lawyer asks a general AI assistant to draft a clause for a type of agreement the model has never specifically trained on, it uses zero-shot reasoning. This is impressive but risky. The model may produce plausible-sounding output without the domain knowledge to know what it's getting wrong.
Transfer Learning
The practice of applying knowledge from one domain to another. Most modern AI tools used in law are built on transfer learning: general-purpose language models are adapted for legal tasks. This is why tools can be deployed quickly without training from scratch. It is also why understanding the original training data matters: the general-purpose knowledge that transfers may include assumptions, biases, or gaps that affect legal outputs.
Part 3: AI Quality & Risk — What Lawyers Must Understand
Hallucination
When an AI model generates plausible but factually incorrect information, such as citations that don't exist, cases that weren't decided that way, or statutory provisions that have been misquoted or invented. Hallucination is not a bug awaiting a fix; it is a structural feature of how LLMs work. They predict the most likely next token, not the most accurate one. For legal professionals, this means AI output requires verification. An AI tool that produces a beautifully structured legal memo with a fabricated precedent is worse than no memo at all.
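Why hallucination is structural rather than incidental can be shown with a toy example of next-token selection. The probabilities below are invented for illustration: the mechanism always emits the most probable continuation, whether or not any continuation is actually true.

```python
# Toy illustration: greedy next-token selection. Probabilities are invented.
# The model picks the most probable continuation, true or not.

next_token_probabilities = {
    "Smith v Jones [2019]": 0.41,   # fluent and plausible, possibly invented
    "I am not certain":     0.07,
    "no authority found":   0.05,
}
chosen = max(next_token_probabilities, key=next_token_probabilities.get)
print(chosen)  # "Smith v Jones [2019]": confident-sounding, unverified
```

The invented citation wins not because the model believes it, but because confident, well-formed legal prose is statistically more common in training text than admissions of uncertainty. Verification has to come from outside the model.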
Bias (in AI)
The tendency of an AI model to produce outputs that systematically favour or disadvantage certain groups, outcomes, or interpretations, typically reflecting patterns in its training data. In legal applications, this might mean a contract risk tool that flags provisions differently depending on the counterparty's jurisdiction, or a litigation prediction tool that performs better for certain matter types. Bias in AI is not always visible, which makes it a governance concern, not just a technical one.
Overfitting
When an AI model performs well on its training data but poorly on new, real-world inputs. An overfitted legal AI tool might excel at reviewing contracts that look exactly like the ones it was trained on, and fail significantly when presented with unfamiliar structures or jurisdictions. Due diligence on AI tools should include testing on representative samples of your actual work, not just vendor-supplied benchmarks.
Explainability (Explainable AI / XAI)
The degree to which an AI system's reasoning can be understood by humans. A system that says 'I flagged this clause because it deviates from market standard on five specific metrics' is more explainable than one that simply returns a risk score. Explainability matters enormously for legal practice: if you cannot explain why an AI reached a conclusion, you cannot adequately supervise it, stand behind it professionally, or disclose it to a client appropriately.
Guardrails
Technical constraints built into an AI system to prevent certain outputs, such as harmful content, confidential information disclosure, and jurisdictionally incorrect advice. In legal AI tools, guardrails might prevent the system from providing advice in jurisdictions where it is not configured, or from generating outputs that exceed its verified capability. Understanding what guardrails exist (and what they don't cover) is part of responsible deployment.
AI Ethics
The discipline of ensuring AI systems are developed and used in ways that are fair, transparent, accountable, and aligned with human values. For legal professionals, AI ethics is not abstract philosophy; it has practical dimensions: ensuring AI tools don't discriminate, that client data is protected, that accountability is clear when AI outputs are wrong, and that the profession's obligations to the court and to clients are preserved even when technology is doing more of the work.
Supervised Learning
A training approach where the AI learns from labelled examples, i.e., inputs paired with correct outputs. A contract review tool trained through supervised learning has been shown thousands of contracts where humans have already identified the key clauses, risks, or deviations. The model learns to replicate those judgements. The quality of the labels (the accuracy and consistency of the human annotations) directly determines what the model learns.
Unsupervised Learning
A training approach where the AI identifies patterns in data without pre-labelled examples. Useful for clustering similar documents, identifying anomalies, or discovering structures in large datasets. In a legal context, unsupervised learning might help surface patterns in a discovery corpus or identify outlier clauses across a portfolio of agreements without being told what to look for.
Reinforcement Learning from Human Feedback (RLHF)
A training technique where a model's outputs are rated by humans and the model is adjusted to produce responses humans prefer. This is a key part of how modern conversational AI tools like Claude and ChatGPT are trained to be helpful and avoid harmful outputs. For legal professionals, understanding RLHF explains why these tools are often cautious, balanced, and responsive to feedback.
Confidence Score
A numerical indicator of how certain an AI model is about a particular output. Some AI tools surface confidence scores to help users calibrate how much to trust a given response. In legal applications, a contract clause flagged with a 95% confidence score of being non-standard warrants different treatment than one flagged at 54%. Not all tools expose confidence scores, but it is a sensible question to ask vendors.
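How a team might act on confidence scores can be sketched as a simple triage rule, echoing the 95% versus 54% contrast above. The thresholds and review tiers below are invented for the example; each organisation would set its own.

```python
# Illustrative sketch: routing flagged clauses by confidence score.
# Thresholds and tier names are invented for the example.

def triage(confidence: float) -> str:
    """Map a model's confidence in a 'non-standard clause' flag to a review tier."""
    if confidence >= 0.90:
        return "fast-track: senior review of the flagged deviation"
    if confidence >= 0.60:
        return "standard review: verify the flag before acting"
    return "low confidence: treat as unreviewed and read in full"

print(triage(0.95))  # fast-track tier
print(triage(0.54))  # low-confidence tier: full manual read
```

Even a crude rule like this is better than treating every flag identically; the point of a confidence score is that it lets human attention go where the model is least sure.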
Part 4: AI Infrastructure — The Technical Landscape
GPU (Graphics Processing Unit)
The specialised computing hardware that powers AI training and inference at scale. GPUs process many computations simultaneously, making them far more efficient than standard processors for the matrix mathematics underlying AI. Legal professionals don't need to understand GPU architecture, but they should understand the resource intensity of AI: the infrastructure required to run sophisticated AI tools is substantial, which affects pricing, availability, and the vendor landscape.
Foundation Model
A large AI model trained on broad, general data that serves as the base for many downstream applications. GPT-4, Claude, and Gemini are foundation models. Legal AI tools are often built on top of foundation models, with additional training or prompting specific to legal tasks. The choice of foundation model affects capability, reliability, data processing agreements, and privacy obligations.
GPT (Generative Pre-trained Transformer)
The architecture and model family developed by OpenAI that underpins ChatGPT and many other AI tools. 'Pre-trained' means it was trained on a large corpus before deployment. 'Transformer' refers to the technical architecture. For legal professionals, GPT is a specific product family, not a generic term for AI (though it is often used colloquially). Understanding this distinction helps when evaluating which AI systems underlie legal technology products.
Open Source AI
AI models and tools whose underlying code and sometimes training data are publicly available for inspection, modification, and use. Open source AI is relevant to legal professionals because it affects privacy, customisation, and auditability. A law firm deploying an open-source model on its own infrastructure has more control (and more responsibility) than one using a proprietary tool via a vendor's API.
Cloud AI
AI services delivered over the internet rather than run on local infrastructure. Most enterprise legal AI tools are cloud-based. This raises important considerations for legal teams: data residency, security, client confidentiality, and what happens to documents uploaded for processing. Cloud AI providers vary significantly in how they handle data, whether they use uploaded content to improve their models, where data is stored, and what contractual protections they offer.
On-Premise AI
AI infrastructure deployed within an organisation's own technology environment rather than through cloud services. For law firms and legal departments handling highly sensitive matters (mergers, regulatory investigations, privileged communications), on-premise AI deployment is often the only option that adequately addresses confidentiality and data sovereignty concerns. The trade-off is cost, complexity, and often, capability.
Natural Language Processing (NLP)
The field of AI focused on enabling computers to understand, interpret, and generate human language. NLP is the foundation of every AI tool that reads or produces text. Document review, contract analysis, legal research tools, and AI drafting assistants all depend on NLP. Understanding NLP helps legal professionals appreciate why AI handles some language tasks well (summarising structured text) and others poorly (understanding implicit context, local legal culture, or ambiguous intent).
Computer Vision
AI that enables machines to interpret and understand visual information (images, diagrams, and tables in scanned documents). In legal applications, computer vision is increasingly relevant for processing physical documents, reading hand-annotated contracts, or extracting data from non-standard document formats. It is the technology that allows AI to 'see' a document as a human would, not just parse machine-readable text.
Speech Recognition
AI that converts spoken language into text. Increasingly relevant for legal practice through transcription of client meetings, court proceedings, and depositions. AI-powered transcription tools are now common in legal technology stacks. Accuracy varies significantly by accent, technical vocabulary, and audio quality (each being an important consideration before using transcription output as a reliable record).
Model Context Protocol (MCP)
An emerging technical standard that governs how AI models communicate with external tools, databases, and systems, including how they share information with each other. For legal technology architects and general counsel overseeing AI governance, MCP is relevant to understanding interoperability between AI systems and the data flows that result. As AI ecosystems become more complex and interconnected, standards like MCP will shape how AI tools are audited and governed.
Part 5: AI Strategy & Governance — What Legal Leaders Need
AI Strategy
A deliberate plan for how an organisation will adopt, deploy, and govern AI. For law firms and legal departments, an AI strategy should address use cases (which tasks, which practice areas), risk appetite, data governance, client disclosure obligations, training and upskilling, and a framework for measuring outcomes. Organisations operating without an AI strategy are not avoiding AI; they are simply allowing its adoption to happen without oversight.
AI Governance
The policies, processes, and accountability structures that ensure AI is used responsibly within an organisation. For legal functions, governance encompasses: who approves AI tool adoption, how outputs are verified, what client consent obligations apply, how errors are identified and remediated, and who bears professional responsibility when AI is part of the work product. AI governance is not optional. It is an extension of existing professional and fiduciary obligations.
AI Policy
The formal set of rules an organisation adopts to guide how employees use AI tools. For law firms, an AI policy might address: which tools are approved for use, what types of client data may be processed through them, verification requirements before AI-assisted work product is delivered, and disclosure obligations. Without a policy, individual lawyers make these decisions inconsistently, creating risk for the firm and its clients.
AI Risk
The range of risks associated with AI adoption, including accuracy failures, data breaches, bias, regulatory non-compliance, reputational harm, and professional liability. For in-house legal teams, AI risk assessment is becoming part of enterprise risk management. For law firms, it extends to professional indemnity. The organisations managing AI risk well are those treating it with the same rigour they apply to other technology and operational risks, not as a separate category, but as an integrated part of existing frameworks.
AI Audit
A structured review of an AI system's performance, fairness, accuracy, and compliance with applicable standards and policies. Legal departments and law firms should build AI audit capability, both for tools they adopt internally and, increasingly, for AI systems used by counterparties and clients. Just as financial statements are audited, AI systems used in significant decisions should be subject to regular, documented review.
Responsible AI
A framework for developing and deploying AI that prioritises transparency, fairness, accountability, privacy, and societal benefit. In the legal sector, responsible AI has a specific dimension: the duty to the court, to clients, and to the profession cannot be outsourced to an algorithm. Responsible AI in law means preserving human professional judgement at every point where it matters, and being honest about where AI has been used and what its limitations are.
Human in the Loop (HITL)
A design principle that keeps a human decision-maker in the process at critical points, rather than allowing AI to act entirely autonomously. For legal professionals, human-in-the-loop is an ethical and professional responsibility requirement. AI can dramatically accelerate legal work, but a qualified lawyer must review, verify, and take ownership of the output before it is delivered to a client or filed with a court.
AI Literacy
The baseline understanding of what AI is, how it works, and what its capabilities and limitations are (sufficient to make informed decisions about its use). AI literacy is becoming a professional competency expectation for lawyers. It does not mean being able to build AI systems. It means being able to evaluate them, supervise their use, explain their limitations to clients, and identify when they are being used inappropriately.
Digital Transformation
The broader process of integrating digital technology into all aspects of an organisation's operations, fundamentally changing how it works and delivers value. AI is a component of digital transformation, not synonymous with it. For law firms and legal departments, digital transformation involves workflow redesign, data strategy, culture change, and capability building (of which AI adoption is one part).
Change Management
The structured approach to transitioning individuals and organisations from a current state to a desired future state. AI adoption without change management fails. Legal professionals who are trained to be precise, risk-averse, and methodical need to understand why a change is happening, what problem it solves, and what is expected of them. The firms succeeding with AI are typically those that have invested as much in change management as in technology selection.
Technology Due Diligence
The process of rigorously evaluating an AI tool or vendor before adoption. For legal professionals, technology due diligence should cover: data security and privacy practices, model accuracy and validation, contract terms (particularly around data use and liability), vendor financial stability, professional indemnity implications, and compatibility with existing systems. The fact that an AI tool is widely used is not a substitute for doing the diligence.
Vendor Lock-In
The situation where an organisation becomes so dependent on a particular technology vendor that switching becomes prohibitively difficult. In AI, vendor lock-in is a real risk. When your workflows, data, and integrations are built around one provider's tools, the cost of changing can be significant. Legal technology leaders should assess exit provisions, data portability, and contractual flexibility before committing to an AI platform.
Part 6: Legal-Specific AI Applications
Contract Lifecycle Management (CLM) + AI
The use of AI to automate and enhance the end-to-end management of contracts, from drafting and negotiation through execution, performance tracking, and renewal. AI-enhanced CLM tools can automatically extract key terms, flag deviations from standard positions, alert teams to renewal dates, and surface portfolio-level risk. For in-house legal teams, AI-enhanced CLM can transform contract management from a reactive, resource-intensive function to a proactive, data-driven one.
Legal Research AI
AI tools specifically designed to assist with legal research, i.e., finding relevant cases, statutes, regulations, and commentary. These tools go beyond keyword search to understand legal concepts and surface relevant authority. The risk profile differs from general AI: a research tool that misses a controlling authority, cites an overruled case, or retrieves a judgment from the wrong jurisdiction creates professional liability. Verification is part of the professional workflow.
Document Review AI (eDiscovery)
The use of AI to process, classify, and prioritise large volumes of documents in litigation, investigations, or regulatory matters. AI-assisted document review (sometimes called Technology Assisted Review (TAR) or predictive coding) has been used in major litigation for over a decade and has been accepted by courts as a valid methodology. It does not replace lawyer review; it directs lawyer attention to the documents most likely to be relevant.
Contract Analysis AI
AI tools that read and analyse contracts to identify defined terms, key obligations, non-standard clauses, risk factors, and deviations from playbook positions. For both law firms and in-house teams, contract analysis AI can substantially reduce the time spent on initial review and due diligence, freeing lawyers to focus on judgment and negotiation rather than extraction. The critical discipline is to define what 'non-standard' means in your specific context, so that the AI reflects the playbook it is given.
Predictive Analytics (Legal)
The use of AI to forecast legal outcomes, such as litigation success rates, settlement ranges, regulatory enforcement likelihood, or deal probability. Legal predictive analytics draws on historical data to identify patterns that correlate with outcomes. Used carefully, it can inform strategy and resource allocation. Used carelessly, it can create overconfidence in probabilistic outputs that are highly sensitive to facts, judges, and circumstances no dataset fully captures.
AI-Assisted Drafting
The use of AI to generate initial drafts of legal documents, such as contracts, submissions, advice letters, and board resolutions. AI-assisted drafting accelerates the creation of first drafts but does not replace the professional judgment required to assess whether a draft is legally sound, appropriately tailored, and fit for purpose. The appropriate framing is that AI produces a starting point, and the lawyer produces the final work product.
Regulatory Technology (RegTech)
Technology tools (increasingly AI-powered) designed to help organisations comply with regulatory requirements. For in-house legal and compliance teams, RegTech can automate regulatory monitoring, flag relevant legislative changes, map obligations across jurisdictions, and maintain compliance registers. As regulation becomes more voluminous and complex, AI-powered RegTech is shifting from a competitive advantage to an operational necessity.
Legal Process Outsourcing (LPO) + AI
The combination of outsourcing legal processes with AI-enabled delivery. LPO providers are rapidly integrating AI to improve throughput and reduce costs for high-volume, process-driven legal work, such as contract review, due diligence, and compliance monitoring. For law firms and legal departments, the implications are significant: the economics and timelines for outsourced legal work are changing, and the benchmarks for what constitutes efficient delivery are shifting accordingly.
Knowledge Management AI
AI tools that help legal organisations capture, organise, retrieve, and apply their accumulated knowledge and precedents. A law firm's collective expertise (its best agreements, most effective arguments, trusted research) represents enormous institutional value. Knowledge management AI makes that value accessible at scale, reducing reliance on individual memory and enabling consistent quality across a practice group.
AI in Due Diligence
The application of AI to accelerate and enhance the due diligence process in transactions, regulatory investigations, and litigation. AI tools can process data room documents at speed, identify key risks, extract critical terms from multiple agreements simultaneously, and produce structured summaries. For large transactions, AI-assisted due diligence is increasingly the norm rather than the exception. The lawyer's role shifts from extraction to analysis, judgement, and risk articulation.
Part 7: Emerging and Advanced Concepts
Artificial General Intelligence (AGI)
A hypothetical AI system that can perform any intellectual task a human can. Not just specific, well-defined tasks, but any task requiring flexible reasoning, learning, and adaptation. AGI does not currently exist (as of Feb 2026). The term matters for legal professionals because it is often conflated with current AI capabilities in vendor marketing and public discourse. Distinguishing between what AI can do today and what AGI would theoretically represent is important for accurate risk assessment and governance.
Artificial Superintelligence (ASI)
A theoretical AI capability that surpasses human intelligence across all domains. Like AGI, ASI does not exist (as of Feb 2026). It is relevant to legal frameworks being developed around AI safety and long-term regulation. Legal professionals engaged in technology policy, AI regulation, or advising AI developers should be familiar with the concept and the regulatory discourse it is generating, while maintaining appropriate scepticism about near-term timelines.
Multimodal AI
AI systems that can process and generate multiple types of data (text, images, audio, video, and code) within a single model. Multimodal AI is increasingly relevant for legal work: tools that can read a scanned contract image, extract the text, identify handwritten annotations, and produce a structured summary are moving from prototype to deployment. For legal evidence, due diligence, and document management, multimodal capabilities represent a significant expansion of what AI can handle.
Autonomous AI / AI Autonomy
The degree to which an AI system can plan and act independently without human direction at each step. Autonomy exists on a spectrum: from AI that suggests an action for a human to approve, to AI that takes sequences of actions toward a goal with minimal oversight. For legal practice, the appropriate level of AI autonomy depends on the stakes, reversibility, and professional accountability requirements of the task. Higher autonomy requires more robust governance, not less.
AI Safety
The research and practice of ensuring AI systems behave as intended, do not cause unintended harm, and remain under meaningful human control as they become more capable. AI safety is a growing area of regulatory focus. The EU AI Act, the UK AI Safety Institute, and equivalent initiatives globally are translating AI safety principles into regulatory requirements. Legal professionals advising organisations on AI adoption need to understand what safety obligations apply and how to demonstrate compliance.
Regulation of AI
The emerging body of law, regulation, and guidance governing AI development and deployment. The EU AI Act is the most comprehensive framework to date, classifying AI by risk level and imposing corresponding obligations. Australia is developing its own approach; sector-specific regulators (ASIC, APRA, Privacy Commissioner) are issuing guidance on AI use within their domains. Legal professionals need to track this landscape as closely as they track any other regulatory change affecting their clients.
AI Liability
The question of who bears legal responsibility when an AI system causes harm, such as an error in AI-generated legal advice, a biased AI decision in a regulated context, or an AI-enabled data breach. AI liability is currently addressed through existing legal frameworks (professional liability, product liability, negligence, contract law) because jurisdiction-specific AI liability regimes are still developing. Legal advisers need to understand how traditional liability principles apply to AI-enabled harms now, while monitoring for specific legislative developments.
Privacy and AI
The intersection of data protection obligations and AI systems that require large volumes of data to function. Privacy law imposes obligations on the collection, use, and sharing of personal data. For legal professionals, the practical questions include what data is being fed into AI tools, whether that constitutes a use or disclosure requiring consent, and what obligations apply to AI systems processing client data.
Intellectual Property and AI
The complex and still-evolving question of copyright, ownership, and moral rights as applied to AI-generated work product. Can AI-generated content be copyright-protected? Who owns a contract drafted with AI assistance: the lawyer, the firm, or the AI developer? What are the IP implications of training AI on third-party legal documents? These questions are being litigated in multiple jurisdictions simultaneously. Legal professionals in IP, technology, or publishing need to stay close to this rapidly evolving field.
AI Disclosure
The obligation (ethical, professional, or regulatory) to inform clients, courts, or counterparties that AI has been used in the production of work product. Professional rules in many jurisdictions are actively being updated. The baseline principle is transparency: clients should know when AI has been used in their matter, particularly when it affects the basis for professional fees or the reliability of outputs. Courts are increasingly requiring disclosure of AI use in filed documents.
Turing Test
A thought experiment proposed by mathematician Alan Turing in 1950: if a machine can converse with a human so convincingly that the human cannot reliably distinguish it from another human, can it be said to 'think'? The Turing Test is less a practical benchmark than a conceptual reference point, relevant to legal discussions about AI rights, consciousness, and accountability. Can today's AI be said to 'think'? For legal professionals, the working answer is: not yet, and perhaps not in the ways that matter most.
A Final Note
This vocabulary will evolve. New terms will emerge. Some current terminology will fall out of use or change meaning as the technology and the regulatory framework around it mature.
What will not change is the underlying discipline: legal professionals who understand the tools they use will always be better positioned than those who do not.
AI will not replace good lawyers. But lawyers who understand AI will replace those who do not.