A
Algorithm
A set of step-by-step instructions that tells a computer how to solve a problem or complete a task. Think of it like a recipe: if you follow the steps in order, you get the result.
Why it matters: When someone says "the algorithm decided," they mean a set of rules produced an output. In social care, understanding this helps you ask the right questions about how decisions are being made and whether they are fair.
Artificial Intelligence (AI)
Software that learns from patterns in data and uses those patterns to do something useful, such as recognising speech, making predictions, or generating text. It does not think or feel. It processes data.
Why it matters: AI is already in tools you use every day, from predictive text to spam filters. Understanding what it actually is helps you engage with it critically rather than fearing or over-trusting it.
Attention (Attention Mechanism)
The method AI language models use to figure out which words in a sentence are most important to each other. When you read "The social worker visited the family at their home," your brain knows "their" refers to "the family." Attention is how AI does something similar, by weighing which words relate most strongly to which other words across the whole text.
Why it matters: Attention is what makes modern AI tools good at understanding context in longer documents. It is the reason you can paste a full care plan into a tool and get a sensible summary, rather than a jumble of disconnected sentences.
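In miniature, the weighing that attention performs works like this: similarity scores between one word and the others are turned into weights, and those weights blend the words' information together. A toy Python sketch, using made-up numbers standing in for word vectors (real models learn these vectors from data):

```python
import math

def softmax(scores):
    """Turn raw similarity scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Weighted average of values, weighted by query-key similarity."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Made-up vectors: the query ("their") is more similar to the first
# key ("family") than the second ("home"), so the output leans that way.
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[5.0, 0.0], [0.0, 5.0]]
print(attention(query, keys, values))
```

The point is not the arithmetic but the shape of the idea: every word gets a say, and the most relevant words get the loudest say.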
Automation Bias
The tendency for people to trust a computer's output over their own judgement, even when the computer is wrong. If a system says something confidently, we are more likely to accept it without checking.
Why it matters: In social care, automation bias could mean accepting an AI-generated assessment or recommendation without applying professional scrutiny. UK social care legislation places responsibility on practitioners, not tools.
B
Bias (in AI)
When an AI system produces unfair or skewed results because it was trained on data that reflects existing inequalities. If the training data over-represents one group or under-represents another, the AI's outputs will reflect that imbalance.
Why it matters: Social care serves diverse communities. AI tools that carry bias could lead to unfair assessments, missed referrals, or unequal access to services, particularly for people from marginalised backgrounds.
Black Box
An AI system where you can see what goes in and what comes out, but you cannot see how it reached its decision. The internal workings are hidden or too complex to explain in simple terms.
Why it matters: If you cannot explain how a tool reached a recommendation, you cannot be accountable for it. CQC and Ofsted in England expect organisations to demonstrate transparent decision making.
C
Chatbot
A software tool that simulates conversation with a user. Older chatbots follow scripted rules. Newer ones use large language models to generate more natural responses. The chatbot you speak to before reaching a human on a helpline is a common example.
Why it matters: Some local authorities and care providers are exploring chatbots for initial enquiries. Understanding the difference between a rule-based chatbot and a generative one helps you assess the risks and limitations of each.
Closed Source (Proprietary)
Software where the underlying code is kept private by the company that built it. You can use the product, but you cannot see how it works, audit it, or modify it. ChatGPT and Microsoft Copilot are closed source.
Why it matters: When you use a closed source AI tool with sensitive data, you are trusting the company's claims about how your data is handled. For social care, this means robust due diligence before adoption.
Cluster
A group of data points that are similar to each other. AI finds clusters by measuring how close pieces of data are in a mathematical space. For example, if you fed thousands of case notes into an AI system, it might cluster them into groups like "safeguarding concerns," "housing needs," and "mental health referrals" based on the language patterns, without being told those categories exist.
Why it matters: Clustering is how AI organises and makes sense of large amounts of unstructured information. Understanding this helps you see both the potential (finding patterns in data) and the risk (the AI decides the groupings, and those groupings may not reflect your professional understanding).
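In miniature, the grouping described under "Cluster" is just distance measuring. A toy Python sketch that assigns made-up two-dimensional points to the nearest of two group centres (real systems work with high-dimensional vectors derived from text):

```python
def distance(a, b):
    """Straight-line distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def assign_clusters(points, centroids):
    """Group each point with its nearest centre."""
    clusters = {i: [] for i in range(len(centroids))}
    for p in points:
        nearest = min(range(len(centroids)),
                      key=lambda i: distance(p, centroids[i]))
        clusters[nearest].append(p)
    return clusters

# Made-up points standing in for documents: two natural groups emerge.
points = [(1, 1), (1.5, 2), (8, 8), (9, 9)]
centroids = [(1, 1), (9, 9)]
print(assign_clusters(points, centroids))
```

Notice that the code never needed labels like "safeguarding" or "housing": the groupings come purely from closeness, which is exactly why they may not match professional categories.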
Context Window
The amount of text an AI model can consider at one time. Think of it as the tool's working memory. A larger context window means it can process longer documents in a single go. A "128k context window" means the model can hold about 128,000 tokens at once, roughly 96,000 words.
Why it matters: If you paste a long care plan or policy document into an AI tool and it exceeds the context window, the tool will miss parts of it. Understanding this helps you know when to break tasks into smaller chunks.
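When a document is too long for the context window, the practical response is to split it into chunks that fit. A rough Python sketch, using the common rule of thumb that one token is about 0.75 words (real tools count tokens precisely; this estimate is an assumption for illustration):

```python
def estimate_tokens(text):
    """Very rough rule of thumb: one token is about 0.75 words."""
    return int(len(text.split()) / 0.75)

def split_into_chunks(text, max_tokens):
    """Break a document into pieces that each fit the token budget."""
    words, chunks, current = text.split(), [], []
    for word in words:
        current.append(word)
        if estimate_tokens(" ".join(current)) >= max_tokens:
            chunks.append(" ".join(current))
            current = []
    if current:
        chunks.append(" ".join(current))
    return chunks
```

You would then summarise each chunk separately and combine the summaries, rather than pasting the whole document and silently losing the end of it.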
D
Data Protection Impact Assessment (DPIA)
A formal process required under UK GDPR when introducing technology that poses a high risk to people's privacy. It maps out what data is processed, who has access, what the risks are, and how those risks will be reduced.
Why it matters: Before using any AI tool that touches personal data in social care, a DPIA should be completed. The ICO expects organisations to do this, and it is a key part of responsible AI governance.
Deep Learning
A type of machine learning that uses layered networks (called neural networks) to process data in increasingly complex ways. It is the technology behind image recognition, voice assistants, and large language models.
Why it matters: Deep learning powers many of the AI tools entering social care. Understanding that it learns from data patterns, rather than being explicitly programmed, helps you grasp both its potential and its limitations.
DeepSeek
An open source AI model developed by a Chinese company that attracted global attention for matching the performance of leading models at a fraction of the cost. DeepSeek demonstrated that high-performing AI does not have to come from the largest tech companies, and its open source approach means anyone can inspect, adapt, and build on it.
Why it matters: DeepSeek is a good example of why understanding open source matters for social care. Open source models offer transparency, but organisations still need to consider where the model was trained, what data it used, and whether it meets UK data protection requirements before adopting it.
E
Ethical AI
The practice of designing, deploying, and using AI systems in a way that is fair, transparent, accountable, and respects people's rights. It means considering the impact on individuals and communities, not just efficiency.
Why it matters: Social care works with some of the most vulnerable people in society. Ethical AI is not optional; it is a responsibility. This includes checking for bias, protecting data, and keeping humans in the loop for decisions that affect people's lives.
Explainability
The ability to describe, in terms a person can understand, how an AI system reached a particular output or recommendation. The opposite of a black box.
Why it matters: If you use an AI tool to help draft an assessment, you need to be able to explain your reasoning. "The AI said so" is not a defensible professional position. Explainability supports accountability.
F
Fine-tuning
Taking a pre-trained AI model and training it further on a specific, smaller dataset so it performs better for a particular task or sector. The base model already understands language; fine-tuning teaches it the specifics of your domain.
Why it matters: A general purpose AI tool may not understand social care terminology, legislation, or context. Fine-tuned models can be more accurate for sector-specific tasks, but they require careful oversight of the training data.
Foundation Model
A large AI model trained on broad data that serves as the base for many different applications. GPT-4, Claude, and Gemini are foundation models. Other tools and services are built on top of them.
Why it matters: When a vendor says their product "uses AI," it often means they have built a layer on top of a foundation model. Knowing this helps you ask better questions about where the underlying technology comes from and what data it was trained on.
G
Generative AI
AI that creates new content, such as text, images, audio, or code. It does this by predicting the most likely next piece of content based on patterns learned during training. ChatGPT, Microsoft Copilot, Google Gemini, and Claude are all generative AI tools.
Why it matters: Generative AI is the type most social care practitioners will encounter. It can draft documents, summarise text, and answer questions, but it can also produce confident-sounding misinformation. Professional judgement must always come first.
Governance (AI Governance)
The policies, processes, and oversight structures an organisation puts in place to make sure AI is used safely, ethically, and in line with regulations. This includes deciding who can use AI tools, what data can be shared, how outputs are checked, and how risks are managed.
Why it matters: Without governance, AI use becomes ad hoc and risky. Good governance protects service users, supports practitioners, and gives leadership confidence that AI is being used responsibly. It is also what regulators like CQC will look for.
H
Hallucination
When an AI tool generates information that sounds confident and plausible but is factually wrong. This happens because the model predicts likely text, not verified facts. It has no concept of truth, only probability.
Why it matters: In social care, a hallucinated fact in a care plan, assessment, or report could put someone at risk. Always verify AI generated content against reliable sources before using it in practice.
Human in the Loop
A design principle where a human reviews, approves, or modifies an AI system's output before it is acted upon. The AI assists, but a person makes the final decision.
Why it matters: This is the cornerstone of safe AI use in social care. Decisions about people's care, safety, and wellbeing must always have a qualified human making the final call. AI is the assistant, not the decision maker.
L
Large Language Model (LLM)
An AI system trained on enormous amounts of text to understand and generate human-like language. "Large" refers to the billions of parameters (settings) the model uses. ChatGPT, Claude, and Gemini are all large language models.
Why it matters: LLMs are the engines behind most generative AI tools you will encounter. Knowing this helps you understand that when a tool writes text, it is not drawing on a verified knowledge base; it is predicting words based on statistical patterns.
M
Machine Learning (ML)
A branch of AI where systems learn from data rather than being explicitly programmed. Instead of writing rules for every scenario, you give the system examples and it figures out the patterns itself.
Why it matters: Machine learning is what makes AI tools improve over time. But "learning from data" means the quality and fairness of the data directly affects the quality and fairness of the outputs. Biased data in, biased results out.
Model
The trained AI system itself. When someone says "the model," they mean the software that has been trained on data and can now make predictions or generate outputs. GPT-4o, Claude Sonnet, and Gemini Pro are all models.
Why it matters: Different models have different strengths, weaknesses, and risks. Knowing which model powers a tool helps you assess its reliability and understand its limitations.
N
Natural Language Processing (NLP)
The branch of AI that deals with enabling computers to understand, interpret, and generate human language. Every time an AI tool reads your question and writes a response, NLP is at work.
Why it matters: NLP is what allows AI tools to read care plans, summarise reports, and generate draft documents. Understanding that it processes patterns in language, rather than truly comprehending meaning, helps you use these tools with appropriate caution.
O
Open Source
Software where the underlying code is publicly available for anyone to inspect, modify, and improve. The opposite of closed source. DeepSeek, Llama (by Meta), and Mistral are examples of open source AI models.
Why it matters: Open source tools offer greater transparency, which is valuable in social care where accountability matters. You or your organisation can, in principle, verify how the tool works and what it does with data.
P
Prompt
The instruction or question you type into an AI tool. Everything the AI produces depends on what you ask and how you ask it. A prompt can be a single sentence or a detailed set of instructions.
Why it matters: The quality of AI output depends heavily on the quality of the prompt. Vague questions get vague answers. For social care tasks, clear, specific prompts with context produce more useful and safer results.
Prompt Engineering
The skill of writing clear, specific instructions to get the best possible output from an AI tool. This includes providing context (who you are, what you need), specifying the format, setting boundaries, and telling the AI what not to do.
Why it matters: Better prompts mean more relevant and safer outputs. For social care practitioners, learning basic prompt engineering can turn a general purpose AI tool into something genuinely useful for your work.
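The ingredients above (context, task, format, boundaries) can be combined in a simple template. A Python sketch with made-up example values, to show the structure rather than prescribe a standard:

```python
def build_prompt(role, task, format_spec, boundaries):
    """Combine the four ingredients of a clear prompt into one instruction."""
    return (
        f"You are assisting {role}.\n"
        f"Task: {task}\n"
        f"Format: {format_spec}\n"
        f"Do not: {boundaries}\n"
    )

prompt = build_prompt(
    role="a social care practitioner drafting internal notes",
    task="Summarise the meeting notes below in plain English.",
    format_spec="Five bullet points, no jargon.",
    boundaries="invent names, dates, or facts not in the notes.",
)
print(prompt)
```

Even typed by hand rather than built in code, a prompt with these four parts will usually outperform a one-line request.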
R
Retrieval Augmented Generation (RAG)
A technique where an AI tool searches a specific knowledge base before generating a response, rather than relying only on what it learned during training. It retrieves relevant information first, then uses it to create a grounded answer.
Why it matters: RAG reduces hallucinations by anchoring AI responses in real, verified information. For social care, this is particularly useful for tools that need to reference specific policies, legislation, or organisational procedures accurately.
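In miniature, the retrieve-then-generate pattern looks like this. The Python sketch below uses crude keyword overlap as a stand-in for real semantic search, and two made-up policy passages; the point is the order of operations, retrieve first, then answer from what was retrieved:

```python
def score(question, passage):
    """Count shared words -- a crude stand-in for semantic search."""
    q_words = set(question.lower().split())
    return len(q_words & set(passage.lower().split()))

def retrieve_then_answer(question, knowledge_base):
    """Pick the most relevant passage, then ground the prompt in it."""
    best = max(knowledge_base, key=lambda p: score(question, p))
    return (
        "Answer using ONLY the passage below.\n"
        f"Passage: {best}\n"
        f"Question: {question}"
    )

kb = [
    "Referrals for housing support go to the housing team within two days.",
    "Safeguarding concerns must be reported to the duty manager immediately.",
]
prompt = retrieve_then_answer("Who do I report a safeguarding concern to?", kb)
print(prompt)
```

Because the model is instructed to answer only from the retrieved passage, it has far less room to invent an answer.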
S
Shadow AI
When staff use AI tools at work without the knowledge or approval of their organisation. This might mean pasting case notes into ChatGPT on a personal phone, or using an unapproved tool to draft emails.
Why it matters: Shadow AI is one of the biggest risks in social care right now. It creates data protection issues, confidentiality breaches, and accountability gaps. Organisations need clear policies and approved tools so staff can use AI safely rather than in secret.
Synthetic Data
Artificially generated data that mimics the patterns and properties of real data but does not contain any actual personal information. It is created by AI models trained on real datasets.
Why it matters: Synthetic data allows organisations to test and train AI systems without exposing real people's information. For social care, it could enable innovation while protecting the privacy of service users.
T
Temperature
A setting that controls how creative or random an AI's output is. A low temperature makes the model pick the safest, most predictable word each time. A high temperature lets it take more chances, which can produce more varied but less reliable output.
Why it matters: If you are using AI for factual tasks like summarising legislation, you want a low temperature. If you are brainstorming ideas, a higher temperature might help. Understanding this setting helps you get better results.
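Under the hood, temperature divides the model's raw word scores before they are turned into probabilities. A toy Python sketch with made-up scores for three candidate next words:

```python
import math

def sample_weights(scores, temperature):
    """Convert raw model scores into word-choice probabilities.

    Low temperature sharpens the distribution (the safest word dominates);
    high temperature flattens it (more variety, less predictability).
    """
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]            # made-up scores for three words
print(sample_weights(scores, 0.2))  # top word gets almost all the probability
print(sample_weights(scores, 2.0))  # probability spread far more evenly
```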
Token
The small chunks that AI breaks text into when processing it. Sometimes a token is a whole word, sometimes it is part of a word. The word "unbelievable" might be split into "un," "believ," and "able." AI processes text as a sequence of tokens, not words.
Why it matters: Tokens determine how much text an AI tool can handle at once. If your document exceeds the tool's token limit (its context window), it will miss information. Knowing this helps you work within the tool's capabilities.
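A toy Python sketch of subword splitting, using a made-up vocabulary (real tokenisers learn vocabularies of tens of thousands of pieces from data):

```python
def tokenize(word, vocab):
    """Greedily match the longest known piece from the front of the word."""
    tokens = []
    while word:
        for end in range(len(word), 0, -1):
            piece = word[:end]
            if piece in vocab or end == 1:  # fall back to single characters
                tokens.append(piece)
                word = word[end:]
                break
    return tokens

vocab = {"un", "believ", "able", "social", "work", "er"}
print(tokenize("unbelievable", vocab))  # ['un', 'believ', 'able']
print(tokenize("socialworker", vocab))  # ['social', 'work', 'er']
```

Unfamiliar words get split into more pieces, which is one reason jargon-heavy text uses up a token budget faster than plain English.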
Training Data
The large collection of text, images, or other data that an AI model learns from during its initial development. For a large language model, this typically includes books, websites, articles, and other publicly available text.
Why it matters: The training data shapes everything the AI knows and how it behaves. If the training data lacks representation of social care contexts, the AI may not understand the sector well. If it contains biased content, the AI will reproduce those biases.
Transformer
The type of neural network architecture that powers most modern AI language models. The "T" in GPT stands for Transformer. It is particularly good at understanding context and relationships between words across long pieces of text.
Why it matters: You do not need to understand how transformers work technically, but knowing the term helps you follow conversations about AI and assess vendor claims about their products.
V
Vector (Embedding)
A way of representing words, sentences, or entire documents as lists of numbers so that AI can measure how similar they are. Think of it like plotting words on a map: words with similar meanings end up close together. "Social worker" and "practitioner" would be near each other; "spreadsheet" would be far away.
Why it matters: Vectors are how AI "understands" that related concepts belong together, even if the exact words are different. This is the technology behind semantic search, where an AI tool can find relevant information based on meaning rather than exact keyword matches. For social care, this means AI tools can potentially search case records by concept, not just by the exact phrase you type.
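The "closeness" between vectors is usually measured with cosine similarity, which asks how nearly two vectors point in the same direction. A Python sketch using made-up three-number vectors (real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """1.0 means pointing the same way; values near 0 mean unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up vectors: related roles point roughly the same way.
social_worker = [0.9, 0.8, 0.1]
practitioner = [0.85, 0.75, 0.2]
spreadsheet = [0.1, 0.2, 0.9]

print(cosine_similarity(social_worker, practitioner))  # close to 1
print(cosine_similarity(social_worker, spreadsheet))   # much lower
```

This single measurement, repeated across millions of stored vectors, is what powers search by meaning rather than by exact keyword.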
Missing a term?
This glossary is a living document. If you have come across an AI term that is not here and you want it explained, let us know.