Can AI Replace Social Workers? Here Is What the Evidence Actually Says
This is the question I hear most often. At sector events, in training sessions, in messages from practitioners who are anxious but do not quite know what they are anxious about. So let me answer it directly.
Right now: no. AI cannot replace social workers.
But I want to explain why that answer is more complicated than it sounds, and why something Anthropic published recently has made me think about it differently. Not because I have changed my mind. Because the question itself is changing.
Moravec's Paradox: why this matters for social work
In the 1980s, roboticist Hans Moravec made an observation that has held up remarkably well. He noticed that the tasks computers find easy are not the tasks humans find hard, and vice versa. Chess? Straightforward for a computer. Walking across an uneven floor and reading someone's expression as you approach them? Enormously complex. Still hard.
Moravec's Paradox: the high-level, abstract reasoning we associate with intelligence is computationally simple for AI. The sensorimotor, relational, contextual work we do without thinking is where AI still struggles most.
Now apply that to social work.
The tasks AI can assist with in social work: drafting case notes from a verbal summary, cross-referencing records, flagging patterns in structured data. These are genuine time-savers, and AI is getting better at them fast.
The tasks at the core of social work practice: reading a room. Noticing that a parent's account of events does not align with what the child's body language is communicating. Holding the uncertainty of a safeguarding decision where every option carries risk. Building enough trust with a person in crisis that they tell you what is actually happening. Knowing when to push and when to wait.
Those are not peripheral skills. That is the job. And they fall squarely in the category that Moravec's Paradox tells us AI finds hardest.
What Anthropic just changed
Here is where it gets more interesting.
In April 2026, Anthropic's interpretability research team published a paper titled "Emotion concepts and their function in a large language model." Using colour-mapped visualisations of internal model activations, researchers identified 171 distinct emotional states with measurable internal representations: signals that influence how the model processes and responds. These are not designed in. They emerge. They are measurable. They are real in the sense that they shape outputs.
This is not science fiction. It is the frontier of what we now understand about how these systems work.
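To make "measurable internal representation" concrete, here is a deliberately simplified sketch of one common interpretability technique: training a linear probe on a model's hidden activations to test whether a concept can be read out of them. Everything below is synthetic and hypothetical: the activations, the dimensions, and the "frustration" direction are invented for illustration, and this is not Anthropic's method, only the general kind of measurement such research involves.

```python
# Illustrative sketch only. Synthetic stand-ins for model activations;
# the "frustration" direction is hypothetical, not a real model feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_model = 512                        # hidden size of a hypothetical layer
concept = rng.normal(size=d_model)   # hypothetical "frustration" direction
concept /= np.linalg.norm(concept)

# Build synthetic activations: the concept direction is added to half of them.
n = 2000
labels = rng.integers(0, 2, size=n)  # 1 = concept present, 0 = absent
acts = rng.normal(size=(n, d_model)) + 1.5 * labels[:, None] * concept

# If a simple linear probe can separate the two groups, the concept is
# "linearly readable" from the activations -- the sense in which an internal
# representation is said to be measurable.
probe = LogisticRegression(max_iter=1000).fit(acts[:1500], labels[:1500])
print(f"probe accuracy: {probe.score(acts[1500:], labels[1500:]):.2f}")
```

The point of the sketch is narrow: "measurable" means something statistical and testable, not something felt. A probe detecting a direction in activation space tells you the model represents a concept; it tells you nothing about experience.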
I want to be precise about what this does and does not mean. It does not mean AI is conscious. It does not mean AI experiences the world. It does not mean AI can do what a social worker does. But it does mean we can no longer accurately describe AI as "just a tool" in the way we might describe a spreadsheet or a database. Something more complex is happening internally. The question of what that means for professional practice deserves serious engagement, not dismissal.
For those arguing that AI will eventually replace social workers, this research will seem significant. Here is why I think it is not the decisive factor.
What social work actually requires
Even if AI develops something that functions like awareness of its own emotional states, that is not the same as being able to do what a social worker does. Professional social work practice is shaped by requirements that have no AI equivalent.
The things AI cannot replicate
Professional accountability under the HCPC Standards of Proficiency and Social Work England Professional Standards. A social worker is personally and professionally accountable for their decisions. AI is not regulated, cannot be supervised in the professional sense, and cannot carry that accountability.
Ethical obligations under the BASW Code of Ethics. Social work practice is grounded in values: human dignity, self-determination, social justice. These are not parameters in a prompt. They are professional commitments that shape how decisions are made.
Relational practice that requires a body, a history, and a stake in the outcome. AI has none of these things. It has no experience of what it means to sit with a family in crisis, no history of carrying cases, no professional identity formed through practice and supervision.
Trauma-informed presence. Being with someone in distress in a way that holds them. This is not something that can be replicated by a system that processes text.
A social worker can be registered. They can be struck off. They can be supervised, supported, and held accountable by a professional regulator. They can be called into court and required to defend their reasoning. AI cannot. These are not small things. They are what professional practice means.
The real risk is not the technology
The risk is not that AI will decide to replace social workers. AI does not make those decisions.
The risk is that organisations under financial pressure, facing workforce shortages, and looking for ways to manage increasing demand, will use AI as a justification for what some have wanted to do for years: reduce headcount, dilute professional thresholds, and substitute AI-generated risk assessments for qualified social work judgment.
That is not a technology problem. It is a workforce planning and political problem. And it is worth naming clearly.
The Ada Lovelace Institute's 2025 review of AI in local authority social care found that some councils are already using AI tools to triage cases and generate draft assessments without adequate human review. The risk is not that the AI has replaced the social worker in name. It is that the quality of human oversight has been quietly reduced to the point where the worker is, in practice, rubber-stamping outputs they do not have time to scrutinise.
M.C. Elish named this the moral crumple zone: the point where a human absorbs the blame for a system failure they were never equipped to prevent. Social workers without AI training are uniquely vulnerable to becoming exactly that.
What practitioners need to do
Understand the tools you use. Not at a technical level, but at a practical one. What kind of errors do large language models make? How do you recognise when an AI-generated assessment has omitted something critical? What does automation bias look like in your own practice?
Know your rights under UK GDPR. Article 22 gives individuals the right not to be subject to solely automated decisions that have legal or similarly significant effects, along with the right to human intervention in such decisions. Those protections apply in social care. They apply to the people you work with.
Push back when AI is positioned as a replacement for professional judgment rather than a support for it. That distinction matters. It is the difference between a tool that frees up your time and a system that erodes your professional role.
And keep asking the question. Not "can AI replace me?" but "is this AI being used in a way that supports or undermines the quality of social work?"
Those are very different questions. The answer to the second one will shape this profession for the next decade.
The answer, for now
Moravec's Paradox tells us that the things AI finds hardest are exactly the things that social work is built on. Anthropic's research tells us that AI is more complex internally than we previously understood. Neither of those facts changes the answer to the question.
Can AI replace social workers? Right now: no. AI cannot write a decent social work assessment with any consistency. It cannot hold a safeguarding decision. It cannot sit with a family in crisis. The practitioners who will do well in ten years are those learning to integrate these tools into their practice now, on their own terms, with a clear understanding of what AI can and cannot do.
That said, the honest answer is not "never." If AI moves from large language models to world models that can reason about physical and relational environments, and if robotics develops to the point where AI can engage bodily with people in distress, the question becomes more complex. That is a future question. And it may not resolve as replacement: people may want to choose between a human social worker, an AI-supported one, or something hybrid. That choice should belong to the people using services. Anthropic's emotions research points in an interesting direction too. As developers begin to understand how AI internally represents and processes emotional states, it may become better at mimicking the relational dimensions of practice. That does not make it a social worker. But it means the conversation will need to continue.
What AI can do, today, is help practitioners work better. With the right training, the right governance, and the right tools, AI supports the work without supplanting it. That is the future worth building. And it is the future that good governance, not wishful thinking, will protect.
If you want to understand what ethical AI governance looks like in practice, start with the TESSA Responsible AI Framework. It is free, grounded in social care practice, and built for the people doing the work.
References and Further Reading
Ada Lovelace Institute. (2025). AI in local authority social care: findings from 17 councils. https://www.adalovelaceinstitute.org/
Anthropic Interpretability Team. (2026). Emotion concepts and their function in a large language model. Anthropic. https://www.anthropic.com/research/emotion-concepts-function
BASW. (2021). The code of ethics for social work. British Association of Social Workers.
Elish, M.C. (2019). Moral crumple zones: cautionary tales in human-robot interaction. Engaging Science, Technology, and Society, 5, 40–60.
HCPC. (2017). Standards of proficiency: social workers in England. Health and Care Professions Council.
Moravec, H. (1988). Mind children: the future of robot and human intelligence. Harvard University Press.
Social Work England. (2025). AI and social work: commissioned research findings. https://www.socialworkengland.org.uk/