
When AI Meets Social Work: Automation Bias, the Training Gap, and Why Governance Alone Is Not Enough

  • Writer: Nadia Hajat
  • Feb 8
  • 3 min read

Updated: Feb 16


The headline: 83% of social workers believe AI could reduce administrative burden.

The problem: 86% of newly qualified social workers receive no AI training, yet 60% are already using AI tools in practice. That is not innovation for the sector. That is a high-risk trade-off.


What Social Work England Found

In late 2025, Social Work England published two commissioned research reports on AI use in social work. The findings confirm what many leaders suspect: AI is already being used across social work, social care, healthcare, and education. But the sector is unprepared.

Practitioners are using generative AI tools without organisational guidance. Some are copying case records into publicly available AI platforms, believing it is ethical if they remove names.

Evidence on AI in social work is sparse. Most of it comes from healthcare pilots, not long-term evaluations in statutory social work contexts.

The training gap is significant. AI literacy is absent from most qualifying programmes and continuing professional development. Practitioners are adopting tools they do not understand, under time pressure, with high caseloads and safeguarding responsibilities.


Automation Bias: The Hidden Risk

Automation bias is the well-documented tendency to defer to algorithmic outputs, especially under cognitive load or stress. Research shows that humans accept incorrect AI-generated content more often when they are tired, rushed, or juggling multiple demands.

In social work, social care, and healthcare, that means:


  • Accepting AI-generated Care Act assessments without adequate scrutiny

  • Missing omissions or inaccuracies because the system did not flag them

  • Losing professional judgement over time as AI takes on more of the cognitive labour


When things go wrong, practitioners absorb the blame. The AI vendor is protected. The organisation points to policy. The social worker, who was never trained to critically evaluate the output, becomes the "moral crumple zone."


Why Governance Is Not Enough

Both research reports recommend governance frameworks, ethical oversight, and regulatory alignment. These are necessary. But governance does not change behaviour if practitioners lack the knowledge to implement it. The sector has named the risks: ethics, privacy, accountability, safeguarding. What it has not done is determine what AI literacy practitioners actually need. This is not about turning social workers into data scientists. It is about ensuring professionals working in statutory contexts can:


  • Recognise when AI output is unreliable

  • Identify when key information has been omitted or distorted

  • Maintain professional judgement when using AI-generated content for Care Act assessments, safeguarding decisions, and care planning


What Your Organisation Needs

AI adoption without training creates organisational risk. Effective AI use requires:


For practitioners: AI literacy training on how large language models work, how to critically evaluate outputs, and how to maintain narrative fidelity in assessments


For supervisors: Quality assurance frameworks that identify where AI-generated content compromises professional standards


For leaders: Procurement guidance, governance frameworks, and organisational readiness assessments that go beyond vendor promises


For educators: AI literacy embedded into qualifying programmes, not as an add-on, but as core professional knowledge


How Tessa Tools Can Help

Tessa Tools provides research-informed AI training, consultancy, and quality assurance support for organisations adopting AI in social work, social care, healthcare, and education.

I am a practising social worker and doctoral researcher at Nottingham Trent University. I have used AI tools (Magic Notes, Microsoft Copilot) while carrying a statutory caseload. I research the gap between AI adoption and professional preparation, and I train organisations to use AI safely.


Services include:

  • AI literacy training for practitioners, supervisors, and leaders

  • Policy development and governance frameworks

  • Quality assurance methods for AI-generated content

  • Organisational readiness assessments and procurement guidance

The sector does not need to choose between innovation and safety. It needs both, held together by training that is grounded in evidence, informed by practice, and specific enough to make a difference.

