
Ada Lovelace Findings Confirm the Training Gap I’ve Been Warning About


Nadia Hajat | 11 February 2026

Today the Guardian published findings from an eight-month Ada Lovelace Institute study across 17 councils. The headline finding: AI transcription tools used in social work are generating hallucinations that are entering official records.


One practitioner reported that the AI fabricated a reference to suicidal ideation that the client had never mentioned. Another described transcripts that referred to “fishfingers or flies or trees” when a child was describing parents fighting. These are not edge cases. These are not hypothetical risks. These are AI-generated errors entering the statutory records that inform decisions about children’s safety, adults’ care needs, and families’ futures.

I have been writing and speaking about this risk for over a year. The Ada Lovelace findings do not surprise me. But they should alarm every social worker, supervisor, manager, and policymaker in the UK.



What the study found

The Ada Lovelace Institute conducted an eight-month study across 17 local authorities examining how AI transcription and note-taking tools are being used in social work. The findings, reported in the Guardian on 11 February 2026, include:

  • AI tools generating fabricated content (hallucinations) in social work records

  • Practitioners reporting nonsensical transcriptions of sensitive conversations with children and families

  • AI systems inserting clinical language (such as suicidal ideation) that was never spoken by the client

  • The British Association of Social Workers confirming that disciplinary action is already being taken against practitioners for failing to properly check AI outputs

This last point deserves its own section.


Practitioners are being punished for a system failure

Social workers are now facing professional consequences for failing to spot AI hallucinations they were never trained to identify. Think about what that means. A council procures an AI tool. It rolls the tool out with minimal preparation. It expects practitioners to provide “human oversight” of AI outputs. It does not define what that oversight requires. It does not train practitioners to identify hallucinations, fabrication patterns, or domain-specific AI errors. It does not resource the time needed for meaningful review.


Then, when the AI generates fabricated content and a practitioner does not catch it, the practitioner faces disciplinary action. This is what technology researcher M.C. Elish calls a moral crumple zone: the point where a human absorbs the blame for a system failure they were never equipped to prevent. The practitioner becomes the crumple zone between the AI system that generated the error and the organisation that failed to prepare them.


I wrote about this concept in detail in my recent analysis, When AI Meets Social Work: Automation Bias, the Training Gap, and Why Governance Alone Is Not Enough. The Ada Lovelace findings are a case study in exactly the dynamic I described.


The training gap in numbers

Social Work England’s 2025 commissioned research provides the context for understanding why the Ada Lovelace findings were inevitable:

  • 86% of newly qualified social workers received no AI training during their degree

  • 60% of social workers are already using AI tools in practice

  • 83% of practitioners believe AI reduces administrative burden

  • Magic Notes, one of the AI transcription tools implicated in the Ada Lovelace study, has been adopted by approximately 100 councils across the UK

Microsoft Copilot is being rolled out across local authorities. The technology is here. The training is not.

When 86% of newly qualified practitioners have received no AI training and 60% are already using AI tools, we are not looking at a future risk. We are looking at a current crisis.


Why “just check the outputs” is not enough

The standard response to concerns about AI accuracy is some version of: practitioners should check the outputs before they use them. This sounds reasonable. It is not.

There are three problems with the “just check it” approach.

First, checking requires knowledge. If you do not know what hallucinations look like, you cannot spot them. AI hallucinations are not random gibberish (although the “fishfingers or flies or trees” example suggests some are). Many hallucinations are plausible-sounding fabrications that read as if they could be accurate. Identifying them requires understanding how AI systems generate text, where they fail, and what patterns of error look like in your specific domain. Without training, practitioners are reviewing outputs they are not equipped to evaluate.


Second, checking requires cognitive capacity. This is where automation bias enters the picture. Automation bias is the well-documented tendency to defer to algorithmic outputs, especially under cognitive load (Jones-Jang and Park; Parasuraman and Manzey). When you are managing a complex caseload under impossible time pressures and an AI tool offers you a pre-written assessment, the cognitive pull to accept that output is powerful. The whole point of these tools is to reduce cognitive burden. Asking practitioners to then apply the same level of critical scrutiny they would apply to their own writing defeats the purpose and is unlikely to happen consistently under real-world conditions.


Ben Green’s research on human oversight of algorithmic systems makes this point precisely: the assumption that humans can provide meaningful oversight of AI outputs is often false, because oversight is not cognitively costless. It requires attention, expertise, and time, all of which are in short supply in frontline social work.

Third, checking requires time. Properly reviewing an AI-generated assessment against a source conversation is not a quick task. It requires going back to notes, checking specific claims, verifying language and tone. That time is not being resourced. Supervision capacity is not being increased to accommodate AI-assisted work. We are expecting practitioners to produce more, faster, while also providing the level of review that would make AI use safe. These two demands are in direct tension.


What AI literacy training actually needs to include

My doctoral research is investigating what AI literacy training should look like for social workers. That research will take time to complete properly. But we already know enough to start.


At minimum, practitioners using AI tools need:


  • Understanding of how AI generates text and why hallucinations occur. Not deep technical knowledge, but enough to understand that AI does not “know” anything. It predicts likely text sequences. That prediction process can produce fabricated content that reads as plausible (a simplified sketch of this prediction process follows this list).


  • Domain-specific error recognition. What do hallucinations look like in social work contexts specifically? How does AI handle clinical terminology, emotional content, ambiguous language? Where are the highest-risk failure points?


  • Automation bias awareness. Understanding the cognitive tendency to defer to AI outputs, especially under pressure, and having practical strategies to counteract it.

  • Structured review protocols. Not “check the outputs” but specific, practical procedures for reviewing AI-generated content against source material.


  • Understanding of model differences. Different AI tools work differently. Magic Notes, Copilot, and other tools have different architectures, different strengths, and different failure modes. Practitioners need to understand what they are using.


  • Prompt engineering basics. How the way you interact with AI systems affects the quality and accuracy of their outputs.
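
To make the first bullet concrete, here is a deliberately simplified sketch of next-word prediction, written in Python. The words and probabilities are invented purely for illustration and bear no relation to how any specific product, such as Magic Notes or Copilot, is actually built. Real systems work over far larger vocabularies and much longer contexts, but the underlying mechanism is the same: the model chooses whatever continuation is statistically likely given the preceding text, and nothing in that process checks the output against what the client actually said.

```python
# Toy illustration only: invented word probabilities, not output from any
# real transcription tool.
import random

# Hypothetical probabilities a model might assign to the next word after
# the fragment "The client expressed ..." -- learned from patterns in
# training text, not from anything the client actually said.
next_word_probabilities = {
    "concerns": 0.35,
    "frustration": 0.25,
    "suicidal": 0.15,     # clinically loaded, yet statistically "plausible"
    "gratitude": 0.15,
    "fishfingers": 0.10,  # low-probability noise can still be sampled
}

def predict_next_word(probabilities):
    """Sample one word in proportion to its probability."""
    words = list(probabilities)
    weights = list(probabilities.values())
    return random.choices(words, weights=weights, k=1)[0]

# The resulting sentence may read plausibly or may be fabricated; nothing
# in the prediction step verifies it against the source conversation.
print("The client expressed", predict_next_word(next_word_probabilities))
```

That is why a hallucination can read as a perfectly ordinary sentence. Plausibility, not accuracy, drives the output.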


Supervisors need additional training on quality-assuring AI-assisted work and on identifying when practitioners are over-relying on AI outputs. Service managers and leaders need an understanding of procurement requirements, implementation planning, and why governance policies alone are insufficient.


Governance is necessary but not sufficient

The Ada Lovelace study found that some councils had governance policies in place. Those policies did not prevent hallucinations from entering records.

This is not surprising. A governance policy is a document. It tells practitioners what they should do. It does not equip them to do it. You cannot govern your way out of a training gap.

I am not arguing against governance. Clear policies on AI use in statutory contexts are essential. But a policy that says “practitioners must review AI outputs for accuracy” is meaningless if practitioners have not been trained to identify inaccuracies, do not have time to conduct thorough reviews, and are working under conditions that make automation bias almost inevitable. Governance without training is a framework for blame allocation, not risk mitigation.


The urgency

Social workers are using AI tools now. Councils are procuring AI systems now. Practitioners are facing disciplinary action now. Children and families are having their lives documented by AI systems that fabricate content now.


We need to act on what we already know:

  1. Develop interim training protocols drawing on existing automation bias research, AI literacy frameworks, and emerging social work-specific evidence. Perfect should not be the enemy of good enough.

  2. Mandate AI literacy content in social work degree programmes. The next cohort of qualifying social workers will enter a profession where AI tools are ubiquitous. They need to be prepared.

  3. Resource supervision time for AI-assisted work. Oversight is not free. If you want practitioners to properly review AI outputs, you must give them the time and supervision capacity to do so.

  4. Review disciplinary frameworks. When a practitioner fails to spot a hallucination they were never trained to identify, in a system they were given no preparation to use, under time pressures that make thorough review impossible, that is not an individual failure. It is an organisational one.

  5. Urgently commission more sector-specific research into AI implementation in social work practice. Most of our current evidence base is imported from health and education. Social work has unique characteristics (statutory responsibilities, narrative-based assessments, vulnerable populations) that demand sector-specific understanding.


The moral crumple zone is here

The Ada Lovelace findings are not a warning about what might happen. They are evidence of what is already happening. AI tools are generating fabricated content in social work records. Practitioners are absorbing the consequences. The training gap that makes this inevitable has been documented, reported on, and written about. And still, councils continue to procure and implement AI tools without adequately preparing the people who use them.

When a social worker is disciplined for failing to spot an AI hallucination they were never trained to identify, that is not accountability. That is a moral crumple zone.

We can do better than this. But only if we start treating AI implementation as a workforce development issue, not just a procurement decision.

The evidence is now overwhelming.


The question is whether we act on it.

Nadia Hajat is an independent social worker and doctoral researcher at Nottingham Trent University, investigating AI implementation in social work practice. She is the founder of Tessa Tools Ltd.

For training enquiries: Contact me here

 
 
 
