Professional Doctorate Research Nottingham Trent University

Write or Wrong: Can We Trust What We Cannot Verify?

"Towards an AI Literacy Framework for Social Work Practice: What pedagogical framework is needed to develop AI literacy that enables social work practitioners to actively use rather than passively accept AI-generated assessment content?"

Researcher Nadia Hajat (ORCID: 0009-0007-2512-6397)
Institution Nottingham Trent University

The Field Right Now

Reality vs. Readiness

86%
No AI Preparation

Newly qualified social workers are entering practice without formal AI literacy training, despite 60% already using these tools (Rothera and Macdonald, 2025).

85
UK Local Authorities

As of early 2025, transcription AI is in active use across 85 local authorities. Adoption is frequently driven by administrative burden rather than evidence (Ada Lovelace Institute, 2026).

Research commissioned by Social Work England highlights a critical tension: AI may reduce workloads, but it risks generating inaccuracies, entrenching biases, and eroding critical thinking. The Gardiner et al. (2026) scoping review, drawing on a decade of literature, confirms that AI is beneficial in systematic environments but fails in the dynamic, relational context of social care assessments that require professional judgement and understanding of relevant legislation.

These findings directly inform the TESSA governance framework, which embeds BASW professional standards and CQC inspection expectations into practical AI guidance. For practitioners already working with AI tools, our learning resources translate this research into day-to-day practice support.


Voices of Authority

Research and Institutions

Ada Lovelace Institute
Scribe and Prejudice? (2026)

Identified clear hallucinations entering social work records, including invented suicidal ideation. Characterises social workers as the "primary safety mechanism" even though they remain under-trained for that role.

Key Insight

The accountability load is drifting downwards without matched support or shared standards.

M.C. Elish
Data and Society
The Moral Crumple Zone

Describes how humans embedded in automated systems absorb moral and legal responsibility for failures they did not cause.

Ben Green
University of Michigan
The Myth of Oversight

Policies requiring "human oversight" offer a false sense of security while distributing risk downward.

David Wilkins
Cardiff University
Professional Distinction

Comparing LLMs to practitioners: AI fails at the relational and social context dimensions of social work.

Fern Gillon and Beth Weaver
University of Strathclyde
Erosion of Critical Thinking

If the tool produces the narrative, the professional reasoning that connects assessment to intervention is absent.

Where the Evidence is Thin

The Vertical Slice Gap

Vertical Accountability

We lack evidence on what AI literacy looks like for Team Managers and Commissioners. The accountability chain breaks if the supervisor cannot audit the AI-generated draft.

Research Gap 01

Personhood and Consent

Meaningful consent is absent from the current conversation. We do not know what people drawing on care think about their lives being processed by LLMs.

Research Gap 02

Researcher Profile

Nadia Hajat

Study Design
Three-condition pilot | Synthetic cases | Fidelity benchmarks

Write or Wrong?

Nadia Hajat is a qualified social worker and Director of Tessa Tools Ltd. Her research examines whether mechanism-based AI literacy training changes practitioner engagement with machine-generated content.

Nadia identifies as neurodivergent and has directly used AI tools to manage cognitive load under caseload pressure. Her position is that benefit cannot be realised without structured preparation.

"The research asks what training needs to contain to ensure practitioners can actively use rather than passively accept AI outputs. Good assessment is its own defence."