"Towards an AI Literacy Framework for Social Work Practice: What pedagogical framework is needed to develop AI literacy that enables social work practitioners to actively use rather than passively accept AI-generated assessment content?"
Reality vs. Readiness
Newly qualified social workers are entering practice without formal AI literacy, even though 60% are already using these tools (Rothera and Macdonald, 2025).
As of early 2025, transcription AI is active in 85 local authorities, with adoption frequently driven by administrative burden rather than evidence (Ada Lovelace Institute, 2026).
Research commissioned by Social Work England highlights a critical tension: AI may reduce workloads, but it risks generating inaccuracies, entrenching biases, and eroding critical thinking. The Gardiner et al. (2026) scoping review, drawing on a decade of literature, confirms that AI performs well in systematic environments but fails in the dynamic, relational context of social care assessment, which requires professional judgement and an understanding of relevant legislation.
These findings directly inform the TESSA governance framework, which embeds BASW professional standards and CQC inspection expectations into practical AI guidance. For practitioners already working with AI tools, our learning resources translate this research into day-to-day practice support.
Research and Institutions
Studies have identified clear hallucinations entering social work records, including invented suicidal ideation, and characterise social workers as the "primary safety mechanism" even though they remain under-trained.
The accountability load is drifting downwards without matched support or shared standards.
This literature describes how humans in automated systems absorb legal responsibility for failures they did not cause.
Policies requiring "human oversight" offer a false sense of security while distributing risk downward.
When LLMs are compared to practitioners, AI fails at the relational and social-context dimensions of social work.
If the tool produces the narrative, the professional reasoning that connects assessment to intervention is absent.
Research Gap 01: The Vertical Slice Gap
We lack evidence on what AI literacy looks like for Team Managers and Commissioners. The accountability chain is broken if the supervisor cannot audit the AI draft.
Research Gap 02
Meaningful consent is absent from the current conversation. We do not know what people drawing on care think about their lives being processed by LLMs.
Nadia Hajat
Nadia Hajat is a qualified social worker and Director of Tessa Tools Ltd. Her research examines whether mechanism-based AI literacy training changes practitioner engagement with machine-generated content.
Nadia identifies as neurodivergent and has directly used AI tools to manage cognitive load under caseload pressure. Her position is that AI's benefits cannot be realised without structured preparation.
"The research asks what training needs to contain to ensure practitioners can actively use rather than passively accept AI outputs. Good assessment is its own defence."