What a Responsible AI Framework for Social Care Actually Looks Like
Most AI policies in social care look the same. They reference UK GDPR. They list principles: transparency, accountability, fairness, human oversight. They acknowledge the risks. They are circulated, filed, and largely unread by the practitioners who are meant to follow them.
That is not a framework. That is a document.
The difference matters more than most organisations realise. And as CQC and Ofsted sharpen their focus on AI governance, it is a distinction that will start to show up in inspection outcomes.
A policy tells you what the organisation believes. A framework tells practitioners what to do.
This is the core distinction. A policy articulates values and intent. It might say something like "all AI use must respect human dignity and comply with data protection legislation." That is fine. It is also almost entirely useless to a social worker sitting in front of an AI-generated care assessment at four o'clock on a Friday afternoon, wondering whether the summary they are looking at is accurate.
A responsible AI framework gives that practitioner something actionable. It answers the questions that a policy cannot.
What should I check in this output before I rely on it? What do I do if I think something is wrong? Who is accountable if this assessment contributes to a poor outcome? What training have I had that equips me to make that judgment?
If your organisation cannot answer those questions, you have a policy. You do not have a framework.
What most organisations have right now
In my work with social care organisations across England, I see the same pattern repeatedly. The policy exists, sometimes in draft form. The AI tools are already in use. The gap between the two is where practitioners are most exposed.
What most policies contain
- Reference to UK GDPR and Data Protection Act 2018
- Principles: transparency, fairness, accountability
- A list of approved or prohibited tools
- A statement on human oversight
- A sign-off process for new AI tools
What a framework adds
- Named roles and accountability at every level
- Specific training requirements for practitioners
- Review protocols for AI-generated outputs
- Clear escalation routes when something goes wrong
- An audit mechanism that actually works
The Ada Lovelace Institute's 2025 review of AI in 17 local authority social care services found that in many councils the governance infrastructure was either absent or theoretical: frameworks existed on paper but had no operational reality in the teams using AI daily. The result was practitioners navigating AI adoption without the knowledge or structures to do it safely.
The five elements a responsible AI framework must contain
1. Governance structure with named accountability
Vague accountability is no accountability. A responsible AI framework names who is responsible for what: which role approves new tools, which role reviews incidents, who the practitioner contacts if they have concerns. This is not bureaucracy for its own sake. It is the difference between a governance structure and a governance aspiration.
2. Training requirements that go beyond awareness
AI awareness training tells practitioners that AI exists and has risks. That is not sufficient. Practitioners using AI tools in statutory contexts need domain-specific knowledge: how to recognise errors in AI-generated assessments, how automation bias operates under cognitive load, how to maintain narrative fidelity when AI is supporting documentation. Research commissioned by Social Work England in 2025 found that 86% of newly qualified social workers had received no AI training. A framework that does not address that gap is not fit for purpose.
3. Review protocols for AI-generated outputs
Human oversight is a principle. A review protocol is how that principle becomes practice. It specifies what a practitioner should check, how, and how often. It distinguishes between AI tools used for administrative support and AI tools used in direct assessment or decision support. It accounts for the reality that practitioners under high caseload pressure are vulnerable to automation bias, and it builds in protections rather than relying on individual vigilance.
4. Clear escalation routes
When a practitioner believes an AI tool has produced inaccurate output, what do they do? If the answer is "raise it with their manager," that is a starting point. A framework specifies what happens next: is the tool suspended pending review? Is the incident recorded? Is the vendor contacted? UK GDPR also creates obligations here, from individuals' rights around automated decisions that significantly affect them to the duty to report personal data breaches to the ICO. A framework maps those obligations before they are needed, not during a crisis.
5. An audit mechanism with teeth
A framework without audit is a framework that drifts. Audit here does not mean an annual paper review. It means a regular, structured process that examines whether the framework is being implemented in practice: are practitioners trained? Are review protocols being followed? Are incidents being recorded? Are tools performing as intended? The Care Act 2014 requires local authorities to promote individual wellbeing and to prevent, reduce or delay the need for care and support. AI governance is part of how organisations evidence that they are meeting those duties.
What CQC and Ofsted are looking for
Inspectors are asking about AI governance. Not in every inspection, and not yet systematically, but the direction of travel is clear.
CQC's inspection framework already covers safe use of technology in care settings. Ofsted's social care framework expects leaders to demonstrate that practice is safe and well-governed. AI tools that support assessment, documentation, or decision-making fall within both. An organisation that cannot show how it governs AI use, trains its staff, and handles AI-related incidents is carrying a risk that will eventually surface in inspection findings.
A policy that references UK GDPR is not evidence of governance. A framework with named accountability, training records, review protocols, and an audit trail is.
Why this is a practitioner issue, not just a leadership issue
I want to be direct about something. The burden of managing AI risk should not fall on individual practitioners. It falls on organisations. A social worker who uses an AI tool their organisation has approved, but who has been given no adequate training and no clear protocols, is not the person who failed. The organisation failed them.
But practitioners who understand what a responsible AI framework looks like are in a stronger position. They can ask the right questions. They can recognise when governance is absent. They can push back, through supervision, through professional standards bodies, through their union, when AI is being used in ways that put them or the people they work with at risk.
Knowing what good governance looks like is part of professional knowledge. It is not optional in a sector where AI is already in use.
Where to start
If you are building or reviewing your organisation's approach to AI governance in social care, the TESSA Responsible AI Framework sets out the structure and principles that an operational framework should be built on. It is grounded in social care practice, aligned with current regulatory expectations, and available free.
If you want to go further: talk to me. I work with organisations to build frameworks that work in practice, not just on paper. That means reviewing your current tools and policies, identifying gaps, designing training that is specific to your context, and putting in place the governance structures that protect practitioners and the people they serve.
A responsible AI framework is not a compliance exercise. It is how you build an organisation where AI supports good practice rather than exposing practitioners to risk they were never equipped to manage.
References and Further Reading
Ada Lovelace Institute. (2025). AI in local authority social care: findings from 17 councils. https://www.adalovelaceinstitute.org/
Care Act 2014. HM Government.
Care Quality Commission. (2024). How we regulate and inspect: our methodology. https://www.cqc.org.uk/
Data Protection Act 2018. HM Government.
Information Commissioner's Office. (2023). Guidance on AI and data protection. https://ico.org.uk/
Social Work England. (2025). AI and social work: commissioned research findings. https://www.socialworkengland.org.uk/
UK General Data Protection Regulation. (2021). HM Government.