The Data (Use and Access) Act 2025 (DUAA) is UK legislation that amends the UK GDPR and the Data Protection Act 2018. It came into force on 5 February 2026. Section 80 is the key provision for social care: it reforms the rules on automated decision-making, so that a significant decision about an individual cannot be based solely on automated processing without meaningful human involvement unless specific safeguards are in place. If your organisation uses AI to inform decisions about people's lives, you must be able to demonstrate that a qualified professional reviews every output before it influences an outcome.
UK GDPR requires organisations to complete a Data Protection Impact Assessment (DPIA) under Article 35 before deploying AI where the processing is likely to result in a high risk to individuals, which includes most large-scale processing of personal data. You must also have a lawful basis for processing, ensure data minimisation, and be transparent about how AI is used in decision-making. In social care, most AI use cases involve special category data such as health information, so compliance is not optional.
A Data Protection Impact Assessment (DPIA) is a structured process required under Article 35 of UK GDPR. You need one whenever AI processing is likely to result in a high risk to individuals. In social care, that covers most AI use cases, because you are processing sensitive personal data about vulnerable people. A DPIA documents the risks, the mitigations you have in place, and how you will monitor both over time.
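As a loose illustration of what "documenting risks, mitigations, and monitoring" can look like in practice, here is a minimal sketch of one entry in a DPIA risk register. The field names and values are hypothetical, not an ICO-mandated schema; your DPIA can equally live in a document or spreadsheet, as long as each risk carries these elements.

```python
from dataclasses import dataclass
from datetime import date

# One entry in a hypothetical DPIA risk register. Field names are
# illustrative only; the ICO does not mandate a particular schema.
@dataclass
class DPIARisk:
    description: str   # the risk to individuals
    likelihood: str    # e.g. "low", "medium", "high"
    severity: str      # impact on the individual if the risk occurs
    mitigation: str    # the control that reduces the risk
    review_date: date  # when this entry will next be monitored

register = [
    DPIARisk(
        description="AI summary omits a safeguarding concern from case notes",
        likelihood="medium",
        severity="high",
        mitigation="Qualified practitioner checks every summary against the source notes",
        review_date=date(2026, 6, 1),
    ),
]
```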
Under Section 80 of the Data (Use and Access) Act 2025, meaningful human involvement means a qualified professional must review, apply professional judgement to, and take responsibility for any AI-generated output before it influences a significant decision about an individual. Simply approving or rubber-stamping what AI produces does not count. The professional must genuinely engage with the content and be able to explain their reasoning.
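If you want to evidence that engagement rather than assert it, one option is a simple audit record per reviewed output. The sketch below is a hypothetical format of our own devising; neither the DUAA nor any regulator prescribes it. It simply captures the three elements above: review, judgement, and responsibility.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical audit record evidencing meaningful human involvement.
# All field names and values are illustrative.
@dataclass
class HumanReviewRecord:
    output_id: str     # which AI-generated output was reviewed
    reviewer: str      # the qualified professional taking responsibility
    role: str          # their professional role
    changes_made: str  # what was amended, rejected, or accepted
    reasoning: str     # the professional judgement applied, in their own words
    reviewed_at: datetime

record = HumanReviewRecord(
    output_id="assessment-draft-0142",
    reviewer="J. Smith",
    role="Registered social worker",
    changes_made="Rewrote the risk section; removed two unsupported claims",
    reasoning="Draft understated the home-environment risk observed on the last visit",
    reviewed_at=datetime.now(timezone.utc),
)
```

A record like this is also what distinguishes genuine engagement from rubber-stamping: an approval with no changes and no reasoning, repeated across every output, is exactly the pattern Section 80 is aimed at.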
CQC assesses AI governance under the Single Assessment Framework, specifically the quality statement on "Governance, management and sustainability." Inspectors look for clear roles and systems of accountability for AI, a named governance lead, formal error reporting processes, and evidence that human oversight is maintained over AI-generated outputs. If you cannot show who is responsible for AI decisions in your service, that is a gap an inspector will flag.
Ofsted evaluates whether organisations have appropriate boundaries on AI use, whether staff understand the risks and limitations, and whether safeguarding is maintained when AI tools are used in settings involving children and young people. If AI is being used in assessment writing, reporting, or communication with families, Ofsted will want to see that professional judgement remains central and that children's data is protected.
Yes. The CQC Single Assessment Framework expects services to have clear roles and systems of accountability. A named governance lead is someone in your organisation who takes responsibility for how AI is used, ensures policies are followed, and can explain your approach to a regulator. This does not need to be a new role; it can be added to an existing senior position, but it must be documented and understood across the team.
Your error reporting process should allow any staff member to flag when AI produces inaccurate, inappropriate, or biased output. It should record what happened, what action was taken, and what was changed as a result. Treat it like any other incident reporting system: log, review, learn, and improve. Regulators want to see that you have a culture of accountability, not perfection.
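To make that concrete, here is a minimal sketch of what such a log could look like, assuming a simple append-only JSON-lines file. The field names, file path, and example values are all hypothetical; the point is that every incident records what happened, what was done, and what changed.

```python
import json
from datetime import datetime, timezone

# A minimal sketch of an append-only AI error log. Field names and the
# file path are illustrative, not a regulatory requirement.
LOG_PATH = "ai_error_log.jsonl"

def log_ai_error(reported_by, tool, what_happened, action_taken, change_made):
    """Record one incident: what happened, what was done, what changed."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reported_by": reported_by,      # any staff member can flag an issue
        "tool": tool,                    # which AI tool produced the output
        "what_happened": what_happened,  # inaccurate, inappropriate, or biased
        "action_taken": action_taken,    # the immediate correction
        "change_made": change_made,      # the lasting improvement
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_error(
    reported_by="Care coordinator",
    tool="Note-summarising assistant",
    what_happened="Summary invented a GP visit that never took place",
    action_taken="Summary corrected against the original case notes",
    change_made="Added a source-check step to the review checklist",
)
```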
Social Work England Principle 9 states that practitioners must have the specific skills and expertise needed to implement and use AI safely. Training should cover prompt design for accuracy and transparency, recognising AI errors and hallucinations, bias awareness in AI-generated case notes, maintaining professional authorship, and understanding when AI use is and is not appropriate. This is not a one-off session; it needs to be embedded in continuing professional development (CPD).
Professional authorship means that you, as the qualified practitioner, take full ownership of any document or assessment you submit, regardless of whether AI helped draft it. If you used AI to help write a report, you must review every line, ensure it reflects your professional judgement, and be able to defend it. The output is yours, not the machine's.
Be straightforward: explain what tools you use, what they are used for, what safeguards are in place, and who is accountable. Show your Responsible AI Framework, your DPIA, your training records, and your error reporting logs. Regulators are not looking for organisations that avoid AI; they are looking for organisations that govern it well. If you can walk an inspector through your process with confidence, you are in a strong position.
The Digital Care Hub Tech Pledge is a voluntary commitment. Commitment 2 specifically asks organisations to educate users on how AI models are trained and how data inputs impact outcomes. While not legally binding, signing and following the pledge demonstrates good practice to regulators and shows your organisation takes responsible AI seriously.
Our free assessment covers all three areas and gives you tailored next steps based on your specific gaps.
Take the Assessment