Training or Tech Overhaul? Fixing the Ethical Crisis in Social Work’s Digital Future
- Nadia Hajat

- Oct 29
- 3 min read

AI is transforming social work. Tessa Tools explores whether ethical AI training or systemic tech reform is the key to safeguarding professional judgment, client voice, and accountability in adult social care.
The Digital Debate We Can No Longer Avoid
The AI debate in social care centres on ethics, accountability, and who decides how technology shapes people's lives.
What began as curiosity about efficiency has become an ethical reckoning: are we ready to let AI decide what is significant in the lives of the people we assess?
Across adult social care, AI-assisted writing tools are now used to “improve clarity” or “streamline documentation.” But subtle linguistic shifts can have profound ethical consequences.
A practitioner might write:
“The person and their family have concerns about managing daily routines.”
Yet the AI output reframes this as:
“There is a significant risk around daily living.”
That single change transforms lived experience into a formalised risk judgement—without practitioner intent or consent. Over time, such automated reframing shapes both the record and the narrative underpinning eligibility, risk, and resource allocation.
The issue is not speed. It is ethics. When technology begins to define significance, social workers must deliberately reassert professional judgment to preserve authentic client voice and accountability. Comprehensive ethical AI training helps practitioners interpret digital outputs, question bias, and keep that judgment in human hands.
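Training of this kind can be reinforced with lightweight checks. The sketch below is illustrative only: the term list, function names, and review step are hypothetical assumptions, not a feature of any real documentation tool. It simply flags risk vocabulary that an AI rewrite introduced but the practitioner never wrote, using the example above:

```python
import re

# Hypothetical term list for illustration; a real one would be
# agreed locally with practitioners and service users.
RISK_TERMS = {"risk", "significant", "unsafe", "hazard", "vulnerable"}

def words(text: str) -> set[str]:
    """Lower-cased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def introduced_risk_language(draft: str, ai_output: str) -> set[str]:
    """Risk terms present in the AI rewrite but absent from the draft."""
    return (words(ai_output) & RISK_TERMS) - words(draft)

draft = "The person and their family have concerns about managing daily routines."
rewrite = "There is a significant risk around daily living."

flagged = introduced_risk_language(draft, rewrite)
if flagged:
    print(f"Review before filing: rewrite introduced {sorted(flagged)}")
    # -> Review before filing: rewrite introduced ['risk', 'significant']
```

A check this crude cannot judge meaning, but it makes the moment of reframing visible, so the practitioner, not the tool, decides whether "concern" becomes "risk".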
The Problem We Already Know About
Practitioners have used tools like WhatsApp for years. We understand end-to-end encryption and know these platforms were never designed for social care. We adapted—creating workarounds, avoiding sensitive document sharing, and developing informal data awareness. But the truth is clear: the Data Protection Act does not address personal messaging apps in professional social work, leaving no unified standard or national oversight. Practice relies on local custom, discretion, and organisational patchwork rather than shared professional guidance. If we cannot regulate established tools, how can we hope to manage algorithms that rewrite professional reasoning?
The Training Argument: Digital Literacy as Safeguard
This reality demands immediate, structured ethical AI and digital literacy training—not as a technical upgrade but as the foundation of professional accountability in a digital era.
Training is the mechanism that safeguards the Care Act’s values in a changing practice environment. It enables practitioners to question AI outputs, interpret how data is processed, and recognise when technology has altered the meaning of lived experience.
Digital upskilling keeps social work ahead of the curve, not behind it. Practitioners do not need to become data scientists, but they must be confident, critical users who can interpret, challenge, and ethically apply technology.
The Tech Argument: Systemic Overhaul and Co-production
The counterargument challenges the assumption that the crisis lies in worker readiness. If digital systems are poorly designed, untested for person-centred care, or built without practitioner input, no amount of training will fix the flaw.
Ethical practice cannot exist within unethical systems. To change this, organisations should apply human-centred design principles and co-production from the outset.
Three questions for vendors and digital partners:
How does the tool embed practitioner and user feedback in its design?
How does it support the individual needs and preferences of clients?
How does it prioritise data privacy and user empowerment?
True reform means embedding social work expertise directly into the development and governance of technology—before deployment, not after failure.
Bridging the Divide: User and System Readiness
Social care's digital ethics gap has two layers: user readiness and system readiness. Each depends on the other.
Practitioners need competence now, but the profession also requires a seat at the design table to prevent repeated ethical failures. Co-production sessions using "datasheets for datasets", structured documents that record a dataset's origins, intended uses, and limitations, can strengthen transparency and accountability, creating a shared standard for ethical development.
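As a rough illustration of what such a session might produce, here is a minimal sketch. The field names paraphrase the question categories from the published "Datasheets for Datasets" framework (Gebru et al.); the class itself and its helper are hypothetical, not an established standard:

```python
from dataclasses import dataclass, fields

@dataclass
class Datasheet:
    """Co-produced record for a dataset, loosely following the
    'Datasheets for Datasets' question categories (Gebru et al.).
    Every answer should be agreed with practitioners and service users."""
    motivation: str          # Why was the dataset created, and by whom?
    composition: str         # What do records represent? Whose voices are missing?
    collection_process: str  # How was the data gathered? Was consent informed?
    preprocessing: str       # What was cleaned, reworded, or removed?
    intended_uses: str       # Which decisions may it support? Which must it not?
    distribution: str        # Who can access it, under what agreement?
    maintenance: str         # Who corrects errors? How do people challenge the record?

def missing_answers(sheet: Datasheet) -> list[str]:
    """List the sections a co-production group has not yet completed."""
    return [f.name for f in fields(sheet) if not getattr(sheet, f.name).strip()]
```

Even a structure this small makes unanswered questions visible before a tool is deployed, which is the point of the exercise.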
Stepwise Action Plan
Audit current digital tools and governance policies within your organisation.
Deliver tailored AI and digital ethics training for all practitioners.
Establish multi-disciplinary teams including practitioners, service users, and technology experts.
Co-produce new systems through advisory groups and pilot testing.
Evaluate ethical and operational impact continuously using practitioner and service-user feedback.
Where Accountability Meets Ethical AI
The future of ethical digital practice depends on how we define and measure quality, not speed. AI can enhance care only when it is transparent, explainable, and guided by professional reasoning, not commercial convenience. That principle underpins Tessa Tools' research and training frameworks, designed to keep Voice, Ethics, Reasoning, and Assurance at the heart of every assessment and algorithm.
About the Author
This article is based on research and reflections by Nadia Hajat, Founder and Director of Tessa Tools Ltd, a consultancy specialising in ethical AI training and digital literacy for adult social care.
Call to Action
Ready to future-proof your team’s digital practice?
Explore ethical AI training frameworks and CPD-aligned workshops at tessa-tools.org. Connect with Nadia Hajat on LinkedIn or watch explainers on YouTube.
Tessa Tools Ltd — Ethical AI training built with practitioners and people, not for them.

