
When AI Ethics Meets Social Work Practice: The Language Problem No One Is Discussing

  • Writer: Nadia Hajat
  • Jan 10
  • 4 min read

The Oxford Institute for Ethics in AI defines responsible use of generative AI in social care as ensuring "the use of AI systems in the care or related to the care of people supports and does not undermine, harm or unfairly breach fundamental values of care, including human rights, independence, choice and control, dignity, equality and wellbeing."


What does this actually mean when a social worker opens Copilot on Monday morning to document an assessment?


The Reality of AI in Local Authorities


Most Local Authorities across England now have Copilot enabled, but the training provided to the practitioners who use these tools daily has been minimal. Many social workers receive basic guidance on prompt writing - how to ask the tool to summarise notes or draft an assessment section.


What practitioners have not been told is that Copilot now operates using reasoning models that work fundamentally differently from earlier AI systems.


The Needle in a Haystack Problem


Large language models are trained on vast datasets of text encompassing trillions of tokens scraped from across the internet. Current models as of 2025 are trained on text in around 1,000 languages and roughly 20–30 terabytes of deduplicated, high-quality text. Within this massive corpus, high-quality professional social work practice guidance represents possibly less than 0.01%. UK-specific social work practice - Care Act assessments, Mental Capacity Act decisions, Section 42 safeguarding enquiries - represents an even smaller fraction.
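

To make that proportion concrete, here is a rough back-of-the-envelope calculation in Python. The corpus size and the 0.01% share are illustrative assumptions drawn from the figures above, not measured statistics.

    # Back-of-the-envelope sketch: how small specialised guidance is inside a general web corpus.
    # Both numbers below are illustrative assumptions, not published statistics.

    corpus_tokens = 15e12        # assume roughly 15 trillion training tokens
    guidance_share = 0.0001      # assume social work guidance is about 0.01% of the corpus

    guidance_tokens = corpus_tokens * guidance_share
    print(f"Whole corpus:         {corpus_tokens:,.0f} tokens")
    print(f"Social work guidance: {guidance_tokens:,.0f} tokens ({guidance_share:.2%})")
    # Prints roughly 1,500,000,000 guidance tokens against 15,000,000,000,000 in total;
    # UK-specific material (Care Act, MCA, Section 42) would be a fraction of that again.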


The vast majority of training data is news articles, commercial content, forum discussions, and general internet noise. When you ask the model to help document a Best Interests decision, it draws on a foundation built largely from material that has nothing to do with UK social work. Professional social work practice is the needle. The internet is the haystack.


From Instruction to Constraint


Earlier AI models required detailed instructions - you had to tell them how to think, step by step. Reasoning models work differently. They already engage in multi-step thinking: they analyse your query, consider different approaches, and weigh options before generating output. This introduces a challenge: these models do not simply follow instructions. They reason about whether to follow them. You are no longer telling the model what to do. You are attempting to set boundaries around what it should not do.
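

As a rough illustration of that shift, the sketch below contrasts an instruction-style prompt with a constraint-style one. Both prompts are invented examples, not taken from Copilot, and the snippet does not show any real interface; it simply shows the difference in framing.

    # Illustrative only: neither prompt is taken from Copilot, and no real interface is shown.

    # Instruction style, written for an earlier model: spell out how to think, step by step.
    instruction_prompt = (
        "Summarise these case notes. First list the key events in date order, "
        "then identify any risks, then draft a two-paragraph summary."
    )

    # Constraint style, written for a reasoning model: it will plan its own steps,
    # so the prompt concentrates on boundaries it must not cross.
    constraint_prompt = (
        "Summarise these case notes in strengths-based, person-centred language. "
        "Do not use deficit-focused clinical terms, do not speculate beyond the notes, "
        "and do not alter any dates, names or legal references."
    )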


Copilot automatically selects which reasoning approach to apply to your query. You have no visibility into this decision. You type your prompt, receive your output, and never see what reasoning process was used or why.


When Reasoning About Ethics Goes Wrong


The screenshots below show something particularly concerning. When asked to provide examples of situations requiring police involvement, Copilot's reasoning model actively tries to avoid bias. You can expand the reasoning process and see it explicitly thinking about avoiding stereotypes.


Screenshots of reasoning text from my Teams Copilot

However, despite these good intentions, the examples produced still required significant revision. The model reasoned its way toward being ethical, but the training data foundation meant the outcome did not match the intent.


This illustrates why transparency matters. When reasoning happens invisibly, practitioners cannot identify where the tool is making problematic decisions - even when those decisions stem from attempts to be ethical. The model tried to uphold equality and avoid harm, but without practitioner oversight at the reasoning stage, the output still needed revision.


Why Language Matters


When documentation shifts from person-centred to clinical language, dignity is affected. When the tool uses terminology that does not align with Care Act requirements, professional accountability is compromised.


Consider documentation for someone who needs support with activities of daily living (ADLs). The practitioner specifies that the documentation should be strengths-based and person-centred. But the reasoning model, drawing on training data where clinical language dominates, generates text describing "deficits in self-care" and "dependence in ADLs."


Clinical, deficit-focused terminology positions the person as a patient with problems to be managed. Strengths-based, person-centred language positions them as someone with capabilities and outcomes they want to achieve, who requires specific support to do so.


The practitioner did not instruct the model to use deficit language. The model reasoned its way there, likely because deficit-focused clinical documentation is more prevalent in its training data than UK social work's strengths-based approach.
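

One practical response, sketched below in Python, is a simple post-generation check that flags deficit-focused phrases in a draft before the practitioner signs it off. The term list and the suggested alternatives are illustrative only, not an approved glossary.

    # Minimal sketch of a language check a team could run over AI-generated drafts.
    # The flagged terms and suggested alternatives are illustrative, not an official list.

    DEFICIT_TERMS = {
        "deficits in self-care": "needs support with personal care",
        "dependence in adls": "is supported with activities of daily living",
        "non-compliant": "has chosen not to",
    }

    def flag_deficit_language(draft: str) -> list[str]:
        """Return a warning for each deficit-focused phrase found in a draft."""
        lowered = draft.lower()
        return [
            f"Found '{term}' - consider '{alternative}'"
            for term, alternative in DEFICIT_TERMS.items()
            if term in lowered
        ]

    draft = "Mrs A presents with deficits in self-care and dependence in ADLs."
    for warning in flag_deficit_language(draft):
        print(warning)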


Accountability and Professional Practice


Social workers are professionally responsible for their documentation. They must ensure assessments are accurate, comply with legal frameworks, and uphold person-centred practice values. But if the tool is making invisible decisions that affect language, framing, and emphasis, that accountability becomes difficult to maintain.


If practitioners cannot see how the reasoning model is processing their instructions, how do we ensure accountability? Transparency in AI ethics cannot only mean explaining algorithmic decisions to service users. It must also mean practitioners understanding what their tools are doing.


The Training Gap


Most training on AI tools focuses on prompt writing. Practitioners also need to understand how reasoning models differ from instruction-following models, why carefully written prompts might not produce expected results, and where they need to intervene to maintain professional standards.


They need to know that the scale of training data does not guarantee quality for specialised professional contexts. When the model "thinks," it is pattern-matching based on billions of examples, most of which have nothing to do with social care.


Bridging Principles and Practice


The ethical frameworks being developed are essential. The Oxford definition provides clear values that should guide implementation. But principles require practical pathways.


Bridging this gap requires attending to technical details - how reasoning models work, why training data matters, what constraints are effective - because these details determine whether ethical principles materialise in actual practice.


It requires organisations providing Copilot access to also provide meaningful training beyond prompt writing. It requires that practitioners have transparency into what their tools are doing. And it requires ongoing dialogue between those developing ethical frameworks and those implementing AI in daily practice.


Conclusion


Language is where dignity, accountability, and professional judgment are enacted in documentation and assessment. When reasoning models operate invisibly, when training data overwhelmingly reflects contexts outside UK social work, when practitioners receive tools without understanding how they work, the gap between ethical principles and practical implementation widens.


The principles outlined in the Oxford framework provide essential direction. Now we need to ensure that direction reaches the practitioner at their desk on Monday morning, using a tool that is making decisions they cannot see.



 
 
 
