Social workers at risk of legal action over AI inaccuracies
Social workers could face “professional or legal repercussions” for inaccuracies in artificial intelligence-generated case notes, a think tank has warned.
Research by the Ada Lovelace Institute highlights concerns over the rapid rollout of AI transcription tools across local authorities.
While finding many benefits, such as time saved, it also highlights a risk of bias, “hallucinations” and “unpredictability potentially leading to harmful misrepresentations”.
Such risks “aren’t being fully mitigated” by employers and policymakers, leaving frontline workers to “navigate these challenges on their own”, the think tank warns.
Researchers said: “Inaccuracies in care records could impact decisions made about people’s care in statutory contexts, leading to incorrect decisions being made.
“Inaccurate records could also directly impact the experience of people who draw on care, as records of interactions with social workers can help people to understand significant events in their lives and seek redress when things go wrong.
“Moreover, social workers could face professional or legal repercussions for inaccuracies that stem from the transcription and summarisation functions of AI transcription tools.”
The research consulted 39 social workers with experience of using AI transcription tools across 17 local authorities, including senior staff involved in procurement.
Social work concerns
Social workers themselves have raised concerns over inaccuracies in AI transcription.
One described how a transcription tool had incorrectly “indicated that there was suicidal ideation” despite their client “at no point” talking about suicidal ideation or planning.
The social worker noted: “If I hadn’t checked that and it had gone into the case note… that could have had implications further on down the road.”
Another worker said their AI-generated transcriptions often included “gibberish”.
Several social workers found AI-generated documentation “very academic and very formal” and “not as person-centred”.
Irrelevant information being recorded was also noted. One social worker said: “We’ve seen already examples where someone has had that conversation, not really proofread it properly, it’s on the child’s file and we’ve got either wrong names in there or we’ve got someone talking about their cat … [which is] not really relevant for that child’s file or anything to do with the conversation that was happening.”
The researchers said: “These examples suggest that the oversight of AI transcription tools is inconsistent across the social care sector and errors are occurring in official documentation processes, with potentially profound consequences.”
They further warned there is “very little evidence” on how AI tools affect service users, adding: “Our evidence suggests that these tools can lead to harm in some instances.”
Efficiency over quality
The research also raises concerns that adoption of AI in local authorities is motivated by “efficiency” rather than benefit to people using social services.
The researchers said: “As an example of measuring for efficiency, one manager wanted to test whether social workers could use AI to relieve their administrative workloads, so that social workers could move from one home visit per day to two.
“Managers also spoke of the pressures to evidence an ROI (return on investment) or cost saving from the introduction of AI transcription tools, which they aimed to achieve through improvements in efficiency.”
One social worker said: “In terms of the conversations we’ve had as a team, it’s more often been ‘How are you using it?’, ‘What are you using it for’, and ‘How much time has it saved you?’, rather than a quality aspect.”
Benefits highlighted
While serious concerns were raised, social workers also spoke of the “profound” impact of AI in terms of time saved, more comprehensive note-taking and better work/life balance.
One said: “It just changed everything for me”, while another proclaimed: “I can’t imagine doing my job without it.”
Another said AI had halved their workload, while one worker reported being able to spend more time with people and “build better relationships”.
One social worker interviewed highlighted being able to see four times as many people a month, while others spoke of being able to clear backlogs.
Some reported being able to leave work on time more often, with one worker saying: “I’m not turning my laptop on at the weekend. I’m not turning my laptop on late at night… I would often find myself working till early hours in the morning.”
AI was also praised for freeing up time to spend on other tasks, such as preparing court documents.
Some felt AI recorded information “they might otherwise forget”, capturing a person’s “situation to its fullest” and giving space for better interaction with people.
Training and other issues
At some local authorities, internal processes made integrating AI overly complicated, the researchers found, while workers elsewhere felt they lacked sufficient training to benefit from it.
Social workers also expressed worry that AI transcription tools removed their professional judgement and their voice.
One said: “I just don’t think it would suit how I speak with people, it’s very much [a] casual conversation that would be dotted around and to upload that, and get that as a transcript, I don’t think would be useful to me.”
Being recorded can also be problematic for people with certain health issues or “negative experiences”, such as police interviews, some workers noted.
Recommendations
The think tank, which is funded by the Nuffield Foundation to ensure AI works for people and society, called for further research into how much oversight is needed to keep a ‘human in the loop’.
It also urged regulators, local authorities and sector bodies to produce guidance for the use of AI transcription with “clear accountability structures”.