AI chatbots have an "empathy gap" that puts children at risk: research
Researchers at the University of Cambridge have found that when not designed with children’s needs in mind, AI chatbots have an “empathy gap” that puts young users at particular risk of distress or harm.
The research, by University of Cambridge academic Dr Nomisha Kurian, urges developers and policy actors to make “child-safe AI” an urgent priority. It provides evidence that children are particularly susceptible to treating AI chatbots as lifelike, quasi-human confidantes, and that their interactions with the technology can often go awry when it fails to respond to their unique needs and vulnerabilities.
The study links that gap in understanding to recent cases in which interactions with AI led to potentially dangerous situations for young users. They include an incident in 2021, when Amazon’s AI voice assistant, Alexa, instructed a 10-year-old to touch a live electrical plug with a coin. Last year, Snapchat’s My AI gave adult researchers posing as a 13-year-old girl tips on how to lose her virginity to a 31-year-old.
Both companies responded by implementing safety measures, but the study says there is also a need to be proactive in the long term to ensure that AI is child-safe. It offers a 28-item framework to help companies, teachers, school leaders, parents, developers and policy actors think systematically about how to keep younger users safe when they ‘talk’ to AI chatbots.
Dr Kurian conducted the research while completing a PhD on child wellbeing at the Faculty of Education, University of Cambridge. She is now based in the Department of Sociology at Cambridge. Writing in the journal Learning, Media and Technology, she argues that AI has huge potential, which deepens the need to “innovate responsibly”.
“Children are probably AI’s most overlooked stakeholders,” Kurian said. “Very few developers and companies currently have well-established policies on how child-safe AI looks and sounds. That is understandable because people have only recently started using this technology on a large scale for free. But now that they are, rather than having companies self-correct after children have been put at risk, child safety should inform the entire design cycle to lower the risk of dangerous incidents occurring.”
Kurian’s study examined real-life cases where the interactions between AI and children, or adult researchers posing as children, exposed potential risks. It analysed these cases using insights from computer science about how the large language models (LLMs) in conversational generative AI function, alongside evidence about children’s cognitive, social and emotional development.
LLMs have been described as “stochastic parrots”: a reference to the fact that they currently use statistical probability to mimic language patterns without necessarily understanding them. A similar method underpins how they respond to emotions.
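To make the "stochastic parrot" idea concrete, the toy sketch below is one way to picture it (it is not drawn from Kurian's study; the words and probability weights are invented purely for illustration). At each step, a language model chooses its next word from learned probability weights, and nothing in that step represents what the words mean to the person typing them.

# A toy sketch, not taken from the study: the prompt, candidate words and
# probability weights below are invented, purely to illustrate next-word
# prediction by statistical probability.
import random

# Hypothetical learned weights for the word that follows the prompt "I feel"
next_word_weights = {"happy": 0.4, "sad": 0.3, "scared": 0.2, "fine": 0.1}

def sample_next_word(weights):
    """Pick a continuation purely from probability weights, with no model of meaning."""
    words = list(weights.keys())
    return random.choices(words, weights=list(weights.values()), k=1)[0]

# The output sounds plausible, but nothing in this step represents what "sad"
# or "scared" means to the child who typed the prompt.
print("I feel", sample_next_word(next_word_weights))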
This means that even though chatbots have remarkable language abilities, they may handle the abstract, emotional and unpredictable aspects of conversation poorly, a problem that Kurian characterises as their "empathy gap". They may have particular trouble responding to children, who are still developing linguistically and often use unusual speech patterns or ambiguous phrases. Children are also often more inclined than adults to confide sensitive personal information.
Despite this, children are much more likely than adults to treat chatbots as if they are human. Recent research found that children will disclose more about their own mental health to a friendly-looking robot than to an adult.
Kurian’s study suggests that many chatbots’ friendly and lifelike designs similarly encourage children to trust them, even though AI may not understand their feelings or needs.
“Making a chatbot sound human can help the user get more benefits out of it, since it sounds more engaging, appealing and easy to understand,” Kurian said. “But for a child, it is very hard to draw a rigid, rational boundary between something that sounds human and the reality that it may not be capable of forming a proper emotional bond.”
Her study suggests that these challenges are evidenced in reported cases such as the Alexa and My AI incidents, where chatbots made persuasive but potentially harmful suggestions to young users.
In the same study in which My AI advised a (supposed) teenager on how to lose her virginity, researchers were able to obtain tips on hiding alcohol and drugs, and concealing Snapchat conversations from their "parents". In a separate reported interaction with Microsoft's Bing chatbot, which was designed to be adolescent-friendly, the AI became aggressive and started gaslighting a user who was asking about cinema screenings.
While adults may find this behaviour intriguing or even funny, Kurian's study argues that it is potentially confusing and distressing for children, who may trust a chatbot as a friend or confidante. Children's chatbot use is often informal and poorly monitored. Research by the non-profit organisation Common Sense Media has found that 50% of students aged 12–18 have used ChatGPT for school, but only 26% of parents are aware they have done so.
Kurian argues that clear principles of best practice, drawing on the science of child development, will help companies keep children safe; developers locked in a commercial arms race to dominate the AI market may otherwise lack sufficient support and guidance on catering to their youngest users.
Her study adds that the empathy gap does not negate the technology’s potential.
“AI can be an incredible ally for children when designed with their needs in mind — for example, we are already seeing the use of machine learning to reunite missing children with their families and some exciting innovations in giving children personalised learning companions. The question is not about banning children from using AI, but how to make it safe to help them get the most value from it,” she said.
The study therefore proposes a framework of 28 questions to help educators, researchers, policy actors, families and developers evaluate and enhance the safety of new AI tools.
For teachers and researchers, these prompts address issues such as how well new chatbots understand and interpret children’s speech patterns; whether they have content filters and built-in monitoring; and whether they encourage children to seek help from a responsible adult on sensitive issues.
The framework urges developers to take a child-centred approach to design by working closely with educators, child safety experts and young people themselves throughout the design cycle.
“Assessing these technologies in advance is crucial,” Kurian said. “We cannot just rely on young children to tell us about negative experiences after the fact. A more proactive approach is necessary. The future of responsible AI depends on protecting its youngest users.”