AI products for children promising friendship and learning: three things to consider


Credit: Unsplash/CC0 Public domain

Internet-connected smart toys and AI-enabled robots capable of sophisticated social interaction with children are widely available today. This is undoubtedly due to the rapid development of artificial intelligence, the impacts of which are being felt everywhere: businesses are looking to improve productivity and revenue, while governments are building security measures into their AI strategies.

A growing range of AI products is marketed directly to children. Parents and professionals working with children may feel pressured to purchase these products as alternatives to in-person activities like playdates, therapy or games, or may question their usefulness and their potential roles in children's lives.

Before we launch into a world of AI babysitters, machine therapists, and robot teachers, we should carefully examine this technology to assess its capabilities and relevance in children’s lives.

Informed choices

We need to examine the assumptions embedded in these products and in the way they are marketed, and advocate for scientific approaches to evaluating their effectiveness.

Evidence-based approaches from psychology, child development, education and related fields can systematically test and observe new technologies and make recommendations on their use, without any direct commercial interest. This, in turn, supports informed choices by parents and professionals working with children.

This is essential with AI systems, which often lack transparency in their decision-making processes. There are also concerns about data privacy and surveillance.

What underlying assumptions do these technologies, and the way they are commercialized, suggest or directly assert? Three are discussed below.

Assumption 1: Human traits like curiosity and empathy, or emotions like happiness, sadness, "heart" or compassion, can be realized in a machine.

A person might assume that because AI products can respond with human-like qualities and emotions, they actually possess them. But as other scholars have pointed out, we have no reason to believe that the display of human emotion or empathy is anything more than simulation.

As sociologist Sherry Turkle memorably described in a 2018 opinion piece, this technology presents us with an “artificial intimacy” that may work but will never compare to the depth of human inner life.

Assumption 2: When companions, teachers or therapists engage with children, their human qualities and traits are irrelevant.

One might assume that AI systems do not actually need to possess human traits as long as they focus on the practice of care, therapy or learning, thereby sidestepping the previous concern.

But the characteristics that have made human presence essential to child development are directly linked to emotional life and qualities such as human empathy and compassion.

Gold-standard interventions for children's mental health problems, such as psychotherapy, depend on human characteristics such as warmth, openness, respect and trustworthiness. Even if AI products can simulate a therapeutic conversation, simulation is not a substitute for these qualities.

Assumption 3: Research demonstrating the effectiveness of human-led therapeutic, care or learning interventions applies equally to AI-based interventions.

Decades of research on humans have shown that the social relationships children form with others (whether friendships, therapeutic alliances or teacher-student bonds) are beneficial. As noted above, many AI products aimed at children position themselves as alternatives to these roles.

However, the extent to which research on human-to-human experiences can provide insight into the benefits of child-AI relationships should not be taken for granted. We know from decades of psychological research that contextual factors such as culture and the way educational and therapeutic interventions are implemented are extremely important. Given the newness of the technology and the lack of extensive non-commercial research on AI products for children, we must approach claims of effectiveness with caution.

The value of human presence

Early developmental periods are critical to preparing children for success as adults.

Social interactions with parents, friends and teachers can have profound impacts on a child’s learning, development and understanding of the world. But what if some of these interactions are with AI systems? Researchers in psychology, human-computer interaction, and learning sciences are investigating these and other related questions in ongoing research.

Ultimately, we don't think AI should be completely excluded from children's lives. Generative AI, with its conversational interfaces, access to vast information and media-creation capabilities, is an exciting development. Learning workshops, such as those hosted by the MIT Media Lab, offer children and young people the opportunity to learn about data and privacy and to discuss ideas about AI.

Children need human care and companionship and will always benefit from an engaged, emotional and thoughtful human presence. Empathy, compassion and validation are uniquely human.

Having another human feel what we feel and say, "If I were in your shoes, I would feel the same way," is irreplaceable. So perhaps we should leave these situations to those who handle them best: people!