Opinion
With AI, children risk learning to be human from a machine
Children and AI is a generational experiment that risks eroding the essential friction of human development. Image: REUTERS/Seth Herald
- A century-old experiment on children's development may provide clues about the risks that lie ahead as AI interacts with young children.
- Teenagers are already exposed to AI companions, but the younger a child, the greater the risk of developmental distortion.
- We must design human-centric AI that protects our ability to connect, cooperate and understand each other - skills that have shaped our humanity since the beginning.
In 1930, Winthrop and Luella Kellogg welcomed their first child, Donald. His birth provided Professor Winthrop the opportunity to investigate a long-standing question: Could a chimpanzee, raised as if human, develop capacities akin to those of its Homo sapiens cousins? Others had attempted this but found it difficult to replicate the thousands of daily micro-interactions that instill human habits, skills and culture. Winthrop’s idea: raise a chimpanzee alongside his son Donald.
By the time Donald was 10 months old, the Kelloggs had integrated Gua, a 7-month-old female chimpanzee, into their family. They provided Gua and Donald with the same physical and emotional comforts, schedule of activities and expectations for behavior. Gua quickly developed human characteristics, walking more upright, understanding language and even becoming (more or less) potty trained.
The extraordinary early findings suggested the study would be a gold mine for understanding human and primate development. Maybe it would even resolve that age-old debate of nature vs. nurture. Despite its potential, the Kelloggs abruptly ended the experiment after nine months: Donald was exhibiting chimp-like behaviors.
AI and children: An experiment on human development
We are about to run a similar experiment on young children across the world, only this time the nonhuman companions will be machines. Advances in speech recognition will soon make it possible for digital systems to understand the disfluent, highly variable speech of young children. Unburdened by the limitations of swiping, typing and clicking, young children will be able to have meaningful, sustained engagement across a wide range of digital tools, including companion AIs.
Companion AIs are reshaping how adolescents and adults work, learn and love. When optimized for toddlers and preschoolers, the implications could extend to how children develop persistence, social behavior, and the ability to build and sustain human relationships.
A preview of what’s coming
The companion AIs will emerge within an attention economy refined to convert young children’s engagement into dollars. A 2024 survey of US parents found that nine out of 10 three- to five-year-old children had access to a smartphone or tablet, with about half of these children being the sole or primary user of the device. According to parent reports, preschool-aged children spend on average more than two hours a day on screens. Regardless of what children are doing on devices, the economic incentive is clear: maximize engagement. Whether revenue comes from advertising, in-app purchases or subscriptions, business success depends on keeping children engaged.
If we want to understand what happens when young children seamlessly interact with sophisticated AI-enabled devices, a look at slightly older peers is a good place to start. Existing companion AIs, which let users hold real-time, ongoing, personal chats with AI ‘friends’ or role-play with online characters that present themselves as companions or even advice-givers, have become surprisingly popular with teens. While access to these companion bots is officially limited to adults, nearly three out of four US teens report having engaged with an AI companion, and more than half engage multiple times a month.
The dangers of teens’ involvement with companion AI have been well documented, with catastrophic examples of AI relationships contributing to teens harming themselves or others. Yet teens describe most of these interactions as innocuous and even positive. Some use AI companions to rant about a bad day, get advice or talk to ‘someone’ when they feel alone. Others use them in prosocial ways: rehearsing tricky social situations before trying them in real life, talking through challenges or problems, or practicing skills like job interviewing or conflict resolution that they can carry back into their offline lives.
Individual interactions with companion AI may not be concerning in themselves, but psychologists warn that this sort of regular engagement can blur the line between simulated and real relationships. The American Psychological Association (APA) notes that adolescents are less likely than adults to question a chatbot’s accuracy or intent, and may mistake its simulated empathy for genuine human understanding. The APA warns that adolescent relationships with AI can displace healthy real-life friendships and family ties and foster unhealthy dependency. Early research suggests strong attachment to AI characters can hinder the development of social skills and real-world emotional connection.
Most teens approach AI companions with a fundamental awareness that their interlocutor is different from a flesh-and-blood human. They view these systems as tools rather than substitutes for people. Teens overwhelmingly prefer human relationships to virtual ones, with 80% saying they prioritize real friends over AI companions and two-thirds saying that conversations with bots are less satisfying than those with friends. They’re skeptical of what AI says and are aware of the risks of misinformation, deepfakes and the potential for their personal information to be weaponized.
Preschoolers cannot hold AI at arm’s length in the same way older youth can, in part because the boundary between real and pretend is still forming. They often aren’t aware of how they learned something, whether they saw it, imagined it or were told it, making them more likely to accept AI-delivered information as true. Preschoolers freely attribute human thoughts, feelings or intentions to nonhuman entities. When presented with a companion AI that has eyes, a nose and ears, children will readily attribute feelings, intentions and even kindness or meanness to its actions. And once they feel comfortable, they will freely share who they are, where they live and intimate details of their lives.
Most preschoolers can recognize that AI is different from a human. But this is because today’s systems still present as machines: they pause a beat too long before speaking, speak a touch too fast, use a voice that’s a shade too mechanical, or use phrasing that sounds like a grown-up reading a script.
Those tells are fixable engineering problems. Fixing them requires low-latency turn-taking, less mechanical synthetic speech, more expressive patterns that include natural disfluencies and breath, child-directed lexical tuning, a consistent persona with memory of past chats, and embedding all of that in a friendly machine that can mimic natural gaze and contingent gesture.
With those problems fixed, will young children be able to distinguish human from machine? Not reliably. The closer an AI gets to warm, contingent, child-directed conversation, the more a young child will bond with it as a feeling, trustworthy friend.
The human stakes of machine companions
In the century since the Kellogg experiment, the nature vs. nurture debate has been decided: it’s both. A child’s genes provide the blueprint, but experiences shape how those genes are expressed. Interactions with other humans are among the most needed and consequential of these experiences. Development unfolds through what researchers call “serve-and-return”: a child acts, and another person responds. Over time, these repeated exchanges shape the architecture of the brain and influence every aspect of social and cognitive development.
Healthy development depends not only on warmth but also on limits. When children encounter frustration and receive calm, consistent responses, they learn to manage emotions, control impulses and adapt when things do not go as planned.
As children grow into toddlers and preschoolers, peers become central to that development. Learning to navigate relationships can be tricky, largely because most children under five believe others see, think and feel the same as they do. A few minutes of sharing toys with another toddler will quickly disabuse them of that notion.
The friction that arises when children’s desires collide is essential to healthy development. It’s how they build empathy, patience and understanding. In other words, it’s how they learn to be a friend. Four-year-olds who practice friendship learn to handle conflicts because they can communicate what they’re thinking and feeling, propose compromises and adjust when those compromises fall short, rather than letting the friendship fall apart.
If young children rely on companion AIs who never say no, never misunderstand and never leave, they may miss out on the experiences of friction, frustration and repair that throughout our time as a species have provided the essential skills that help us coexist.
In early childhood, we don’t just learn from our companions; we learn to be like them. The Kellogg experiment provided a dramatic example of this. Parents witness this daily when their child picks up a classmate’s habits, patterns of speech or values that are different from those used in their home. Why would we expect young children who engage in regular interactions with companion AIs to be any different?
Designing for AI that works for humanity
In the 2024 bestseller The Anxious Generation, Jonathan Haidt called out the tech industry and policymakers for experimenting on an entire generation by introducing social media without thinking through its consequences.
Introducing companion AIs for our youngest children represents the next big experiment. These devices and applications have the potential to usurp, at least to some extent, our most fundamental job as humans – raising young children.
The choices being made now will shape how companion AIs influence children’s development. Designers, investors and policymakers have options. Standards could require that AI companions intended for young children remain transparent about their nonhuman nature. Design norms could discourage persuasive features that exploit immature self-control or encourage excessive use. Limits on data collection could reduce incentives to maximize engagement at all costs.
This is not about rejecting innovation but aligning it with decades of developmental science showing how young children build empathy, resilience and cooperation through real-life relationships that include misunderstanding, negotiation and repair. As companion AIs are refined to become more fluid and responsive, the central question is not whether young children will engage with them, but how the engagements will shape emotional habits that carry into adulthood.
Let’s proceed intentionally and work together to ensure that companion AIs don’t program out fundamental human capacities for connection, cooperation and understanding that have shaped our humanity since the beginning.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.