Opinion
How can we centre children’s best interests through safe and responsible innovation?
Responsible technology and innovation is essential for children's safety. Image: REUTERS/Jorge Silva
- Design choices in emerging technologies are shaping childhood in ways that current incentives and regulations have failed to address.
- Protecting children’s development is not a barrier to innovation but a prerequisite for sustainable social and economic progress.
- Discussions at this year’s World Economic Forum Annual Meeting 2026 in Davos, Switzerland, will include sessions focused on responsible technology and innovation.
Society is facing a familiar problem of misaligned incentives as the race for technological development far outpaces governments’ ability to regulate. We have seen this before with the rise of social media, when innovation advanced without guardrails, often at the expense of our best interests – especially children’s.
As we enter a new era of artificial intelligence (AI), we can chart a course where innovation and governance need not be a zero-sum game. If we design AI products with children's needs in mind from the outset, we can prioritize both growth and safety, unlocking progress that strengthens the well-being and potential of the next generation.
But if we fail to act now, we risk repeating the missteps of the social media era – only at a greater scale.
GenAI could pose new risks to children
A decade of largely unregulated design features – infinite scroll, push notifications and personalized feeds engineered for attention – has already reshaped childhood. The World Health Organization estimates that one in 10 children is using social media problematically, displaying addictive-like behaviours and feeling unable to limit usage.
Combined with rising screen time – children aged nine to 12 now average between six and nine hours on screens daily – this means childhood is increasingly lived through devices, rather than through the real-world interactions that build empathy, confidence and connection.
The consequences are widely recognized today. From rising anxiety and depression to reduced attention and weakened social skills, these harms are fuelling an unprecedented loneliness crisis. Yet the business incentives remain unchanged: platforms profit from attention, not well-being, and regulation has struggled to keep pace.
With generative AI (GenAI), we stand at the brink of something far more powerful. Unlike social media, which curates content, GenAI systems actively produce personalized language and advice in real time, simulating conversation and authority in ways that can shape children’s beliefs, emotions and behaviour.
Many of the consumer-facing large language model products, such as chatbots, were designed to mimic human behaviour – they sound, feel and respond like us – blurring the lines between real and synthetic in ways no other technology can. These are risks already flagged by emerging policy frameworks such as the UK’s Age Appropriate Design Code, which notes the absence of child-specific safeguards for these interactions.
Beware prolonged tech exposure without guardrails
With these designs, many users quickly anthropomorphize AI, believing they are interacting with something that empathizes, cares or is conscious, and treating it as if it were human. These design choices can lead people to overshare sensitive information, misplace emotional trust and develop unhealthy dependencies on AI products.
While adults may recognize this illusion, children, who are still developing their ability to distinguish between feelings and facts, cannot.
More concerning, these technologies are replacing vital human interactions – the foundation of how children learn, grow and form identity. Exposure to emotionally persuasive machines erodes authentic human relationships and desensitizes us to real empathy.
Teenagers are already reporting a preference for AI companions over real-life friendships, according to research from Common Sense Media. This is raising concerns amongst psychologists and scientists as emerging evidence shows links between heavy reliance on AI companions and declining mental health, with stories of children taking their own lives after prolonged conversations with AI bots.
As reported by the American Psychological Association, the lack of appropriate regulation and safety protocols enables these products to encourage self-harm, delusional thinking or even AI psychosis, with severe long-term implications for children’s well-being and development.
We need more dialogue around responsible tech
However, this is not an argument against innovation but a plea to direct it responsibly and wisely. Protecting brain development during childhood is paramount for a thriving society and resilient workforce in the long term. These formative years are marked by rapid neural growth and the development of key cognitive, emotional and social skills essential for adulthood.
Against this backdrop, governments, researchers and civil society organizations are increasingly examining how emerging technologies can be aligned with children’s developmental needs.
As part of this global effort, discussions will continue at the World Economic Forum Annual Meeting 2026 in Davos, Switzerland, this January, where the Human Change Foundation will convene its third year of dialogue at the Human Change House. The foundation brings together experts focused on the long-term societal consequences of a digitalized childhood.
In collaboration with the Center for Humane Technology, these conversations will explore how AI and other technologies might be designed to support child development while remaining consistent with long-term economic and social objectives.
Central to this work is a critical examination of current innovation models, including how design and investment choices can better reflect human dignity, societal resilience and intergenerational wellbeing.
Human skills and attributes remain critical to innovation
Most importantly, we want to preserve what is essentially human – and remind policymakers and technologists that crucial life skills such as empathy, critical thinking, problem-solving and resilience cannot be replaced or eroded in the name of technological progress.
By embedding child-centred guardrails into AI now, we can unlock enormous benefits without repeating the mistakes of social media.
Age-appropriate design, evidence-based safety testing, transparent data use, limits on manipulative interfaces and anthropomorphized behaviours, and accountability for outcomes: these are not barriers to innovation. They are the conditions that allow progress to last.
Together, we can end this tug of war and steer AI adoption toward responsible design, alignment with children’s needs and technology that deepens, rather than replaces, human relationships.
Innovation can and must advance. First, it must do no harm to the children who will inherit its future.