How can we create actionable guidelines to protect, educate and empower children and young people in the age of artificial intelligence?
Children and young people today grow up in an increasingly digital world in which technology pervades every aspect of their lives. From robotic toys and social media networks to the classroom and the home, artificial intelligence (AI) is ubiquitous in a child's daily life.
However, special precautions must be taken to protect society's most vulnerable and disenfranchised demographic. Germany has banned certain AI-powered toys over surveillance risks to children, while regulators elsewhere in the world are beginning to confront questions about the control, use and monetization of children's data.
Privacy issues are compounded by questions about the impact of AI-enabled toys on cognitive development. Should traditional creative play be protected, or is early exposure to AI useful for children who will grow up engaging with it in the workplace? AI-enabled devices are increasingly able to manipulate users and foster addictive behaviour, and children are particularly susceptible. This is especially concerning given the prevalence of bias in AI, to which children are less attuned than adults.
In the absence of clear guidelines, parents and caregivers are left to make decisions about products with incomplete information and complex implications for their children’s health and privacy. As these devices come onto the market, all relevant stakeholders need to construct mechanisms to protect children while enabling “precision education” and other benefits of AI.
The World Economic Forum’s Platform on Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning is partnering with the United Nations Children’s Fund (UNICEF), Canadian Institute for Advanced Research (CIFAR), LEGO and others to address this urgent challenge.
This multistakeholder community of governments, academics, businesses, and international and civil society organizations is being brought together to develop best practice norms for the governance of AI targeted at children and young people.
The project will focus on three pillars:
- Protect: The stakeholder community will develop governance guidance for protecting children’s human rights and civil liberties when using and encountering AI in their homes, schools and public places.
- Educate: The community will develop frameworks for educating children and adolescents, as well as parents and guardians, about best practices and minimum guidelines for using AI technology. It will also develop toolkits for technical AI education to train and inspire children and young people to leverage AI in their education and professional careers.
- Empower: The community will develop opportunities for using AI to empower underrepresented children and young people, including girls, communities of colour and those in emerging economies. It will also equip children and young people with AI skills to create their own technology to improve the state of the world.
As part of the Generation AI project, the Forum will convene a Youth Council of young people interested in AI, who will serve as part of the project community and provide critical guidance and perspective on the project's direction and the materials the core community develops.
The Forum is also establishing a Toy Awards and Product Review to enable corporate governance through methodological assessment of AI-integrated toys available in the market. The selection committee will recognize toys for ethical design and for encouraging healthy child development, including privacy, self-determination and independent agency. The awards are also intended to inspire toy and game makers to harness the developmental promise of AI to promote individualized learning and adaptive creativity.
How to engage
Project community: Nominate experts, policy-makers or senior executives who can help guide this project by providing regular input as it develops.
Fellow: Nominate an individual from your company to work full- or part-time at the World Economic Forum’s Centre for the Fourth Industrial Revolution to play an integral role in shaping this initiative.
For more information, contact Kay Firth-Butterfield, Head of AI and Machine Learning, at Kay.Firth-Butterfield@weforum.org