
AI and education: Kids need AI guidance in school. But who guides the schools?


Kids need AI guidance as part of their education at school. Image: Emiliano Vittoriosi on Unsplash

Hadi Partovi
Founder and CEO, Code.org
Pat Yongpradit
Chief Academic Officer, Code.org

This article is part of: World Economic Forum Annual Meeting
  • A UNESCO survey shows that fewer than 10% of schools and universities have institutional policies or formal guidance for the use of generative AI.
  • The absence of such guidance means a greater risk of privacy breaches, uneven disciplinary actions, and ineffective implementation.
  • Below we lay out seven principles to consider when creating guidance to ensure the responsible and effective use of AI in education.

The rapid integration of artificial intelligence (AI) into society requires clear guidance in educational settings. Yet a recent UNESCO global survey of more than 450 schools and universities found that fewer than 10% had institutional policies or formal guidance regarding the use of generative AI.

In October, the Center for Democracy and Technology reported that 81% of parents say that guidance on how their child can responsibly use generative AI for schoolwork and within school rules would be helpful. And 72% of students agree that this same guidance would be helpful for them.


With the proper guidance, the use of AI in education can improve learning outcomes, bolster teacher instruction and well-being, and promote fairness. In the absence of such guidance, however, there's a risk of privacy breaches, uneven disciplinary actions, and ineffective implementation of AI technologies in the educational context.

In such a new field, what can leaders turn to when developing guidance for their school systems? Below are seven practical principles that educators, policymakers, and education leaders can consider to ensure the responsible and effective use of AI in education.

Seven principles

1. Purpose: Explicitly connect the use of AI to educational goals

AI should be employed purposefully to support and enrich the learning experience, promote student and staff well-being, and enhance administrative functions. The focus should be on using AI to help all students achieve educational goals while promoting equity and inclusivity and reducing the digital divide. AI tools must align with the shared education vision, catering to diverse learning needs and backgrounds.

By way of example, the Lower Merion School District in Pennsylvania, US, states: “Rather than ban this technology, which students would still be able to access off campus or on their personal networks and devices, we are choosing to view this as an opportunity to learn and grow.”


2. Compliance: Affirm adherence to existing policies

Implementing AI in education requires compliance with key areas of technology policy, including privacy, data security, student safety and data ownership. It's essential to align AI usage with existing regulations and ethical considerations, particularly regarding student privacy and data security.

3. Knowledge: Promote AI literacy

AI literacy involves understanding how AI works, its limitations, implications and ethical considerations. It's crucial to equip individuals with the knowledge and skills to engage responsibly with AI technologies. This encompasses elements of computer science, ethics, psychology, data science and more.

An example of this is Article 26 of Argentina’s Framework for the Regulation of the Development and Use of AI, which states: “AI training and education will be promoted for professionals, researchers, and students, in order to develop the skills and competencies necessary to understand, use and develop AI systems in an ethical and responsible manner.”

4. Balance: Realize the benefits of AI and address the risks

While AI offers numerous potential benefits for education, it's vital to acknowledge and mitigate its risks. Education systems should provide guidance on using AI responsibly, ensuring it supports community goals like improving student and teacher well-being and learning outcomes.

For example, in April 2023, the United Arab Emirates Office of AI, Digital Economy and Remote Work released 100 Practical Applications and Use Cases of Generative AI, a guide that includes detailed use cases for students, such as outlining an essay and simplifying difficult concepts.

“The potential for AI is obvious, and educating our future generation is just the beginning.”

– H.E. Omar Sultan Al Olama

5. Integrity: Advance academic integrity

AI presents both challenges and opportunities regarding academic integrity. It's important to address plagiarism risks while using AI to emphasize fundamental values like honesty, trust, fairness, respect and responsibility. AI tools can assist in cross-referencing information, but their limitations should be recognized so that authentic creation is still valued.

Teachers should be clear about when and how students may use AI on assignments. Below are three levels of AI use; the appropriate level should change depending on the assignment.

– Permissive: Students can freely utilize AI tools to assist in their assignments, such as generating ideas, proofreading or organizing content.

– Moderate: Students can use AI tools for specific parts of their assignments, such as brainstorming or initial research, but the core content and conclusions should be original.

– Restrictive: AI tools are prohibited for the assignment, and all work must be the student's original creation.

6. Agency: Maintain human decision-making

Any AI-supported decision-making must allow for human intervention and rely on human approval processes. AI should serve in a consultative role, augmenting but not replacing the responsibilities of educators and administrators.

By way of example, Peninsula School District, Washington, laid out its AI Principles and Beliefs Statement: “The promise of Artificial Intelligence (AI) in the Peninsula School District is substantial, not in substituting human instructors but by augmenting and streamlining their endeavors. Our perspective on AI in education is comparable to using a GPS: it serves as a supportive guide while still leaving ultimate control with the user, whether the educator or the student.”

7. Evaluation: Continuously assess the impact of AI

It’s crucial to regularly review and update AI guidance to ensure it meets the evolving needs of the educational community, and complies with changing laws and technology. Feedback from various stakeholders, including teachers, parents and students, is vital for continuous improvement.

The AI Guidance for Schools Toolkit and these seven principles provide a framework for implementing AI in education responsibly and effectively. By adhering to these guidelines, educators and policymakers can harness the benefits of AI while addressing its challenges, ensuring a balanced, ethical and inclusive approach to AI in education.

The TeachAI initiative, with over 70 advisory committee members and 60 education authorities, is dedicated to providing resources to connect the discussion of teaching with AI to teaching about AI. Sign up for updates on events and future releases at TeachAI.org.

The principles in this blog were summarized, with some help from ChatGPT, from the AI Guidance for Schools Toolkit developed by TeachAI, an initiative led by Code.org, ETS, the International Society for Technology in Education, Khan Academy, and the World Economic Forum. See the entire toolkit at TeachAI.org/toolkit.


