How should government and society come together to address the challenge of regulating artificial intelligence? What approaches and tools will promote innovation, protect society from harm and build public trust in AI?
Artificial intelligence (AI) is a key driver of the Fourth Industrial Revolution. Algorithms are already being applied to improve predictions, optimize systems and drive productivity across many sectors.
Early experience shows, however, that AI can create serious challenges. Without proper oversight, AI may replicate or even exacerbate human bias and discrimination, displace jobs and lead to other unintended and harmful consequences.
To accelerate the benefits of AI and mitigate its risks, governments need to design suitable regulatory frameworks. Regulating AI is, however, a complex endeavour. Experts hold diverse views on which areas and activities should be regulated, and approaches to regulating AI diverge across regions. In some jurisdictions, a lack of consensus on a path forward may deter action; in others, emerging controversies surrounding AI can prompt governments to implement hastily constructed regulatory policies.
Given the growing importance of this powerful technology, AI regulation should be planned in a collaborative and open way, encouraging innovation, minimizing risks and building trust. Regulation needs to be reimagined as a co‑designed and flexible system of levers, tools and incentives.
This project brings together people from throughout society to co‑design innovative frameworks for governing AI. Underpinning the project is the view that trust is necessary for AI’s potential to be fully realized. Having appropriate safeguards in place will increase consumer and citizen confidence and provide opportunities for global mobility.
Activities are centred on three core objectives, which will help make AI systems transparent and build trust in the design and use of AI:
- Framing national and global conversations to help people understand the issues and choices in AI
- Developing a road map to create a national body that will provide support and advice to users of AI
- Identifying and iterating innovative risk assessment approaches and tools that can be scaled up
How to engage
Project community: Nominate experts, policy-makers or senior executives to provide ongoing input on the project
Fellow: Nominate an individual from your organization to play an integral role in shaping this initiative
For more information on this project, please contact Kay Firth-Butterfield, Head of AI and Machine Learning, at Kay.Firth-Butterfield@weforum.org or Lofred Madzou, Project Lead, at Lofred.Madzou@weforum.org.