How should government and society come together to address the challenge of regulating artificial intelligence? What approaches and tools will promote innovation, protect society from harm and build public trust in AI?
Artificial intelligence (AI) is a key driver of the Fourth Industrial Revolution. Algorithms are already being applied to improve predictions, optimize systems and raise productivity across many sectors.
However, early experience shows that AI can also create serious challenges. Without proper oversight, it may replicate or even exacerbate human bias and discrimination, displace jobs, and lead to other unintended and harmful consequences.
To accelerate the benefits of AI and mitigate its risks, governments need to design suitable regulatory frameworks. However, regulating AI is a complex endeavor. Experts hold diverse views on which areas and activities should be regulated, and approaches to regulating AI diverge sharply across regions. In some jurisdictions, a lack of consensus on a path forward and the risk of stifling innovation may deter any action at all. Emerging controversies surrounding AI can also push governments into hastily constructed, suboptimal regulatory policies.
Given the growing importance of this powerful technology, AI regulation should not be designed in a haphazard manner. A collaborative roadmap is needed to reimagine an agile regulatory system that encourages innovation while minimizing AI's risks.
This project brings together stakeholders from all sectors of society to co-design innovative, agile frameworks for governing AI. Underpinning this work is the belief that robust regulation promotes consumer confidence, provides the opportunity for global mobility, and grants social license for the adoption of emerging technologies.
Activities are centered on three core objectives:
– Framing national and global conversations on regulating AI in a coherent and accessible manner
– Developing a roadmap that helps policy-makers decide whether and how to regulate AI
– Identifying and iterating on innovative approaches and tools for regulating AI that can be scaled
September - October 2019: Scoping Phase
– Build core project community of key stakeholders
– Identify primary issues and knowledge base
October - December 2019: Policy Development
– Work with project community to frame conversation on AI regulation
– Identify potential pilot projects
– Produce draft ‘Policy Roadmap for AI Regulation’
January - June 2020: Pilot and Iterate
– Test roadmap with government and industry partners
– Pilot new approaches and tools for AI regulation
– Capture lessons and share findings
July 2020 onward: Scale
– Encourage broad adoption of the roadmap and tools based on lessons learned from pilot implementations
Key events
– Community session at the World Economic Forum Annual Meeting 2020, Davos-Klosters, Switzerland (21-24 January)
– Workshop with project community, San Francisco, USA (30-31 January)
– Second New Zealand workshop with project community, Waitangi (5 March)
– Community session, Global Technology Governance Summit, San Francisco, USA (21-22 April)
How to engage
Project community: Nominate experts, policy-makers or senior executives to provide ongoing input on the project
Fellow: Nominate an individual from your organization to play an integral role in shaping this initiative
For more information on this project, please contact Kay Firth-Butterfield, Head of AI and Machine Learning, at Kay.Firth-Butterfield@weforum.org or Lofred Madzou, Project Lead, at email@example.com. You may also reach out to us at firstname.lastname@example.org.