AI governance trends: How regulation, collaboration and skills demand are shaping the industry
Many leaders acknowledge that mitigating AI risks can lead to a competitive advantage and fundamentally contribute to their organization’s success. Image: Unsplash/Google DeepMind
- AI is transforming industries, leading to a growing demand for innovative solutions and trained professionals to address governance needs.
- The self-governance of AI systems requires both organizational and technical controls in the face of new and constantly changing regulatory activity.
- The path forward for AI is open and collaborative, with automation needed to scale responsible adoption.
Artificial intelligence (AI) is transforming industries across the board as leaders recognize the technology's potential to improve productivity, enhance creativity, boost quality, and generate new solutions. This has created a surge in demand for AI, and in particular, generative AI.
Unlocking the value of generative AI with responsible transformation can be a game-changer. Generative AI use cases can be found in various industries, from inspiring new designs in the furniture industry and personalizing marketing to accelerating drug discovery in the pharmaceutical industry.
However, along with AI's potential value, leaders are also concerned about its risks, including bias, safety, security, and loss of reputation if something goes wrong.
Many leaders also acknowledge that mitigating these risks can lead to a competitive advantage and fundamentally contribute to their organization’s success. Therefore, adopting the technology ethically and responsibly has become a key consideration, leading to the rapid emergence and adoption of AI governance.
AI regulation is here and expanding
AI is already subject to applicable regulations that focus on more than just the technology itself. For example, laws focus on privacy, anti-discrimination, liability, and product safety. In addition, AI-focused regulatory activity is expanding.
In 2024, the European Union’s Artificial Intelligence Act, or EU AI Act, was finally passed into law after years of debate and anticipation. Like the General Data Protection Regulation (GDPR), we expect it to influence many similar laws in other regions of the world.
AI has been discussed by policymakers worldwide and was mentioned in legislative proceedings twice as frequently in 2023 as in 2022. In some cases, regulatory activity is explicitly focused on generative AI, such as China’s Interim Administrative Measures for Generative Artificial Intelligence Services.
There has also been an increase in AI-related standards activities and cross-jurisdictional collaboration, exemplified by initiatives driven by the Organisation for Economic Co-operation and Development (OECD), the US National Institute of Standards and Technology (NIST), the United Nations Educational, Scientific and Cultural Organization (UNESCO), the International Organization for Standardization (ISO), and the Group of Seven (G7).
In 2024, we saw an increased focus on AI safety with the launch of new AI safety institutes and the expansion of efforts driven by institutes in the US, the UK, Singapore, and Japan. Also, the new EU AI Office, established under the EU AI Act, will focus on developing best practices.
We can expect this growth in the volume and variety of AI regulations and standards to continue for the foreseeable future as policymakers grapple with how to manage AI risks. International agreements on interoperable standards and baseline regulatory requirements will play an important part in enabling innovation and improving AI safety.
AI self-governance will require both organizational and technical controls
Many organizations choose to adopt self-governance approaches to further align AI use with their organizational values and strengthen their reputation.
In fact, implementing an organization’s principles often involves meeting ethical standards that extend beyond regulatory requirements. Organizations may choose to leverage voluntary methods and frameworks such as the US NIST’s AI Risk Management Framework, Singapore’s AI Verify framework and toolkit, and the UK’s AI Safety Institute open-sourced Inspect AI safety testing platform.
Self-governance of AI systems will involve both organizational and, increasingly, automated technical controls. AI requires strong organizational management systems with controls like those described in the ISO/IEC 42001 international standard.
Technical controls are equally important because AI systems are socio-technical. Automation can often support these controls, for example through automated AI red teaming (a structured way of testing AI models to identify issues and help protect against harmful behaviours or outcomes), metadata identification, logging, monitoring, and alerting.
Automation will be needed as the technology reaches speeds and intelligence that require real-time controls. Furthermore, as harmonized standards and technical AI safety controls advance, many organizations will embrace them.
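As an illustration of the kind of automated technical control described above, the sketch below logs every model output and raises an alert when an output matches a flagged pattern. It is a minimal, hypothetical example: the function name, logger name, and patterns are all assumptions, and a production control would rely on trained classifiers and policy engines rather than simple regular expressions.

```python
import logging
import re

# Hypothetical patterns a deployment team might flag for review.
# Real controls would use trained classifiers, not simple regexes.
BLOCKED_PATTERNS = [re.compile(p, re.IGNORECASE)
                    for p in (r"\bpassword\b", r"\bssn\b")]

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_governance.monitor")


def monitor_output(model_output: str) -> bool:
    """Log a model output and alert on matches against blocked patterns.

    Returns True if the output passes the check, False if it was flagged.
    """
    logger.info("model output logged: %r", model_output)
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(model_output):
            logger.warning("ALERT: output flagged by pattern %s",
                           pattern.pattern)
            return False
    return True
```

In a real deployment, such a check would sit alongside human review: flagged outputs feed an incident-management queue rather than being silently dropped.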
Human and AI collaboration will remain important. For instance, IBM has implemented organizational controls through its AI Ethics Board and Integrated Governance Program and offers solutions to help clients implement organizational and technical controls such as watsonx.governance.
A growing need for skilled AI professionals
As the AI governance market continues to expand rapidly, skilled AI professionals who can implement responsible organizational and technical controls will be in high demand.
The playing field will range from existing technology leaders to new start-ups focused almost exclusively on AI governance solutions such as AI inventory management, policy management and reporting.
The market will evolve to support specialized areas such as incident management, red-teaming, conformity assessments, transparency reporting and technical documentation. As this occurs, the number of trained AI governance professionals will need to grow to support it.
These professionals will need training to perform tasks specific to market demands, including tasks required by regulations. Education and certification programmes will play a critical role. In fact, we have already seen new certification options appear, such as the International Association of Privacy Professionals' Artificial Intelligence Governance Professional certification.
While costs are associated with training professionals and implementing AI ethics and governance practices more broadly, the costs of not doing these things can be even higher.
Organizations can leverage a holistic approach to evaluating return on investment by examining not just traditional returns but also the returns gained through reputational impact and the possibility of building new organizational capabilities.
Leaders should ensure they view AI governance from a value generation perspective and not purely one of risk avoidance.
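The holistic view of return on investment described above can be made concrete with a simple calculation. The function and all figures below are hypothetical: quantifying reputational impact and new capabilities in practice requires its own methodology, but the arithmetic shows how counting those streams can turn an apparently negative ROI positive.

```python
def holistic_roi(cost: float,
                 traditional_return: float,
                 reputational_return: float,
                 capability_return: float) -> float:
    """Return ROI as a fraction of cost, counting all benefit streams.

    All inputs are illustrative estimates in the same currency units.
    """
    total_benefit = (traditional_return
                     + reputational_return
                     + capability_return)
    return (total_benefit - cost) / cost


# A governance programme costing 1.0M that returns 0.8M directly looks
# negative on traditional ROI alone, but positive once 0.3M of estimated
# reputational value and 0.2M of new capabilities are counted.
print(holistic_roi(1.0, 0.8, 0.3, 0.2))  # 0.3, i.e. +30%
```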
The path forward is open and collaborative
AI is changing and enabling the way humans work and live, providing unprecedented benefits to individuals, organizations and society.
Scaling AI responsibly requires skilled professionals and, increasingly, automation. These professionals can also support the growing need for effective regulatory compliance and self-governance.
While companies are currently finding exciting new ways to use AI, the future holds even greater promise as it converges with other technologies such as quantum, robotics, biotechnology, and neurotechnology.
In this context, open technology and collaboration with diverse stakeholders will be critical. As AI transforms industries, AI governance is a crucial enabler in unlocking the full value of this technology.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.