This article was originally published by McKinsey & Company. Copyright (c) 2021 All rights reserved. Reprinted by permission.
- McKinsey’s most recent survey on the state of AI highlights the importance of best practices and risk management, and shows how businesses can maximize AI’s potential.
- Three artificial intelligence experts unpack the findings and answer key questions.
- These include standout findings, high performers and key enablers of change.
The results from McKinsey’s most recent survey on the state of AI are in. Conducted during a year of pandemic, it covered some 1,800 respondents from across a range of industries around the globe.
According to our findings, the adoption of AI continues to build; a full embrace of best practices is critical to high performance; and risk management remains complicated and challenging. This is the fourth year we’ve run the survey, and the first time MLOps (machine-learning operations, the set of best practices for the commercial use of AI) and cloud technologies emerged as critical differentiators.
Below, three of our experts share an inside look at the research: Michael Chui talks about the latest AI trends, Liz Grennan walks us through the complex world of AI and risk, and Kia Javanmardian explores MLOps, one of the industry's hottest topics.
What stood out for you in the findings?
The companies that are deriving the most benefits from AI are professionalizing or industrializing their capabilities. These high performers can attribute the greatest percentage of their profits to their use of AI. They weren’t necessarily spending more, but their project costs tended to stay within budget. Indeed, other companies were far more likely to have AI cost overruns.
The findings revealed that AI high performers follow many of the best practices. Is it an all-or-nothing scenario?
No, but the benefits are multiplicative; the best practices interlock and reinforce one another.
High performers have adopted MLOps, a set of practices and component tools (analogous to DevOps in software development and deployment) that have emerged over the past few years. When you put them together, they allow you to do things like train, deploy, and test models many times faster than when AI is approached as a craft. When you automate and industrialize these processes, you can repeatedly and predictably achieve significant returns on your AI investments.
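The train-test-deploy loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration of an automated deployment gate, not any particular vendor's tooling; the function names and the toy mean-predictor "model" are stand-ins.

```python
# Minimal sketch of the automated train -> test -> deploy loop
# that MLOps practices formalize. All names are illustrative.

def train(data):
    """Stand-in for model training: learn a mean predictor."""
    return sum(data) / len(data)

def evaluate(model, holdout):
    """Stand-in metric: mean absolute error of the mean predictor."""
    return sum(abs(x - model) for x in holdout) / len(holdout)

def deploy_if_better(model, error, current_error):
    """Deployment gate: ship only if the new model beats the current one."""
    return ("deployed", model) if error < current_error else ("rejected", None)

model = train([1.0, 2.0, 3.0])
error = evaluate(model, [2.0, 2.5])
status, _ = deploy_if_better(model, error, current_error=1.0)
```

The point of industrializing this loop is that it runs unattended and repeatably, rather than as a one-off craft exercise each time a model is retrained.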
According to the research, cloud is a key enabler for MLOps. Why is that so?
First, native to cloud environments are off-the-shelf tools, libraries, and frameworks that can speed up the AI model-development life cycle. Cloud also provides flexibility to ramp compute up and down as needed, which is especially useful for retraining models when necessary.
Together, these survey findings indicate that the combination of MLOps, cloud, and other best practices provides a solid foundation for capturing AI value at scale.
Liz Grennan, expert associate partner
How did risk and AI factor into the latest findings?
The highest performers are also those who are addressing risk management in AI. One worrisome finding is that cybersecurity continues to slip down the list of concerns. There’s no set cyber standard across organizations, which underscores, for me, the need for every organization to come up with its own framework — and that isn’t easy. We see three risk categories: cyber, data, and AI. They’re all completely interdependent and require an integrated risk model.
What are some of the consequences of poor AI risk management?
One of the worst outcomes is that it can perpetuate systemic discrimination and unfairness. Biased training data can mean women not getting hired, or people of color being denied employment, loans, housing, and other benefits. In one pandemic-era example, students who were unable to sit for exams were excluded from university simply because they came from historically poorly performing high schools, despite their own excellent personal records; the algorithm generating proxy test scores was inherently biased.
Without AI risk management, unfairness can become endemic in organizations and can be further shrouded by the complexity.
How does a company start with an AI risk management program?
The easiest, and perhaps the best, place to start is to establish the set of ethical values you want for your business, sort through how to operationalize those values into a framework, and determine the lens through which you will evaluate risk.
Thematically, fairness and privacy are two highly important values, in addition to security, explainability and transparency, model performance, and safety. And it’s important to understand the regulations that are in place for the applicable industry and geography.
How does AI risk relate to cyber risk?
A complex AI system is a perfect target, and the more scaled up the AI, the bigger the threat. A bad actor who compromises the model and inserts faulty or malicious data can harm large numbers of people in very personal, profound ways.
What makes you optimistic?
I work with an organization that aspires to be a global leader on human rights issues because they are ‘conscience first.’ They’ll state a values-driven aspiration and then they’ll weigh it for feasibility, costs, and other relevant business drivers. They want their values to be a market differentiator—it’s that sort of position that makes me optimistic.
Kia Javanmardian, senior partner
Why is MLOps becoming critical to AI implementations?
We have been using the car-factory analogy and it holds up pretty well: MLOps is the factory you build to scale your analytics. There are a few big-picture concepts.
The first step is to shift some of what you spend on R&D and pilots to building the infrastructure that will allow you to mass-produce and scale your AI projects. Second, you need to monitor the data your models are using — to stick with the car analogy, a gas gauge or dashboard — so that you can track the quality of the data going into and out of your models, and their level of performance.
Third, if you are building every car from scratch, down to the door handle, it’s going to take you an awful lot of time and energy to build each car.
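The "gas gauge" idea — checking the quality of data before it reaches a model — can be sketched as a minimal batch check. This is a hypothetical illustration; the names, fields, and thresholds are not from the survey or any specific MLOps product.

```python
# Hypothetical "gas gauge" for a model: summarize missing and
# out-of-range values in a batch of inputs before scoring.
from dataclasses import dataclass

@dataclass
class DataQualityReport:
    """Summary of one batch of model input data."""
    null_rate: float          # fraction of missing values
    out_of_range_rate: float  # fraction outside the expected interval

def check_batch(values, lo, hi):
    """Compute missing-value and out-of-range rates for a batch."""
    n = len(values)
    nulls = sum(1 for v in values if v is None)
    out = sum(1 for v in values if v is not None and not (lo <= v <= hi))
    return DataQualityReport(null_rate=nulls / n, out_of_range_rate=out / n)

report = check_batch([0.2, None, 1.7, 0.9], lo=0.0, hi=1.0)
# A production pipeline would alert when either rate crosses a threshold.
```

In practice this kind of check runs continuously against live data, so degrading inputs are caught before model performance silently drops.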
So, where does MLOps fit in?
It’s based on the concept of building a library of standard parts or code. Your data scientists will go from creating models to spending a good chunk of their time assetizing them, converting them into reusable Lego-like parts.
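The "library of standard parts" can be made concrete with a toy example: small, reusable preprocessing steps chained into a pipeline instead of being rewritten for every model. The step names and the pipeline helper here are hypothetical, not part of any standard library.

```python
# Hypothetical "library of standard parts": reusable, Lego-like
# steps that data scientists assemble rather than rebuild each time.

def impute_mean(values):
    """Replace missing values with the mean of the known values."""
    known = [v for v in values if v is not None]
    mean = sum(known) / len(known)
    return [mean if v is None else v for v in values]

def min_max_scale(values):
    """Scale values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def pipeline(values, steps):
    """Chain reusable steps into one preprocessing pass."""
    for step in steps:
        values = step(values)
    return values

features = pipeline([1.0, None, 3.0], steps=[impute_mean, min_max_scale])
```

Each part is written and tested once, then reused across projects — which is what "assetizing" a model-building step means in practice.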
Finally, you don’t have the people who designed the car — your best engineers — assembling and maintaining it. Instead, they focus on what it takes to get the engine from 400 to 800 horsepower.
How widespread is MLOps?
Digital natives like Google and Amazon have been practicing MLOps for years to build their products. Very few non-digital natives are using it at scale.
What do people think when they hear about it?
Managers get the problem, but they can be overwhelmed by the solution. It’s very technical, involving data infrastructure, governance, risk practices, and systems. And it’s about organizational change, talent mix, and evolving roles.
It can be overwhelming. But the companies that are practicing MLOps are getting orders-of-magnitude higher returns for the same relative AI investment.