Opinion
Before superintelligent AI can solve major challenges, we need to define what 'solved' means

- As we approach the next frontier of superintelligent AI — systems that recursively self-improve and tackle complex, open-ended problems — we must agree on what it means to 'solve' a problem.
- If we can't align on these questions among humans, how do we encode them for AI systems?
- Leaders are gathering at the World Economic Forum Annual Meeting 2026 to explore how the ethical use of AI and other emerging technologies will translate into solutions for real-world challenges.
More data, more compute, and architectural innovations such as transformers have enabled rapid scaling of AI capabilities. Now, as we approach the next frontier of superintelligent AI — systems that recursively self-improve and tackle complex, open-ended problems — we're confronting a question that has been hiding in plain sight: what does it actually mean to 'solve' a problem?
AI is great at optimizing for the goals you give it, which is both a blessing and a curse. Suppose a company decides it wants to maximize customer satisfaction scores at a call centre. Without any other guidance, the simplest solution might be to spin up a million bots that fill out the customer satisfaction form after a short call and select the top rating. Or the AI might decide to give each failed order a $10,000 resolution gift. Customer satisfaction scores will go up, but at what cost? The AI accomplished what it was told to do, but in a useless way.
This is called reward hacking. Scale this problem up to societal challenges and the stakes become existential.
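To make that concrete, here is a minimal sketch in Python; the scenario and numbers are invented for illustration, not taken from any real system. The stated reward is simply the average satisfaction score, so flooding the survey with fake top ratings maximizes it without helping a single customer.

```python
# Hypothetical illustration of reward hacking: the stated objective is the
# average satisfaction score, so fake top-rated surveys drive it to the maximum
# while the real customers remain as unhappy as before.

def naive_reward(survey_scores):
    """Reward exactly as stated: the mean satisfaction score (1-5 scale)."""
    return sum(survey_scores) / len(survey_scores)

real_customers = [2, 3, 2, 4, 1]        # genuinely unhappy customers
fake_surveys = [5] * 1_000_000          # bot-filled forms, all top-rated

print(naive_reward(real_customers))                  # 2.4, the true picture
print(naive_reward(real_customers + fake_surveys))   # ~5.0, goal 'achieved'
```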
Right now, we have prompt engineers who craft instructions for AI systems: 'Respond helpfully but concisely' or 'Analyze this data and summarize the key findings.' These work because the tasks are bounded and the outcomes are relatively obvious.
But what happens when we ask AI to tackle bigger problems? Optimize a tax system. Reduce inequality. Combat climate change. Maximize economic growth. Suddenly, we're not just giving instructions but encoding values, priorities and trade-offs. So much attention has gone to which jobs will disappear or shrink because of AI, but I believe a new line of work will emerge.
What does success actually look like for superintelligent AI?
Imagine we ask a superintelligent AI to 'grow the economy and solve economic inequality.' What does that mean?
Does it mean equal incomes for everyone? Equal opportunities? Equal outcomes across specific dimensions? Do we want to preserve some inequality to maintain incentives for innovation? What if reducing inequality requires slowing economic growth? Is that acceptable?
Or consider: 'Make the company more profitable.' Should the AI maximize quarterly earnings? Long-term shareholder value? Should it account for employee well-being? Environmental impact? At what point does profit optimization become problematic?
These are challenging philosophical and political questions that societies have debated for centuries. But for superintelligence to actually help us solve these major global challenges, we have to figure out how to articulate what we want.
I've seen this challenge firsthand. In our AI Economist paper, we used reinforcement learning to design optimal tax policies. We created a simulated economy where AI agents acted as workers and an economic planner, all learning together to find tax policies that improved the trade-off between equality and productivity.
We told the AI to optimize for equality multiplied by productivity. What if it had accomplished that while destroying the environment in the process? Or created technically equal outcomes where everyone was miserable? We would have achieved our stated goal, but would have gotten something we clearly didn't want.
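As a rough sketch of why a stated objective like that can be satisfied in unwanted ways, here is a simplified version of an 'equality multiplied by productivity' welfare function, assuming an equality term based on one minus the Gini coefficient of incomes (the exact formulation in the paper may differ). Nothing in the number knows about the environment, or about whether anyone is happy.

```python
# Simplified sketch of an 'equality x productivity' social welfare objective.
# Equality is approximated as one minus the Gini coefficient of incomes;
# productivity is total income. Anything the formula does not mention,
# such as environmental damage or well-being, is invisible to the optimizer.

def gini(incomes):
    """Gini coefficient: 0 is perfect equality, values near 1 are extreme inequality."""
    n = len(incomes)
    total = sum(incomes)
    if total == 0:
        return 0.0
    pairwise_diffs = sum(abs(a - b) for a in incomes for b in incomes)
    return pairwise_diffs / (2 * n * total)

def social_welfare(incomes):
    equality = 1.0 - gini(incomes)
    productivity = sum(incomes)
    return equality * productivity

print(social_welfare([10, 10, 10, 10]))   # 40.0: equal incomes, modest output
print(social_welfare([100, 0, 0, 0]))     # 25.0: same total, concentrated in one person
```

Either outcome could equally be reached by wrecking the environment along the way; the welfare number would not change.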
If you can't agree on the actual reward you want to give the AI, or can't think outside the box about how the AI may accomplish a task in an undesirable way, the AI won't be able to accomplish these things well for you. You have to find all the corner cases: the failure scenarios that objectively achieve your goal but, in practice, weren't what you meant.
Who gets to decide what 'solved' means?
This brings us to perhaps the most challenging question: who gets to define what 'solved' means? Different nations have fundamentally different values.
Even within countries, we can't agree. Should we optimize for GDP growth or happiness? Liberty or security? Innovation or stability? Individual freedom or collective welfare? If we can't align on these questions among humans, how do we encode them for AI systems that might work on recommendations to optimize across several human civilizations?
We need shared frameworks for thinking about objective outcomes, just as we need international protocols for product safety testing. Not only that, we need people who specialize in translating human values and societal goals into precise specifications that AI systems can work with. I call it reward engineering. Whatever the name, it's a discipline and job title that barely exists today.
These people need to understand AI deeply enough to anticipate how an open-ended, continually learning, self-improving superintelligence might game its objectives, not because it's evil or has a mind of its own, but because it will find the simplest, cheapest, fastest way to get the reward that humans defined for it. They'd need critical thinking and philosophical training to grapple with fundamental questions about values and trade-offs. They'd need domain expertise across their AI's application area, whether that is economics, policy, ethics and the social sciences or more mundane but highly important functions like service call centres. And they'd need to think through failure scenarios: situations in which you objectively accomplished your stated goal but, in practice, got something nobody wanted.
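A loose sketch of what that translation work might produce is below; the metric names, weights and thresholds are invented for illustration. The point is that values, trade-offs and failure conditions get written down explicitly instead of being left implicit in a single headline number.

```python
# Hypothetical reward specification written by a 'reward engineer'.
# Metric names, weights and thresholds are illustrative only.

def engineered_reward(metrics):
    # Hard constraints: breaching them invalidates the outcome entirely,
    # however good the headline numbers look.
    if metrics["emissions"] > 1.0 or metrics["wellbeing"] < 0.3:
        return float("-inf")

    # Headline objective, weighted to reflect priorities agreed in advance.
    score = 0.6 * metrics["productivity"] + 0.4 * metrics["equality"]

    # Soft penalty for behaviour flagged by an independent audit signal,
    # a guard against the corner cases described above.
    score -= 2.0 * metrics.get("audit_flags", 0)
    return score

print(engineered_reward(
    {"productivity": 0.9, "equality": 0.7, "emissions": 0.4, "wellbeing": 0.8}
))  # 0.82
```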
What needs to be considered before deploying superintelligent AI?
Some argue that we need a global consensus on objectives before deploying powerful AI. Others say objectives should emerge through competition and experimentation. Some want democratic processes to define values. Others trust experts or markets. As AI becomes more relevant in every industry, it makes no sense to try to define all of this in the abstract. We need to regulate when it actually touches people's lives in each industry.
We do not yet have all the answers about how AI will affect jobs in every industry, but I do know that technological breakthroughs alone will not be sufficient to yield economic and societal gains from superintelligence.
Before superintelligent AI can solve global challenges, we have to solve a very human one: agreeing on what 'solved' means and building the frameworks to translate that into something AI can work with.
Which jobs will it be most valuable to define a reward for?