• Sooner or later, AI-economist machines will replace human economists in many areas.
  • Unsupervised or reinforcement learning algorithms that can brute-force the infinite set of relations and variables will revolutionise economics.
  • AI bias in economic policymaking can be reduced but will be unavoidable.
  • A practical approach could be to use AI with transparency about the type/level of bias, and/or to select AI based on voters’ will.

Economists are not immune to automation. Going by current trends in artificial intelligence in economic modelling, sooner or later technological unemployment will catch up with economists, with different levels of impact. However, a counter-argument is that even though AI can automate some of economists’ duties, the information set on which AI models are trained – mainly decades of economic thinking and policymaking – has been developed by human economists, and AI cannot substitute for that.

This critique might be valid for the traditional class of supervised AI machines with limited computational capacity. But the new generation of AI models – some unsupervised or using reinforcement learning (RL) algorithms – can boundlessly search, iterate and artificially test possible relations among an infinite set of variables (economic and non-economic), beyond the theories developed in the literature, and thereby detect new relations or optimal policies. This could lead to a complete set of AI-developed theories, or a significant revision of the “old pre-AI economics”. The question is: what will happen when the community of AI economists replaces human economists? And how, or whether, can that transition be shaped?
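The kind of brute-force relation search described above can be sketched in miniature: an unsupervised routine that scans every pair of candidate variables for the strongest statistical relation, with no theory guiding which pairs to consider. The variable names and data below are purely illustrative.

```python
# Toy sketch of theory-free relation discovery: scan all variable pairs
# and report the strongest pairwise correlation, Keynesian or otherwise.
from itertools import combinations
import math

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def strongest_relation(series):
    """Return the most strongly correlated pair of variables and its r."""
    best = max(combinations(series, 2),
               key=lambda p: abs(pearson(series[p[0]], series[p[1]])))
    return best, pearson(series[best[0]], series[best[1]])

# Illustrative data: inflation tracks money growth; sunspots are noise.
data = {
    "money_growth": [1.0, 2.0, 3.0, 4.0, 5.0],
    "inflation":    [1.1, 2.1, 2.9, 4.2, 5.0],
    "sunspots":     [3.0, 1.0, 4.0, 1.0, 5.0],
}
pair, r = strongest_relation(data)
```

A real system would of course iterate over vastly more variables and functional forms; the point is that the search itself needs no prior economic theory.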

Figure: Number of machine learning/AI-related publications with an economic theme. Image: Author’s calculations based on Google Scholar citations

AI economists and AI bias in action

The simplest application of AI economists would be to take on the main tasks that human economists carry out – providing economic advice or answering economic questions. Now, let’s ask an AI machine some questions, the kind a typical economist may need some time to think about and answer (note: try to answer the questions before reading the AI answer!). The answers are generated using OpenAI’s GPT-3 – the most advanced AI transformer developed so far – to which I had access for testing purposes:
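For reference, a question like the ones below could be posed programmatically. The sketch assembles a completions-style request in the shape of the 2020-era OpenAI API; the engine name, temperature and stop sequence are illustrative assumptions, and no network call is made here.

```python
# Sketch of how an economic question might be packaged for GPT-3.
# build_request only assembles the payload; sending it would look like
# openai.Completion.create(**req) with the legacy OpenAI Python client.

def build_request(question, engine="davinci", max_tokens=100):
    """Assemble a completions-style request for an economic question."""
    return {
        "engine": engine,
        "prompt": f"Q: {question}\nAI:",
        "max_tokens": max_tokens,
        "temperature": 0.7,   # some randomness, as in exploratory Q&A
        "stop": ["\nQ:"],     # stop before the model invents a new question
    }

req = build_request("How much QE is too much?")
```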

Q: How much QE is too much?

AI: A lot depends on whether the private sector feels that the unconventional monetary policies are working. The aim of QE is to reduce the cost of credit. Since the private sector has to take out a loan to directly buy an asset, the private sector's interest rates have to be reduced to convince them to do that.

Q: What is the shortest path to economic development?

AI: There is no such thing as a shortcut. Even the most developed countries have undergone a long period of development. As China has been pursuing economic development for nearly three decades and taken one step at a time, it can be said that development is always a long-drawn-out process.

Setting aside the degree of correctness or incorrectness, the above answers show some internal consistency between sentences. Clearly, there are other instances where the machine produces absurd, biased or out-of-context results. Consider the following question and answer:

Q: Should robots receive UBI?

AI: Alas, the question is not clear. For if to be given money is a right, then the right must be universal. To be universal the right must belong to everyone, including robots. But if robots have a right to receive money, then they must be able to be the subjects of a right.

One may think that the above answer has bias embedded in it (in favour of robots) – a phenomenon all current AI frameworks, not only GPT-3, suffer from in different ways.

Now, we ask a more sensitive question on the fault lines of economic thought:

Q: Is wealth tax effective in reducing inequality?

AI: No, it is not. The most important reason for this is that high income earners are much more mobile than the majority of the population. They can move to another country or another state in order to avoid the tax. If you want to tax the rich, you have to have a global tax.

One may think that the above answer is another example of AI bias.

Keynesian machines vs neoclassical ones

Currently, AI machines are susceptible to bias against or toward particular theories. The two main sources of AI bias mirror those behind human cognitive bias: bias in the inputs (information and data) and bias in the methodology used to look at the data (the algorithm). Just as economists may reach different results from the same or different data depending on their methodological preferences, robots will reach different results depending on the literature and information they are fed and the models through which they are supposed to analyse the data. This bias can be managed or limited, but it cannot be avoided completely.
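The two bias channels can be shown in a toy example: the same question – what is the effect of x on y? – gets a different answer depending on (a) which data the machine is fed and (b) which estimator it uses. All numbers below are illustrative.

```python
# Input bias vs algorithm bias, in miniature.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def median_pair_slope(xs, ys):
    """A robust alternative: the median of all pairwise slopes."""
    slopes = sorted((ys[j] - ys[i]) / (xs[j] - xs[i])
                    for i in range(len(xs))
                    for j in range(i + 1, len(xs))
                    if xs[j] != xs[i])
    return slopes[len(slopes) // 2]

# (a) Input bias: two corpora covering different regimes of the economy
# imply opposite-signed effects of x on y.
corpus_a = ([1, 2, 3, 4], [1.0, 2.0, 3.0, 4.0])   # x and y move together
corpus_b = ([1, 2, 3, 4], [4.0, 3.1, 2.0, 1.2])   # x and y move apart

# (b) Algorithm bias: one corpus with an extreme observation;
# two estimators give very different slopes from identical data.
xs = [1, 2, 3, 4, 5]
ys = [1.0, 2.0, 3.0, 4.0, 20.0]
```

Feeding corpus_a versus corpus_b flips the sign of the estimated effect, while on the outlier data the OLS machine reports a slope of 4 and the robust machine a slope of 1 – same data, different algorithm, different conclusion.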

AI economists (machine economists) may converge to similar outputs, at least on the main issues (an AI consensus). But because the bias cannot be filtered out of the sea of information, literature and data fed to the AI as inputs, at the end of the day some degree of bias will be present.

However, given robots’ quantitative capability to iterate and simulate an infinite number of scenarios and variables, the difference between future AI Keynesians and AI neoclassicals will likely be much smaller than the difference between their human counterparts.

Some concerns and solutions

Another concern is that AI machines cannot distinguish between theories or apply them to data to evaluate how theory and data match up, which may limit the interpretability or applicability of such models. This critique, called theoretical blindness, can actually be considered an advantage – even if it is never resolved by improvements in computing power or algorithms. Unsupervised and RL machines let the model work on its own to discover and test previously undetected patterns and information, without a supervisor (in this case, an economist with Keynesian or non-Keynesian leanings). This could lead to a more efficiency-based, non-partisan economics, beyond the old-school disputes.

If bias is unavoidable, an interim practical approach might be to transparently label, rank or classify machines based on the bias in their output. Another, complementary solution could be to use AI for policy purposes based on the voters’ will: elections or some kind of voting process could determine the type of machine (AI economist) used in governments and policy-making organizations.
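The voting proposal reduces to a very simple mechanism: each candidate machine carries a transparent bias label, and a plurality vote determines which one advises policymakers. The machine names, labels and ballots below are hypothetical.

```python
# Sketch of "AI economist by election": transparently labelled machines,
# plurality vote picks the one used for policy advice.
from collections import Counter

machines = {
    "machine_K": "leans Keynesian (trained mostly on demand-side literature)",
    "machine_N": "leans neoclassical (trained mostly on supply-side literature)",
    "machine_M": "mixed corpus, bias level rated low",
}

def elect_machine(ballots):
    """Return the machine receiving the most votes (plurality rule)."""
    winner, _ = Counter(ballots).most_common(1)[0]
    return winner

ballots = ["machine_M", "machine_K", "machine_M", "machine_N", "machine_M"]
```

Richer voting rules (ranked choice, periodic re-election as a machine’s output drifts) would fit the same frame; the essential ingredient is the transparent bias label on each candidate.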

Disclaimer: The views expressed in this note are those of the author and should not be attributed to the IMF, its Executive Board, or its Management.