AI Ethics Framework

Creation of an overarching ethics framework for the design, development and deployment of artificial intelligence in India, along with self-assessment guidelines for public- and private-sector organizations

The challenge

As artificial intelligence adoption has increased, inherent risks of relying on AI have emerged across a number of areas. AI‑powered solutions can sometimes be discriminatory, unable to explain the decisions made by their algorithms, and a potential risk to individual privacy given their heavy reliance on data. Issues in AI development, such as “explainability”, transparency and accountability, remain unresolved, raising questions about ethics, privacy and security. Using AI with malicious intent – for example, to create “deepfakes” or autonomous weapons – can have serious repercussions for society. Alongside this, countries that enjoy the advantage of being able to freely collect and distribute data are likely to consolidate their strong position in an increasingly automated and digitized world.

The opportunity

Globally, a number of principles for the creation of ethical AI solutions have emerged in both the public and private sectors. However, there is a lack of AI guidelines specific to India. India’s National Strategy for AI emphasizes that AI‑related risks must be mitigated through effective policy, along with the creation of standards and awareness among relevant stakeholders. There is a need to develop effective guidelines for the public and private sectors to promote the creation of responsible AI for India.

The Indian government’s policy think tank NITI Aayog and the Centre for the Fourth Industrial Revolution are partnering to co‑design an ethics framework to ensure the responsible use of AI. The organizations will:

– Co‑design principles of AI ethics
– Jointly draft self‑assessment guides for the public and private sectors
– Draft enforcement guidelines for AI ethics principles across the public and private sectors and academia
– Test the AI ethics framework on use cases under the jurisdiction of different ministries

For more information on this project, please contact Kay Firth‑Butterfield, Head of AI and Machine Learning, or Arunima Sarkar, Project Lead.

Part 1: Principles for Responsible AI

Read Full Report here

Part 2: Operationalizing Principles for Responsible AI

Read Full Report here
License and Republishing

World Economic Forum projects may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.


© 2021 World Economic Forum
