8 ways to ensure your company's AI is ethical

A code of ethics? Image: Ali Shah Lakhani/Unsplash

Barbara Cosgrove
Vice-President; Chief Privacy Officer, Workday

This article is part of: World Economic Forum Annual Meeting
  • Commitments to ethical AI are only valuable if they are implemented.
  • Sharing best practice will become increasingly important.
  • Here are 8 lessons for companies looking to integrate ethics into their AI.

Keeping up with artificial intelligence (AI) and data privacy can be overwhelming. While AI holds plenty of promise and opportunity, there are also concerns about data misuse and personal privacy being at risk. As we evaluate these topics and as the Fourth Industrial Revolution unfolds, questions arise about the promise and peril of AI, and how organizations can better realize its value.

Integrating 'ethics' into technology products can feel abstract for engineers and developers. While many technology companies are independently working out how to make it concrete and tangible, it is imperative that we break out of those silos and share best practices. By working collaboratively to learn from each other, we can raise the bar for the industry as a whole - and a good place to start is focusing on the things that earn trust.

Many companies are releasing high-level principles about their approach to designing and deploying AI products. But principles are only valuable if they are actually implemented. Workday recently published our Commitments to Ethical AI to show how we operationalize principles that build directly on our core values of customer service, integrity and innovation. Based on our experiences, here are eight lessons for technology companies looking to champion those principles across their organization:

1. Define what 'AI ethics' means. This definition needs to be specific and actionable for all relevant stakeholders in the company. At my company, it means our machine-learning (ML) systems reflect our commitment to ethical AI: we put people first; we care about society; we act fairly and respect the law; we are transparent and accountable; we protect data; and we deliver enterprise-ready machine-learning systems.

2. Build ethical AI into the product development and release framework. These cannot be separate processes that create more work and complexity for developers and product teams. Workday has built our principles into the fabric of our product development and created processes that drive continued compliance with them. New ML controls have been incorporated into our formal control framework to serve as additional enforcement of our ML ethics principles. Our development teams examine every ML product through an ethical lens by asking questions about data collection and data minimization, transparency and values. We have a long history of this in the privacy space, including privacy-by-design processes as well as third-party audits against our controls and standards. Workday has embraced a set of ethics-by-design controls for machine learning, and has in place robust review and approval mechanisms for the release of new technologies, as well as any new uses of data. We are committed to ongoing reviews of our processes, and to evolving them to incorporate new industry best practices and regulatory guidelines.

3. Create cross-functional groups of experts to guide all decisions on the design, development and deployment of responsible ML and AI. Early in this journey, Workday established a machine-learning task force comprising experts drawn from our product and engineering, legal, public policy and privacy, and ethics and compliance teams. Bringing these diverse sets of skills and views together to examine future and existing uses of ML in our products has been powerful, and has enabled us to identify potential issues early in the product lifecycle.

4. Bring customer collaboration into the design, development and deployment of responsible AI. Workday engages our customer advisory councils, drawn from a broad cross-section of our customer base, during our product development lifecycle to gain feedback on our development themes related to AI and ML. And through our early adopter programme, we work closely with a handful of customers who act as design partners to test new ML models and features through our innovation services. This enables us to understand and address customers’ ideas and concerns around AI and ML early on as we co-develop people-centric ML solutions.

5. Take a lifecycle approach to bias in machine learning. ML tools represent a phenomenal opportunity to help our customers leverage data to enhance human decision-making. With that opportunity comes the responsibility to build enterprise-ready tools that maintain our customers' trust, which is why one of the focal points of our commitments to ethical AI is mitigating harmful bias in ML. We use a 'lifecycle approach', through which we perform various bias assessments and reviews, from the initial concept for a new product through the post-release phase (a minimal sketch of what one such check might look like follows this list).

6. Be transparent. The ethical use of data for ML requires transparency. Because ML algorithms can be so complex, companies should go above and beyond in explaining what data is being used, how it is being used, and for what purpose. Explain to customers how your ML technologies work and the benefits they offer, describe the data needed to power any ML solutions you offer, and demonstrate accountability for those solutions.

7. Empower your employees to design responsible products. We do this through required ethics training modules, toolkits, seminars, onboarding sessions and workshops that equip employees to uphold our ethical AI commitments. For example, a human-centered design-thinking workshop uses different scenarios and personas to help Workday employees understand our commitments to creating ethical ML technologies.

8. Share what you know and learn from others in the industry. We do this through participation in industry groups and peer meetings such as the World Economic Forum Steering Committee for Ethical Design and Deployment of Technology, which is helping to develop an ethical framework for the tech industry. In addition, Workday makes it a priority to monitor and contribute to emerging standards. In the US, we have engaged heavily with lawmakers and agency officials on ethical AI, including developing and participating in a Congressional AI Caucus staff briefing on 'Industry Approaches to Ethical AI', and playing the role of convener between industry and policy-makers in multiple venues. We also provided support for the National Science Foundation’s update to the National Artificial Intelligence Research and Development Strategic Plan and the National Institute of Standards and Technology’s (NIST) development of their report, Artificial Intelligence Standards and Tools Development. We continue to advocate for an expanded role for NIST in the development of AI ethics tools. In Europe, Workday participated in a pilot programme to evaluate the Trustworthy Artificial Intelligence Assessment List developed by the European Union’s High-Level Expert Group (HLEG) on AI.
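To make the 'lifecycle approach' of lesson 5 concrete, here is a minimal sketch of what one automated pre-release bias check might look like. The article does not describe Workday's internal tooling, so everything below is an illustrative assumption rather than their actual method: the choice of demographic parity as the fairness metric, the 0.10 threshold, and all function names are hypothetical.

```python
# Illustrative sketch only: a pre-release gate that measures the gap in
# positive-outcome rates across groups (demographic parity difference)
# and blocks the release if the gap exceeds a chosen threshold.
# Metric choice, threshold and names are hypothetical assumptions.
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, int]]) -> dict[str, float]:
    """Per-group positive-outcome rates from (group, decision) pairs,
    where decision is 1 for a positive outcome and 0 otherwise."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        positives[group] += decision
    return {group: positives[group] / totals[group] for group in totals}

def demographic_parity_gap(decisions: list[tuple[str, int]]) -> float:
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # Hypothetical decisions from a candidate-screening model.
    sample = ([("group_a", 1)] * 60 + [("group_a", 0)] * 40
              + [("group_b", 1)] * 45 + [("group_b", 0)] * 55)
    rates = selection_rates(sample)
    gap = demographic_parity_gap(sample)
    # The printed summary doubles as a simple transparency artifact,
    # in the spirit of lesson 6.
    print(f"Selection rates by group: {rates}")
    print(f"Demographic parity gap: {gap:.2f}")
    THRESHOLD = 0.10  # hypothetical release-gate threshold
    print("PASS" if gap <= THRESHOLD else "FAIL: escalate for review before release")
```

In practice such a gate would run across several protected attributes and at multiple lifecycle stages (training data, pre-release testing, post-release monitoring), but the shape stays the same: measure, compare against an agreed threshold, and block or escalate.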

As we navigate this evolving world of ethical AI, it will become more important than ever to share practices and identify what we’ve learned along the way. We are eager to hear from others on what approaches have been effective for scaling and implementation, and we welcome the opportunity to share. In fact, the aim of Workday’s collaboration with the World Economic Forum is to encourage others to join us in sharing their best practices for championing responsible and ethical technology. The pursuit of responsible, ethical artificial intelligence and technology is critical - and is greater than any single company or organization.

Together, we should be building goodwill and trust through our actions, allowing us to realize the benefits of these powerful new technologies.
