
We asked 4 tech strategy leaders how they're promoting accountability and oversight. Here's what they said

Organizations must focus on accountability and oversight to build trust. Image: Getty Images/iStockphoto

Daniel Dobrygowski
Head, Governance and Trust, World Economic Forum
Bart Valkhof
Head, Information and Communication Technology Industry, World Economic Forum
  • In the intelligent age, digital trust is a central component for any tech organization.
  • The World Economic Forum's Digital Trust Framework has been designed to support decision makers.
  • We asked four tech strategy leaders how they are promoting accountability and oversight.

Digital trust has become increasingly important in the intelligent age, where technologies impact our everyday lives. The World Economic Forum’s Digital Trust Framework was created to help decision makers build societal trust by aligning around three core goals: security and reliability; accountability and oversight; inclusive, ethical and responsible use.


In the second part of this series we focus on accountability and oversight, which requires organizations to take responsibility for trustworthiness by assigning it clearly to specific stakeholders, teams or functions, with provisions for addressing failures. It also ensures that rules, standards, processes and practices are followed and performed as required.

Three dimensions are critical to achieving accountability and oversight:

• Transparency: honesty and clarity around digital operations and uses. Giving stakeholders visibility into an organization's digital processes reduces the information asymmetry between the organization and its stakeholders, and signals that it intends not only to act in individuals' interests but also to make those actions known and understandable inside and outside the organization.
• Redressability: the possibility of obtaining recourse when individuals, groups or entities have been negatively affected by technological processes, systems or data uses. Recognizing that unintentional errors or unexpected factors can cause unanticipated harms, trustworthy organizations maintain robust methods for redress and mechanisms to make individuals whole when they have been harmed.
• Auditability: the ability of both the organization and third parties to review and confirm the activities and results of technology, data processing and governance processes. Auditability serves as a check on an organization's commitments and signals its intent to follow through on them.

    The development and deployment of trustworthy intelligent technologies is a shared responsibility. We therefore asked members of the Forum’s ICT Strategy Officers Community – a group of 40 diverse senior strategy leaders from around the world – for their experiences and insights in adopting the Forum’s Digital Trust Framework principles.

    Here’s what some of them had to say on promoting accountability and oversight.

    Vivek Mohindra, SVP Corporate Strategy, Dell Technologies

AI is a shared responsibility. This is particularly true of transparency and accountability in how AI models make decisions, which helps ensure fair, effective and trustworthy outcomes. At Dell, we take a holistic approach to managing risk grounded in our strategy and core beliefs. We were among the first in the industry to designate a chief AI officer and operationalize oversight through cross-business governance boards.

We keep accountability and oversight front and centre by examining how data is used and what actions models drive, building in transparency and explainability. Taking this into account, we then use the most responsible approach across the IT environment – from the data centre to PCs – to achieve our business outcomes, anchored in our priority AI use cases.


    One example is software development, where we’ve trained our engineers on responsible use of code-assistant technology to ensure accountability across existing development frameworks and tooling. This includes a code review process, along with various integrated validation scans for code security and quality.
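
As a rough illustration of the kind of gate this implies (a generic sketch, not Dell's actual process or tooling), a pre-merge check might run security and quality scanners over the codebase and block the merge on any finding, regardless of whether a human or a code assistant wrote the change. The scanner choices (bandit, flake8) and the source path are assumptions for the example:

```python
"""Illustrative pre-merge gate (a sketch, not Dell's tooling):
block changes that fail security or quality scans, whether a human
or a code assistant wrote them. Scanner choices are assumptions."""

import subprocess
import sys

# Each entry: (label, command). Both example tools exit non-zero
# when they report findings.
SCANS = [
    ("security scan", ["bandit", "-r", "src"]),
    ("quality scan", ["flake8", "src"]),
]

def run_gate() -> bool:
    """Run every scan; return True only if all of them pass."""
    ok = True
    for label, cmd in SCANS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"FAIL {label}\n{result.stdout}")
            ok = False
        else:
            print(f"PASS {label}")
    return ok

if __name__ == "__main__":
    # A failing scan blocks the merge; human code review happens
    # only after the automated gate passes.
    sys.exit(0 if run_gate() else 1)
```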

    As AI systems become increasingly prevalent and we strengthen our efforts to drive accountability and oversight, it’s important that we lean into uniquely human qualities, such as critical thinking and empathy. Working together, the human-AI partnership will drive the strongest and safest long-term outcomes.


    Christopher Young, EVP Business Development, Strategy and Ventures, Microsoft

    We believe trust is earned through action. Our commitment to accountability and oversight in the development and deployment of intelligent technologies reflects this belief. We've been dedicated to responsible AI since 2016, long before generative AI's rise in 2022.

    Our Responsible AI Standard guides our teams in making thoughtful, context-driven decisions throughout the AI development lifecycle. Creating a standard and governance framework only works if you practice what you preach. While everyone at the company plays a role in responsible AI, we also have governing bodies like the Office of Responsible AI and the AI Ethics and Effects in Engineering and Research (Aether) Committee to inform and oversee the creation and implementation of these standards.


    One example is Microsoft Security Copilot, AI-powered software helping customers summarize vast data signals into key insights, detect cyber threats before they cause harm, and reinforce their security postures. Developed under our Responsible AI Standard, Security Copilot identifies and mitigates key risks, ensuring potentially harmful content is surfaced only when requested by security professionals. Through ongoing monitoring during phased releases, the team triaged and addressed responsible AI issues weekly. This approach, combined with validation by subject matter experts, resulted in a secure, transparent, and trustworthy generative AI product.

Even with these efforts, we recognize that the dynamic nature and rapid pace of AI advancements present ongoing challenges. That's why we designed our Responsible AI Standard to be flexible and outcome-based, allowing new regulations to be integrated into our development cycle. We also participate in global AI safety and security forums to drive progress on science-based safety and security testing and standards. We are committed to our pledge to develop AI that is not only innovative but also trustworthy, while being transparent in the process.


    Steve Rudolph, VP Strategy and Transformation, Pegasystems

    AI accountability and oversight are essential, and many organizations have established governance teams to identify AI use cases and mitigate risks. However, if these principles aren’t integrated deeply into platforms, governance will become ineffective since it won’t have the right data and tools to support it.

    Here are three ways Pega helps clients achieve meaningful oversight:

    Decisioning: AI encompasses more than just machine learning; effective decisioning combines machine-learning-driven predictions with business rules and policies into decision strategies. This allows organizations to balance customer and stakeholder interests with business objectives and policy down to the individual decision level.
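
To make the pattern concrete, here is a minimal generic sketch (not Pega's implementation) of a decision strategy that filters model-scored actions through explicit policy rules before arbitrating on a next best action. Every action name, rule and threshold is invented for illustration:

```python
"""Generic decision-strategy sketch (not Pega's implementation):
ML propensity scores are combined with explicit business rules to
select a next best action. All names and thresholds are invented."""

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    propensity: float      # ML-predicted probability of acceptance
    expected_value: float  # business value if accepted

def is_eligible(action: Action, customer: dict) -> bool:
    """Policy layer: hard rules that override model scores."""
    if customer["age"] < 18 and action.name == "credit_card_offer":
        return False
    if customer["opted_out_marketing"] and action.name.endswith("_offer"):
        return False
    return True

def next_best_action(actions: list[Action], customer: dict) -> Action | None:
    """Filter by policy, then arbitrate by propensity-weighted value."""
    eligible = [a for a in actions if is_eligible(a, customer)]
    return max(eligible, key=lambda a: a.propensity * a.expected_value,
               default=None)

customer = {"age": 17, "opted_out_marketing": False}
actions = [Action("credit_card_offer", 0.4, 120.0),
           Action("service_checkup", 0.7, 15.0)]
# Policy rules out the card offer despite its higher weighted value.
print(next_best_action(actions, customer))
```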


Simulation and active monitoring: Insight during model design isn't enough. Continuous end-to-end monitoring of models and logic is crucial, especially when tens of millions of decisions are made daily. Before releasing new decision logic, clients can simulate it with the Ethical Bias Check, which proactively detects bias in next-best-action strategies. This also includes tracking performance against general business and customer goals or risk controls, with mechanisms for real-time alerts or self-correction if issues arise.
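
A bare-bones version of such a pre-release simulation (a generic sketch, not the Ethical Bias Check itself) replays the decision logic over a synthetic test population and flags large gaps in selection rates between groups. The decision rule, protected attribute and alert threshold are assumptions:

```python
"""Minimal pre-release bias simulation (generic sketch, not the
Ethical Bias Check): replay decision logic over a synthetic test
population and flag large selection-rate gaps between groups."""

import random

random.seed(7)  # deterministic synthetic population

def decide(customer: dict) -> bool:
    """Stand-in for the decision strategy under test."""
    return customer["score"] > 0.5

# Synthetic population with a protected attribute ("group").
population = [{"group": random.choice("AB"), "score": random.random()}
              for _ in range(10_000)]

rates = {}
for group in ("A", "B"):
    members = [c for c in population if c["group"] == group]
    rates[group] = sum(decide(c) for c in members) / len(members)

disparity = abs(rates["A"] - rates["B"])
print(f"selection rates: {rates}, disparity: {disparity:.3f}")
if disparity > 0.05:  # alert threshold (assumed)
    print("Bias alert: review the strategy before release")
```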

Explainable AI and audit: For accountability, not just data scientists but also domain-expert end-users and customers should be able to understand AI decisions and generative AI output. It is vital to be clear on the reasons behind actions, such as loan denials or claims investigations, or to explain what a generative AI answer is based on. Additionally, Pega maintains an audit trail of decisions and generative AI calls and responses, including the input and logic used (e.g. a history of prompts and answers), to help ensure transparency and the right to challenge automated outcomes.
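
Such an audit trail can be as simple as an append-only log recording, for each automated decision or generative AI call, the inputs, the logic applied and the output. The record schema below is an illustrative assumption, not Pega's format:

```python
"""Append-only audit trail sketch for automated decisions and
generative AI calls, so outcomes can be explained and challenged
later. The record schema is an illustrative assumption."""

import json
import time
import uuid

AUDIT_LOG = "decisions.audit.jsonl"

def record_decision(kind: str, inputs: dict, logic: str, output: str) -> str:
    """Append one immutable audit record and return its id."""
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "kind": kind,        # "decision" or "genai_call"
        "inputs": inputs,    # e.g. features used, or the prompt
        "logic": logic,      # rule set, model or prompt-template id
        "output": output,    # decision taken, or the model's answer
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["id"]

# Example: log the reasons behind a loan denial so the outcome can
# be audited and challenged.
record_decision(
    kind="decision",
    inputs={"applicant_id": "12345", "debt_to_income": 0.62},
    logic="loan_policy_v3: deny if debt_to_income > 0.45",
    output="denied",
)
```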


    Nitesh Aggarwal, Chief Strategy Officer, Tech Mahindra

    We are committed to applying the principles of accountability and oversight to foster trust among our key stakeholders. This commitment is evident in our approach to the development and deployment of intelligent technologies, ensuring that ethical considerations are embedded at every stage.

Applying the principles of accountability and oversight through rigorous governance frameworks and dedicated committees ensures that all technological developments are aligned with ethical standards and stakeholder expectations. We also conduct regular audits and reviews to maintain high standards of integrity and transparency.


    An example of this commitment is Makers Lab, an innovation hub where cutting-edge technologies are developed. All projects undergo stringent ethical reviews before deployment. For instance, the lab's AI-driven healthcare solutions are subject to thorough scrutiny to ensure patient data privacy and compliance with global data protection regulations. This process not only enhances the reliability of the technologies but also builds trust among users and stakeholders.

    Redressability is another critical aspect. For a US-based technology customer, we established robust mechanisms for addressing stakeholder concerns and grievances. This included dedicated support teams and clear escalation paths to resolve issues promptly and efficiently.

But the biggest challenge we face is the dynamic nature of technological advancements, which can outpace existing ethical frameworks and regulations. Staying ahead of these changes requires constant vigilance and adaptability. Collaborating with industry bodies and regulatory authorities is key to shaping and updating guidelines that keep pace with innovation.
