We asked 6 tech strategy leaders how they're promoting security and reliability. Here's what they said
Ensuring security and reliability means protecting technology and data against internal and external attacks. Image: Getty Images/iStockphoto.
- Building digital trust is essential for any organization working in the tech industry.
- The World Economic Forum's Digital Trust Framework has been designed to support decision makers.
- We asked six tech strategy leaders how they are promoting security and reliability.
In the rapidly evolving intelligent age, where digital trust is increasingly important, the tech sector has a responsibility to its stakeholders to ensure that the technologies and services it provides protect and uphold societal expectations and values.
The World Economic Forum’s Digital Trust Framework outlines a set of three goals – security and reliability; accountability and oversight; inclusive, ethical and responsible use – which the tech industry can use to inform decision-making in the pursuit of those aims.
In the first part of this series we focus on security and reliability, which refers to an organization’s ability to protect its technology and data against internal and external attacks, manipulations and interruptions, while operating as designed according to a clearly defined set of parameters.
These three dimensions are critical to achieving it:
- Privacy: for individuals, it is the expectation of control over, or confidentiality of, their personal or personally identifiable information. For organizations, privacy is the meeting of this expectation through the design and manifestation of data processing that facilitates individual autonomy through notice and control over the collection, use and sharing of personal information.
- Cybersecurity: focused on the security of digital systems – including underlying data, technologies and processes. Effective cybersecurity mitigates the risk of unauthorized access and damage to digital processes and systems, ensuring resiliency. It also ensures the confidentiality, integrity and availability of data and systems.
- Safety: encompasses efforts to prevent harm (e.g. emotional, physical, psychological) to people or society from technology uses and data processing.
Following on from our article about the importance of trustworthy development and deployment of intelligent technologies, we asked members of the Forum’s ICT Strategy Officers Community – a globally diverse group of 40 active senior strategy leaders representing companies across the technology stack – for their insights and experiences adopting the Forum’s Digital Trust Framework principles.
Here’s what some of them had to say on promoting security and reliability.
Mark Patterson, EVP and Chief Strategy Officer, Cisco
In today’s increasingly complex threat landscape, even advanced companies struggle to keep up with modern cyber-attacks. Our recent Cybersecurity Readiness Index revealed that only 3% of organizations globally have the "mature" level of readiness needed to be resilient against today’s cybersecurity risks. With the rapid evolution of technologies like AI, cybersecurity must be pervasive throughout your infrastructure to ensure resilience against these threats.
The Cisco network has 31 million networking devices that connect with 1 billion clients every month and our security suite observes over 800 billion events per day. We build solutions throughout a company’s network and provide the tools to monitor and mitigate threats. AI-powered threats require AI-powered defense. Protecting high-value data, AI models, and toolchains is imperative to ensure business resilience. As more of our enterprise customers lean into the AI era and the promise it brings, we’re focused on providing comprehensive security controls for the AI stack as well as enhancing our portfolio with AI capabilities.
The biggest challenge for many organizations is time. Hackers and bad actors work around the clock to infiltrate networks, so we must be faster and smarter to stay ahead of them. For instance, AI-powered capabilities like Hypershield can reduce the time from vulnerability announcement to mitigation from 45 days to minutes. We’re also investing in research into technologies on the horizon, such as quantum networking, which will likely bring new threats and security needs. Securing the tech stack is more critical than ever as threats become more sophisticated and our most sensitive data grows more vulnerable.
Security is truly a team sport, and partnerships across the entire ecosystem are key. Trust and partnerships, including with our competitors, are essential for ensuring security across the digital global economy and critical infrastructure.
Harrison Lung, Group Chief Strategy Officer, e&
As a global technology group in diverse markets, we are committed to maintaining and strengthening the trust our customers have in us. We achieve this by integrating security and reliability into the core of our AI development and deployment.
Our AI is developed ethically with rigorous frameworks, strong data privacy, and careful use case evaluation. We maintain a use case repository for continuous learning and improvement and prioritise data protection and transparency, adhering to regulations and employing robust measures. We're committed to ensuring AI's positive impact on society and are actively involved in developing ethical guidelines and safeguards to address potential risks.
In the UAE, we’re privileged to have a regulatory environment that fosters the growth and responsible adoption of AI. The country’s new international AI policy is centred on six principles: advancement, cooperation, community, ethics, sustainability and security. The AI Security Policy, meanwhile, seeks to enhance confidence in AI solutions and technologies, stimulate their development, and mitigate cybersecurity risks.
We actively participate in shaping the AI landscape and collaborate with public and private sector partners to ensure the responsible and secure adoption of AI technologies, aligning with the UAE's policies. Through collaborations with public and private sector partners in the UAE, we've established initiatives like the Mobile Security Operation Centre (MSOC) to combat cyber threats effectively.
Mikael Bäck, VP and Corporate Officer, Ericsson
Mobile networks are becoming a platform for innovation, serving business, government and society at large in a sustainable way. As pillars of digital transformation, mobile networks demand stringent digital security and resilience. This is essential to ensure the continued operation of our societies and economies, which are becoming increasingly reliant on digital technologies, and it reinforces trust in the ongoing digital transition.
Addressing security risks holistically calls for a comprehensive trust and security framework. This framework integrates four key processes: telecommunication standardization, vendor product development, deployment, and operations. When implemented together, these processes form the security posture of the networks we deploy.
Ericsson's Security Reliability Model (SRM) stands at the heart of our approach to product security and the vendor product development process. It incorporates security and privacy considerations into every phase of the product life cycle, enabling us to effectively manage threats and vulnerabilities. This risk-based strategy, tailored to the specific environment and the ever-changing landscape of technology, empowers us to navigate rapid tech advancements and adapt to evolving global laws.
Over 5,000 dedicated Security Masters and Champions bring the SRM to life, conducting tasks such as automated vulnerability analysis, secure coding, and fuzz-testing protocols. We balance a comprehensive top-down strategy with the practical expertise of our on-the-ground professionals. Our model addresses the security and privacy aspects of our products, enabling our customers to operate them securely and in compliance with relevant privacy laws and regulations.
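Fuzz testing, one of the practices named here, feeds large volumes of random or mutated inputs to a component and treats any crash or contract violation as a potential vulnerability. The sketch below is a generic illustration of the idea; the `parse_config_message` target is hypothetical, not an Ericsson interface:

```python
import random

def parse_config_message(raw: bytes) -> dict:
    """Hypothetical component under test: parses 'key=value;' pairs."""
    result = {}
    for pair in raw.decode("utf-8", errors="replace").split(";"):
        if "=" in pair:
            key, value = pair.split("=", 1)
            result[key.strip()] = value.strip()
    return result

def random_input(max_len: int = 256) -> bytes:
    """Produce a random byte string of random length."""
    return bytes(random.randint(0, 255) for _ in range(random.randint(0, max_len)))

def fuzz(iterations: int = 10_000) -> None:
    """Feed random inputs to the parser; any unhandled exception is a finding."""
    for i in range(iterations):
        data = random_input()
        try:
            parse_config_message(data)
        except Exception as exc:  # deliberately broad: we want every crash
            print(f"Crash on iteration {i}: {exc!r} with input {data!r}")

if __name__ == "__main__":
    fuzz()
```

Production fuzzing tools such as AFL, libFuzzer or Google's atheris replace this naive random generator with coverage-guided mutation, but the contract being checked is the same.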
Eric Loeb, EVP Government Affairs, Salesforce
We earn the trust of our stakeholders through security, compliance, privacy, and performance. We’re committed to transparency, and our Trust site provides real-time, proactive updates on product status, security issues, upcoming maintenance, and more.
Our stakeholders know their data is theirs – to be accessed when, where, and how they intend. This ethos extends from our customer relationship management (CRM) software to Agentforce, our new suite of autonomous AI agents that augments employees.
We advise over 150,000 customers, including public sector organizations, highly regulated industries and most of the Fortune 500, empowering them with the tools and resources they need to run their businesses safely. For example, Salesforce requires multi-factor authentication (MFA) when accessing our products, adding an extra layer of protection against cyberthreats like phishing attacks.
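For readers unfamiliar with the mechanics behind MFA codes, the sketch below shows how time-based one-time passwords (TOTP, RFC 6238) are generally computed, using only the Python standard library. It is a generic illustration of the standard, not a description of Salesforce's implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def _hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226) for a given counter value."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret_b32: str, interval: int = 30) -> str:
    """Time-based OTP (RFC 6238): the counter is the current 30-second window."""
    key = base64.b32decode(secret_b32, casefold=True)
    return _hotp(key, int(time.time()) // interval)

def verify(secret_b32: str, submitted: str, window: int = 1) -> bool:
    """Accept the current code or an adjacent window to tolerate clock skew."""
    key = base64.b32decode(secret_b32, casefold=True)
    step = int(time.time()) // 30
    return any(hmac.compare_digest(_hotp(key, step + d), submitted)
               for d in range(-window, window + 1))

# The server and the authenticator app share this secret once, at enrolment.
shared_secret = "JBSWY3DPEHPK3PXP"
code = totp(shared_secret)                    # what the authenticator app displays
print(code, verify(shared_secret, code))      # prints the code and True
```

The second factor helps because an attacker who phishes a password still lacks the shared secret needed to produce a valid current code.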
In the era of agentic AI, data security and privacy are more important than ever. But realizing the benefits of AI – where autonomous agents supercharge worker productivity and create new opportunities everywhere – hinges on one critical element: trust. Salesforce’s Einstein Trust Layer protects customer data and promotes the responsible use of AI across the Salesforce ecosystem. For example, features like zero data retention ensure customer data isn’t retained by third-party LLMs for training purposes or product improvements, and no human being at third-party providers looks at data sent to their LLM.
Sean Morton, SVP Strategy & Services, Trellix
As a leader in cybersecurity, we understand the importance of security and reliability. Our customers entrust us with their most sensitive data, and we are committed to ensuring their trust is well-placed.
These principles guided us in developing our own GenAI technology, Trellix Wise, which streamlines customer security operations using our security platform, from detection to investigation and response. It follows strict coding practices, from planning and design to development, testing, and deployment.
We designed it with the core principle that customers’ data remains within their private environment and is not used to train models beyond that enclave. A strict feedback and guardrail framework governs its decision-making, aligning outputs with expected behaviours. Additionally, we employ an automated evaluation framework that tests the same use cases against multiple models to ensure we always use the best-performing, most accurate model.
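That evaluation approach, running the same use cases against multiple candidate models and keeping the one that scores best, can be sketched generically as below. The stand-in models and the keyword-overlap scoring rule are illustrative assumptions, not Trellix's internal framework:

```python
from typing import Callable, Dict, List, Tuple

# Each "model" is just a callable from prompt to answer; in practice these would
# wrap different LLM endpoints or model versions.
Model = Callable[[str], str]

def score(answer: str, expected: str) -> float:
    """Toy scoring rule: fraction of expected keywords present in the answer."""
    expected_tokens = set(expected.lower().split())
    answer_tokens = set(answer.lower().split())
    return len(expected_tokens & answer_tokens) / max(len(expected_tokens), 1)

def evaluate(models: Dict[str, Model],
             cases: List[Tuple[str, str]]) -> Dict[str, float]:
    """Run every (prompt, expected) use case against every model; average the scores."""
    return {name: sum(score(model(prompt), expected)
                      for prompt, expected in cases) / len(cases)
            for name, model in models.items()}

def pick_best(models: Dict[str, Model], cases: List[Tuple[str, str]]) -> str:
    """Select the model with the highest average score on the shared test set."""
    results = evaluate(models, cases)
    return max(results, key=results.get)

# Hypothetical usage: two stand-in models and a tiny shared test set.
models = {
    "model-a": lambda p: "suspicious login from unknown ip, recommend password reset",
    "model-b": lambda p: "no action needed",
}
cases = [("Summarise alert 42", "suspicious login recommend password reset")]
print(evaluate(models, cases), "->", pick_best(models, cases))
```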
As with any GenAI system, challenges remain. For example, the industry needs a robust, easy way to remove sensitive data subsets from trained models to comply with privacy requests and regulations like GDPR. Nonetheless, we are encouraged and excited to continue investing in GenAI to strengthen the cybersecurity community.
Ajay Bhaskar, Chief Strategy and Transformation Officer, Wipro
Our approach to responsible AI involves four pillars: individual, social, technical, and environmental. The technical level is where we focus on building AI systems that are robust and safe and that protect both personal and company data. To that end, we have established a centre of excellence that is responsible for embedding security, privacy, and trust principles into the design stage. The goal is to identify potential threats and map out the threat criteria before our solutions go into production. Once the solution is in use, we continually evaluate the performance of models against the set criteria and monitor against model drifting and other security risks.
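Drift monitoring of the kind described above usually means comparing the distribution of recent model inputs or scores against a reference window and alerting when they diverge. The following is a minimal sketch using a two-sample Kolmogorov-Smirnov test; the data and threshold are illustrative, and this is a generic technique rather than Wipro's specific tooling:

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference: np.ndarray,
                   recent: np.ndarray,
                   alpha: float = 0.01) -> bool:
    """Flag drift when recent scores are unlikely to come from the reference distribution."""
    statistic, p_value = ks_2samp(reference, recent)
    return p_value < alpha

rng = np.random.default_rng(seed=0)

# Reference window: model confidence scores collected at validation time.
reference_scores = rng.beta(8, 2, size=5_000)   # mostly high-confidence predictions

# Recent production window: a shift toward lower confidence suggests drift.
recent_scores = rng.beta(5, 3, size=1_000)

print("Drift detected:", drift_detected(reference_scores, recent_scores))
```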
The Wipro Enterprise Generative AI framework, or WeGA, brings these principles to life by integrating governance and security layers into its AI solutions. For example, we recently deployed a scalable, GenAI-based chatbot solution for a US-based health insurer. It indexes thousands of plan documents and provides instant answers to complex questions. The safety guardrails embedded in the solution ensure safe and responsible AI, and the model is fine-tuned to deliver personalized and empathetic responses.
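The core pattern behind such a chatbot, indexing a document corpus and retrieving the most relevant passages to ground each generated answer, can be sketched with standard retrieval tooling. The TF-IDF retriever below is a simplified stand-in; the WeGA deployment is not publicly documented at this level of detail, so the documents and function names here are illustrative assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in corpus: the real deployment would index thousands of plan documents.
documents = [
    "Plan A covers emergency room visits with a $250 copay after deductible.",
    "Plan B includes dental cleanings twice per year at no additional cost.",
    "Out-of-network specialist visits under Plan A are reimbursed at 60%.",
]

# Index step: build a TF-IDF matrix over the corpus once, up front.
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the passages most similar to the question; these would then be
    passed to the generative model as grounding context for its answer."""
    query_vec = vectorizer.transform([question])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in ranked]

for passage in retrieve("How much is the ER copay on Plan A?"):
    print(passage)
```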
Looking ahead, the biggest challenge for organizations will be to make security and reliability a global corporate effort. Building these principles into the design stage will be critical in making safety and reliability a core part of innovation.
Tom Crowfoot
December 11, 2024