Why the global financial system needs high-quality data it can trust

The inside of the London Stock Exchange. Trust is critical to financial data. Image: London Stock Exchange Group

David Schwimmer
Chief Executive Officer, London Stock Exchange Group (LSEG)
This article is part of: World Economic Forum Annual Meeting
  • In artificial intelligence (AI), the value of data isn’t just in its volume but in its integrity and trustworthiness – poor data leads to unreliable results and AI risks such as hallucinations and bias.
  • Data transparency, security and integrity – such as “watermarking” for financial data – are critical for compliance, customer confidence and effective AI deployment.
  • Industry-wide coordination, standardized definitions of “data trust”, and interoperable regulations are essential to fostering reliable AI systems and scaling global financial innovation.

More than a century ago, reels of ticker tape were considered the cutting edge of real-time data technology. Today, digital data serves as the lifeblood of the global financial system. However, without pinpoint accuracy and trust in that data, we risk detrimental consequences for the whole economy.

As a global data and analytics provider, LSEG (London Stock Exchange Group) delivers around 300 billion data messages to customers across 190 markets daily, including 7.3 million price updates per second.

We are also seeing how AI is transforming finance. It’s supercharging productivity internally and in our customers’ products, enhancing financial workflows by boosting efficiency, enabling more informed decisions, and strengthening the customer experience.

As the financial services sector continues to explore the possibilities of AI, there is an enormous appetite for data. This continues to grow: customer demand for our data has risen by around 40% per year since 2019.

But without the right data, even the best algorithms can deliver mediocre or, worse, misinformed results. Poor-quality data increases the risk of AI hallucinations, model drift and unintended bias. The growing complexity of contracts and rights management in this field creates inherent challenges in avoiding licensing or contractual breaches.

Building on data integrity and digital rights

There are great new opportunities for processing large unstructured datasets through generative artificial intelligence (GenAI) models, but their worth is limited without trustworthy and licensed data. Data in GenAI isn’t just a quantity game; it’s a quality game.

Many businesses are critically considering how to embrace AI opportunities with high-quality data. At LSEG, we’ve developed a multi-layered strategy that may help guide others in the financial services industry.

The first layer is ensuring data integrity and relevance, which are critical requirements in large language models (LLMs). “GPT-ready” datasets – curated and validated by trusted data providers – are in high demand, and we expect that demand will grow as more businesses explore GenAI’s uses.
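As a purely illustrative sketch of what “curated and validated” can mean in practice, the Python below applies basic integrity checks to a price record before it is admitted to an LLM pipeline. The record fields, names and rules are assumptions made for the example, not a real LSEG schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PriceRecord:
    """A hypothetical record shape; not a real provider schema."""
    instrument_id: str   # e.g. an ISIN
    price: float
    currency: str        # ISO 4217 code, e.g. "GBP"
    timestamp: datetime  # assumed timezone-aware
    source: str          # originating venue or feed

def is_llm_ready(record: PriceRecord) -> bool:
    """Basic integrity checks a curated dataset might enforce before
    a record enters an LLM pipeline. Illustrative only."""
    return all([
        bool(record.instrument_id),                      # identifier present
        record.price > 0,                                # plausible price
        len(record.currency) == 3,                       # valid currency-code length
        record.timestamp <= datetime.now(timezone.utc),  # no future timestamps
        bool(record.source),                             # provenance recorded
    ])
```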

High-integrity data acts as a safety net when working with LLMs and other AI applications.

The second layer is digital rights management. Customers expect solutions that verify which sources can or cannot be used in LLMs, govern responsible AI policies, protect against IP infringement and differentiate usage rights.
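A minimal sketch of how such a rights check might look in code, assuming a hypothetical license schema with explicit usage categories (none of these names come from a real rights-management system):

```python
from dataclasses import dataclass
from enum import Enum

class Use(Enum):
    """Hypothetical usage categories; real licenses are far more granular."""
    INTERNAL_ANALYTICS = "internal_analytics"
    LLM_TRAINING = "llm_training"
    REDISTRIBUTION = "redistribution"

@dataclass(frozen=True)
class DatasetLicense:
    dataset_id: str
    licensor: str
    permitted_uses: frozenset

def approved_for_training(dataset_ids, licenses):
    """Admit only datasets whose license explicitly permits LLM training.

    Anything without a recorded license is excluded by default
    ("deny unless permitted"), guarding against the licensing and
    contractual breaches described above.
    """
    for ds_id in dataset_ids:
        lic = licenses.get(ds_id)
        if lic is not None and Use.LLM_TRAINING in lic.permitted_uses:
            yield ds_id

# Example: only "bond-yields" carries an explicit LLM-training permission.
licenses = {
    "bond-yields": DatasetLicense("bond-yields", "Example Licensor",
                                  frozenset({Use.LLM_TRAINING})),
    "news-wire": DatasetLicense("news-wire", "Example Licensor",
                                frozenset({Use.INTERNAL_ANALYTICS})),
}
print(list(approved_for_training(["bond-yields", "news-wire", "unknown"], licenses)))
# -> ['bond-yields']
```

The deny-by-default stance is the key design choice here: a dataset with no explicit permission is treated as unusable, rather than assumed safe.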

Trust and transparency in financial data

These layers are underpinned by “data trust,” an approach to data that is built on the foundation of information transparency, security, and integrity.

When data informs big decisions, customers need the peace of mind of being able to trace where it comes from and to confirm that it is secure, reliable and able to meet regulatory and compliance standards. Put simply, it’s “watermarking” for financial data.
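The article’s “watermarking” is a metaphor. One concrete mechanism it evokes is cryptographic integrity tagging, sketched below in Python: each record is signed with an HMAC so a consumer can verify its origin and detect tampering. This is an illustrative analogy, not LSEG’s actual approach, and the key name and handling are hypothetical and deliberately simplified.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; in practice this would live in an HSM or KMS.
SECRET_KEY = b"provider-signing-key"

def stamp(record: dict) -> dict:
    """Attach provenance metadata and an integrity tag to a record."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {**record, "provenance": {"publisher": "example-provider",
                                     "integrity_tag": tag}}

def verify(stamped: dict) -> bool:
    """Recompute the tag to confirm the record is unchanged since publication."""
    original = {k: v for k, v in stamped.items() if k != "provenance"}
    payload = json.dumps(original, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stamped["provenance"]["integrity_tag"])

record = stamp({"instrument_id": "GB00EXAMPLE1", "price": 101.25})
assert verify(record)        # intact record verifies
record["price"] = 999.0
assert not verify(record)    # any tampering breaks the tag
```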

All financial services companies must raise the bar on the calibre of their data.

To increase trust in data across the industry, we need greater standardization, coordination, and a stable regulatory environment, underpinned by clear principles on AI’s responsible and ethical use.

The more standardized the industry definition of data trust, the easier it will be to ensure the flow of high-quality data. If the core principles of transparency, security and integrity are applied as the standard for data, we will be able to foster real-time, pinpoint accuracy across the sector.

Laying the ethical groundwork for innovation

The industry should aim for the highest level of transparency so that customers can see what a dataset contains, who owns it, and how it is licensed for use.
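In practice, that transparency could take the form of a machine-readable manifest published alongside each dataset. A hypothetical example follows; every field name is illustrative rather than any industry standard.

```python
# A hypothetical dataset manifest answering: what does it contain,
# who owns it, and how is it licensed for use?
manifest = {
    "dataset_id": "equities-eod-prices-v3",
    "description": "End-of-day equity prices across 190 markets",
    "owner": "Example Data Provider Ltd",
    "license": "commercial; no LLM training without written consent",
    "sources": ["exchange feeds", "contributed prices"],
    "last_updated": "2025-01-15",
}
```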

Regulations such as the European Union’s AI Act and the Digital Operational Resilience Act introduce safeguards, clear accountability and a focus on governance and preparedness in financial services.

Voluntary guidance, including the National Institute of Standards and Technology’s AI Risk Management Framework in the United States, can also help organizations measure and manage risks to AI systems and data.

It’s clear these regulations serve as good starting points for how the financial sector should continue to develop safe and fair AI practices. They have inspired our own Responsible AI Principles at LSEG.

Moving forward, policymakers must recognize the need for high-quality data as we develop the AI-enabled tools of the future.

We support the use of internationally agreed-upon definitions relevant to AI and data. We also need more rigorous parameters for managing intellectual property and digital rights.

The path to global AI regulation

At the same time, regulatory requirements for technology must be more interoperable. The more jurisdiction-specific the rules, the more difficult it is for global companies to scale up quickly.

Divergent requirements force companies to make different business decisions in each jurisdiction, affecting everything from the location of a data centre to the choice of a cloud provider.

As AI technology develops, policymakers should ensure legislation is flexible enough to align with other jurisdictions while remaining relevant for upcoming AI use cases.

None of this will be easy, but businesses in the financial and tech sectors, regulators and consumers can all contribute to this conversation. We will need a wide range of expertise and perspectives as we embrace a technology that will alter our lives.

For AI to meet its potential in addressing the world’s biggest challenges, we must be able to trust the data that’s going into it.
