How identity fraud is changing in the age of AI

Identity fraud takes on new forms as cyber criminals target organizations and individuals using AI tools. Image: Markus Spiske/Unsplash
- Identity fraud volumes are on the decline around the world, but the complexity of attacks is rising sharply, driven by access to AI tools.
- Amid the rise in AI and full digitization of organizations, identity fraud now affects all sectors and demographics, with payment-method fraud surging.
- Regional trends diverge, highlighting the need for adaptive, AI-aware verification and stronger public-private cooperation.
A new analysis of global verification data suggests that identity fraud is undergoing a structural transformation. Sumsub’s Identity Fraud Report 2025–2026 offers a detailed view into how identity crime has evolved over the past two years.
Rather than a simple increase or decrease in fraud, patterns point to a redistribution of risk; shifts in technique, sophistication and intent are being driven in large part by the growing accessibility of AI tools.
A new phase: Fewer attacks, but far more complex
The transition from 2024 to 2025 marks an inflection point. Last year, the spread of “fraud-as-a-service” platforms and inexpensive toolkits lowered the barrier to entry for would-be attackers. By 2025, that phase had matured into a model where fewer actors were involved, but their operations had become more coordinated and technologically advanced.
This “sophistication shift” is reflected in the numbers. Multi-step fraud attacks—schemes involving several coordinated stages—rose by 180% year-on-year in 2025. At the same time, the overall fraud rate edged slightly downward, from 2.6% to 2.2%. The decline suggests that superficial or opportunistic attempts are falling away, while more organized and deliberate activity persists—meaning that even in the fraud space, it is all about quality over quantity.
A shared experience across sectors and demographics
From a public-policy perspective, one of the most striking findings of the report is how widely fraud is now experienced across both institutional and individual levels. The data suggests that identity crime has reached a point where its impacts are no longer confined to high-risk industries or vulnerable populations. Instead, it has become a mainstream, broadly distributed societal issue.
The most dramatic shift identified in the report is that payment-method fraud now surpasses ID document fraud, with a fraud rate of 6.6%. This is not an isolated anomaly but a signal that criminals are no longer focused merely on account creation or bypassing identity checks. Instead, they are embedding themselves into transactional flows to achieve instant monetisation.
Among surveyed companies, 40% reported being targeted by fraud in 2025. These cases spanned organizations of varying sizes and regulatory maturity, from digital-first platforms to traditional services undergoing digital transformation. Meanwhile, 52% of end users reported experiencing fraud attempts or successful fraud within the same period. This alignment between institutional and individual experience highlights the extent to which fraud has become an embedded feature of the modern digital environment.
For policymakers, the trend is significant. It indicates that even as regulatory regimes strengthen, particularly in financial services and payments, fraud has diffused into adjacent domains where oversight may be uneven. These are sectors where users are less likely to expect formal identity verification and where service providers may lack established compliance frameworks.
At the same time, the survey shows that 75% of respondents expect fraud to become more AI-driven in the near term. This expectation is shared across age groups, regions and levels of digital literacy, suggesting a broad public understanding that the tools enabling fraud are advancing faster than the average user’s ability to recognise them.
For governments, this growing exposure is a warning sign: they need to work more closely with the private sector. That means running public-awareness campaigns so people can spot manipulation, aligning standards for identity verification and creating privacy-respecting ways to share data so cross-platform attack patterns become visible sooner.
Regional differences reveal emerging local pressures
The regional findings from the report reveal a fraud landscape that is far from uniform. Its scale and root causes shift from region to region, shaped by local regulations, economic realities, levels of digital adoption and how active or organised local fraud networks are.
Europe and North America both recorded declines in fraud rates in 2025 (-14.6% and -5.5%, respectively). These regions benefit from relatively mature regulatory ecosystems, established enforcement mechanisms and higher adoption of strong verification standards. Yet even where rates are falling, the sophistication of attacks is rising—the “fewer but smarter” trend that defines this year’s dataset. For regulators, this suggests that compliance frameworks remain effective at deterring opportunistic fraud but must evolve to address more advanced cross-channel manipulation.
Asia-Pacific (APAC) presents a different picture, with fraud rising 16.4% year-on-year. One of the clearest signals from the data involves money-mule recruitment; one in four respondents reported being targeted. While roughly 80% recognise the term “money mule,” many lack clarity on the legal and financial implications of participating in such schemes. This awareness gap is consequential. Mule networks are the backbone of many large regional and cross-border fraud schemes, helping criminals move illicit money across borders. Governments could seriously weaken these networks through targeted awareness campaigns—especially for younger jobseekers and migrant workers, who are often the ones recruited.
The Middle East, with a 19.8% increase, and Africa, with a 9.3% increase, show signs of rapid digital adoption without parallel scaling of verification infrastructure. In many markets, the expansion of digital banking, e-commerce and fintech outpaces the rollout of robust identity frameworks, creating openings that professionalised fraud groups are quick to exploit. Capacity-building, training and regional cooperation may be especially impactful in these environments.
Countries such as Argentina (3.8%), Latvia (3.7%) and Pakistan (5.9%) illustrate another trend: elevated fraud rates often coexist with complex economic conditions and high mobile-first adoption. These factors can amplify fraud risk when identity systems are fragmented or when informal employment structures create fertile ground for recruitment into organised fraud networks.
Taken together, these regional variations highlight the importance of adaptive, context-sensitive regulatory approaches. A single global model is unlikely to be effective. Instead, what seems to work best are frameworks that balance local realities with cross-border alignment, especially as fraud networks are increasingly fluid, moving across jurisdictions and digital platforms with ease.
AI is reshaping both offence and defence
A clear theme running through the data is the rising influence of AI on the fraud ecosystem. In 2025, AI-assisted document forgery, recorded at 0% the previous year, rose to 2% of all fake documents identified. As generative tools improve, the threshold for producing convincing fabricated content continues to decline.
The report also shows the rise of AI fraud agents. These agents combine generative AI, automation frameworks and reinforcement learning; they create synthetic identities, interact with verification systems in real time and adjust behaviour based on outcomes, which makes them more sophisticated and harder to detect.
Trajectories indicate these agents could become mainstream within 18 months, particularly in organised fraud networks.
Researchers at Anthropic recently uncovered a state-linked espionage campaign that used autonomous AI agents for reconnaissance, phishing and network infiltration with minimal human oversight.
While that case was about espionage, the same technology stack could be repurposed for identity fraud, signalling how rapidly agentic AI is evolving from lab curiosity to operational threat.
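To make the point about outcome-driven adaptation concrete, below is a minimal, hypothetical sketch of the defensive flip side: a heuristic that flags verification sessions in which rejections are usually followed by a change of document or device, a retry pattern more typical of automated agents than of genuine users resubmitting the same document. The session schema, field names and threshold are assumptions made for illustration; they are not drawn from the report or from any specific vendor's system.

```python
from dataclasses import dataclass

@dataclass
class Attempt:
    """One verification attempt within a session (hypothetical schema)."""
    document_hash: str  # fingerprint of the submitted document
    device_id: str      # device the attempt came from
    accepted: bool      # whether the verification check passed

def looks_agent_driven(attempts: list[Attempt], min_attempts: int = 3) -> bool:
    """Flag sessions where most rejections are followed by a change of tactic
    (new document or new device), rather than a straightforward retry."""
    if len(attempts) < min_attempts:
        return False
    rejections = 0
    tactic_changes = 0
    for prev, curr in zip(attempts, attempts[1:]):
        if not prev.accepted:
            rejections += 1
            if (curr.document_hash != prev.document_hash
                    or curr.device_id != prev.device_id):
                tactic_changes += 1
    return rejections > 0 and tactic_changes / rejections >= 0.8

# Example: every rejection is followed by a fresh document, so the session is flagged.
session = [
    Attempt("doc-a", "dev-1", accepted=False),
    Attempt("doc-b", "dev-1", accepted=False),
    Attempt("doc-c", "dev-2", accepted=True),
]
print(looks_agent_driven(session))  # True
```

A static rule set would treat each of these attempts as an independent, plausible-looking submission; it is the adaptive pattern across attempts that gives the agent away.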
Preparing systems for the next phase
The data points to a future where fraud risk is less about volume and more about adaptability. Even in regions where rates are declining, the underlying techniques are becoming more intricate. As AI’s role expands, both legitimate and malicious agents may soon need their own verification pathways.
Our findings reveal, above all, the need for systems that evolve as quickly as the threats they are designed to counter. Fraud prevention is becoming less about individual checkpoints and more about continuous, contextual assessment, drawing together behavioural signals, device data and real-time analysis into a single picture.
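As a rough sketch of what "continuous, contextual assessment" can mean in practice, the example below blends a handful of hypothetical signals (behavioural, device and transactional) into a single running risk score that is re-evaluated on every event rather than at a one-off checkpoint. The signal names, weights and thresholds are invented for illustration and do not come from the report.

```python
from typing import Mapping

# Hypothetical risk signals, each assumed to be normalised to the range [0, 1].
SIGNAL_WEIGHTS: Mapping[str, float] = {
    "typing_cadence_anomaly": 0.30,  # behavioural signal
    "device_reputation_risk": 0.25,  # device data
    "geo_velocity_mismatch": 0.25,   # real-time context
    "payment_pattern_shift": 0.20,   # transactional signal
}

def risk_score(signals: Mapping[str, float]) -> float:
    """Blend the available signals into one score in [0, 1].

    Missing signals default to 0, so the score can be recomputed on every
    new event as more context arrives."""
    return sum(weight * signals.get(name, 0.0)
               for name, weight in SIGNAL_WEIGHTS.items())

def decide(score: float) -> str:
    """Map the blended score onto an action (thresholds are illustrative)."""
    if score >= 0.7:
        return "block"
    if score >= 0.4:
        return "step-up verification"
    return "allow"

# Example: a risky device, a geolocation mismatch and an unusual payment pattern.
signals = {"device_reputation_risk": 0.9,
           "geo_velocity_mismatch": 0.9,
           "payment_pattern_shift": 0.8}
print(decide(risk_score(signals)))  # "step-up verification"
```

The specific numbers matter far less than the shape of the approach: a single picture, updated continuously, instead of isolated pass/fail gates.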