
Unmasking the AI-powered, remote IT worker scams threatening businesses worldwide

Strengthening defenses requires a layered approach. Image: Unsplash

Brett Winterford
Vice President, Okta Threat Intelligence, Okta
  • AI has enabled a new class of cybercriminals to set their sights on the lucrative opportunities presented by directly infiltrating businesses.
  • Advances in generative AI have empowered fraudsters to exploit the hiring process for in-demand remote technical roles.
  • Organizations today must strengthen their defenses to ensure the integrity of their hiring practices.

Generative artificial intelligence (AI) is changing the cybersecurity landscape by putting enhanced capabilities in the hands of threat actors.

With AI-powered tools, one threat actor can now do the work of several. For example, in Anthropic’s recent analysis of an AI-orchestrated cyber espionage campaign, researchers observed the threat actor using AI to perform 80-90% of the attack with only sporadic human intervention.

Many people have already felt the effects of improved social engineering tactics, which trick a human into divulging compromising information like a password, banking details or other personally identifiable information (PII). Threat actors have also been observed using AI to spoof login pages on the web with the intent to harvest user credentials.

Moreover, AI has enabled a new class of cybercriminals to set their sights on the lucrative opportunities presented by directly infiltrating businesses.

Specifically, the evolution of generative AI has empowered fraudsters to exploit the hiring process for in-demand remote technical roles. Scammers have been observed successfully landing remote IT jobs by using AI tools to build fictitious resumes that paint them as ideal candidates, and deepfake technology to pass screenings and conduct interviews.

The emergence of these AI-powered worker scams has resurfaced some of the underlying challenges of identity security in the AI era. As AI tools continue to improve, and cybercriminals build agentic flows into their operations, organizations must understand how the attack surface has extended into recruitment and onboarding, as well as the role effective identity management plays in strengthening their defenses.

Unpacking the threat

In recent years, the tech sector was the poster child for remote work opportunities. With a high concentration of software engineering and related technical positions - roles that could be performed essentially anywhere - tech companies had the luxury of sourcing talent from around the world. As such, the tech industry became the initial target of these remote worker scams.

State-backed actors have orchestrated the most prominent examples of this ruse to date. Motivated primarily by the need to raise funds for their state, these threat actors target remote jobs for a payday at the expense of unsuspecting businesses.

But the rapid pace of digital innovation has led to a growing number of remote technical positions in industries outside of tech. For example, healthcare organizations have expanded hiring for mobile application development and electronic record-keeping platforms. In financial services, new positions have opened in back-office processing roles like payroll and accounting.

The latest research shows about half of the companies targeted by these attacks weren’t in tech and about one-quarter of all targets were located outside of the United States.

Exploring the tactics

To facilitate these attacks, threat actors are leaning on generative AI tools just about every step of the way. Based on activity observed by Okta Threat Intelligence, here’s what a scammer’s typical path to fraudulent employment might look like.

The attacker starts by creating a fake job posting on an AI-enhanced recruitment platform. It looks similar, or maybe even identical, to a posting from one of their target organizations. As legitimate candidates apply to this fabricated listing, the threat actor studies what real applications look like and trains AI on these submissions to develop their own application for the actual job opening.

After refining the resume, the scammer tests this manufactured persona against applicant tracking software, improving their chances of moving beyond automated screenings used by many recruiting platforms.

Once an application is successful and an interview is scheduled, the threat actor turns toward an AI-based webcam interview review service. By conducting mock interviews through one of these services, they can test the efficacy of deepfake overlays and how large language models (LLMs) respond to challenging technical questions, which helps them to script interview answers.

It’s not clear exactly what proportion of interviews convert to a job offer, but should the fraudster gain employment, they rely heavily on AI-powered chatbots to carry out the day-to-day responsibilities of their job.


Improving defenses

Flexible working arrangements have been established as the new norm for many industries. According to the United States Bureau of Labor Statistics (BLS), the share of Americans teleworking surged to 23% last year - more than 35 million workers.

The reality is that today’s organizations must strengthen their defenses to ensure the integrity of their hiring practices. Businesses can take the following steps to bolster their processes:

1. Tighten screening and recruitment: Human resources and recruiting teams should be trained to identify the subtle red flags associated with fraudulent candidates. Some of the common tells include candidates being swapped out between interview rounds, refusing to turn on their cameras or using an extremely poor internet connection.

Implementing a structured technical and behavioral verification process, such as requiring a live skills demonstration under direct observation, can help teams identify potential fraudsters. Additionally, recruiters should be investigating their candidates’ digital footprints and the legitimacy of their provided work history to ensure samples or projects aren’t cloned from existing profiles.
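
As an illustrative aid for that last point, here is a minimal sketch of one such check: comparing a candidate’s submitted code sample against previously seen or public samples to flag near-duplicates. The sample corpus, similarity threshold and normalization are assumptions for the example, not a vetted plagiarism detector.

```python
# A minimal sketch, assuming a small corpus of previously seen samples.
# SequenceMatcher is a coarse similarity measure; the 0.85 threshold is
# an illustrative assumption, not a tuned detection boundary.
from difflib import SequenceMatcher


def normalize(source: str) -> str:
    """Strip blank lines and surrounding whitespace so trivial edits don't hide reuse."""
    return "\n".join(line.strip() for line in source.splitlines() if line.strip())


def flag_cloned_samples(candidate_sample: str,
                        known_samples: dict[str, str],
                        threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return (source, score) pairs whose similarity exceeds the threshold."""
    hits = []
    normalized_candidate = normalize(candidate_sample)
    for source, sample in known_samples.items():
        score = SequenceMatcher(None, normalized_candidate, normalize(sample)).ratio()
        if score >= threshold:
            hits.append((source, score))
    return sorted(hits, key=lambda hit: hit[1], reverse=True)
```

Anything a check like this flags still warrants human review - high similarity can also mean two honest candidates drew on the same tutorial.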

2. Rigorously verify identities: Organizations need verifiable government ID checks at multiple stages of recruitment and into employment. Third-party services can help authenticate identity documents and academic credentials. To prevent location spoofing, organizations should cross-reference their candidates’ stated locations with technical data like IP addresses, time-zone behavior and payroll information; a simple version of this cross-check is sketched at the end of this step.

The identity verification process shouldn’t disappear after an employee begins onboarding. Organizations should enforce role-based and segregated access controls, defaulting new and contingent workers to least-privilege access until probationary and verification checks have been completed.
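
The following minimal sketch illustrates the location cross-check described above, comparing a candidate’s stated location against IP-derived geolocation and observed time-zone behavior. The record structure, country codes and two-hour tolerance are assumptions for the sketch; a production check would draw on a commercial IP-intelligence feed and payroll data.

```python
# A minimal sketch, assuming IP geolocation results and activity time zones
# have already been collected. The CandidateSignals record and the two-hour
# tolerance are illustrative assumptions, not a production identity check.
from dataclasses import dataclass, field


@dataclass
class CandidateSignals:
    stated_country: str      # country claimed on the application
    ip_country: str          # country resolved from interview/login IPs
    stated_utc_offset: int   # UTC offset implied by the stated location
    observed_utc_offsets: list[int] = field(default_factory=list)  # offsets seen in activity logs


def location_mismatches(signals: CandidateSignals) -> list[str]:
    """Return human-readable discrepancies worth escalating to a recruiter."""
    findings = []
    if signals.ip_country != signals.stated_country:
        findings.append(f"IP geolocation ({signals.ip_country}) does not match "
                        f"stated location ({signals.stated_country})")
    for offset in signals.observed_utc_offsets:
        # Tolerate neighbouring zones and short travel; flag larger gaps.
        if abs(offset - signals.stated_utc_offset) > 2:
            findings.append(f"activity at UTC{offset:+d} is inconsistent with "
                            f"stated UTC{signals.stated_utc_offset:+d}")
    return findings


# Example with hypothetical values: a candidate claiming a US east-coast
# location whose logins resolve to another country ("XX" is a placeholder).
signals = CandidateSignals("US", "XX", -5, observed_utc_offsets=[9, 9, -5])
print(location_mismatches(signals))
```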

3. Monitor for insider threats: Organizations need to implement a dedicated insider risk function to proactively manage potential threats. This often takes the form of a working group spanning team members from HR, legal, security and IT. This function monitors for anomalous access patterns, like large data pulls, off-hours logins from odd geographies or VPNs, and credential sharing - all of which can be indicators of unusual insider activity; a rule-based sketch of this kind of monitoring follows this step.

Organizations must also educate and empower their staff to observe and flag suspicious activities.
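
To make the monitoring idea concrete, here is a minimal rule-based sketch that flags off-hours logins, logins from unexpected countries and large data pulls. The event fields, working-hours window, 500 MB threshold and expected-country list are illustrative assumptions; a real program would consume events from a SIEM and tune its rules per role.

```python
# A minimal sketch, assuming access events arrive as simple dictionaries.
# All field names and thresholds here are illustrative assumptions.
from datetime import datetime, timezone


def insider_risk_flags(event: dict,
                       expected_countries: set[str],
                       max_download_mb: int = 500) -> list[str]:
    """Evaluate one access event against simple insider-risk heuristics."""
    flags = []
    ts = datetime.fromtimestamp(event["timestamp"], tz=timezone.utc)

    # Off-hours access: outside a nominal 06:00-22:00 UTC working window.
    if ts.hour < 6 or ts.hour >= 22:
        flags.append(f"off-hours login at {ts.isoformat()}")

    # Geography: access from a country not associated with this workforce.
    if event["country"] not in expected_countries:
        flags.append(f"login from unexpected country: {event['country']}")

    # Volume: a bulk download beyond the role's normal data footprint.
    if event.get("download_mb", 0) > max_download_mb:
        flags.append(f"large data pull: {event['download_mb']} MB")

    return flags


# Example: a late-night login from an unlisted country with a bulk export
# ("XX" is a placeholder country code).
event = {"timestamp": 1700000000, "country": "XX", "download_mb": 1200}
print(insider_risk_flags(event, expected_countries={"US", "GB"}))
```

Heuristics like these generate leads, not verdicts; the working group described above decides which flags merit investigation.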

Preparing for the AI-powered future

As generative AI continues to shift the playing field of cybersecurity, the hiring pipeline is increasingly becoming a meaningful attack vector.

Because these scams have expanded to more industries, no organization can safely rely on outdated screening processes. Strengthening defenses requires a layered approach to identity security, emphasizing rigorous verification and continuous monitoring to prevent fraudulent hires from becoming critical insider threats.


