AI is interacting with the real world. What does this mean for cybersecurity?

Robots, potentially powered by AI, mean real-world consequences on top of traditional cyberthreats.
Daniela Rus
Director, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology (MIT)

- As AI systems begin to act in the physical world, cyber failures become physical failures that can directly affect human safety, operations and trust.
- With cybersecurity stakes rising as a result, we are scaling autonomy and capability faster than the safety foundations they rest on.
- Physical AI adoption will depend on secure-by-design systems, harmonized standards and greater investment in safety and resilience.
Physical AI enables autonomous systems, like robots and self-driving cars, to perceive and act in the real world. Unlike embodied AI, which centres solely on the intelligence collected when a hardware "body" interacts with its environment, physical AI is the broader framework that allows software "brains" to control various physical bodies, including robots, robotic arms, drones and autonomous vehicles, more intelligently. The field is new enough that it lacks a standardized name or a mature technology stack, yet many believe it represents the next stage in AI development.
Researchers believe the day is fast approaching when general-purpose physical AI systems may finally be commercially viable across multiple domains rather than just in structured industrial settings. Adding to this enthusiasm, analysts at Morgan Stanley say the market for humanoid robots will grow to reach $5 trillion by 2050. As this convergence matures, it brings a new class of real-world consequences on top of traditional cyberthreats. Without foundational trust, safety and security designed in from the start, the very autonomy that drives this potential growth becomes its greatest vulnerability.
Why does physical AI change the game?
Cybersecurity has traditionally focused on protecting digital assets within a clear and limited scope. This paradigm is undergoing a fundamental shift as digital attacks yield real-world consequences for critical infrastructure and healthcare. Despite massive investment in cybersecurity, the frequency and sophistication of cyberattacks continue to climb.
As AI is increasingly embedded in physical systems (in autonomous vehicles, robotic logistics platforms, surgical instruments, and critical infrastructure), the vulnerabilities that the cybersecurity community has long struggled to mitigate are acquiring a material dimension with direct implications for human safety. By bridging the gap between code and kinetic action, physical AI is translating traditional cyberattacks into direct, life-critical safety risks.
Case 1: Autonomous vehicle controller logic attacks
The autonomous vehicle provides a particularly instructive case. Modern automobiles already depend on dozens of electronic control units communicating over internal networks, and researchers have repeatedly demonstrated the feasibility of remote access to braking, steering and acceleration subsystems.
Physical AI extends this vulnerability significantly. In fully or partly autonomous vehicles, the entire operational loop – perception, planning and actuation – is mediated by software. A compromise at any point in this chain produces an unpredictable multi-tonne machine operating at highway velocity. One can envision an attack that subtly inverts controller logic: The vehicle is expected to accelerate, but decelerates instead. At low speeds, such an inversion is disorienting. At high speeds, amid dense traffic, the potential for catastrophic harm is large.
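To make this failure mode concrete, here is a minimal, purely illustrative sketch in Python. All names and numbers are hypothetical and not drawn from any real vehicle stack; the point is simply that a single tampered constant can invert a control loop:

```python
# Illustrative sketch (not real vehicle code): a proportional speed
# controller in which one compromised constant inverts actuation.

def speed_controller(target_speed: float, current_speed: float,
                     gain: float = 0.5) -> float:
    """Return a drive command: positive = accelerate, negative = brake."""
    error = target_speed - current_speed
    return gain * error

# Nominal behaviour: the vehicle is below its target speed, so it accelerates.
print(speed_controller(30.0, 20.0))                      # +5.0 -> accelerate

# Attack: flipping the sign of one parameter inverts the loop, so a demand
# to accelerate produces braking instead (and vice versa).
COMPROMISED_GAIN = -0.5
print(speed_controller(30.0, 20.0, COMPROMISED_GAIN))    # -5.0 -> brake
```

The unsettling property of such an attack is its subtlety: every component still "works", yet the system's behaviour is the opposite of what the planner intended.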
Case 2: Supply-chain disruptions
The implications for supply-chain integrity are different. Physical AI systems are increasingly deployed in sorting, packaging and distribution operations. A compromised warehouse robot could mislabel items, redirect shipments or introduce contaminated goods into a distribution pipeline. Consider a pharmaceutical fulfillment facility in which an AI-driven labelling and picking system has been subtly reprogrammed to place incorrect medications in correctly labelled containers, at industrial scale, before the error is detected. The downstream consequences, which include adverse patient outcomes and public-health emergencies, are qualitatively distinct from and far graver than those associated with conventional data breaches.
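One widely used defence against this kind of silent command tampering (an assumed mitigation, not one prescribed by this article) is to authenticate every pick and label instruction before the robot acts on it. A minimal sketch using Python's standard hmac module, with hypothetical message fields:

```python
# Illustrative sketch: authenticate pick/label instructions so a reprogrammed
# controller cannot silently substitute medications. Field names and the key
# are hypothetical placeholders.
import hashlib
import hmac
import json

SHARED_KEY = b"provisioned-at-manufacture"   # hypothetical device key

def sign(instruction: dict) -> bytes:
    """Compute an HMAC-SHA256 tag over a canonical encoding of the order."""
    payload = json.dumps(instruction, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify(instruction: dict, tag: bytes) -> bool:
    """Constant-time check that the instruction matches its tag."""
    return hmac.compare_digest(sign(instruction), tag)

order = {"bin": "A-17", "drug": "amoxicillin 500mg", "label": "RX-0042"}
tag = sign(order)

tampered = dict(order, drug="methotrexate 2.5mg")   # silent substitution
print(verify(order, tag))      # True  -> safe to execute the pick
print(verify(tampered, tag))   # False -> halt and alert instead of acting
```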
Case 3: Perception system attacks
Another category of risk concerns integrity attacks on perception systems. Physical AI relies on sensor arrays, like cameras, lidar and radar, to interpret the world. However, adversarial tampering, such as applying visual "perturbations" to traffic signs, can trick these systems into misidentifying a stop sign as a speed-limit sign. Beyond the road, this vulnerability extends to airport security, agriculture and industrial inspection, where manipulated sensor data causes AI to execute actions based on a distorted reality.
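The mechanics are easiest to see on a toy model. The sketch below uses a deliberately simplified linear "classifier" with synthetic data, not a real perception stack, and applies an FGSM-style gradient step to show how a small, targeted perturbation can flip a model's decision:

```python
# Illustrative sketch of a gradient-based adversarial perturbation against a
# toy linear sign classifier. Real perception stacks are deep networks, but
# the principle is the same: a small, targeted input change flips the output.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)   # stand-in for a trained model's weights
x = rng.normal(size=64)   # stand-in for a "stop sign" image patch

def predict(v: np.ndarray) -> str:
    # score > 0 -> "stop sign", otherwise "speed limit"
    return "stop sign" if w @ v > 0 else "speed limit"

if w @ x <= 0:            # make sure the clean patch reads as a stop sign
    x = -x

# FGSM-style step: move each pixel slightly against the score's gradient
# (for a linear model the gradient is just w). Choose the smallest uniform
# budget eps that flips the decision, to show how little change is needed.
eps = (w @ x + 1.0) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print(predict(x), "->", predict(x_adv), f"(per-pixel change: {eps:.3f})")
# "stop sign" -> "speed limit", despite a tiny change to every pixel
```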
The governance gap for physical AI
As is often the case, innovation has outpaced the global standards meant to govern it, resulting in a fragmented landscape of digital security and physical safety standards.
Currently, physical AI systems are governed by a patchwork of over 30 safety and cybersecurity standards spanning automotive, industrial control, medical devices and cross-domain AI governance frameworks. Most were designed for predictable, non-AI systems and struggle with the "black-box" nature of evolving technology. Manufacturers also face overlapping compliance regimes, such as the EU AI Act and the EU Machinery Regulation, which often lack a common technical map.
To bridge this gap, physical AI requires a shift from static, reactive governance to a more coordinated "runtime" model. Security must be integrated at the architectural level, from silicon to software, utilizing hardware-enforced limits and mechanical overrides that operate independently of AI decision-making. Regulatory bodies must harmonize standards to mandate adversarial resilience as a foundational requirement. And organizations must include continuous monitoring and stress-testing that accounts for real-world, kinetic consequences.
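As an illustration of what such hardware-enforced limits might look like in logic, the sketch below shows a runtime safety envelope that bounds whatever a possibly compromised planner requests. This is an assumed design for exposition only; in practice this logic would live in independent hardware or a certified safety controller, not Python:

```python
# Illustrative sketch: a safety envelope sits between an AI planner and the
# actuators, enforcing fixed physical limits regardless of what the planner
# (possibly compromised) asks for. All limits here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class SafetyEnvelope:
    max_speed: float   # hard ceiling, m/s
    max_accel: float   # hard ceiling, m/s^2

    def clamp(self, requested_accel: float, current_speed: float) -> float:
        """Bound the planner's command to the physical safety envelope."""
        accel = max(-self.max_accel, min(self.max_accel, requested_accel))
        if current_speed >= self.max_speed and accel > 0:
            accel = 0.0   # refuse to accelerate past the hard speed limit
        return accel

envelope = SafetyEnvelope(max_speed=15.0, max_accel=2.0)

# A compromised planner demands an extreme command; the envelope bounds it.
print(envelope.clamp(requested_accel=9.9, current_speed=14.0))  # -> 2.0
print(envelope.clamp(requested_accel=9.9, current_speed=16.0))  # -> 0.0
```

The design point is independence: because the envelope does not share code, models or update channels with the AI planner, compromising the planner does not compromise the limits.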
Physical AI promises enormous benefits: safer roads, more efficient factories, better care. Every one of these promises depends on the assumption that these systems will perform as intended. Cyber adversaries who can break that assumption break the promise.
Today’s allocation of funding does not reflect the scale of the risks. Investment flows into building intelligent systems, but not proportionally into ensuring that they are secure, resilient and safe.
The window to get this right is small. Before physical AI is widely deployed, we need the security foundation that will make it trustworthy, and the trust architecture that will unlock the market’s projected growth. Protecting human safety cannot be retrofitted. It is a prerequisite and a shared responsibility: cybersecurity and safety are a common good.