Why spatial computing, wearables and robots are AI's next frontier
Spatial computing is going to fundamentally change how we use and interact with AI.
- Recent trademark filings and product launches show AI companies targeting the physical world with wearables and robots.
- This move into spatial computing requires vast amounts of real-world spatial data.
- A new AI frontier is emerging, in which the physical and digital worlds draw closer together through spatial computing.
Artificial intelligence’s (AI) next great leap will be powered by hardware. As the digital and physical worlds merge, frontier technologies like spatial computing, extended reality (XR) and AI-powered wearables are ushering in a new computing paradigm.
Recent trademark filings by OpenAI, the creator of ChatGPT, covering humanoid robots, augmented reality (AR) glasses, virtual reality (VR) headsets, smartwatches and smart jewelry reflect this shift, as does Meta’s investment in AI-driven smart glasses.
This evolution isn’t just about expanding AI’s reach; it’s about AI moving beyond screens and into the physical world. For AI to interpret and interact with the environment in real time, it needs new hardware, sensors and interfaces.
Spatial computing on the rise
Spatial computing, an emerging 3D-centric computing model, merges AI, computer vision and sensor technologies to create fluid interfaces between the physical and digital. Unlike traditional models, which require people to adapt to screens, spatial computing allows machines to understand human environments and intent through spatial awareness.
Control of this interface is critical. As AI-native hardware becomes part of everyday life, shaping how people interact with intelligent systems will define how immersive and useful those systems are. Companies that lead in AI-hardware integration will set the tone for commerce, communication and daily interaction.
This is where XR and wearables matter most. AI needs spatial intelligence – an awareness of physical space – to reach its potential. AR glasses, AI-powered headsets and smart rings or watches allow AI to interpret gestures, movement and environments more naturally.
Kristi Woolsey, Global Lead for XR and Spatial at BCG, put it succinctly: “AI has put us on the hunt for a new device that will let us move AI collaboration off-screen and into the world. Hands-free XR devices do that. AI is also hungry for data, and the cameras, location sensors and voice inputs of XR devices can feed that need.”
This hardware shift makes AI more accessible and integrated into daily life, not just as a tool on a screen, but as a companion in the real world.
AI agents and physical AI
NVIDIA CEO Jensen Huang recently emphasized that the shift from generative AI to agentic AI marks a turning point toward physical AI. These AI agents – systems capable of acting autonomously in real time – will rely on spatial hardware to function. Whether embedded in smart glasses, humanoid robots or wearables, these agents will observe, adapt and collaborate.
Venture capitalist Vinod Khosla predicted in a Bloomberg interview that the humanoid robot market could eventually surpass the auto industry. The building blocks of that vision are already being laid in today’s AI-integrated devices.
Together, innovations in hardware, advances in spatial computing and the rise of AI agents are creating a new foundation for how we interact with machines and information.
Three drivers of AI hardware's expansion
As AI leaves the cloud and steps into physical space, its value will be shaped by how well it integrates into our environments. This new phase demands more than algorithms; it needs hardware that can sense, process and respond.
1. Real-world data and scaled AI training
AI is only as effective as the data it learns from. Tomorrow’s AI systems will require spatial data: depth, motion, object recognition and environmental mapping.
Wearables, AR devices and robots are essential tools for gathering this data in real time. Unlike traditional data pipelines, these devices let AI learn from direct interaction with the world around it, improving how it responds to real-world contexts and unpredictability.
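To make the idea of spatial data concrete, the Python sketch below shows one way sensor snapshots from a wearable might be grouped into time windows before training. It is purely illustrative: every name in it – SpatialFrame, its fields, collect_training_batch – is hypothetical, not any vendor’s API.

```python
from dataclasses import dataclass
import time

@dataclass
class SpatialFrame:
    """One hypothetical sensor snapshot from a wearable device."""
    timestamp: float                           # seconds since epoch
    depth_map: list[list[float]]               # per-pixel depth, in metres
    acceleration: tuple[float, float, float]   # IMU reading, m/s^2
    detected_objects: list[str]                # labels from an on-device vision model

def collect_training_batch(frames: list[SpatialFrame],
                           window_s: float = 1.0) -> list[list[SpatialFrame]]:
    """Group raw frames into fixed-length time windows for model training."""
    batches, current, start = [], [], None
    for frame in sorted(frames, key=lambda f: f.timestamp):
        if start is None:
            start = frame.timestamp
        if frame.timestamp - start >= window_s:
            batches.append(current)        # close the current window
            current, start = [], frame.timestamp
        current.append(frame)
    if current:
        batches.append(current)
    return batches

# Two synthetic frames standing in for a live sensor feed.
frames = [
    SpatialFrame(time.time(), [[1.2]], (0.0, 0.0, 9.8), ["door"]),
    SpatialFrame(time.time() + 1.5, [[1.1]], (0.1, 0.0, 9.8), ["door", "chair"]),
]
print(len(collect_training_batch(frames)))  # -> 2 windows
```

The point of the sketch is the pipeline shape: continuous multimodal readings, segmented in time, become the units an AI system learns from – something screen-bound interfaces never produce.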
2. Moving beyond screens with AI-first interfaces
The next computing platform is immersive, multimodal and AI-native. We’re moving beyond screens like smartphones or tablets towards interfaces that feel more like natural extensions of ourselves.
Meta’s Ray-Ban smart glasses are one example. Users can ask AI questions, record moments and receive contextual support, all without looking at a screen. OpenAI’s interest in AR glasses hints at a future where AI assistants aren’t locked in apps. They live on our faces, in our ears and on our wrists.
These wearables will make AI feel more ambient, intuitive and ever-present, seamlessly integrated into both work and personal life.
3. The rise of physical AI and autonomous agents
AI is evolving from passive tool to agentic collaborator. These autonomous systems can act, decide and engage based on what they see and sense in the environment.
AI agents embedded in wearables might guide users through tasks, respond to visual cues or anticipate needs based on behaviour and context.
For example, smart rings could capture gestures and provide haptic feedback for immersive interaction. AI glasses might offer real-time overlays with directions, translations or task support. Smartwatches could monitor biometrics and deliver proactive health recommendations.
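As a thought experiment, the loop below sketches how such an agent might map sensed context to actions. The observations, thresholds and action names are all invented for illustration; a real system would rely on device SDKs and learned models rather than hand-written rules.

```python
# A toy agent loop: observe -> decide -> act. All observations and
# action names here are invented for illustration, not a real SDK.

def decide(observation: dict) -> str:
    """Map a sensed context to a hypothetical wearable action."""
    if observation.get("gesture") == "pinch":
        return "haptic_pulse"          # smart ring confirms a selection
    if observation.get("looking_at") == "street_sign":
        return "overlay_translation"   # glasses render a translated label
    if observation.get("heart_rate", 0) > 160:
        return "suggest_rest"          # watch nudges the wearer
    return "idle"

# Simulated sensor readings standing in for live device input.
for obs in [{"gesture": "pinch"},
            {"looking_at": "street_sign"},
            {"heart_rate": 172}]:
    print(obs, "->", decide(obs))
```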
Together, these innovations signal the rise of a new kind of AI, one that acts in the world rather than just informing from a distance.
A multimodal, multiagent future
The era of software-only AI is coming to a close. The next chapter belongs to physical computing, where intelligent systems interact with and respond to the world around us.
Hardware is becoming the medium through which AI lives. As XR, spatial computing and AI-powered devices converge, they are forming the infrastructure of the next industrial revolution.
The critical question is no longer if AI will integrate with the physical world. It’s how fast and how deeply. This convergence marks the dawn of a new computing era, one that’s immersive, intelligent and everywhere.