This month in AI: Shopping agents, AI's energy bill and new codes of conduct
Adding to cart ... a quarter of young consumers now use AI to shop, according to US data. Image: Reuters/Aly Song
- New AI agents are being released to help shoppers research, compare and purchase goods online. But with this shift comes opportunity and risk.
- In this month's roundup of AI insights, we look at the latest developments and new research, as well as updates to AI governance.
- Plus, a chart on how much written content is now generated by AI compared to humans.
1. AI agents hit the high street
Would you trust an AI agent to do your shopping? Recently, mainstream assistants including ChatGPT and Google have begun rolling out features that allow users to ask AI agents to research, select and purchase goods on their behalf.
And it seems consumers are also warming to the concept. Data from Statista captured over the past 12 months shows that around a quarter of Americans between the ages of 18 and 39 say they like to use AI to shop, or have used the technology to search for products. Around 2 in 5 have followed recommendations from AI-generated digital influencers - an early signal of how discovery and persuasion may evolve in the coming year.
To meet this shift, payment providers have begun introducing guardrails to help merchants distinguish between legitimate AI agents and malicious bots. Visa’s Trusted Agent Protocol (TAP) was launched in October, with the company citing Adobe research that indicated AI-driven traffic to retail sites has surged 4,700% year on year in the US.
Merchants, meanwhile, are considering how they can adapt their websites to improve discovery via LLMs. In India, the Economic Times reports that Amazon India and Flipkart are altering product listings to drive visibility for AI agents. In the US, Walmart has struck a partnership with OpenAI to create AI-first shopping experiences. And in China, Alibaba has launched an AI mode, facilitating an end-to-end trade experience for shoppers, supported by LLMs.
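One concrete, widely used way to make listings easier for AI agents and LLM-driven search to parse is structured product markup. The sketch below is illustrative only: schema.org Product markup in JSON-LD is a general web standard for machine-readable listings, and the product details and field values here are placeholders, not a step any of the retailers above has confirmed taking.

```typescript
// Illustrative only: schema.org Product markup is a general web standard for
// machine-readable listings; the product details here are placeholders.
interface ProductListing {
  name: string;
  description: string;
  sku: string;
  price: number;
  currency: string;
  inStock: boolean;
}

// Build a schema.org Product object that crawlers and AI agents can parse
// without scraping free-form page copy, and wrap it in a JSON-LD script tag.
function toJsonLd(p: ProductListing): string {
  const jsonLd = {
    "@context": "https://schema.org",
    "@type": "Product",
    name: p.name,
    description: p.description,
    sku: p.sku,
    offers: {
      "@type": "Offer",
      price: p.price.toFixed(2),
      priceCurrency: p.currency,
      availability: p.inStock
        ? "https://schema.org/InStock"
        : "https://schema.org/OutOfStock",
    },
  };
  return `<script type="application/ld+json">${JSON.stringify(jsonLd)}</script>`;
}

console.log(
  toJsonLd({
    name: "Example running shoe",
    description: "Lightweight trainer, sizes 5-12.",
    sku: "SKU-12345",
    price: 79.99,
    currency: "USD",
    inStock: true,
  })
);
```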
While the shift brings opportunities for businesses, it also brings risks. Recent weeks have seen growing scrutiny of AI shopping agents, with concerns around fraud and misuse, and questions over who is responsible when an agent makes a purchase on a user's behalf.
A recent report by Boston Consulting Group finds that retailers could face reduced insight into customer behaviour, loss of loyalty and diminished cross-selling opportunities as AI becomes more of an intermediary. Three areas in particular are emerging as priorities:
- Identity, consent and disclosure: Competing platforms and protocols are taking different approaches to identity protection, consent logging and disclosure at checkout. Some organizations, such as Visa, are leaning into their own trusted agent protocols, which aim to verify an agent's credentials at the point of purchase by looking at agent intent, consumer recognition and payment information. OpenAI’s Agentic Commerce Protocol, meanwhile, is open, allowing merchants and developers to build their own integrations (a minimal illustrative sketch follows this list).
- LLM and agent optimization: As AI agents become a new gateway to products and services, retailers are exploring how to optimize their user journeys for generative engine searches and experiences (GEO and GXO). This means aligning web copy and journeys with how LLMs retrieve and present information.
- Responsible AI controls: Responsible implementation remains essential. Our Responsible AI Innovation Playbook finds that while organisations recognise the importance of responsible AI, maturity in implementation continues to lag. Balancing competitiveness with robust risk management will be critical as agentic commerce scales.
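To make the "trusted agent" idea above more concrete, here is a minimal, hypothetical sketch of how a merchant might check that a checkout request really comes from a registered agent: the agent signs the order payload, and the merchant verifies the signature against a registry of agent public keys it trusts. This is not the actual Visa Trusted Agent Protocol or OpenAI Agentic Commerce Protocol wire format; the agent names, fields and key handling are assumptions made for illustration.

```typescript
// Hypothetical sketch only: this is NOT the actual Visa TAP or OpenAI ACP format.
// It shows the general pattern of verifying a signed agent request against a
// registry of agent public keys the merchant has chosen to trust.
import { generateKeyPairSync, sign, verify } from "node:crypto";

// For the sketch, generate the agent's key pair in-process; in practice the
// merchant would only hold the public keys of agents it has registered.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const trustedAgentKeys = new Map([["example-shopping-agent", publicKey]]);

// The agent signs a canonical order payload recording what the user authorized.
const payload = JSON.stringify({
  sku: "SKU-12345",
  total: 79.99,
  consentToken: "user-123-session-token", // placeholder consent reference
});
const signature = sign(null, Buffer.from(payload), privateKey);

// Merchant side: accept the checkout only if the signature matches a key
// belonging to an agent the merchant recognizes.
function isTrustedAgentRequest(agentId: string, body: string, sig: Buffer): boolean {
  const key = trustedAgentKeys.get(agentId);
  if (!key) return false; // unknown agent: treat like any other anonymous bot
  return verify(null, Buffer.from(body), key, sig);
}

console.log(isTrustedAgentRequest("example-shopping-agent", payload, signature)); // true
console.log(isTrustedAgentRequest("unknown-bot", payload, signature)); // false
```

In a real deployment, the signed payload would also carry the consent and disclosure details logged at checkout, so the merchant can show what the user actually authorized the agent to buy.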
Video: Watch Bobak Hodjat, Chief Technology Officer at Cognizant, explain the difference between AI agents and Generative AI.
2. What else is moving in AI?
- Concerns are growing over AI's energy consumption as use of the technology continues to rise: In the State of AI newsletter from the Financial Times and MIT Technology Review, Casey Crownhart and Pilita Clark explore how power constraints could limit the growth of the technology. Elsewhere, a report from Goldman Sachs projects around 175% growth in power demand from data centres by 2030 compared with 2023. The report explores six key areas that need to be considered: the pervasiveness of AI, the productivity of servers and compute, the price of electricity needed to expand supply, policy initiatives, the availability of parts to source varied types of generation, and the availability of people to build and maintain infrastructure.
- In the US, the FDA has released a report exploring the use of generative AI-enabled digital mental-health medical devices: Building on a committee meeting held in November 2024, the report explores the potential benefits and risks of the technology, and the FDA has invited feedback on “perspectives related to generative AI in digital mental-health medical devices and considerations for risk mitigation frameworks for these devices”.
- Anthropic has shared how the company disrupted a “sophisticated espionage campaign” in which attackers deployed agentic AI capabilities to advise on and execute attacks. The culprits targeted tech companies, financial institutions, chemical manufacturing companies and government agencies. “A fundamental change has occurred in cybersecurity,” Anthropic said. “We advise security teams to experiment with applying AI for defence in areas like Security Operations Centre automation, threat detection, vulnerability assessment and incident response”.
- Cursor, a start-up focused on code generation, has nearly tripled its valuation in the past five months to $29.3 billion, Reuters has reported. Its latest $2.3 billion funding round was led by investment management firm Coatue and venture capital firm Accel, the company said on its website.
- In Europe, working groups will look to create a code of practice for AI-generated content: The European Commission has embarked on a series of workshops, running until May 2026, to create a code of practice for marking and labelling AI-generated content. Working groups made up of providers and deployers will help draft the code, which, if approved by the Commission, will serve as a voluntary tool to demonstrate compliance with key obligations of the AI Act.
3. AI insight in a chart
AI is now producing a significant share of online written content: AI-generated writing has surged from near zero in 2020 to reach, and at times surpass, parity with human-written content by 2025. This shift underpins ongoing global efforts to label and authenticate AI-generated information.
4. AI must reads
- Organizations are moving beyond predictive models, embracing AI agents as true collaborators: AI Agents in Action: Foundations for Evaluation and Governance is a Forum whitepaper, produced in partnership with Capgemini, which aims to help industry leaders rethink how they design, evaluate and safely govern smarter, more autonomous systems.
- Rethinking financial services in the age of AI: This article from Gregory Van, CEO of wealth-management platform Endowus, explores how AI can be used to improve access to financial advice, enabling more people to receive guidance that’s relevant and transparent.
- Why the risk of overlooking responsible AI can no longer be ignored: Responsible AI is a frontline defence against serious legal, financial and reputational risk, especially when it comes to understanding and explaining AI data lineage. In this piece, Kathrin Kind, Chief Data Scientist at professional services firm Cognizant, discusses why the ideal scenario is to embed trusted data practices and master data management from the start.
- How AI can accelerate the energy transition: With the right clean-energy investments, there is potential to go beyond meeting AI energy needs to strengthen power systems for everyone, in line with sustainability goals. In this piece, Ginelle Greene-Dewasmes from the World Economic Forum's Centre for AI Excellence looks at how collaboration can ensure AI helps rather than hinders the energy transition.