Why governing AI means governing cognition

Artificial intelligence is increasingly a foundational aspect of everyday life. Image: Unsplash/NASA
- AI has transitioned from a technical tool to foundational infrastructure by automating human reasoning and cognition.
- Delegated cognition enhances global productivity while centralizing economic power and increasing energy demand.
- Future governance must prioritize human agency and transparency as AI systems increasingly influence societal decision-making.
Artificial intelligence is rapidly transitioning from a specialized technical capability to a foundational aspect of everyday life. Language models are no longer confined to research labs or enterprise workflows. They are now embedded in how people write, search, learn and make decisions, increasingly functioning as cognitive companions.
This shift represents more than a technological upgrade. It marks a structural change in how cognition itself is governed. Over the past decade, digital social media platforms created economic value by capturing the behavioural traces of everyday interaction. If the last decade of digital transformation was defined by extracting social data through these platforms, the current phase is characterized by the delegation of cognitive effort to AI systems, with workplace surveys indicating employees are already using AI tools for significant portions of their work. This transition raises an urgent question: What does it mean to govern systems that do not merely shape behaviour, but increasingly participate in reasoning and judgement?
From social platforms to delegated cognition
Social media platforms transformed everyday interaction into a source of economic value by engineering how people connect, express approval and share attention. These platforms, which have billions of users, generated vast amounts of structured, computable data through standardized actions. This architecture enabled unprecedented scale. However, it also concentrated interpretive and economic power in the hands of a few dominant digital platforms, a shift reflected in the latest global advertising forecast.
Language models extend this logic to a deeper layer of human activity. Rather than capturing social interaction alone, they operate directly on language, the primary medium of human reasoning. In doing so, they enable a new form of value creation based on delegated cognition: the outsourcing of certain forms of mental effort to automated systems.
Language models as cognitive infrastructure
Language models function as a new kind of infrastructure. Like earlier digital platforms, they are designed to be frictionless and scalable. Yet, unlike social platforms, their primary influence lies in how people understand and make decisions. Each interaction reveals fragments of human reasoning. These data flows have the potential to generate substantial economic value, estimated at $1.3 trillion for the AI sector by 2032, while conferring significant cognitive power. That power lies in shaping what becomes intelligible and actionable.
At the same time, the benefits of language models are substantial. They can significantly enhance productivity. Studies show language models can reduce task completion times, accelerating nearly half of tasks in common jobs and a substantial share of work across most roles. They also lower barriers to specialized expertise and enable participation across linguistic divides. By offering real-time synthesis and decision support, these systems help individuals and organizations navigate complexity that would otherwise demand considerable time and institutional capacity.
A defining feature of contemporary language model use is their growing role in opinion formation and decision support. Users increasingly consult AI systems not only for factual information, but for guidance in everyday decision-making. While these systems lack intent or values, their outputs are shaped by training data and design constraints. This introduces the risk of automation bias: the tendency to over-trust machine-generated outputs because they appear confident and neutral.
For the everyday user, the distinction between assistance and influence is often unclear. Language models do not issue commands, yet their fluency confers persuasive authority. In low-stakes contexts, this may appear benign. At scale, however, sustained reliance on automated reasoning raises important governance questions.
The economic logic behind delegated cognition
The rapid adoption of language models is underpinned by strong economic incentives. Global investments in AI reached $259 billion in 2025 alone. As with earlier platform models, access is often subsidized in exchange for data, while premium capabilities are monetized.
The economics of delegated cognition also carry material consequences. While earlier digital tools distributed energy costs across individual devices and organizations, large-scale language models rely on centralized data centres with significant and sustained energy demand. In 2024, data centres consumed about 415 terawatt-hours of electricity globally, representing roughly 1.5% of total electricity use. These centres are often located in specific jurisdictions, tying everyday cognitive activities, from writing to decision-making, to distant energy markets and regulatory regimes. What appears decentralized at the user level is, in practice, increasingly centralized at the infrastructural level.
This dynamic reinforces concentration. The development and governance of large-scale language models are increasingly controlled by a small number of actors. As these systems become infrastructural, questions of transparency and accountability become matters of public interest.
What this means for AI governance
For global governance institutions, the rise of delegated cognition signals the need to evolve existing AI governance frameworks. Many current approaches originate from earlier efforts to regulate the platform economy, including data protection regimes such as the European Union’s General Data Protection Regulation and the risk-based structure of the European Union’s Artificial Intelligence Act. These frameworks were designed to govern data practices and to limit identifiable harm. While they remain necessary, they were not designed for systems that actively shape how information is interpreted and decisions are made. Language models, therefore, function less as passive processors and more as intermediaries within decision-making processes.
This shift places human agency at the centre of AI governance. As language models increasingly influence how problems are framed and options are prioritized, transparency becomes an institutional requirement rather than a technical safeguard. Where AI-mediated outputs affect real-world outcomes, clear lines of accountability are essential. Where system behaviour reflects cultural imbalance, inclusion must be addressed as a structural concern. Public trust will depend on whether governance frameworks are perceived to serve the broader public interest.
Taken together, these dynamics suggest that language models represent a new phase in the evolution of digital systems – one in which cognition itself becomes a site of automation and coordination. As these systems increasingly shape how individuals and institutions understand and decide, they must be governed not only as technologies but as societal systems. For those responsible for public and institutional decision-making, the task ahead is not to resist automation, but to govern it in ways that safeguard human agency in an age of intelligent assistance.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.
