Digital innovation and governance aren't mutually exclusive – they are inextricable

Responsible governance is essential for protecting children's digital lives. Image: Unsplash/Thomas Park
Wanjuhi Njoroge
Graduate Student, Master in Public Administration, John F. Kennedy School of Government, Harvard University
- Amid growing unease about the impact of social media and AI, only governments can step up to provide the needed level of regulation.
- Previous technological leaps, like nuclear weapons, show the dangers of leaving responsible governance too late.
- Public regulation is often associated with overreach, but innovation can thrive when directed by clear rules.
Is Australia’s decision to ban social media use for anyone under 16 an example of government overreach, or an early signal of what responsible leadership in the digital age may require? Since Australia’s move, other governments have begun to debate similar restrictions, sparking intense public discourse. Together, these developments reflect a growing unease about how largely market-driven digital platforms are shaping young lives.
That unease extends beyond policy-makers into families and communities. In Brookline, Massachusetts, parents formed Brookline Kids Unplugged to encourage children to step away from screens, reflecting growing discomfort with how digital platforms are reshaping childhood.
Many of the immediate risks are now well documented. UNICEF reports that children in many regions are exposed to harmful online content at unprecedented rates, with one in five in Africa and South-East Asia experiencing some form of online sexual harm. For women and girls, the picture is even more troubling. Analysis of large datasets of deepfake content finds that nearly all sexually explicit deepfake videos are created without consent and that about 99% of the individuals targeted are women. One in three deepfake tools available today allows users to create sexually explicit content. These outcomes are not accidental; they reflect design choices that prioritize growth, novelty and scale over safety.
Governments as guardians
This reality raises a central question: how should digital innovation be governed in the public interest? One answer is already clear. The responsibility for protecting human dignity and safety must not be left to industry alone. Governments have always carried the duty to uphold the rule of law and to protect citizens from harm. That responsibility must not be outsourced to private actors whose incentives are shaped by growth and profit, and whose core purpose is not the protection of the vulnerable.
The AI industry is still nascent, giving governments a narrow but important window to shape its trajectory. History shows how often societies respond to powerful technologies only after damage has already occurred. Nick Bostrom captures this danger in The Vulnerable World Hypothesis, arguing that some technologies may become so powerful that a single misuse could cause irreversible harm. Nuclear weapons offer the clearest example. Serious international efforts to govern them came only after the bombing of Hiroshima and Nagasaki at the end of the Second World War. That pattern of reacting after the worst has already happened is not one we can afford to repeat with AI. Some risks are simply too large to wait for their own Hiroshima.
There are already early signs of what responsible leadership looks like. At the international level, the European Union has begun to set clearer expectations around transparency, accountability and risk management in the development and deployment of AI systems. In the United States, AI governance is taking shape through a patchwork of state laws, federal agency guidance and voluntary standards. Several states have introduced laws addressing issues such as deepfakes, AI transparency and algorithmic discrimination. California, for example, has passed measures requiring disclosure when content is AI-generated and placing restrictions on discriminatory uses of automated decision systems in areas such as employment. Other states have adopted laws targeting deceptive AI-generated media, including deepfakes used in elections or harassment.
Massachusetts is investing in AI literacy for teachers through Project Lead, helping schools navigate both opportunities and risks as these tools enter classrooms. At the community level, local organizations are using hackathons to explore positive applications of AI, while encouraging responsible development. Across research and policy communities, early efforts on bias mitigation and safety standards are beginning to coalesce, offering templates that other regions can adapt. Translating this shared commitment into concrete standards will require specific, enforceable interventions at the point where harm occurs.
One such intervention is the requirement that digital platforms apply watermarking to AI-generated content created on their systems in ways that make synthetic media identifiable and traceable. Such systems should clearly signal when content has been generated or materially altered by AI and preserve information about its origin. Companies often argue that users will find ways to remove watermarks. That risk should not weaken the case, but should intensify efforts to develop more robust and persistent technical markers that are difficult to remove without visibly distorting the underlying content. Placing responsibility at the point of creation and circulation addresses harm where it occurs.
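To make the idea of traceable provenance concrete, here is a minimal, purely illustrative sketch in Python of the kind of record such a requirement envisions: a platform binds a hash of a piece of content to its origin and signs the record, so that later copies remain attributable and any alteration is detectable. Everything here is hypothetical (the sign_manifest and verify_manifest functions, the PLATFORM_KEY); real deployments rely on standards such as C2PA manifests and robust in-content watermarks, not this toy scheme.

```python
# Toy provenance record: a hedged illustration, not a production design.
import hashlib
import hmac
import json
import time

PLATFORM_KEY = b"demo-secret"  # hypothetical signing key held by the platform


def sign_manifest(content: bytes, generator: str) -> dict:
    """Bind a content hash to its origin so later copies stay traceable."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,           # which AI system produced it
        "created_utc": int(time.time()),  # when it was produced
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the content is unaltered and the record is authentic."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["sha256"] == hashlib.sha256(content).hexdigest())


if __name__ == "__main__":
    media = b"...synthetic image bytes..."
    record = sign_manifest(media, generator="example-image-model-v1")
    print(verify_manifest(media, record))         # True: intact and traceable
    print(verify_manifest(media + b"x", record))  # False: content was altered
```

A detached record like this can, of course, simply be stripped from a copy of the file, which is precisely the limitation the argument above anticipates: durable traceability also requires markers embedded in the content itself, designed so that removing them visibly degrades the media.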
Overreach, or duty of care?
Critics of stronger state involvement raise several important objections. Some argue that government intervention risks overreach and infringes on personal freedom and autonomy. Others contend that technological solutions are preferable to legal ones, or that the evidence of harm is not definitive and that correlation does not imply causation. There are also practical concerns that age-based restrictions are ineffective or easily circumvented, that regulation will stifle innovation, and that decisions about children’s digital lives should rest with parents rather than the state.
These concerns deserve to be taken seriously. Yet none of them justify inaction. The absence of perfect evidence does not negate the presence of credible and growing risk, particularly when harms disproportionately affect children and women. Technological safeguards are necessary but insufficient when the same firms designing these systems also set the limits of their own accountability. Parental responsibility matters, but it cannot substitute for public standards when platforms are engineered to maximize engagement at scale. Innovation thrives not in regulatory vacuums, but in environments where clear rules protect trust, safety and public legitimacy. In high-risk domains, waiting for definitive proof or perfect compliance before acting has historically meant acting too late.
The most profound challenge of the AI era may not be technical. It is ethical. At a recent event at Harvard, a speaker suggested that the true breakthrough in the age of AI will be humanity’s ability to protect dignity and agency. That insight is especially urgent when thinking about children. What does it mean for a child whose closest companion is a bot? How will emotional development change when algorithms become sources of comfort or validation? What happens to identity when real and synthetic experiences blur?
These questions extend far beyond the scope of any technology company. They are questions of public policy, education, psychology and long-term human development. Above all, they are questions of state responsibility.
Innovation does not require less governance. It requires governance that matches the scale and speed of technological change. The AI industry is nascent, and this moment offers a rare opportunity to do things differently. The question is no longer whether governments can act. The question is whether they will.