Why cybersecurity’s future depends on people, not just technology

Even the most sophisticated defences can break down if workers don't have the cybersecurity training to use them correctly.
- In 2024, 95% of data breaches were tied to human error, with cybersecurity protections often breaking down due to poorly trained users.
- This is why investment in cybersecurity must be matched by investment in people through education and continuous reskilling.
- The goal of cybersecurity training should be to close the gap between specialists and the rest of an organization's employees.
For years, the story of cybersecurity has been told in the language of machines. Firewalls, antivirus software, intrusion detection systems and now quantum computing and artificial intelligence — each new tool promised to outpace the next wave of attackers.
But for all of this innovation, breaches still happen every day and most of them are caused not by cutting-edge exploits, but by people. In 2024, for example, 95% of data breaches were tied to human error.
Sophisticated defences often break down because workers don't have the cybersecurity training to use them correctly. And so, if digital resilience is the target, investment in people needs to match investment in technology.
Cybersecurity training that matches reality
The way we train people for cybersecurity work has barely changed in years, even as technological change has left traditional curricula behind. Degrees will remain significant, but they cannot be the only route to cybersecurity training.
Credentialing must be faster and more adaptive. Short courses developed in collaboration with industry, governments and universities can equip students and mid-career workers with up-to-date skills. Generative artificial intelligence (AI) tools can help courses stay current, of course, but they should always be used in collaboration with human educators and security professionals. Without that partnership, training will always lag behind the threat.
Just as important, training cannot stop at the IT department. Every employee, from those on the factory floor to the CEO, needs enough cyber awareness to spot suspicious activity and react responsibly. The goal is not to turn everyone into a cybersecurity expert, but to close the gap between specialists and the rest of the organization.
Going beyond cybersecurity specialists
The global cybersecurity talent shortage is well-documented. The World Economic Forum has shown that demand for skilled professionals continues to outpace supply. But producing more experts will not solve the problem.
The solution lies in making security part of everyday organizational culture. When staff can identify a phishing attempt or question a falsified video, the system as a whole becomes stronger. Concepts like adversarial thinking and AI literacy should no longer be reserved for specialists. Security must be diffused across the workforce, not hoarded in a narrow silo.
What makes this moment different is the growing role of AI. These systems are not just tools; they are shaping behaviour. A workforce trained solely in technical procedures will be defenceless if it cannot question the ethical implications of the technology outputs it sees and uses.
That’s why embedding ethics and critical reasoning into cybersecurity training is no longer optional. Research such as the paper “Bad machines corrupt good morals” shows how AI can influence decisions in ways that lead otherwise responsible people to act harmfully. Training must therefore prepare individuals to challenge outputs, to recognize manipulation and to resist subtle forms of coercion.
How AI is changing human behaviour
Attackers have always preyed on human error and phishing emails are the classic example. What is new is the sophistication of deception. Deepfake voices can now mimic executives. AI-generated messages are crafted to bypass suspicion. Distinguishing truth from falsehood is becoming harder by the day.
At the same time, AI lowers the barrier for attackers. Skills that once required technical mastery are now accessible to anyone with a keyboard. A few prompts can generate malicious code or a persuasive cyber scam. People who never imagined committing cybercrime may suddenly find it within reach.
This erosion of accountability is another overlooked risk. In traditional hacking, intent was explicit because a command was issued or a crime was committed. With AI systems, intent is blurred. A vague prompt can be interpreted and executed autonomously, allowing users to disclaim responsibility.
This accountability gap has serious consequences. Courts and regulators will struggle to prove intent, while individuals will hide behind machines. To counter this, workers must be trained to understand that delegation does not erase responsibility. New governance frameworks will be required, but organizations must continue to instill awareness that ethical responsibility cannot be outsourced. As recent research highlights, this grey zone of “implied intent” is one of the most pressing frontiers for law and practice.
Cybersecurity as a public good
Cybersecurity cannot remain a private competition among companies or governments. It is a public good, fundamental to economic trust and social stability. And like all public goods, it requires cooperation.
That means investment in education and constant reskilling. It means partnerships that share knowledge rather than conceal it. And it means prioritizing diversity so that solutions reflect the realities of a global threat. Narrow perspectives produce brittle defences.
The growing complexity of global cybersecurity shows that no single state or actor can cope alone. The only viable response is collective action. And technology alone can’t save us either. Firewalls and encryption are important, but people matter more. Cybersecurity will only hold if it becomes part of daily life. It cannot sit on the shoulders of a narrow class of experts; it has to be treated as a civic duty, a collective responsibility discussed in classrooms, boardrooms and communities alike.
The World Economic Forum’s Global Future Councils are already envisioning how to create a digital environment that is secure, inclusive and free. That future is attainable but it will require action today. Delay magnifies risks, while collaboration unlocks new opportunities.
Everyone must play a part in the cybersecurity effort, but will you be a weak link or a strong human firewall?