A significant number of sessions at this year’s World Economic Forum Annual Meeting in Davos concerned the promise of emerging technologies for problems of general human concern – for health, education, multiculturalism, poverty and national security, among others.

Speakers in these sessions were some of the top technological entrepreneurs and researchers of our time, and it was an honour to hear them speak. I was struck by the ingenuity and potential of the technical innovations they proposed. Yet, occasionally, I found myself worrying that too many of these solutions might be ignoring the human in human concerns, and that they therefore risk expensive failure. I would venture to say that without an understanding of who the users of a technology are, their goals, their fears, their desires and their prior experience, no technology will achieve its potential.

Some examples that I hope will be evocative. In the case of cybersecurity, the most secure devices imaginable are useless unless people actually observe security protocols (and therefore unless those protocols are understandable, trustworthy and fulfill the users’ goals). Likewise, social media and communication sites that seek to protect young people by forbidding use by those under a certain age may assuage their designers’ consciences, but will not protect that population unless the designers come to understand why parents may lie about their children’s ages to create accounts for those children.

And while universal health records may ultimately result in better care as doctors gain access to complete information about a patient’s health, they are useless until their designers understand the motivation behind patients’ occasional need to obscure their identity in seeking healthcare (in the case of stigmatized tests, such as those for HIV, or care for stigmatized conditions, such as depression).

In the Forum’s Global Agenda Council on Robotics & Smart Devices, we are developing new models that take into account the social, economic, legal and cultural dimensions of developing new technologies and integrating them into society, as well as the technical dimensions. These new models are applied throughout the cycle of technology development, evaluation and integration. They highlight both risks and opportunities, appeal to and involve multiple stakeholders, and depend on an intrinsically multidisciplinary approach to the problems we address.

At the Human-Computer Interaction Institute of Carnegie Mellon University, we are fond of asking our students to repeat the following mantra: “the user is not like me, the user is not like me”. This simple phrase reminds our brilliant students that brilliance is not enough, that our most fabulous technological innovations may founder if we do not take into account the real people we are designing for, their cares and concerns, their desires and needs, the problems they face and their hopes for the future.

For those of us who develop technologies to try to address human concerns and make the world a better place, it is essential to understand the nature of the human in human concerns.

Justine Cassell, Director, Human-Computer Interaction Institute, Carnegie Mellon University, USA; Global Agenda Council for Robotics & Smart Devices

Picture: Child with Virtual Peer, Justine Cassell