Why trust is the next step in the future of AI


Knowing when a bot sounds trustworthy is the next step in digital security. Image: REUTERS/Pawel Kopczynski


Trust is central to design. Whenever we land on a new website or consider a product, we rely on a feeling of trust in the brand before we interact with it. Whether or not you trust the service determines whether you will sign up, place an order, refer a friend, or come back for a second visit, and that judgment forms within a few unconscious seconds.

But that’s no longer the only place we look for trust. The graphical user interfaces (GUIs) we use to interact with websites are slowly being complemented, or replaced entirely, by voice user interfaces (VUIs) such as personal-assistant bots. It’s the difference between Amazon’s website, a GUI, and Amazon’s voice assistant, Alexa. We therefore need to learn how to distinguish between voices, not just graphics, that we can trust.

We have good reason to be suspicious. Consumers have learned to intuit that it’s a bad idea to sign up on a website that imitates Facebook’s look and feel, use a search engine that copies Google’s logo and color scheme, or buy from an online marketplace that lacks a security symbol. We’ve learned what visual security cues to look for, but what about audible ones?

Scanning for audible security cues is a skill we’ll have to acquire in the near future, and as VUIs become a more common interface in our daily lives, the sooner, the better. Take virtual personal assistants (VPAs) such as Amazon’s Alexa, Apple’s Siri, and Microsoft’s Cortana. In the future, VUIs could be used in situations where a faceless GUI would feel impersonal: you might confirm a large bank transfer with a friendly VUI, or have your blood-test results read to you by a health app. But before we interact with these products and reveal sensitive information to them, they first need to earn our trust.

Research shows that tone of voice matters more than words when it comes to making first impressions: varying pitch and volume in certain ways can actually convey more trust. In other words, it’s not what you say, it’s how you say it. These findings can be applied to designing bot voices: create a natural-sounding, humanized voice rather than a robotic one; develop a unique tone that reflects the equally distinctive visual look and feel of a brand; and make sure your VUI can reply like a human. “I don’t understand your input” isn’t as effective as “I didn’t catch that, say again?”
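
As a minimal sketch of that last point, here is how a fallback handler might swap a robotic error message for a rotating set of conversational reprompts. The function and phrase list are hypothetical, written in Python for illustration rather than taken from any particular assistant platform:

import random

# Hypothetical fallback handler for a voice assistant: when speech
# recognition fails, reply with a conversational reprompt instead of
# a robotic error message. All names here are illustrative.
HUMANIZED_REPROMPTS = [
    "Sorry, I didn't catch that. Could you say it again?",
    "Hmm, I missed that. One more time?",
    "Apologies, I lost you for a second. What was that?",
]

def fallback_response(humanized: bool = True) -> str:
    """Return the spoken reply used when the user's intent is unclear."""
    if humanized:
        # Rotating through varied, casual phrasings keeps the bot from
        # sounding stuck in an error loop.
        return random.choice(HUMANIZED_REPROMPTS)
    return "I don't understand your input."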

A new field of “voice designers” will therefore play a key role in creating trustworthy VUIs. Building trust through GUI design already has established best practices: colors, images, and language all guide our decision-making. For VUI design, it’s an emerging world of tone, speed, accent, utterances, and more.

For example, designers need to make sure their bots “breathe” between sentences, so that users have time to process what they’ve heard, think about their next action, and feel like they’re speaking to a human. Without these small silences, it seems like the other person (or bot) isn’t actually listening to you. Programmers could even add affirmative “mmms” to listening responses, much like the audible head nod that signals engagement in real life, as in the sketch below.
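
Concretely, most text-to-speech platforms accept Speech Synthesis Markup Language (SSML), whose standard break tag inserts exactly these pauses. The helper below is a minimal, hypothetical Python sketch, not any particular vendor’s API, and the 400-millisecond pause is an assumed design choice:

# A minimal sketch of adding breathing room to synthesized speech with
# SSML, the W3C markup accepted by most text-to-speech engines. The
# helper and its pause length are illustrative assumptions.
def to_ssml(sentences, pause_ms=400):
    """Join sentences with explicit pauses so the voice doesn't rush."""
    # <break> is a standard SSML tag for a timed silence.
    separator = f' <break time="{pause_ms}ms"/> '
    return "<speak>" + separator.join(sentences) + "</speak>"

print(to_ssml([
    "Your transfer of 2,000 dollars is ready.",
    "Mmm-hmm, I have the account details.",  # an affirmative, engaged filler
    "Should I go ahead and send it?",
]))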

But with trust comes responsibility. When we really trust someone, they can convince us to do things we wouldn’t normally do. Tech giants therefore face a great dilemma when building super-smart VUIs: how to serve users with what they need while maintaining ethical boundaries around privacy and safety. Today, websites share browsing history, personal data, and other private information to make an extra buck. Will they one day sell conversations with your personal assistant as well, or manipulate users into spending money on their platforms?

These are questions consumers, designers, and tech companies must consider now, because VUIs will soon become more than just an added feature on existing platforms. Voice activation will change the way we interact with our computers, the way we shop, and the way we deal with those around us, so we need to tread carefully. Trust takes a lot of work to gain, and it is all too easy to lose.

