
Can businesses help build trustworthy and accurate generative AI?

Some have warned against the era of fakery ushered in by generative AI. Image: Tumisu/Pixabay

Mark Esposito
Chief Learning Officer, Nexus FrontierTech, Professor at Hult International Business School
Terence Tse
Executive Director, Nexus FrontierTech, Professor of Finance, Hult International Business School
Tahereh Sonia Saheb
Research fellow, Hult International Business School

  • Content generated by AI, such as ChatGPT, has raised questions of accuracy and trustworthiness.
  • Businesses should be aware that while generative AI technologies have sped up content creation, they should not rely on these tools alone.
  • Instead, they should use these technologies as assistive aids and build solid AI strategies to mitigate the risks.

Automation depends on human reliance on machine intelligence, which in turn rests on the universal values of accuracy and trust. Automation and efficiency initiatives will be hampered wherever these principles are not upheld.


An entirely novel wave of automation arrived in November 2022 with the launch of ChatGPT, with its potent computational capacity and its ability to generate content on its own. Incorrect content produced by ChatGPT and its rival Bard, however, has damaged public confidence in these artificially intelligent machines. While many were enthralled by how quickly these tools could produce content, others worried about the accuracy and trustworthiness of the machine-generated material.

Accurate generative AI content

A major problem with deep learning algorithms that generate content is whether that content is fraudulent, erroneous or spreading disinformation. Some have warned against the era of fakery ushered in by generative artificial intelligence (AI) technologies, arguing that robust AI regulations and strategies should be devised to prevent the defamation of individuals and businesses.

The situation is becoming more challenging: a recent study suggests individuals have only a 50% chance of correctly identifying whether AI-generated content is real or fake. Although programmers work to train their algorithms on ethical and correct data, start-ups have emerged to help organizations identify fraudulent records. OARO, for example, helps businesses authenticate and verify digital identity, compliance and media.

It took ChatGPT just five days to reach one million users. But the question remains: can businesses trust generative AI? Image: Statista

Businesses should be aware that while generative AI technologies have sped up the creation of content and created new types of automated content generation machines, they should not rely on these tools alone. Instead, they should use them as assistive aids and build solid AI strategies to mitigate the risks.

Generative AI is likely to play a significant role not only in industries where content generation is critical to the business, but also in other digital environments, such as the metaverse and the digital universes still to come. It is therefore essential that organizations create a governance body to oversee AI-generated models and their integration into more subtle forms of automated decision-making. This will help, but it won't automatically address the issue of trust, which raises the question: can businesses trust generative AI?

Trustworthy generative AI content

Can we trust these automated content-creation tools? Proponents claim that generative AI is trustworthy because a variety of factors have increased the dependability and credibility of its outputs. The main deciding elements are the relevance and quality of the training data and the strength of the business case for it.

AI as a strategic business function could rise to the moment

Businesses can create a variety of plans to improve the reliability of generative AI. Using or creating communication platforms where company personnel, such as marketing agents, can offer feedback to supplement and modify AI-produced material would be a fundamental step towards greater reliability.

Businesses can use agile project management practices similar to those used in software development. Human staff members can edit and improve portions of content as generative AI creates them, before moving on to the next portion and repeating this process until the content is complete. These agile-like methods in content generation with the help of generative AI offer an alternative to the linear interaction of asking a question and receiving an answer, as found in chatbots. Businesses would benefit from being able to generate content in continuous portions and cycles while incorporating user feedback at every stage.

These agile-like methods in content generation with the help of generative AI offer an alternative to the linear interaction of asking a question and receiving an answer, as found in chatbots. Image: Hult International Business School
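As a rough illustration, the cycle described above (AI drafts a portion, a human editor revises it, and the loop repeats until the content is complete) can be sketched in Python. The function names and the stubbed generator below are hypothetical stand-ins for a real model API and a real editorial step, not an actual implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ContentCycle:
    """Iteratively builds content in portions, applying human feedback each cycle."""
    generate: Callable[[str], str]   # AI drafts a portion from a section prompt
    review: Callable[[str], str]     # human editor revises the draft
    sections: List[str] = field(default_factory=list)

    def run(self, prompts: List[str]) -> str:
        for prompt in prompts:
            draft = self.generate(prompt)   # machine produces one portion
            final = self.review(draft)      # human edits it before moving on
            self.sections.append(final)
        return "\n\n".join(self.sections)


# Stub implementations standing in for a real model and a real editor
ai_generate = lambda prompt: f"[draft] {prompt}"
human_review = lambda draft: draft.replace("[draft]", "[approved]")

article = ContentCycle(ai_generate, human_review).run(["intro", "body", "conclusion"])
```

The point of the structure is that no portion reaches the final document without passing through the human review step, which is the alternative to the one-shot question-and-answer interaction of a chatbot.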

The race for credibility has begun

Previous studies have shown that, from a psychological standpoint, robots and their outputs are viewed as more believable than outputs produced by people. Proponents also assert that because machine-generated information is designed to be objective, based on data and mathematical algorithms rather than subjective human judgment, it is not vulnerable to human prejudice.

These findings show great promise for firms seeking to build a trustworthy and unbiased brand. If customers see machines as objective, companies can use this to their advantage, especially when handling customers' financial or personal information, as Penn State researchers found. According to the report, individuals have faith in technology: they believe it respects their privacy and has no hidden motives. Businesses should therefore create strategies that support this customer perception of AI-produced content.

There is no predetermined growth trajectory for tools like ChatGPT; many factors will shape how they evolve. The future of technology is often pushed and pulled by forces of momentum and resistance. Tinkering is critical at this phase, so we can engage and produce a new canvas for collaboration between humans and machines, one where AI augments humans.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.




© 2024 World Economic Forum