Asia’s Generative AI Experts Call for Guardrails and Responsible Design

Published 28 Jun 2023

World Economic Forum, public.affairs@weforum.org

  • Biggest concerns with AI are bias, user competency, transparency and predictability.
  • In all critical fields, the decision-making power must rest with a human and education on AI is key.
  • To ensure a positive future, it is crucial to prioritize responsible design and release practices from the beginning.
  • Watch the session at wef.ch/amnc23 and follow using #amnc23 or #2023夏季达沃斯# on Weibo and WeChat.

Tianjin, People’s Republic of China, 28 June 2023 – Researchers are cautiously optimistic about where generative artificial intelligence (GenAI) will lead humanity, but “we must develop more guardrails.” This is according to Pascale Fung, Chair Professor, Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong SAR, China. She was speaking at the session Generative AI: Friend or Foe? during the Annual Meeting of the New Champions in Tianjin, China.

Fung said, “We must have a framework for developing responsible GenAI models, not just by keeping a human in the loop but by fundamentally understanding these models and aligning them with what humans want at the data, training and task performance levels.” Powerful and creative, these models mark a “quantum jump in human civilization”.

GenAI provides a potential pathway to artificial general intelligence, said Zhang Ya-Qin, Chair Professor and Dean, Tsinghua University, adding that his department is applying it “in almost everything” from transport to biological computing. “Once you have a [GenAI] model, you can generate new structures quite easily,” he said. This opens up possibilities for model-to-model interaction (large models training smaller proprietary models), reinforcement learning and more.

The session was facilitated by the AI Governance Alliance, a multistakeholder initiative that unites industry leaders, governments, academic institutions, and civil society organizations to champion responsible global design and release of transparent and inclusive AI systems. It follows a World Economic Forum summit on responsible AI leadership, which convened thought leaders and practitioners and produced the Presidio Recommendations on Responsible Generative AI.

Fung, who contributed to the recommendations, explained the importance of transparency in GenAI models so that researchers can mitigate “model hallucination” and moderate bias, train AI on data from numerous languages and avoid toxic content.

Cathy Li, Head, AI, Data and Metaverse and Member of the Executive Committee, World Economic Forum, moderated the session. She said, “It’s crucial for everybody to understand the enormous potential that we see with this novel technology but also the challenges and responsibilities that come with it.”

Also speaking was Emilija Stojmenova Duh, Slovenia’s Minister of Digital Transformation. She said that “AI can really boost innovation”, adding that “fears should not hinder the innovation.” The government of Slovenia is keen to use AI to improve the services it offers its citizens and to upskill public servants, citizens and teachers.

Caution is warranted and necessary, said Chen Xudong, Chairman and General Manager, Greater China, IBM. IBM abandoned research on facial recognition models due to their potentially harmful use. With the right guardrails, however, the technology could substantially shorten the discovery period for new medicines.

While AI is being used to automate the writing of code, this could generate outcomes that may not be controllable, Zhang warned. Boundaries between the informational and biological worlds are crucial, and AI is not yet suitable for sectors that involve sensitive data and critical decision-making. He accepts students using ChatGPT in their assignments, he added, as long as they disclose it.

AI’s use in education got a boost during the pandemic. Students were able to continue their education despite lockdowns as long as they had access to a mobile phone or a tablet. “AI can reduce the gap between wealthy and poor families by bringing down costs and small companies can compete as well,” said Wang Guan, Chairman, Learnable.ai. “Once we fine-tune and retrain our models to be super-good, it gives us a very strong competitive power.”

The future of education lies in “teaching humans to be more human” – to have more critical thinking skills, to study the humanities, history, philosophy and arts along with mathematics and sciences, and to acquire interdisciplinary learning. Specialists in ethics need to understand AI systems, and specialists in AI need to understand ethics. “We must teach younger generations to be Renaissance Men and Women,” Fung added.

About the Annual Meeting of the New Champions

The Annual Meeting of the New Champions 2023 takes place 27-29 June in Tianjin, People’s Republic of China, under the theme, “Entrepreneurship: The Driving Force of the Global Economy”. The meeting will renew momentum for innovation and entrepreneurship to drive growth and a more equitable, sustainable and resilient global economy. For further information, visit wef.ch/amnc23.

Notes to editors

Read the Forum Agenda also in Mandarin | Japanese | Spanish
Learn about the Forum’s impact
Check out the Forum’s Strategic Intelligence Platform and Transformation Maps
Follow the Forum on WeChat, Weibo, LinkedIn, Facebook, Instagram, Twitter, TikTok and Flipboard
Listen to Forum Podcasts
Watch Forum videos
Subscribe to Forum news releases

All opinions expressed are those of the author. The World Economic Forum Blog is an independent and neutral platform dedicated to generating debate around the key topics that shape global, regional and industry agendas.
