Generative AI: a game-changer that society and industry need to be ready for
- Despite the current downturn and layoffs in the tech sector, generative AI companies continue to receive huge interest from investors.
- While generative AI has people excited about a new wave of creativity, there are concerns about the impact of these models on society.
- Only when solid checks and balances are in place can there be a more thoughtful, beneficial expansion of generative AI technologies/products.
In the wake of newly released models such as Stable Diffusion and ChatGPT, generative AI has become a 'hot topic' for technologists, investors, policymakers and for society at large.
As the name suggests, generative AI produces or generates text, images, music, speech, code or video. Generative AI is not a new concept: the machine-learning techniques behind it have evolved over the past decade. Deep learning and Generative Adversarial Network (GAN) approaches have typically been used; the latest approach is the transformer.
A Generative Pretrained Transformer (GPT) is a type of large language model (LLM) that uses deep learning to generate human-like text. These models are called "generative" because they generate new text based on the input they receive, "pretrained" because they are trained on a large corpus of text data before being fine-tuned for specific tasks, and "transformers" because they use a transformer-based neural network architecture to process input text and generate output text.
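To make the idea of "generating new text from input" concrete, the snippet below is a minimal sketch that runs the small, openly released GPT-2 model via the Hugging Face transformers library. The choice of GPT-2 is an assumption for illustration only; it is an early, publicly available GPT, not the model behind ChatGPT.

```python
# Minimal sketch: text generation with a small pretrained transformer.
# Assumes the Hugging Face transformers library and the open GPT-2 weights.
from transformers import pipeline

# Load a small pretrained generative language model.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with newly generated text.
prompt = "Generative AI is changing creative work because"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```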
Despite the current market downturn and layoffs in the technology sector, generative AI companies continue to receive interest from investors. Stability AI and Jasper, for example, have recently raised $101 million and $125 million, respectively, and investors like Sequoia think the field of generative AI can generate trillions of dollars in economic value. Over 150 start-ups have emerged and are already operating in the space.
Emergent capabilities of generative AI systems
Generative AI stretches beyond typical natural language processing tasks such as language translation, text summarization and text generation. OpenAI's latest release, ChatGPT, which caused a viral sensation and reached a million users in just five days, has been described as breaking ground on a much broader range of tasks. Use cases currently under discussion include new architectures for search engines; explaining complex algorithms; creating personalized therapy bots; helping build apps from scratch; explaining scientific concepts; and writing recipes and college essays, among others.
Text-to-image programs such as Midjourney, DALL-E and Stable Diffusion have the potential to change how art, animation, gaming, movies and architecture, among other fields, are rendered. Bill Cusick, creative director at Stability AI, believes that the software is "the foundation for the future of creativity".
Optimists claim that generative AI will usher in a new era of human-machine cooperation and aid the creative process of artists and designers: existing tasks will be augmented by generative AI systems, speeding up ideation and, ultimately, creation.
Beyond the creative space, generative AI models hold transformative potential in complex disciplines such as software engineering. For example, Microsoft-owned GitHub Copilot, which is based on OpenAI's Codex model, suggests code and assists developers in autocompleting their programming tasks. The system has been reported to autocomplete up to 40% of developers' code, considerably speeding up their workflow.
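The interaction typically looks something like the following hypothetical Python example: the developer types a signature and docstring, and the assistant proposes the body. The suggested code here is illustrative only, not actual Copilot output.

```python
# Developer types a signature and docstring...
def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    # ...and the assistant suggests a completion like this:
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
```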
What are the risks?
While generative AI has people excited about a new wave of creativity, there are concerns about the impact of these models on society. Digital artist Greg Rutkowski fears the internet will be flooded with artwork indistinguishable from his own, created simply by telling the system to reproduce work in his unique style. Professor of art Carson Grubaugh shares this concern and predicts that large parts of the creative workforce, including commercial artists working in entertainment, video games, advertising and publishing, could lose their jobs to generative AI models.
Besides profound effects on tasks and jobs, generative AI models and their associated externalities have raised alarm in the AI governance community. One problem with large language models is their ability to generate false and misleading content. Meta's Galactica – a model trained on 48 million science articles and claimed to summarize academic papers, solve maths problems and write scientific code – was taken down after less than three days online, when the scientific community found it was misconstruing scientific facts and producing incorrect results.
This is even more alarming in the context of automated troll bots whose capabilities are advanced enough to render the Turing Test – which assesses a machine's ability to exhibit intelligent behaviour indistinguishable from a human's – obsolete. Such capabilities can be misused to generate fake news and disinformation across platforms and ecosystems.
Large models continue to be trained on massive datasets drawn from books, articles and websites that may be biased in ways that are hard to filter out completely. Despite substantial reductions in harmful and untruthful outputs achieved through reinforcement learning from human feedback (RLHF) in the case of ChatGPT, OpenAI acknowledges that its models can still generate toxic and biased outputs.
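At the core of RLHF is a reward model trained on human preference comparisons between pairs of model outputs; the language model is then fine-tuned to score highly under that reward model. The sketch below is a deliberately simplified toy illustration of the reward-model step in plain PyTorch – the data, dimensions and architecture are all invented for illustration, not OpenAI's actual pipeline.

```python
# Toy sketch of the reward-model step in RLHF: learn to score the
# human-preferred response above the rejected one (pairwise preference loss).
# All dimensions and data here are invented for illustration.
import torch
import torch.nn as nn

reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Stand-ins for embeddings of (preferred, rejected) response pairs,
# as labelled by human raters.
preferred = torch.randn(64, 16)
rejected = torch.randn(64, 16)

for step in range(100):
    r_pref = reward_model(preferred)
    r_rej = reward_model(rejected)
    # Push preferred scores above rejected ones: -log sigmoid(r_pref - r_rej).
    loss = -torch.nn.functional.logsigmoid(r_pref - r_rej).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained reward model then guides fine-tuning of the language model.
```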
How is generative AI governed?
In the private sector, two approaches to the governance of generative AI models are currently emerging. In one camp, companies such as OpenAI are self-governing the space through limited release strategies, monitored use of models, and controlled access via APIs for their commercial products like DALL-E 2. In the other camp, newer organizations such as Stability AI believe these models should be openly released to democratize access and create the greatest possible impact on society and the economy. Stability AI open-sourced the weights of its model; as a result, developers can plug it into almost anything to create a host of novel visual effects, with few or no controls placed on the diffusion process.
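The practical consequence of open weights is how little code it takes to plug the model in. A minimal sketch, assuming the Hugging Face diffusers library, the publicly hosted Stable Diffusion v1.5 checkpoint and a GPU (the prompt and filename are invented for illustration):

```python
# Minimal sketch: loading the openly released Stable Diffusion weights
# with the Hugging Face diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # requires a CUDA-capable GPU

# Generate an image from a text prompt and save it.
image = pipe("an astronaut sketching a city skyline at dawn").images[0]
image.save("astronaut.png")
```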
In the public sector, little or no regulation governs the rapidly evolving landscape of generative AI. In a recent letter to the White House, US Congresswoman Anna Eshoo highlighted "grave concerns about the recent unsafe release of the Stable Diffusion model by Stability AI", including its generation of violent and sexual imagery.
Other issues surround intellectual property and copyright. The datasets behind generative AI models are generally scraped from the internet without seeking consent from living artists or work still under copyright. “If these models have been trained on the styles of living artists without licensing that work, there are copyright implications,” according to Daniela Braga, who sits on the White House Task Force for AI Policy.
The problem with copyright is also visible in the field of autocompleted code. Microsoft's GitHub Copilot is the subject of a class-action lawsuit alleging the system was built on "software piracy on an unprecedented scale." Copilot was trained on public code repositories scraped from the web, which, in many cases, are published with licenses that require crediting creators when their code is reused.
What's the road ahead?
While generative AI is a game-changer across numerous areas and tasks, there is a strong need to govern the diffusion of these models and their impact on society and the economy more carefully. The emerging debate between centralized, controlled adoption with firm ethical boundaries on one hand and faster innovation with decentralized distribution on the other will be important for the generative AI community in the coming years.
This is a task not reserved for private companies alone; it is equally important for civil society and policymakers to weigh in on issues such as the disruption of labour markets, the legitimacy of scraped data, licensing, copyright, and the potential for biased or otherwise harmful content and misinformation. Only when solid checks and balances are in place can a more thoughtful and beneficial expansion of generative AI technologies and products be achieved.