AI leaders call for pause in systems training, plus other AI stories to read this month
Image: REUTERS/Yves Herman
- This artificial intelligence round-up brings you the key AI stories from the last month.
- Top stories: AI experts urge development pause; Generative AI could automate 300 million jobs; AI to assist medics with cancer screening checks.
1. Tech leaders call for pause in AI systems training
Key figures in artificial intelligence and digital technology have published an open letter calling for a six-month pause in the development of AI systems more powerful than OpenAI's GPT-4.
The signatories to the letter, published by the Future of Life Institute, warn that "advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources".
The letter has been signed by more than 1,400 people, including Apple co-founder Steve Wozniak, Turing Award winner Professor Yoshua Bengio and Stuart Russell, Director of the Center for Intelligent Systems at the University of California, Berkeley.
The letter was also signed by Elon Musk, who co-founded OpenAI, the developer of ChatGPT. Musk's foundation also provides funding to the organization that published the letter. A number of researchers at Alphabet's DeepMind added their names to the list of signatories.
The letter accuses AI labs of rushing into the development of systems with greater intelligence than humans, without properly weighing up the potential risks and consequences for humanity.
"Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control," the letter states.
The signatories to the letter call for AI developers to work alongside governments and policy-makers to create robust regulatory authorities and governance systems for AI.
In an apparent response to the open letter, Sam Altman, CEO of OpenAI, whose GPT-4 model has led the development of AI in recent months, posted a tweet.
The tweet essentially summarises a blog post by Altman dated 24 February 2023. In the blog, Altman says his company's mission is, "to ensure that artificial general intelligence [AGI] – AI systems that are generally smarter than humans – benefits all of humanity".
Altman also acknowledged the potential risks of hyperintelligent AI systems, citing "misuse, drastic accidents and societal disruption". The OpenAI CEO went on to detail his company's approach to mitigating those risks.
"As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models. Our decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like. Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential."
2. Up to 300 million jobs could be affected by AI: Goldman Sachs
The emergence of generative AI has raised a fundamental question in the minds of millions of workers: could a machine do my job? A research paper published by investment bank Goldman Sachs has put some numbers on that possibility. The paper says that if generative AI delivers on its promised capabilities, it "could expose 300 million full-time jobs to automation".
The researchers looked at job roles in the US and Europe to establish their figures. The conclusion is that two-thirds of current jobs in those territories are at risk from some level of automation and that generative AI could take over a quarter of current work tasks.
These findings reflect research in the World Economic Forum's Future of Jobs Report, which finds that by 2025, the time spent on work tasks by humans and machines will be equal.
That's not necessarily bad news for workers. A survey by Gartner found that 70% of employees would like AI to help them with specific tasks in the workplace.
According to the Gartner survey, workers want AI to do some of the heavy lifting on data processing, digital tasks and information discovery. Problem solving and the automation of workplace safety are also on the AI wish list for workers.
If we do reach a point where AI is providing significant help to us at work, Goldman Sachs estimates it could eventually boost global GDP by 7%.
3. News in brief: AI stories from around the world
The UK Government has released details of a new regulatory approach for AI in a white paper. The government says its AI regulatory strategy is built on five principles, including safety, transparency and accountability. There is no plan to create a dedicated AI regulator; instead, existing bodies such as the Health and Safety Executive and the Equality and Human Rights Commission will oversee the development and integration of AI. Critics of the proposals told the BBC that the government's approach lacked statutory authority and warned of "significant gaps" in the proposed regulatory framework.
China's Baidu has unveiled its much-awaited artificial intelligence-powered chatbot known as Ernie Bot. Reuters reported on the launch, in which brief videos showed Ernie carrying out mathematical calculations, speaking in Chinese dialects and generating a video and image with text prompts. Baidu is seen as a leader in a race in China among tech giants and start-ups to develop a rival to ChatGPT.
An AI programme will assist medical staff at a British hospital with the task of checking breast screening scans for signs of cancer, reports The Times newspaper. The AI will work alongside human clinicians at Leeds Teaching Hospitals NHS Trust to check mammograms from almost 7,000 patients. In the trial, two human medics and an AI will each check the images from the scans. If all three agree there is no sign of cancer, the patient will be given the all-clear. If any of the three disagree, the scan will be reviewed again and the patient may be called back for further tests.
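The trial design described above is essentially a three-reader consensus rule: an all-clear requires unanimous agreement, and any dissent triggers a second review. A minimal sketch of that logic in Python is below; the names (Finding, Reading, triage_scan) are invented purely for illustration and do not reflect the trust's actual software.

```python
from dataclasses import dataclass
from enum import Enum


class Finding(Enum):
    NO_SIGN_OF_CANCER = "no_sign_of_cancer"
    POSSIBLE_CANCER = "possible_cancer"


@dataclass
class Reading:
    reader: str       # e.g. "radiologist_1", "radiologist_2", "ai_model"
    finding: Finding


def triage_scan(readings: list[Reading]) -> str:
    """Consensus rule as described in the trial: the patient is given the
    all-clear only if every reader (two medics and the AI) agrees there is
    no sign of cancer; any disagreement sends the scan for further review."""
    if all(r.finding is Finding.NO_SIGN_OF_CANCER for r in readings):
        return "all_clear"
    return "refer_for_further_review"


# Example: both medics see nothing, but the AI flags a possible sign,
# so the scan is reviewed again rather than cleared.
readings = [
    Reading("radiologist_1", Finding.NO_SIGN_OF_CANCER),
    Reading("radiologist_2", Finding.NO_SIGN_OF_CANCER),
    Reading("ai_model", Finding.POSSIBLE_CANCER),
]
print(triage_scan(readings))  # refer_for_further_review
```

The key property of this rule is that the AI can only add scans to the review pile, never remove them, which is why a disagreement leads to another look rather than an automatic recall.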
Microsoft has launched a tool to help cybersecurity professionals identify breaches and threat signals, and analyze data more effectively, using OpenAI's GPT-4. The tool, named Security Copilot, is a simple prompt box that will help security analysts with tasks like summarizing incidents, analyzing vulnerabilities and sharing information with co-workers on a pinboard. The assistant will use Microsoft's security-specific model, which the company described as "a growing set of security-specific skills" that is fed with more than 65 trillion signals every day.
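Security Copilot's internals are not public, but the general pattern it describes, pasting raw incident data into a prompt box and asking a GPT-4-class model for a summary, can be sketched with OpenAI's public Python SDK. The log lines, prompts and variable names below are invented for illustration and are not Microsoft's implementation.

```python
from openai import OpenAI  # OpenAI's public SDK, not the Security Copilot stack

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical raw incident data an analyst might paste into a prompt box.
incident_log = """
2023-03-28 02:14 UTC  failed logon x43   user=svc-backup  src=203.0.113.7
2023-03-28 02:16 UTC  successful logon   user=svc-backup  src=203.0.113.7
2023-03-28 02:17 UTC  new inbox rule: forward all mail to external address
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a security assistant. Summarize incidents for an "
                    "analyst, noting likely attack techniques and next steps."},
        {"role": "user", "content": f"Summarize this incident:\n{incident_log}"},
    ],
)

print(response.choices[0].message.content)
```

The value of a tool like this lies less in the prompt itself than in the security-specific context (those 65 trillion daily signals) that the vendor feeds into the model alongside the analyst's question.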
4. More on AI on Agenda
The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the internet and the mobile phone, says Bill Gates. The Microsoft co-founder and philanthropist believes AI will change the way people work, learn, travel, get healthcare and communicate with each other. Entire industries will reorient around it, and businesses will distinguish themselves by how well they use it. Read more from Bill Gates in the full article.
What do experts in the field of AI research think our future might look like when living and working alongside hyperintelligent technology? Artificial intelligence that surpasses our own intelligence may sound like science fiction, but it may soon be part of our everyday lives. The charts in this article show the views of 356 experts as machines get smarter by the day.
Artificial intelligence is reaching a sort of tipping point, capturing the imaginations of everyone from students to leaders at the world’s largest tech companies. Excitement is building around the possibilities that AI tools unlock, but what exactly these tools are capable of and how they work is still not widely understood. We could write about this in detail, but given how advanced tools like ChatGPT have become, it only seems right to let generative AI explain what generative AI is.