This article is published in collaboration with Quartz.
Before Skynet can become self-aware, before the robots can rise up, we need a system in place to safely pursue research into artificial intelligence. Or so argue Eric Schmidt, the chairman of Google’s parent company, and Jared Cohen, the head of its tech-minded think tank, Google Ideas.
“First, AI should benefit the many, not the few.”
Life-altering technology, Schmidt and Cohen argue, should benefit everyone, not just businesses. “As a society, we should make use of this potential and ensure that AI always aims for the common good,” they wrote.
Both Google and Facebook have recently made overtures to bring greater transparency to their AI research. Facebook recently revealed the designs for the servers it uses for AI research, while Google open-sourced the code behind its AI engine, TensorFlow. Critically, though, neither company gave away the data it uses to train, test, and strengthen its AI algorithms, which could be the determining factor in their success.
Researchers need to ask themselves, while systems are still being developed, whether the data used to train AI systems are appropriate, whether their research has side effects they need to consider, and whether adequate failsafes are built into the system. “There should be verification systems that evaluate whether an AI system is doing what it was built to do,” Schmidt and Cohen wrote.
Author: Mike Murphy is a reporter at Quartz, covering technology.
Image: The hand of humanoid robot AILA (artificial intelligence lightweight android) operates a switchboard during a demonstration by the German research centre for artificial intelligence. REUTERS/Fabrizio Bensch.