Humans have officially given their voice to machines.
A research paper published by Google this month—which has not been peer reviewed—details a text-to-speech system called Tacotron 2, which the researchers claim generates speech from text with near-human accuracy.
The system is Google’s second official generation of the technology, and it consists of two deep neural networks. The first network translates the text into a spectrogram, a visual way to represent audio frequencies over time. That spectrogram is then fed into WaveNet, a system from Alphabet’s AI research lab DeepMind, which reads the chart and generates the corresponding audio.
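To make the intermediate representation concrete: a spectrogram is built by slicing audio into short overlapping frames and measuring the strength of each frequency in every frame. The toy sketch below (standard-library Python only, not Google’s code—Tacotron 2 actually predicts mel-scale spectrograms with neural networks) computes one for a pure 440 Hz tone; the loudest frequency bin lands near 440 Hz, the note A4.

```python
import math
import cmath

def spectrogram(samples, frame=256, hop=128):
    """Naive magnitude spectrogram: a Hann-windowed DFT per overlapping frame.

    Returns a time-by-frequency grid (list of frames, each a list of
    magnitudes for the positive-frequency bins).
    """
    frames = []
    for start in range(0, len(samples) - frame + 1, hop):
        # Hann window tapers the frame edges to reduce spectral leakage
        windowed = [
            samples[start + n] * (0.5 - 0.5 * math.cos(2 * math.pi * n / (frame - 1)))
            for n in range(frame)
        ]
        # Discrete Fourier transform magnitudes, bins 0 .. frame/2 - 1
        mags = [
            abs(sum(windowed[n] * cmath.exp(-2j * math.pi * k * n / frame)
                    for n in range(frame)))
            for k in range(frame // 2)
        ]
        frames.append(mags)
    return frames

sr = 8000  # sample rate in Hz
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr // 4)]  # 0.25 s of A440
spec = spectrogram(tone)

# Bin k of a 256-point DFT corresponds to frequency k * sr / 256
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
peak_hz = peak_bin * sr / 256
```

In a real system the second stage runs this in reverse: WaveNet takes a grid like `spec` and synthesizes a waveform whose frequency content matches it, frame by frame.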
You can listen to two samples below. Keep in mind that for each sentence, one sample is generated by AI and the other is spoken by a human hired by Google. We don’t know for sure which is which. (However, if you view the page source on the Google research website and look at the filenames, one is labeled “gen,” ostensibly to mark the generated sample.)
“George Washington was the first President of the United States.”
“That girl did a video about Star Wars lipstick.”
The Google researchers also demonstrate that Tacotron 2 can handle hard-to-pronounce words and names, and can alter its enunciation based on punctuation and formatting. For instance, capitalized words are stressed, as a speaker would do to signal that a word is an important part of the sentence.
Here’s an example:
“The buses aren’t the problem, they actually provide a solution.”
“The buses aren’t the PROBLEM, they actually provide a SOLUTION.”
Unlike some core AI research the company does, this technology is immediately useful to Google. WaveNet, first announced in 2016, is now used to generate the voice in Google Assistant. Once readied for production, Tacotron 2 could be an even more powerful addition to the service.
However, the system is trained to mimic only one female voice; to speak like a male, or like a different female voice, Google would need to retrain the system.