The young woman calling the emergency number is panicking: her voice is breaking with emotion as she struggles to tell the call-centre operative her name, her address, and the details of her father’s sudden fall to the floor.
The phone operative, trained to deal with emergencies, calms the woman. As she speaks to the caller about her father – who seems to have hit his head – an artificial intelligence (AI) system “listens” in on the conversation, identifying key words and phrases.
Even within the moments it takes for an ambulance to be dispatched to the caller’s address, the AI has spotted that the man’s heart may have stopped beating.
After getting the young woman to check her father’s breathing, the call operative is able to talk her through mouth-to-mouth resuscitation and chest compressions that will improve her father’s chances of survival until the trained ambulance crew arrives.
This scenario is not from a fantasy TV show. It is a promotional video for an AI tool that has been developed in Denmark and has been used within the country to prevent deaths from out-of-hospital cardiac arrests. It is now set to go on trial in four other European cities this year as part of a partnership with the European Emergency Number Association, which operates in more than 80 countries.
Corti, the company behind the technology, has been trialling its product in Copenhagen, where it has been listening in to calls made to the official 112 emergency number. The company says its system analyzes emergency calls to learn words and characteristics associated with cardiac arrests, feeding them into a neural network that can predict more accurately than a human whether someone’s heart has stopped.
Research has found that emergency dispatchers in Copenhagen recognize cardiac arrests over the phone in about 73% of cases, but Corti’s AI could spot them 95% of the time. A cardiac arrest differs from a heart attack in that the whole heart stops beating, while a heart attack is caused by a blockage in the supply of blood to part of the heart.
Corti’s AI is intended to provide unflustered assistance to both the person making the phone call and the call-centre operative who fields it.
Frequently, the person in need of help is unable to make the emergency call themselves; instead it is made by friends, relatives or passers-by with no medical experience.
Research in the UK found that 20% of cardiac arrests occur in public places and 80% at home, while the British Heart Foundation says cardiovascular diseases cause more than a quarter of all deaths in the country. In the US, 350,000 people have cardiac arrests outside of hospitals each year: 90% of them die.
But Corti’s engineers say that even when someone else makes the call, the AI can still pick up background sounds – such as irregular breathing – and provide emergency centre responders with appropriate advice. The ability to spot a cardiac arrest quickly is vital to a victim’s chances of survival, as every minute that passes without assistance reduces their chances of being revived by 7%-10%.
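The compounding effect of that 7%-10% per-minute decline can be illustrated with a short calculation. This is a hypothetical sketch: it assumes the chance of revival shrinks multiplicatively each minute, and the starting probability is a simplification for illustration.

```python
# Illustration of how chances of revival fall with each minute that
# passes without assistance. Assumes a simple multiplicative decline
# of `per_minute_drop` per minute (an assumption for illustration).

def survival_after(minutes: int, per_minute_drop: float) -> float:
    """Fraction of the initial survival chance remaining after `minutes`."""
    return (1 - per_minute_drop) ** minutes

# With a 10% drop per minute, roughly a third of the initial chance
# remains after 10 minutes:
print(round(survival_after(10, 0.10), 2))  # → 0.35
```

Under these assumptions, shaving even one or two minutes off detection time makes a material difference, which is the case Corti’s engineers are making.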
However, a report by the website The Verge raises some areas of concern. The report says that the AI cannot explain how it makes its decisions. Additionally, The Verge notes that the full study of 161,650 calls has not been published, so the number of “false positives” the system may have identified is unknown.
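The concern about unreported false positives matters because a high detection rate (sensitivity) alone says nothing about how many of the system’s alerts are genuine. The sketch below makes the distinction concrete; every figure except the 95% sensitivity reported above is invented for illustration, since the real numbers are unpublished.

```python
# Why sensitivity alone is not enough: when the event is rare, even a
# detector with a low false-positive rate can raise many false alarms.
# All counts below are hypothetical; only the 95% sensitivity comes
# from the reported research.

def precision(true_pos: int, false_pos: int) -> float:
    """Fraction of alerts that are genuine cardiac arrests."""
    return true_pos / (true_pos + false_pos)

calls = 100_000
arrests = 1_000                    # assumed prevalence: 1% of calls
sensitivity = 0.95                 # reported detection rate
false_positive_rate = 0.02        # assumed; the unpublished figure

tp = int(arrests * sensitivity)                     # arrests detected
fp = int((calls - arrests) * false_positive_rate)   # false alarms

print(round(precision(tp, fp), 2))  # → 0.32: most alerts would be false
```

Even with these generous assumptions, only about a third of alerts would be true cardiac arrests, which is why publishing the false-positive figures matters for judging the system.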
“This is another example of the need to test and verify algorithms,” says Kay Firth-Butterfield, head of Artificial Intelligence and Machine Learning at the World Economic Forum.
“We all want to believe that AI will ‘wave its magic wand’ and help us do better and this sounds as if it is a way of getting AI to do something extremely valuable.
“But,” Firth-Butterfield added, “it still needs to meet the requirements of transparency and accountability and protection of patient privacy. As it is in the EU, it will be caught by GDPR, so it is probably not a problem.”
However, the technology raises the fraught issue of accountability, as Firth-Butterfield explains.
“Who is liable if the machine gets it wrong? The AI manufacturer, the human being advised by it, the centre using it? This is a much-debated question within AI which we need to solve urgently: when do we accept that if the AI is wrong it doesn’t matter, because it is significantly better than humans? Does it need to be 100% better than us or just a little better? At what point is using, or not using, this technology negligent?”
These questions are likely to become more pressing in the future.