It’s hard to discuss technology innovation these days without running into the words Artificial Intelligence (AI). The media is telling everyone that robots are coming for our jobs, while entrepreneur and tech visionary Elon Musk goes further and warns that AI might cause the end of humanity.
In a less dramatic sense, AI is already making a substantial impact in a variety of fields. AI-based systems are being rolled out in a number of healthcare settings to aid doctors in the diagnosis of a variety of diseases. Autonomous driving software is on the cusp of the mainstream, as automaker Tesla prepares to send a car from Los Angeles to New York by the end of 2017. Google’s driverless cars are accumulating millions of miles of driving experience. Many hedge funds are incorporating machine-learning models into their investment algorithms.
Even here, in India, AI is making its presence felt.
Grey Orange Robotics is having great success making and deploying robots to automate warehouse operations. IBM Watson for Oncology has been deployed at Manipal Hospital to assist doctors in cancer diagnosis and treatment. Artleus, a Bengaluru-based startup, is using machine learning algorithms for early detection of diabetic retinopathy, a leading cause of preventable blindness in India.
Despite these multiple applications, there is also a growing unease about the many ethical concerns with AI.
Consider a classic example: in an imminent collision, how should a driverless car decide whether to save its two passengers or five crossing pedestrians? It’s a sobering question.
Or consider the ethical dilemma of using AI-based software to decide which loan applications get approved. There is evidence that many of today’s systems discriminate against economically disadvantaged groups that have historically had limited access to credit. Similarly, judges in the U.S. are asked to consult models that predict recidivism – the tendency of an offender to re-offend – before making sentencing and parole decisions.
Recent reports have shown these systems to have a racial bias. In one example, an 18-year-old black girl with no prior record, who had attempted to steal a used bike and scooter, received a higher risk score than a 41-year-old white man arrested for shoplifting who had already served five years in prison for attempted armed robbery. These algorithms learn from past data; if policing has been racially biased, the algorithms pick up those biases as well. In short, machine learning systems trained on biased data can institutionalize those biases.
Justifiably, there is a growing debate on the ethics of AI use. How do we roll out AI-based systems that cannot reason about some of the ethical conundrums that human decision-makers need to weigh – issues such as the value of a life and ending deep-seated biases against under-privileged groups? Some even propose halting the rollout of AI before we have answered these tough questions.
I would argue that it’s not acceptable to reject today’s AI due to perceived ethical issues. Why? Ironically, I believe it might be unethical to do so.
At its core, there is a “meta ethics” issue here.
How can we advocate halting the deployment of a technology solely because of a small chance of failure, when we know that AI technologies harnessed today could save millions of lives?
Consider the appalling number of road fatalities due to human error. Every year, more than 140,000 people are killed on Indian roads. We know that AI self-driving technologies could be harnessed to reduce these deaths greatly. Buses and trucks can be fitted with AI-based systems to supplement or replace drivers. Should we consider halting the rollout of driverless cars just because we don’t yet know how driverless cars should prioritize whom to save in the case of an accident? According to Eric Horvitz, an AI researcher and head of Microsoft Research, “some of these cases might be considered edge conditions, and they need to be addressed very carefully and [they need to be] compared to the ethics of allowing cars to continue to be the source of about thirty five thousand deaths a year in the US, and over a million deaths worldwide every year.”
In healthcare, recent studies have found that over five million Indians die in hospitals annually due to avoidable human error. We know that AI decision-support tools and alerting systems could greatly reduce those deaths. Furthermore, there is a huge shortage of doctors, especially in rural parts of India. AI-based technologies can address this shortage today.
If we can improve the status quo right now, isn't it our ethical responsibility to accelerate the rollout of AI technologies?
I am not trivialising the ethical dilemmas surrounding AI. Nor am I suggesting that we throw caution to the wind. There are tough questions ahead, and we need to maintain a transparent debate if we want the public to one day trust these new technologies. Yet these concerns need to be addressed in parallel with the rollout of AI in critical markets like transportation and healthcare. While it’s important to ask whether AI-supported judgment is perfect and meets all our expectations, we also need to ask whether it improves the status quo despite its imperfections, and whether that alone is enough to start rolling it out.
It’s true there are ethics issues with today’s AI. And yet, it may be ethical to roll out AI today.