For years a science fiction staple, robot combatants are moving into the realm of reality. While drones and other unmanned weapons are already widely deployed, they are controlled remotely by humans. Lethal autonomous weapons (LAWs) are robots that can select, attack and destroy targets without human intervention.
With remarkable advances in artificial intelligence, some experts believe we are years, rather than decades, away from having this capability. So significant is this development that it has been called the third revolution in warfare, after gunpowder and nuclear arms.
So what would it mean if we could replace soldiers with autonomous weapon systems?
International humanitarian law — which governs attacks on humans in times of war — has no specific provisions for such autonomy, but may still be applicable. The 1949 Geneva Conventions on humane conduct in war require any attack to satisfy three criteria: military necessity; discrimination between combatants and non-combatants; and proportionality between the value of the military objective and the potential for collateral damage.
Can a machine ever reliably make such apparently subjective decisions? And even if it could, would we want to allow it to?
Opponents say machines should never have the power to make life and death decisions. Others argue that as the technology improves, it will eventually reach a point where machines are better at avoiding civilian casualties than human soldiers.
What do decision-makers need to know about autonomous weapons in order to decide on an international standard for how they could, and should, be deployed?
Vote in the poll above and continue the conversation on Thursday, January 21 at 09:00 EST / 15:00 CET, when a discussion on the possible, plausible and probable impacts of artificial intelligence on defence systems will be livestreamed from the World Economic Forum Annual Meeting in Davos.
The session was developed in partnership with TIME.