International Journal of Applied Philosophy

ONLINE FIRST

Published on January 24, 2017

Ryan Jenkins, Duncan Purves

A Dilemma for Moral Deliberation in AI

Many social trends are conspiring to drive the adoption of greater automation in society, and we will certainly see a greater offloading of human decision-making to robots in the future. Many of these decisions are morally salient, including decisions about how benefits and burdens are distributed. Roboticists and ethicists have begun to think carefully about the moral decision-making apparatus for machines. Their concerns often center on the plausible claim that robots will lack many of the mental capacities that are indispensable to human moral decision making, such as empathy. To the extent that robots may be robustly artificially intelligent, these concerns subside, but they give way to new worries about creating artificial agents to do our bidding if those artificial agents have moral standing. We suggest that the question of AI consciousness poses a dilemma: whether artificially intelligent agents are conscious or not, we will face serious difficulties in programming them to reliably make moral decisions.