Philosopher Ryan Jenkins (California Polytechnic State University) was recently scheduled to give a talk at the Center for Values and Social Policy on “The Algorithmic Distribution of Benefits and Burdens.” Weather intervened and he was forced to cancel the trip, but he has kindly provided a blog post on the subject for What’s Wrong?
“The Algorithmic Distribution of Benefits and Burdens”
Ryan Jenkins, California Polytechnic State University, RyJenkin@CalPoly.edu
This paper grew out of a talk I wrote (but was ultimately unable to deliver) for the Center for Values and Social Policy. I am delighted to have the opportunity to publish it here on the CVSP’s blog instead, and I welcome comments while it is still inchoate. The post has benefitted from comments from Keith Abney and conversations with Duncan Purves.
Most of us believe there is nothing wrong with outsourcing our decisions to algorithms: after all, we use spell checkers when typing and turn-by-turn directions when driving. But these decisions are not terribly morally significant. Meanwhile, morally important decisions are increasingly being outsourced to machines. Some of these, such as autonomous weapons or driverless cars, have attracted significant public attention and have worried many. How should we feel when algorithms are used to make morally significant decisions, and where should we set the threshold that determines when it is permissible for algorithms to make them?
The adoption of algorithmic decision-making offers monumental benefits to several areas of society. Some of these benefits are already being exploited. For example: judges are consulting algorithms to determine the sentences of convicted criminals; most stock trades are conducted by algorithms that attempt to maximize wealth; hospitals use algorithms to diagnose and treat disease; and the FBI uses algorithms to determine who is placed on the “No Fly” list.
Two examples of algorithmic decision-making on the horizon provide an especially stark contrast in terms of their public perception. Those two examples are driverless cars (or “autonomous vehicles,” more broadly) and autonomous weapons (or “killer robots,” more pejoratively). Polling reveals overwhelming opposition to autonomous weapons. This is perhaps not surprising, given that many people shudder to consider “life or death” decisions being in the mechanical hands of unfeeling machines. At the same time, however, polls find overwhelming support for driverless cars.
This appears to be an inconsistent pair of intuitions about whether it is permissible to use algorithms to make morally significant decisions. And, as mentioned already, these are just two of many autonomous technologies making morally important decisions. (We are not discussing robot factories, automated payroll, etc. We are discussing decision-making about which end state is to result, not machines simply executing a list of deterministic instructions.)
As the benefits of algorithmic decision-making become irresistible for corporations, governments and, perhaps, the military, we should expect to see human decisions “off-loaded” to a greater and greater extent to machines. Judging when and whether these uses are morally permissible is becoming increasingly urgent, yet a comprehensive general theory of algorithmic morality is missing. We should think carefully about which theories we could use to most capably and reliably judge algorithms that are making morally important decisions.
I will take as fixed points our intuitions about driverless cars and autonomous weapons, since these two intuitions are empirically documented (a rare occurrence when discussing the public’s reaction to emerging technologies). Further, these feelings are rather intense: among those who oppose autonomous weapons, that opposition is strong; among those who support the introduction of driverless cars, many are enthusiastic.
Notice that I do not suggest that intuitions are always or even generally justified. I am open to the possibility that there is no plausible moral theory that could accommodate these two apparently conflicting intuitions. However, if we employ reflective equilibrium, we should accept that our intuitions about particular cases—as well as about features of moral theories or meta-ethical theories—are entitled to some weight in our deliberations. Moreover, we should accord them a weight that reflects the strength of those intuitions, and these intuitions are widely, strongly held. So, I will begin by presuming that the introduction and use of driverless cars is permissible, and that the use of autonomous weapons is impermissible.
Now then, is there any moral theory that could account for this difference in our intuitions? And if so, how might that theory equip us to judge the other algorithms that are already making morally important decisions in society?
Some possible hypotheses
Hypothesis 1: It’s morally wrong to delegate any life-or-death decision to algorithms. “Life-or-death decisions” might mean decisions that could foreseeably and directly result in a person’s death, or decisions that determine who out of a possible class of victims dies (that is, when one or more of them must die).
However, neither interpretation will be able to distinguish autonomous weapons from driverless cars. Driverless cars clearly make decisions that could result in someone’s death. In cases where a crash is unavoidable, for example, driverless cars may have to determine (through whatever decision procedure) who will die and who will live.
Notice that we can dispense at the same time with the similar hypothesis that killing people via machine is inherently disrespectful. This theory has received much support in the literature, though it is difficult to see why it does not also imply that driverless cars are morally objectionable. Driverless cars, just like autonomous weapons, are programmed (or taught) in certain ways that determine which people they will kill. And programming a car to result in someone’s death could constitute a premeditated killing.
The next hypothesis is an improvement.
Hypothesis 2: It is wrong to design a machine to kill. Driverless cars are designed to kill only when doing so would minimize the total number of lives lost (i.e., when a crash is inevitable). Therefore, they are designed instead to save lives, and so are not problematic.
This hypothesis is plausible only if the intend/foresee distinction is legitimate. That distinction, however, is quite controversial among ethicists, fewer than half of whom would probably say that intentions themselves matter morally. Some consequentialists might say that intentions are important because they indicate how a person might act in the future, or in similar situations, but the bare difference between acting with a good intention and acting with a bad one is not, for them, morally significant.
One might grant that intentions matter morally for what they indicate about your character, while insisting that they do not affect the deontic status of your action, i.e., whether it is right or wrong. Virtue ethicists, however, would likely complain that this misses the point. They might say that persons rather than actions are the proper locus of moral evaluation, and that intentions are central to the evaluation of a person’s character.
More important, however, is that it is not clear this captures a real difference between driverless cars and autonomous weapons: driverless cars are also designed to kill under certain circumstances. Their programming will lead them to directly, intentionally cause the death of someone who would not otherwise die, and that this could occasionally happen is a foreseen effect of their programming. It is not clear what else it could mean to say that a machine is “designed to kill.”
On the other hand, if it is true that driverless cars are designed to save rather than kill, the same could plausibly be said of autonomous weapons. War is justified only as a project of saving lives, one in which the evils averted are proportionate to the evils inflicted. Thus, deploying autonomous weapons might lead to fewer people dying than would otherwise have died (both in terms of civilians saved and in terms of our soldiers who are not put in harm’s way). Deploying autonomous weapons could positively save lives, and this is surely part of the impetus for their development.
It is plausible for both autonomous weapons and driverless cars that their intended function and their most common use are to save many more lives than they take. To that extent, the machines in question seem morally equivalent.
In fact, if we suppose that in both cases these machines are killing only when they have a good reason to, then the way in which driverless cars kill could actually be more troubling. Suppose that autonomous weapons are used only in just wars, and choose to kill only when their targets are actually liable to harm, for example, by threatening one of our soldiers or by threatening an innocent civilian. Driverless cars, on the other hand, could routinely kill genuinely innocent bystanders. While there is no consensus on the conditions under which a person is liable to harm or whether (or when) it is permissible to kill innocent bystanders, killing an innocent bystander is clearly much more controversial than killing soldiers in war who are liable to be killed. There is no sense in which innocent bystanders are liable to be killed, yet it is possible that driverless cars will aim toward them to avoid deadlier crashes. (Killing innocent bystanders may be justified as a lesser evil, though lesser evil justifications are also contentious.) Bystanders killed by driverless cars could simply be in the wrong place at the wrong time. Their deaths are more problematic than the deaths of soldiers liable to harm in war.
Hypothesis 3: Autonomous weapons and driverless cars are both immoral to use or deploy. However, the other uses of algorithms mentioned above are defensible, if they do not kill anyone.
This hypothesis is also problematic. Stock-trading algorithms, government welfare gatekeeping, the No-Fly list, etc., can affect the lives and wellbeing of hundreds of millions of people, and can determine the distribution of trillions of dollars in wealth. They can also infringe on a person’s rights. Being mistakenly placed on the No-Fly list, for example, can infringe on a person’s constitutional freedom of movement. Driverless cars, even when they are not redirecting harms to “optimize a crash,” could put millions out of work.
It is true that deaths do not directly result from the operation of these other algorithms, but if harms and benefits aggregate, then the decisions of these other algorithms are at least as important as directly lethal decisions. If that is true, then algorithms are already making decisions that are “worth” the lives of many people. For example, if we accept the EPA’s appraisal of the value of a human life at around $7 million, then the October 2013 flash crash that wiped out $6.9 billion in value wrought damage equivalent to nearly 1,000 human lives. This hypothesis cannot justify a distinction between autonomous weapons and driverless cars, on the one hand, and other algorithmic distributions of benefits and burdens, on the other.
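To make the arithmetic explicit (taking the two figures above at face value, and setting aside the many caveats about pricing statistical lives):

$6,900,000,000 ÷ $7,000,000 per statistical life ≈ 986 statistical lives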
Risks versus benefits
Hypothesis 4: Algorithms making morally important decisions ought to be judged according to the risks and benefits they offer. Autonomous weapons carry greater risks than driverless cars, given the expected benefits. Accordingly, autonomous weapons are morally worse than driverless cars.
In the end, the simplest explanation may be the best. The benefits of both of these technologies are clear and have been lauded elsewhere. As for the risks, we can assess them by looking at three kinds of mistakes that algorithms could make: empirical, moral, and practical. In different domains, these mistakes will be more or less likely, and more or less consequential. So this account can justify the differential in trust that we currently express, and it can be extended to other uses of algorithms.
Empirical mistakes are mistakes in understanding states of affairs. For some machines and algorithms, these will be relatively uncommon. For example, a stock-trading algorithm has a comparatively easy task, and everything it needs to know can be represented to it as numerical data. Autonomous weapons and driverless cars, on the other hand, need to incorporate computer vision, which identifies objects and anticipates their behavior. For example, autonomous weapons will need to distinguish between a soldier carrying an RPG and a reporter carrying a camera with a long lens. This is a much greater challenge, not to mention the complex inferences they will have to make about causal structures, humans’ intentions, whether they are threatening or out of combat, etc.
Moral mistakes are mistakes in reliably applying a moral framework to determine what is to be done. Some algorithms are deployed in contexts and given tasks that are relatively simple. Algorithms that determine a person’s eligibility for government benefits, for example, are presumably running through a simple, deterministic checklist of conditions. Mistakes they make could be very consequential, but the task they face is less daunting. Compare this to autonomous weapons that have to make decisions involving “multiple layers of interpretation and judgment,” including judgments about how much harm a person might be liable to (itself a controversial question).
Practical mistakes are mistakes in executing a decision, once reached. Again, the difficulty of executing a decision can range widely: from choosing among a few clear options (buy, hold, or sell a stock), to choosing where to steer and whether to accelerate or brake given the conditions of the road, to choosing how to inflict harm on a person given the distance to the target, their movement, the wind speed, and so on.
This theory of risks and benefits can accommodate our apparently conflicting intuitions about driverless cars and autonomous weapons. Driverless cars present lower risks than autonomous weapons: they will be deployed in relatively simple contexts and will have to follow relatively simple rules. And they offer high rewards. Autonomous weapons, meanwhile, may present high risks and relatively low rewards. The contexts in which they are deployed could be nearly unbounded and much less predictable than the (admittedly sometimes chaotic) context of a city street or a highway. Indeed, autonomous weapons could be purposefully deployed to these complex contexts precisely because they would stand a better chance than human soldiers of navigating them. The dangers following from their empirical or moral mistakes could be disastrous, while the benefits they confer, in terms of civilian lives saved and our own soldiers’ lives saved, are comparatively slim. (By contrast, about 33,000 people die on US highways each year, and around 90% of accidents are caused by driver error; we could expect many of these accidents to be avoided by autonomous cars.)
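To give a rough sense of the scale on the benefit side, a back-of-the-envelope estimate using only the figures just cited (and assuming, perhaps optimistically, that fatal crashes follow the same pattern as crashes overall):

0.90 × 33,000 ≈ 30,000 US highway deaths per year attributable to driver error, and so in principle preventable by automation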
The most plausible reason to be worried about autonomous weapons or driverless cars is that they will make mistakes and people will die. Of course, as these technologies are refined, and as machine vision and machine learning advance, these worries become less justified all around. But this risk-benefit theory rests on empirical claims that we may have no way to investigate without simply deploying autonomous weapons and driverless cars. If I am right, we are confronted with a catch-22: we cannot know whether these technologies are permissible to deploy until we try.
Ryan Jenkins is an Assistant Professor of Philosophy and a Senior Fellow at the Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo (Cal Poly). He studies normative and applied ethics, especially military ethics and the ethics of emerging technologies. He earned his PhD in philosophy from the University of Colorado Boulder. You can learn more about his work at http://calpoly.academia.edu/RyanJenkins/.
It is likely that driverless cars will cause fewer deaths than cars driven by people. They would also solve the “drunk driver” problem, which is one of the major causes of crashes today.
As for weapons, the issue is more complex. Such technology may also encourage the US (which is more likely to use it) to fight even more wars, since there would be far fewer Americans killed relative to the number of enemy combatants killed.
I think there is a more plausible hypothesis in the vicinity of hypothesis #2. That would be: it is immoral to design a machine whose primary purpose is to kill. I think this would distinguish terminators from self-driving cars. Perhaps a more general point along these lines would be something like: it is immoral to design a machine that is by its very nature always making moral decisions.
But I can see problems with the appeal to the purposes of cars vs. terminators. Maybe a non-teleological way to capture the intuition is just to appeal to the frequency with which the machines will have to make moral decisions. Automated weapons systems will make life-or-death decisions every time they are deployed, whereas self-driving cars presumably would not. I have the intuition that self-driving cars are OK even if they have to make life-or-death decisions every once in a while. However, if I imagine a self-driving car that would have to make a life-or-death decision every single time it is driven, I no longer feel that such a vehicle would be OK.