What’s Wrong With Voyeurism?


Gay Talese and the Case of the Harmless Voyeur
by David Boonin (Department of Philosophy, University of Colorado)

The publication last year of The Voyeur’s Motel, Gay Talese’s controversial account of a Denver area motel owner who purportedly spent several decades secretly observing the intimate lives of his customers, raised a number of difficult ethical questions.  Here I want to focus on just one: does the peeping Tom who is never discovered harm his victims?

The peeping Tom profiled in Talese’s book certainly doesn’t think so.  In an excerpt that appeared in the New Yorker in advance of the book’s publication, Talese reports that Gerald Foos, the proprietor in question, repeatedly insisted that his behavior was “harmless” on the grounds that his “guests were unaware of it.”  Talese himself does not contradict the subject of his account on this point, and Foos’s assertion seems to be grounded in a widely accepted piece of conventional wisdom, one that often takes the form of the adage that “what you don’t know can’t hurt you”.  But there’s a problem with this view of harm, and thus a problem with the view that voyeurism, when done successfully, is a harmless vice.

To see the problem with the view of harm that Foos’s defense of his behavior presupposes, it’s important to be clear about just what this apparently common-sense account of harm is supposed to be claiming in the first place.  The claim can’t be that if you never become aware of my act then my act can’t harm you.  If I secretly slip a drug into your drink that causes you to suffer stomach pains, for example, people who think “what you don’t know can’t hurt you” will surely agree that my act harms you even though you aren’t aware of it.  When they say “what you don’t know can’t hurt you,” they don’t really mean that you have to be aware of my act in order for my act to harm you.  They simply mean that in order for my act to harm you, it must have at least some effect on at least some of your subsequent conscious mental states.

Similarly, if I lie about you to your boss and as a result you’re not offered a promotion that you would otherwise have been offered, people who think “what you don’t know can’t hurt you” will agree that my act harms you even if you didn’t know that you were being considered for the promotion in the first place.  So they don’t mean that my act has to cause your conscious mental states to change from one state to another in order for my act to harm you either.  Preventing your mental states from changing from one state to another has an effect on what mental states you have, after all, just as causing your mental states to change from one state to another does.

So when Foos says that his behavior was harmless because his guests “were unaware of it,” the most charitable interpretation of his statement – that is, the most plausible interpretation – is that if my act has no effect on any of your subsequent conscious mental states, then my act can’t harm you.  If your life will feel exactly the same to you whether I do the act or not, that is, then you are no worse off if I do the act than if I don’t.  If this claim about harm is correct, then it really does seem that Foos didn’t harm the people that he is said to have secretly observed.  Their lives felt exactly the same to them as they would have felt had Foos not been secretly watching them.  And, at least on the face of it, the claim that what you don’t know can’t hurt you, when understood in the way that I have interpreted it here, seems quite plausible.

But now consider the following thought experiment, which I borrow in a modified form from the philosopher Robert Nozick. Imagine a device that can simulate all of the experiences that a person might hope to have over the course of a life so perfectly that the person hooked up to it genuinely believes that the experiences are real.  Bill, for example, thinks that he is making all of his dreams come true: making great friends, marrying a wonderful man and raising a lovely family of thriving children with him, succeeding at an important and challenging job, climbing mountains, helping others, making scientific breakthroughs, and much more.  But he is actually spending his entire life floating in a darkened tank, hooked up to an experience machine that is manipulating his brain, completely isolated from the rest of the world, and simply thinking that all of these things are happening.  And now ask yourself: if you were given the opportunity to permanently connect yourself to such a machine, if you were assured that once you were connected to it you would forget that you were connected to it and would think that you were still inhabiting the real world and really doing the things that you want to be doing, and if you wanted to make the choice that would be best for you, would you choose to connect yourself to the machine?  Assume for the sake of the example that your life in the real world feels quite good to you and will continue to feel quite good to you but that your life connected to the experience machine would feel even better.

Most people who encounter this kind of example seem to be strongly inclined to reject the offer.  This seems to show that, at least upon reflection, most people are strongly inclined to believe that a person’s life can go worse for them than they think it’s going, and for reasons that have nothing to do with the content of their conscious mental states.  If all there is to how well your life is going is how well it feels to you, after all, then you would find it obvious that you would be better off having yourself plugged into the machine.  But very few people seem to have that reaction to examples like Nozick’s.

Now, assuming that you respond to Nozick’s example in the way that most people do, consider this variation on the story: Ted is asleep and lying next to him is an experience machine.  The machine is programmed so that if you put Ted into it, the rest of his life will feel exactly the same to him as it will if you don’t put him into it.  It’s just that, if you do put him into the machine, he won’t actually be doing any of the things that he wants to be doing.  He’ll think that he’s doing them, but he’ll really be floating in a darkened tank while the device manipulates his brain into thinking that he’s doing them.  Do you think Ted’s life will go better if you leave him alone than if you put him into the machine?  If you respond to the original version of the story in the way that most people do, I suspect that your answer to this question will be yes.  You would feel sorry for Ted if he spent the rest of his life connected to the machine in a way that you wouldn’t if he remained free of it.   You’d think there was something pathetic about him living such a life, you would not want a loved one to live such a life, and so on.  If this is your reaction to this version of Nozick’s story, it means that you think you would make Ted’s life go worse if you plugged him into the machine even though your act would have no effect on Ted’s subsequent conscious mental states.  And this commits you to concluding that even the most plausible interpretation of Foos’s claim about the nature of harm is false: an act can harm a person even if the act has no effect on their conscious mental states.  The fact that Foos’s acts had no effect on the conscious mental states of the people he secretly observed, then, does not mean that his acts did not harm them.  His defense of his behavior as “harmless” is philosophically unsound.

In addition, and more importantly, there seems to be a natural explanation for why your hooking Ted up to the experience machine for the rest of his life would make Ted’s life go worse for him even though it would not make his life feel worse to him: by hooking Ted up to the experience machine, you would frustrate certain desires that he has about how his life goes.  Ted wants to do certain things in his life, not just to have the experience of mistakenly believing that he is doing them, and by hooking him up to the experience machine you would prevent him from satisfying these desires about how his life goes.  An act can harm a person by frustrating their desires about how their life goes, that is, even if they are never made aware that their desire has been frustrated.  If this is the correct lesson to draw from the story of Ted and the experience machine, then we can draw an importantly stronger conclusion about Foos and his motel: not only is Foos’s defense of the claim that his acts were harmless unsuccessful, but the claim itself is false. Foos’s acts did indeed harm the people he secretly observed.  Those people had a strong desire to do what they did in private – that is why they shut the doors and closed the curtains before they took off their clothes – and because of Foos’s acts this strong desire of theirs was frustrated.

That they never knew that their desire to live certain parts of their lives in private was frustrated by Foos does not mean they weren’t harmed by what Foos did.  It simply means that they never knew that they were harmed.  But, at least if Talese’s account is to be believed, we can know that they were harmed by Foos even though they never knew this.  And, more importantly, anyone who might be tempted to emulate Foos’s behavior and to appeal to Foos’s defense as a way of rationalizing it can — and should — know that their behavior would harm innocent people, too.

2 responses to “What’s Wrong With Voyeurism?”

  1. I wonder if you might be able to complicate and enrich this (very good) analysis by making use of the distinction between harming and wronging.

    In short, harming someone makes them worse off than they otherwise would have been. By contrast (and, as I cannot put it better, I’ll just quote a 2003 article by Rahul Kumar) being wronged “requires that the wrongdoer has, without adequate excuse or justification, violated certain legitimate expectations with which the wronged party was entitled, in virtue of her value as a person, to have expected her to comply”. Wronging has to do with how someone treats another person irrespective of what happens as a result. If this is right, then one can wrong another by exposing them to a particular risk—even if that risk never materializes into an instance of harm. Wronging and harming often go together, but need not.

    If that is all correct, then “harm” is not the only metric we should use to evaluate whether an action is morally bad. Of course, the post above doesn’t put forward that thesis (it simply focuses on harm without naming other metrics of moral evaluation). But I did think the distinction was worth raising here for general discussion.

  2. Take the case of grudges. If I hold a grudge against a colleague who desires nothing more than others’ approval, is merely holding bad thoughts about him unethical and morally wrong?
