Build Your Own Theology: Do the ends justify the means?

Normally I would say a big fat NO

Because "the ends justify the means" has been used to justify so many terrible things, I just can't stomach it. The reason, I think, is because, as the reading says, "Our means are always implicated in our ends; they form a great chain of cause and effect. We can never do only one thing. Our actions have many consequences, not just the ones that are apparent." It then gives examples like war.

So think about that video at the flight museum trying to justify the dropping of the atomic bombs on Japan because it supposedly saved more lives in the long run and kept the war from dragging on forever. Was that really justified? Did it not possibly have lasting and unintended effects? I suppose you could say there have been mainly positive effects apart from the immediate loss of life, because now the world knows just how horrifying atomic warfare is and is less likely to engage in it again (but then what was the Cold War all about??), and Japan doesn't seem to be holding a grudge against us, but... I just... I really am not prepared to ever state that violence is justified simply because it prevents other violence. Because you cannot predict the way that violence will grow. We start as a culture to accept that violence is part of life, something to be praised in certain circumstances even... and that makes it easier to use it when we don't have to.


I think about my horror at the end of the Watchmen movie (I won't spoil it for you. If you HAVE watched it, you know what I mean) and how thought-provoking that whole situation was. Is killing a hundred an acceptable sacrifice to prevent violence between thousands? I can't imagine giving a "yes" to that question. I do not feel I have any right to judge whether any other being is worthy of life or death. In the words of Gandalf the Grey: "Many that live deserve death. Some that die deserve life. But can you give it to them, Frodo? Do not be too eager to deal out death and judgment. Even the very wise cannot see all ends."

I understand part of this may be a knee-jerk reaction, though. I mean... if some guy is molesting a bunch of helpless kids or something and you're the only one around who can stop him... do you shoot him, or what? Is it less violent to stand by and do nothing? Is it useful to go and throw yourself between them only to get knocked out, so that the violence encompasses you as well? This is the kind of thing my brother probably worries about, and it's why he's preoccupied with mentally preparing himself to harm others if necessary. Obviously, ideally, we want to do the least amount of harm possible. Cue Asimov's Three Laws of Robotics, the first of which is "A robot may not injure a human being, or, through inaction, allow a human being to come to harm." Then comes the question: what harm is most important to prevent? I imagine a robot in that situation would try to restrain the man, and could do it, because robots are strong. But say the robot was weak or small and couldn't do it? It would need to incapacitate the man somehow, so if it had some means of doing so, I think it would have to. Maybe knock the guy out, or shoot his leg so he couldn't follow while you took the kids to safety. In that case I would condone a certain amount of violence. But I would be upset if it seemed killing him was inevitable. Caught between killing someone and letting someone be killed... initiate roblock! Mental freeze-out! But that's not useful either.

I am thinking of a certain Star Trek: Voyager episode with the EMH and that Cardassian doctor, where the EMH could save a life if he used the Cardassian doctor's research, but that research was obtained through torturing prisoners with medical experiments. The EMH didn't want to do it, even though much of modern medical knowledge has been similarly obtained through torturing animals. At the point where the episode takes place, these cruel methods are no longer generally being used on animals or humanoids. Still, he balks.
Just stand there and let me yell at you while completely ignoring my own hypocrisy

I think there is a distinction between using the positive outcomes of negative actions in order to create more positive outcomes... and justifying or supporting more negative actions because they produce some positive outcomes. So like... it's not good to say "well, the animal's dead already, so what's the harm in buying the meat to give to a hungry neighbor," because if you have an alternative to paying for the death of an animal, you should always take it. By buying the meat, you are supporting demand for animals to be killed: you have created the positive outcome of feeding people by directly supporting a negative action (killing animals).

However, if a plane crashes in the Andes and a bunch of your fellow humans are already dead, you might be justified in eating them in order to survive. I may be grossed out, but I'm not gonna judge you for it. Your action in that case doesn't precipitate more killing of humans for cannibalistic purposes (unless you get some weird taste for human flesh, but let's not go there). It's making the most of a bad situation. If two people are stranded on a desert island, is one justified in killing and eating the other in order to survive longer? Or in purposely starving the other to death in order to survive longer? That is the way I feel about killing an animal in some imaginary life-or-death situation. I guess I pity anyone who is desperate enough to make such a choice. Maybe that's arrogant of me; I can't say, since I've never been in a situation like that. I don't preach veganism to people who have no other option than to eat meat, but then, I've never encountered anyone like that.

But as for the Cardassian doctor thing... his bad methods were no longer being pursued, but the results of his research could save an innocent life. I believe using the research would be justified if it saved a life and if the use of it did not act as a stimulus for similar cruelty in the future. This is not supporting the continuation of bad research methods. It is using the positive products of a bad situation to create something better. Say I hated plastic with a burning passion (which I kind of do). Would I go and bomb a plastic recycling plant? Say that plant was creating affordable and much-needed items for a poor population which had been ravaged by environmental poisons and disaster? Plastic is terrible because it uses up resources to produce and then never breaks down, continually poisoning and littering the planet. But if it's already been made, the best thing we can do is find positive ways to use it, and halt or diminish the demand for new plastic. The existence of that plastic recycling plant may seem to be condoning the creation of plastic (without plastic, it wouldn't exist), but it is in fact creating a more positive outcome than if it hadn't existed. 

I suppose what it comes down to is that... yes, there is violence in the world. Sometimes we cannot always act in an entirely ideal way, because the world we live in is not ideal. But we can try our best to not deviate from our ideals more than absolutely necessary. 

Some days I wish I could do something drastic to shut down all the factory farms. I wish I could force people to go vegan, to stop killing and exploiting animals. I fully condone the deception required to make undercover videos of factory farms. I would condone stealing animals from factory farms unless this proved to cause more negative effects for the animals remaining... for example, if it damaged the credibility of the animal rights movement to the point where we could not win any more people to our side. When you really think about it, theft might result in farms trying to breed or buy even larger quantities of animals, or in cutting the cost of care even further, leading to even greater neglect or abuse. Being branded eco-terrorists does very little to help animals.

I think this is part of the thing about our ends being implicated in and tied up with our means. If we are not careful, we may accomplish one positive end only to find that we have cut off future opportunities to do more good. So that is why the liberators of animals have to be careful how they go about things. It is incredibly frustrating at times. I think of the undercover investigators themselves, who must stand by and watch these innocents being killed and hurt. Some people have the audacity to call them cold-hearted, saying "if I were there, I would have grabbed that piglet and run! How could you just stand by with your video camera?" But the undercover investigator is working for a greater end. They must gather enough footage to present as evidence against the company, which could result in the company shutting down or having to change its practices due to public outcry. The end here is to prevent as much harm for as many animals as possible. It is heartbreaking that so many animals must suffer and die before the truth can be brought to light. But what else can we do?
The world is so perverted that there is no way to stop all of this at once without creating a potentially worse situation.

Some people argue that getting companies to tone down the violence a little bit will only result in the general public becoming complacent about animal rights, thinking "well, they're not getting REALLY tortured anymore, so I don't have to worry about whether exploiting them is right or wrong." They want to take things a step further and say "what use is going from cages to cage-free when this only tricks the public into thinking the chickens are actually treated well, when they're still packed into a tiny space and not allowed to really live their full lives, only until they're big enough to eat?" The opposing side counters with the argument that we must do whatever we can to improve the lot of the animals, since it is highly unlikely that everyone will go vegan no matter how terrible things are in factory farms. We have to do whatever we can, even if it takes a long time. We have to be patient.

(photos: "not cage free" vs. "cage free" hens)

And maybe those people who feel good about their choice to buy cage-free eggs will balk at being told that that's not good enough. But maybe they won't. Maybe they'll learn about how bad the situation still is, and eventually take another baby step toward a more compassionate lifestyle. I think this comes back, again, to us, to how careful we must be in approaching others. We should not criticize them for not making a big enough step toward veganism or a big enough try at helping animals. We must appreciate every effort anyone makes. Discouraging positive behavior simply because it does not do very much good is likely to do even less good than if we had said nothing at all. 

So I guess this all goes back to another quote from the reading which is by Saul Alinsky:
"The true question... is 'Does this particular end justify this particular means?'"

In other words, we have to use our brains and think about what the potential outcomes might be before making any decision which could be seen as violent, harmful, or otherwise immoral or illegal. This goes back to the situational ethics idea from last week (I haven't quite posted a journal on that one yet, since everything overlapped so well with the journal on motivation). We must take into account what the reality of the situation is, and then decide how best to proceed. Sometimes there is no hard and fast rule. For me, the closest thing that I have to a hard and fast rule is "avoid doing harm as much as possible." But the flip side of that is "prevent as much harm as possible". Which once again goes back to the lovely First Law of Robotics. I will repeat all three laws here because I really believe they are a good model for human morality as long as there is room to interpret them situationally.

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm. (Obviously we cannot prevent all harm everywhere, but we must do what we can not to cooperate with evil through our silence or inaction.)
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. (We can take this to mean we must obey the laws of the land and cooperate with others unless doing so causes harm.)
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (We must have respect and regard for ourselves. I would say this is the one we most need to stretch, because humans need to take care of themselves before they can really care for anyone else. We must find our own strength so that we are not a burden to others; then we are free to be a more positive influence in preventing harm and cooperating freely and sincerely with others, not through coercion or resentful obligation.)
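Half-jokingly: the priority ordering among the laws (the First overrides the Second, which overrides the Third) works like a lexicographic comparison. Here's a toy sketch of that idea in Python. The scores and action names are entirely my own invention, not anything from Asimov; it just shows how "lower-numbered law always wins" can be encoded:

```python
# Toy sketch: score each candidate action on three made-up axes, then
# compare the scores lexicographically, so any harm to humans (First Law)
# outweighs any disobedience (Second Law), which outweighs any risk to
# the robot itself (Third Law).

def choose_action(candidates):
    """Pick the candidate minimizing (human_harm, disobedience, self_harm),
    in that order of priority -- i.e., the First Law dominates."""
    return min(
        candidates,
        key=lambda a: (a["human_harm"], a["disobedience"], a["self_harm"]),
    )

# The weak-robot dilemma from above, with invented scores:
candidates = [
    {"name": "stand by",      "human_harm": 2, "disobedience": 0, "self_harm": 0},
    {"name": "shoot to kill", "human_harm": 3, "disobedience": 0, "self_harm": 0},
    {"name": "restrain him",  "human_harm": 0, "disobedience": 0, "self_harm": 1},
]
print(choose_action(candidates)["name"])  # "restrain him" -- least human harm wins
```

Of course, the whole problem (and the fun of Asimov's stories) is that real situations don't come with neat little numbers attached.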

Obviously Asimov wrote many stories about how the laws' strictness could obstruct the proper functioning and utility of robots. That is because robotic brains have a very difficult time creating exceptions or abstractions. These three laws are as close as anyone could come to concrete rules which leave little room for interpretation, and even so, there are questions and ambiguities which throw some robots for a loop. Asimov's robot stories warn against following a rule simply for its own sake without questioning the effect it might have. But they also warn against how strict rules often result in people trying to find loopholes rather than actually following the spirit of the law, the reason the law was written.

So there's another thought dump about ethics. Conclusion: our means should always complement our ends as much as possible. We need perspective in decision-making: to ask ourselves what our true end goals are (not merely short-term or immediate), and whether our means will hurt or help our quest toward that end. Similarly, we should question whether our end goal is even worthy. And remember the reason for rules and moralities: to affirm life and help everything live to its fullest potential of happiness... in my opinion.
