FEATURE

Cognitive Science and Animal Rights

Cognitive scientists and philosophers hold no monopoly on theories of mind, consciousness, and free will. Everyone who comes into contact with species other than their own, at one time or another, employs their own theories to guide their interactions. Standard philosophical arguments about the ethical treatment of non-human animals ultimately appeal to some kind of concession of the existence and intrinsic value of non-human minds. What I aim to reveal is an unsettling consequence of accepting this stance: there are no reasonable grounds for not extending these principles to artificial systems.

It is widely accepted in Western culture that to inflict pain upon another human being is unethical. We often justify this kind of conclusion with arguments like, "we have to consider the interests of other people", and "we wouldn't like to be treated that way, and neither would this other person". What this amounts to is objectively attributing thoughts and feelings to other people based upon our knowledge of our own subjective thoughts and feelings. I know what it would be like for my friend to feel pain from a scraped knee because I know what it is like for me to feel pain from a scraped knee, and because I believe that my friend's central nervous system supports his mind in the same way that mine does.

Can we use the same kind of reasoning to formulate an ethical stance on our treatment of non-human animals? In his essay "What Is It Like to Be a Bat?", Thomas Nagel points out that any such attempts are fundamentally flawed. He concludes that a human can never, even in principle, know what it is like to be a bat, simply because a human is not a bat. Any attempt at mimicking the perceptual effects of bat-like perceptions begs the question, because that would only amount to a human perception of bat-like experiences.

Is there a more valid way to formulate an ethical stance? Marian Stamp Dawkins points out that two main strands are discernible from the "bowl of spaghetti-like reasoning" that we use as ethical bases for our treatment of animals: we tend to value other animals which are clever or which show evidence of the ability to reason (this view has its origins in Descartes), and we value other organisms that show evidence of the ability to suffer and to feel pain (this view has its origins in Bentham). Dawkins defines suffering as states in which an organism would rather not be and from which it would probably try to escape if possible. She recognizes that animals lack the ability to alert humans to their states of suffering using language, and proposes three other sources of evidence: the general state of health of an animal (for example, squealing, struggling, convulsions), physiological signs (for example, increased heart rate, brain activity, hormone levels), and overt behavior.

As an example of behavioral evidence of an animal being in such a state of suffering, Dawkins describes a situation in which rats were encased in air-tight containers and subjected to tobacco smoke. Over time, the rats learned to plug up the smoke-vents with their own feces, thus expressing, according to Dawkins, "what they thought of what was being done to them".

Dawkins then outlines experimental procedures that could in principle be used to determine more exactly just how unpleasant a particular state is to an animal. She puts forth the definition that an animal can be said to be "suffering if it is being kept in conditions that it would work hard to get out of, if given the chance, or if it is being kept in a condition without something that it would work hard to obtain if given the chance", where "working hard" is defined as something like expending energy, or going without food.
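To make this operational criterion concrete, here is a minimal sketch in Python. It is entirely my own illustration rather than anything Dawkins proposes; the function name, the effort measures, and the threshold are hypothetical stand-ins for whatever an experimenter would actually record.

    # A hypothetical rendering of Dawkins' operational criterion for suffering.
    # The names, units, and threshold below are illustrative assumptions, not
    # part of Dawkins' published procedure.

    def would_work_hard(effort_to_escape, effort_to_obtain, threshold=1.0):
        """Return True if the animal 'works hard' (e.g. expends energy or goes
        without food) either to get out of the condition it is kept in, or to
        obtain something that condition withholds from it."""
        return effort_to_escape >= threshold or effort_to_obtain >= threshold

    # Example: an animal that expends three (arbitrary) units of effort trying
    # to escape its enclosure counts as suffering under this crude reading.
    print(would_work_hard(effort_to_escape=3.0, effort_to_obtain=0.0))  # True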

What Dawkins' definitions amount to is an ethical stance that values an organism that has the capacity to show that something matters to it. According to her account, this can be best revealed by the extent to which it is able to evaluate the world and work out how to bring about a change in the world.

In his book, "Practical Ethics", Peter Singer offers his own philosophical contemplations on a sound ethical basis for human treatment of non-human animals. His foundation is that the fundamental principle of equality, on which the equality of all human beings rests, is the principle of equal consideration of interests. He then argues that having accepted this principle as a sound moral basis for relations with other humans, we are also committed to accepting it as a sound moral basis for relations with non-human animals. In the same way that our concerns for the interests of other humans should not depend on their race or intelligence, our concerns for the interests of non-human animals should not depend on their not being human, or their level of intelligence.

Singer also appeals to the view put forth by Bentham that all that is required to entitle a being to equal consideration is the capacity for suffering. This, taken with the above course of reasoning, suggests that Singer is advocating the moral stance that we must consider the interests of animals (human and non-human) as long as those animals have the capacity to suffer, and as long as those animals have interests to be considered.

Synthesizing the views put forth by Singer and Dawkins yields the following ethical stance: based upon the principle of equal consideration of interests, we are morally bound to consider the interests of non-human animals that have the capacity to suffer, where suffering is defined, and can be experimentally revealed, in the ways described above by Dawkins.

To what extent can these same principles be extended to artificial systems? Should humans be morally bound to consider the interests of artificial systems in the same way, by the above arguments, that they are morally bound to consider the interests of non-human animals?

I propose that an artificial being could, in principle, satisfy the above criteria. Imagine a mobile floor-sweeping robot built out of some kind of heat-sensitive material, such that if exposed to heat above a certain temperature for a prolonged period of time, the robot would melt. Suppose that the robot has been programmed to avoid termination of its functions. The robot is equipped with heat detectors and has been programmed to avoid areas of its environment in which it detects excessive heat. In addition, the robot has been equipped with crumple detectors, such that when the robot's outer layer starts to crumple from any kind of impact, it will reverse its direction of movement.

Imagine that an evil undergraduate lures the floor-sweeping robot into a room using animal-cracker crumbs, and that once the robot is trapped inside the room, the undergraduate turns up the heat. The robot begins to sweep up the animal-cracker crumbs, but its heat detectors start to detect heat levels far above its pre-programmed threshold. The robot moves about the room, but its heat detectors register excessive heat everywhere inside the room. It tries to open the door, but the baneful undergraduate has locked it. The robot enters a state in which it emits a loud alarm and flashes the pre-programmed message "heat levels too high" on its display screen. The robot has been pre-programmed to regard heat regulation as its highest priority, so after some pre-determined time the robot begins to repeatedly roll to the back of the room and hurl itself forward, disregarding the warning inputs from its crumple detectors as it repeatedly smashes into the door.
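A short control-loop sketch may make the robot's priorities explicit. The thresholds, sensor readings, and action names below are my own assumptions; the scenario above specifies only the qualitative behavior.

    # A hypothetical sketch of the floor-sweeping robot's control logic.
    # All names, thresholds, and sensor values are illustrative assumptions.

    HEAT_LIMIT = 60.0      # temperature above which the robot tries to leave
    ESCAPE_DELAY = 3       # control steps of excessive heat before ramming begins

    def control_step(heat_reading, crumple_detected, overheated_steps):
        """Return (action, updated overheated-step count) for one control cycle."""
        if heat_reading > HEAT_LIMIT:
            overheated_steps += 1
            if overheated_steps > ESCAPE_DELAY:
                # Heat regulation is the highest priority: crumple warnings
                # are deliberately ignored while the robot rams the door.
                return "ram door (ignore crumple detectors)", overheated_steps
            return "sound alarm: heat levels too high", overheated_steps
        overheated_steps = 0
        if crumple_detected:
            return "reverse direction", overheated_steps   # normal impact response
        return "sweep floor", overheated_steps

    # Trace the locked-room scenario: the heat rises and the door stays shut.
    count = 0
    for heat, crumple in [(25, False), (70, False), (70, False), (70, False), (70, True)]:
        action, count = control_step(heat, crumple, count)
        print(heat, action)

The only point of the sketch is that the override in the inner branch is exactly what Dawkins' behavioral criterion would pick up: the robot accepts crumple damage in order to escape the heat.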

The robot is clearly in a state in which it would rather not be, and one that it would work hard to get out of if given the chance; after all, it has evaluated the state of the world and worked out how to bring about a change in the world. In addition, it is working hard to do so: it is conceding structural damage, and possibly resultant termination, in order to avoid certain termination due to the heat. Surely the warning alarm and message can be interpreted as some kind of physiological response, equivalent to an organic system releasing hormones in response to some external stimulus. Thus, Dawkins' criteria for 'severe enough' suffering have been met, and the robot's interests have been revealed, so according to Singer, we are now morally bound to consider those interests.

I will be the first to admit that this is an extremely unsettling conclusion. The alternatives, however, are either to abandon this line of reasoning as a defense of non-human animal interests, or to somehow remove artificial systems from its scope. What possible reasons could there be for their exclusion? One might argue that because artificial systems were created by humans, their internal states were also created by humans, and thus are not truly their own. In the same way, however, one could argue that our internal states (our knowledge, thoughts, and especially our autonomic physiological responses) are not truly our own, but are 'pre-programmed' by our genetic background and our environment.

Any conjecture that the internal states of artificial systems are not really equivalent to our states of suffering, or to those of non-human animals, because they are inorganic or artificial can be countered with Nagel's point: a human cannot know what it is like to be an [artificial system] simply because a human isn't an [artificial system]. This fact doesn't change even if a human were the creator of the states that constitute what it is like to 'be' that artificial system.

Perhaps what is needed is a more stringent way of determining which systems deserve our consideration of their interests, so that artificial systems could be indirectly excluded from consideration. It appears that we find ourselves between a rock and a hard place. If we try to proceed by re-tuning these criteria, we encounter the immovable rock. In order to make these criteria more stringent, we would have to use more internal or subjective measures than the external, objective behavioral dimensions offered by Dawkins. Unfortunately, any such subjective line of attack ultimately falls victim to Nagel's argument.

If we try to proceed in the other direction, we find ourselves in the hard place: either we must deny that the interests of non-human animals that meet these criteria deserve our consideration, or we must concede that the interests of artificial systems that meet these criteria also deserve our consideration.

- Paul Gribble, Montreal, Canada

Sources