Sentience Part 1: Animal suffering & robot lawnmowers
Most people think it's wrong to hunt whales, but it's perfectly fine to smush a mosquito on your ankle. How do we decide which animals must be protected and which can be killed for food or our own protection? (Or, in the case of a mosquito that has already bitten us, bittersweet vengeance.) As Jeremy Bentham put it, the question is not, can they reason? nor, can they talk? but, can they suffer? A precondition for suffering is sentience, the minimum level of consciousness which allows an animal to experience good or bad sensations. But how to tell which animals are conscious? If you're reading this and pondering the question, you know at least that you are conscious. The people you meet every day have basically the same mental equipment as you, and can tell you how they feel, so we can safely grant them consciousness too, unless they are sneaky philosophical zombies who only appear conscious.
After that, we're on shakier ground. Other mammals? They can't tell us how they feel, but I don't think we're misleading ourselves when we interpret some of their behaviour as feelings in a similar sense to ours. I grew up on a beef farm, and one of the most distinctive sounds is the first night after the calves have been separated from their mothers. They will bellow for hours, even after dark. They are clearly upset, and studies show increased levels of stress hormones in newly-weaned calves (Lay et al. 1998, Hickey et al. 2003).
We can't exactly prove that the calves are conscious creatures who are suffering. But the behaviour and the biochemistry line up well enough to give us pretty high confidence. How far can we extend this line of reasoning? The authors of the New York Declaration on Animal Consciousness think we can go very far indeed! The declaration itself is short, so I'll quote the whole thing:
Which animals have the capacity for conscious experience? While much uncertainty remains, some points of wide agreement have emerged.
First, there is strong scientific support for attributions of conscious experience to other mammals and to birds.
Second, the empirical evidence indicates at least a realistic possibility of conscious experience in all vertebrates (including reptiles, amphibians, and fishes) and many invertebrates (including, at minimum, cephalopod mollusks, decapod crustaceans, and insects).
Third, when there is a realistic possibility of conscious experience in an animal, it is irresponsible to ignore that possibility in decisions affecting that animal. We should consider welfare risks and use the evidence to inform our responses to these risks.
The thing is, when a layman hears someone arguing for conscious experience in insects, it just sounds like crazy talk! Do they mean conscious in the same way people are? Or even mammals? But it turns out that scientists use the word for a group of related properties, which don't exactly overlap with the non-technical definition.
The most basic thing that gets called "consciousness" is global availability, in Dehaene et al.'s words (based on Baars's idea of a Global Workspace). The idea is that, at a minimum, a conscious being needs access to sensory information from a number of sources, and can integrate this information to decide on its behaviour. An E. coli bacterium can detect an increasing concentration of food molecules and swim in the right direction. But this happens in a simple(-ish), mechanistic way, and its memory of past events lasts only a few seconds - just enough time to register whether the concentration of food is increasing or decreasing. By contrast, you could imagine a plankton-feeding fish that sees more plankton in an area of deeper water, but can also weigh its current hunger level and the risk of predator attacks (based on past experience) before deciding its next action. In this sense, the fish has a conscious perception of the food, but the bacterium doesn't.
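To make the contrast concrete, here's a minimal sketch. Both functions, their arguments and the fish's weighting rule are my own invention for illustration, not anyone's published model of chemotaxis or fish cognition - the point is only that the bacterium's rule reacts to a single comparison, while the fish-style rule has to bring several inputs together before it can act.

```python
import random

def bacterium_step(conc_now: float, conc_before: float) -> str:
    """Reactive 'run-and-tumble' rule: the only memory is the previous
    concentration reading, and nothing else is weighed up."""
    if conc_now > conc_before:
        return "keep swimming straight"  # food is increasing: run
    return f"tumble to heading {random.uniform(0, 360):.0f}"  # else: tumble

def fish_decision(plankton_density: float, hunger: float,
                  predator_risk: float) -> str:
    """Toy 'global availability': several inputs combined into one choice."""
    expected_gain = plankton_density * hunger
    return "swim to deeper water" if expected_gain > predator_risk else "stay put"
```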
For Barron & Klein, "subjective experience" is the most basic aspect of consciousness, in the sense of "what it's like to be a _". The phrase originally comes from Thomas Nagel, who pondered what it is like to be a bat. Of course, no-one actually knows, but if it even makes sense to ask the question, then the animal is probably capable of subjective experience. Whereas for a jellyfish, there probably is no "what it's like", any more than for a tree or a rock.
Subjective experience for Barron & Klein is bound up with the kind of centralized decision-making I described in the fish example above, but also with the ability of an organism to model its position and motion through the environment using different sensory inputs.
...subjective experience arises from... [brain] structures creating an integrated simulation of the state of the animal’s own mobile body within the environment.
The quote above is taken from their description of the vertebrate brain structures which support consciousness. They then go on to show how analogous structures exist in insect brains, providing a similar kind of central decision-making. They have one really neat example: a female solitary wasp that attacks cockroaches can inject a neurotoxin into the central complex (CX) of the cockroach's brain. This doesn't kill or paralyze the cockroach - it effectively lobotomizes it! It can still move, but it can't decide where to go. All the wasp has to do is tug on its antenna and the cockroach follows.
(The wasp does this to lead the much larger prey back to her burrow, because it's too big to carry off. She then lays her eggs inside the cockroach, so when the eggs hatch the larvae can eat the still-living prey from the inside, all in the safety of the burrow. Aren't parasitoids disgusting fascinating?)
As it happens, Barron & Klein don't locate vertebrate consciousness in the cortex, so the wasp isn't exactly performing the equivalent of a lobotomy (which severs connections in the cortex). For them, the crucial part is the sensory integration and modelling, which they put in the midbrain rather than the cortex. As far as I can tell, this is a minority position.
This is all fine, and very interesting. But I'm skeptical of the association of centralized modelling and decision-making with subjective experience, and of subjective experience with sentience. One of the drivers of my skepticism is actually articulated by Jonathan Birch, one of the framers of the New York Declaration:
...yes, [insects] do many cognitively impressive things, but we could also design a robot that could do those things, and we wouldn’t think the robot was thereby conscious.
I like to use robot lawnmowers as a thought experiment for consciousness. They "behave" in a superficially animal-like way - grazing, resting, moving. But no-one would think of them as conscious, even if we imagined a robot lawnmower with more advanced abilities. A simple robot lawnmower moves forward until it reaches the edge of a predefined area, then turns in a random direction and starts off again. This is actually very similar to the bacterial motion described earlier: if the bacterium detects an increasing concentration of food it keeps going straight; if not, it tumbles around at random and starts off in a new direction. In this way, both the robot and the bacterium achieve a kind of directed motion. But (using Barron & Klein's criteria) there's no integration of multiple inputs and no modelling of position in space, so no "subjective experience".
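In control terms the whole behaviour fits in a few lines - a toy sketch, not any manufacturer's firmware; the function and its arguments are mine:

```python
import random

def simple_mower_heading(inside_boundary: bool, heading: float) -> float:
    """One control step of the basic mower: keep the current heading until
    the boundary wire is crossed, then pick a new direction at random.
    No map, no memory, no integration of other sensors."""
    if inside_boundary:
        return heading
    return random.uniform(0, 360)
```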
But let's imagine a fancier robot lawnmower that can see the height of the grass near it. And let's say it has GPS, and can build up a map of the areas where the grass grows fastest. And it has speed sensors on the wheels, so by comparing the GPS velocity to the wheel speed it can tell if the wheels are slipping. And of course it knows its battery level, and where the charging station is. Now let's say it can use all this information to "decide" whether to go and cut a distant patch of grass or return to the charging station. This all sounds pretty doable with current technology. But the behaviour looks very like the integrated modelling and decision-making that Barron & Klein associate with "subjective experience". Should we wonder what it's like to be a lawnmower?
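Sketched as code, the "decision" might look something like this. Every field, threshold and weighting below is an arbitrary number I've made up for illustration - the point is that integrating all these inputs needs nothing more exotic than arithmetic:

```python
from dataclasses import dataclass

@dataclass
class MowerState:
    battery_pct: float         # remaining charge, 0-100
    wheel_slip: float          # mismatch between wheel speed and GPS speed, 0-1
    distance_to_patch: float   # metres to the fastest-growing cell on its map
    patch_grass_height: float  # grass height there, in cm

def choose_action(s: MowerState) -> str:
    """Fold several sensor streams into a single decision: the mower weighs
    its body state, its map and its energy level all at once."""
    if s.battery_pct < 20:
        return "return to charging station"
    # Crude expected value: tall grass is good, distance and slippage are bad.
    payoff = s.patch_grass_height - 0.5 * s.distance_to_patch - 30 * s.wheel_slip
    return "head for the distant patch" if payoff > 0 else "keep mowing nearby"
```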
Let's take another example. Bateson et al. suggest that "honeybees could be regarded as exhibiting emotions." In their experiment they train bees to associate one smell with a food reward and a different smell with a bitter-tasting fake reward. For half the bees, they then simulate a nest attack by shaking them. Shortly after, they present all the bees with smells which are mixes of the previous two. The shaken bees are less likely to try the reward in the presence of the ambiguous smells.
They characterize this as "a pessimistic cognitive bias when they are subjected to an anxiety-like state". Would they use the same language for a robot lawnmower showing similar behaviour? Let's say that getting stuck in a muddy patch is something the lawnmower has been programmed to avoid. It wants to maximize the amount of grass it cuts, but it trades this off against the probability of getting stuck. To put arbitrary numbers on it, we program it to go after a 10kg patch of grass only if there's a less than 1% probability of getting stuck. And it can continually update its P(stuck) by monitoring the amount of wheelspin to see how slippy the surface is.
We could replicate the bee experiment by putting our test lawnmowers on a slip-and-slide for a while to increase their P(stuck). Then we let them loose on a field. The test group will obviously be less likely to go after distant patches of grass because their P(stuck) is higher. But I wouldn't describe it as their anxiety making them more pessimistic!
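As a toy sketch (the update rule, the 0.9/0.1 weights and the starting numbers are all made up for illustration, not a claim about how any real mower or bee works), the whole "pessimism" effect is just a threshold being crossed:

```python
def update_p_stuck(p_stuck: float, wheel_slip: float) -> float:
    """Nudge the estimated probability of getting stuck towards the most
    recent wheel-slip reading - a crude running average, nothing clever."""
    return 0.9 * p_stuck + 0.1 * wheel_slip

def pursue_patch(p_stuck: float, patch_kg: float) -> bool:
    """The arbitrary rule from above: only go after a 10kg patch of grass
    if the estimated chance of getting stuck is below 1%."""
    return patch_kg >= 10 and p_stuck < 0.01

# The slip-and-slide priming drives up P(stuck) for the test group...
primed = update_p_stuck(0.005, wheel_slip=0.8)   # lots of recent wheelspin
print(pursue_patch(primed, patch_kg=10))         # False - looks "pessimistic"
print(pursue_patch(0.005, patch_kg=10))          # True  - the unshaken controls
```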
The Background to the New York Declaration has a useful summary of the recent evidence for animal consciousness. Some of the examples fail the robot lawnmower test, but some are more convincing. In the next essay, I'll look at what kind of behavioural evidence could persuade us that an animal is conscious. And I'll get into what kind of consciousness (and how much of it) really matters for the question of animal suffering.