To suffer or not to suffer?
George Dvorsky published an interview with David Pearce on io9. I have been familiar with Pearce’s ideas for a few years and I think he is an exceptionally smart thinker with whom I agree on some issues and disagree on others. Pearce believes that we should eliminate all suffering of humans and animals through advanced technology.
Animal suffering
What I agree with, in the context of philosophy of mind and cognitive science, is that it is dubious to assert that morally meaningful suffering, or suffering at all, requires precisely tuned (such as that of humans) or very advanced access-consciousness atop phenomenal-consciousness (Block's distinction). There is plenty of evidence of convergent evolution, and sufficient studies of homologous structures relevant to the consciousness and experience of animals, to be quite certain that angora rabbits screaming as their fur is ripped off, or pigs stuffed into small spaces awaiting slaughter, are indeed suffering. The science here does not support a Cartesian conception of animals as automatons. That would be ethically convenient for us, but it is exceedingly unlikely. What we can or should do about this are two separate matters, but as far as I’m concerned, animals suffer and in vitro meat can’t come soon enough.
The world as an ethically ideal theme park
Engineering ecosystems would be a very tricky endeavor, to say the least. It is an endeavor that Pearce would like to make a reality. Surely, if one agrees that animals do suffer and that their suffering is bad, one’s concern would not stop at sparing factory-farmed animals the suffering we inflict on them. One would also prefer that a cheetah not rip open a gazelle in the wild. The practical challenge of doing anything about suffering in the wild is immense: ecosystems are complex systems, and attempting to control them is akin to attempting to control the weather. Even now, our interventions result in trophic cascades and other such disruptions. Turning the world into an eco-engineered zoo would be infinitely harder. This point is obvious and needn’t be belabored. Suffice it to say that we are quite some time away from even considering such a massive operation.
Engineering for bliss
With respect to genetic engineering, the challenge is that we are complex adaptive dissipative systems. There are countless dependencies across levels of organization, and no handy layers of abstraction like the ones we build into the systems we engineer. It is fiendishly difficult to engineer for a higher-level emergent effect like bliss, due to issues like pleiotropy: one gene can affect several traits. For example, oxytocin does not, as is often reported, bestow social openness on its carrier without repercussions. There is some evidence that it merely increases in-group bias, meaning it makes one more cuddly towards people of one’s own tribe, but potentially harsher towards those outside it. There is no single gene for a desire to cuddle; it is an emergent behavior. So to say that tuning genes for emergent top-level effects is problematic would be an understatement. There is an interplay between agent and environment that is very hard to keep balanced once you start tinkering with it.
Look at the evidence for an increase in cranial capacity due to cesareans. Messing with evolution can yield all sorts of unintended consequences, some of which may be nasty surprises. That is not to say we should succumb to indifference in these matters. There is a lot to improve. Clearly, we are not that intelligently designed, and for this reason we have been intervening in nature for many centuries. I think we should continue to, so long as it’s done wisely. That it would be “unnatural” to intervene has never stopped us, and arguments that reduce to “it is good because it is natural and bad because it is unnatural” are appeals to nature and, as such, untenable. Genetic engineering is certainly not inherently bad; it is just extremely challenging.

Engineering for bliss as an imperative also relies on committing to ethical positions that are too close to open questions for my taste. Most people today would consider engineering for bliss the epitome of questionable supererogation (going beyond moral duty). Can we be sure that a being is helped and improved when rid of its nociception, for example, even setting aside for a moment the function of pain (noxious stimulus signaling)? What sort of axis is there, really, in terms of valence? Aren’t feelings and emotions more like (strange) attractors of a larger system, whose hedonic adaptation is disrupted when you entirely remove what you deem negative? As mentioned earlier, I doubt that abstractions of mind such as pleasure and suffering map neatly onto cognitive functions, in the sense that we can count on clean emergent behavior and effects when tuning for them.

The idea that suffering is something we should get rid of is also not without problems. There are fruits that suffering yields which may not easily be substituted with ‘non-suffering’ processes. And this goes beyond the oft-made link between mental illness and creativity. For example, there are several competing hypotheses regarding the function of depression, with some evidence that depression is useful as an evolutionary adaptation for working through complex issues. This is the basis of the analytical rumination hypothesis. Where would we be without the benefits of such cognitive functions, and how would we compensate for them?
Open questions
The (meta-)ethical and axiological questions are bloody hard. If you could push a button to remove all pain and suffering, would it be a good thing to do? Would you have the right? Would it be much different from a negative utilitarian pressing a button to destroy the world (thereby ending all suffering)? If we could, as in “Dawn of the Planet of the Apes”, increase the intelligence of any species so that it possessed cognitive capacities equal to ours, would that be a good thing? And if we did such a thing, would we immediately have to relinquish our self-appointed authority over such beings, by virtue of acknowledging the cognitive capacities they’d share with us? If we are to be ethically consistent, the answer is ‘yes’. They’d be autonomous and intellectually self-sufficient beings to the extent that we are. And we wouldn’t like it if others decided to ‘fix’ us with the press of a button without our informed consent, would we? What constitutes involuntary suffering and coercion is not remotely clear-cut.
Then again, those who think we should not meddle with the cognitive states of animals should be reminded that we’re already doing it on a massive scale, albeit indirectly. Consider the millions upon millions of animals we put through suffering in factory farms. They are all effectively condemned to negative cognitive states by virtue of our treatment of them.
Discussing the future
Pearce does touch upon some of the aforementioned issues. Nevertheless, imperatives in these areas are subject to some of the biggest open questions in philosophy of mind, cognitive science, and ethics. Pearce has been, and still is, ahead of his time. Judging by the comments on the interview, most people think it’s all too far-fetched and crazy. So crazy, in fact, that they deem his ideas not worth thinking about. With that I absolutely disagree. Even if his quest in its entirety is stunningly ambitious for our times, and even if one doesn’t agree with all his conclusions, it touches upon many interesting issues relevant to our present as well as our near future. So while discussing civilizational policies beyond Kardashev type 0 is difficult for most today, we have to realize that there is plenty to discuss, and it’s vital that we discuss it sooner rather than later.