Dualism’s version of the pretty hard problem


Which physical states or processes are correlated with consciousness? Dualists usually think that a physical brain (or some kind of physical system) is required for consciousness in the actual world. They just think that brain states or processes correlate with consciousness, which is a separate property or phenomenon linked to the physical via “bridge” laws of nature which specify the relationship between the physical and the phenomenal.

Panpsychism’s version of the pretty hard problem

Which combinations of conscious building blocks yield conscious systems? One might have thought that panpsychism dictates a stance on the pretty hard problem: doesn’t panpsychism entail that every system – humans, trees, nematodes, GPT-3 – is conscious? Not necessarily – in fact, most panpsychists think that the fundamental building blocks of reality are conscious, but not that every aggregate of these building blocks is itself conscious. On this view, tables are composed of conscious building blocks, but tables qua tables are not conscious. In contrast, you are composed of conscious building blocks that combine into your human consciousness. So panpsychists have their own pretty hard problem – they want to know in which physical systems this “combination” into new aggregates of consciousness occurs.

It’s possible that we could come to an answer to the pretty hard problem, and philosophers could still dispute different answers to the hard problem.

Scientific theories of consciousness

Scientific theories of consciousness are best thought of as (in the first instance) answers to the pretty hard problem, not to the hard problem. It’s in the scientific literature on consciousness that one finds theories of consciousness like: global workspace theory, higher-order thought theories, biological theories, predictive processing and Bayesian theories of consciousness, the attention schema theory, midbrain-based theories, Integrated Information Theory, and so on. This is where one will find experiments trying to tease out which brain regions are active when humans consciously versus unconsciously detect a dot flashed on a screen, or whether children born lacking most of their cerebral cortex are conscious, or whether bees are susceptible to some of the same (possibly) consciousness-affecting visual effects as humans.

Ways that the hard problem and the pretty hard problem do intersect

With all that said, there are ways that the hard problem and the pretty hard problem intersect. Here are a few:

  1. Does consciousness have sharp boundaries, or vague boundaries? If your stance on the hard problem is that consciousness is fundamentally a physical phenomenon, you are more likely to think that consciousness admits of vagueness, since many of the proposed physical or computational bases of consciousness (“global information broadcast”, “bodily self-modeling”, “internal self-monitoring”) will also admit of vagueness.
  2. Does consciousness require complex cognition? In answering the hard problem, panpsychists (unlike dualists and physicalists) have already taken a decisive stance on this question – they think an electron can have some sort of simple experience, even though they do not think that an electron can entertain complex thoughts. In this sense, panpsychists are more open to solutions to the pretty hard problem that involve very ‘simple’ forms of consciousness – though not exactly for the reason people often think that they are open to it (see above, on combination).
  3. Does phenomenal consciousness exist in the first place? This is obviously one very key way that they intersect! One reaction to the difficulties of the hard problem is to not answer it but reject the question: to deny that phenomenal consciousness exists in the first place. This might seem like a surprising view, since arguably phenomenal consciousness is the very thing that we are the most familiar with and sure of. But even this can be denied. And if your response to the hard problem is to reject it by denying that consciousness exists, then the pretty hard problem as formulated will also not arise.

This leads us to one last point.

Illusionists about consciousness still have some pretty hard problems

This post has claimed that there is a pretty hard problem of knowing which AIs or animals are conscious, a question which admits of answers, however hard they may be to arrive at. But in my experience, some EAs (often those of a LessWrong rationalist bent) are suspicious of even admitting that there is a pretty hard problem – they hesitate to acknowledge questions about which systems are conscious. I suspect that this suspicion arises because they think that to concede that there is a pretty hard problem will automatically commit them to a dubious position on the hard problem, and/or set them up for some sleight of hand involving sophistical thought experiments.

My opinion is that there is a real question here about phenomenal consciousness, and that it doesn’t involve any of those commitments. There is a thin and “innocent” notion of consciousness – the bare fact that we have subjective experience. You do not have to think that consciousness is especially special or spooky or strange, or think that thought experiments about consciousness are a useful methodology, in order to wonder whether chickens are conscious and what their experiences are like.

Still, one can deny the existence of phenomenal consciousness in even this thin sense, and indeed some smart and thoughtful people do. This position is known as “strong illusionism” (henceforth just “illusionism”). Illusionism does dissolve the hard problem, and technically speaking the pretty hard problem as well. But effective altruists who are illusionists should recognize that illusionism still leaves unanswered very important and difficult questions that are closely related to the pretty hard problem.

If you don’t like to talk of “consciousness”, you can still acknowledge that “pain” exists – even if it is not associated with consciousness as we normally think it is. The same is true of any of the mental states which, according to the illusionist, we falsely take to be conscious. Presumably illusionists still care about many of these states and think that they are good or bad: suffering, pain, nausea, discomfort, joy, satisfaction. The illusionist still has a pretty hard problem for any of these states – which physical systems can have these states? How widely distributed are they in the animal kingdom? Could GPT-3 experience discomfort? Could GPT-11? When and why?

These are still perfectly meaningful questions, and not ones that we are close to having good answers to at this point. Illusionism might arguably set us on a better track to address them than realism about consciousness, but we are still far from knowing the answer.
