The pretty hard problem of consciousness

Introduction

A lot of effective altruists are interested in consciousness, because it is an inherently interesting topic, and because it matters for cause prioritization and thinking about the long-term future. But the curious reader is confronted with an enormous and intricate academic literature, both philosophical and scientific.

A lot of this intricacy is unavoidable and desirable, because consciousness is a genuinely perplexing and challenging topic. But it can be overwhelming. The purpose of this post is to give readers one conceptual tool for navigating this large literature and for thinking about consciousness – the distinction between the “hard problem” of consciousness, and the “pretty hard problem” of consciousness. This distinction has been helpful for me personally, as I do research for the Future of Humanity Institute on consciousness in artificial intelligence.

The hard problem: Why are physical states associated with conscious experience? For example, why are certain neural firings associated with (for example) the conscious experience of red, rather than with some other experience, or no experience at all?

The pretty hard problem: Which physical states are associated with conscious experience?

This post explains what is meant by “consciousness” in these contexts, and then explores how the two problems can (mostly) be separated, and a few ways they intersect. I also argue that effective altruists who are illusionists about consciousness – that is, who deny that consciousness exists in the first place – do avoid the hard problem but still face difficult questions that are closely related to the pretty hard problem.

Consciousness: what is at issue?

This section clarifies what is meant by “consciousness” or “conscious experience” in these questions. If you are already familiar with the term “phenomenal consciousness” and how it’s used, this section can be skipped.

It’s natural for people to wonder which complex systems have subjective experiences (like pain, or the experience of seeing red) and which do not. Consider the contrast between a human and a laptop: in both cases, there is complex information processing, but in only one case is this associated with consciousness. Michael Graziano (2017) describes this contrast:

You can connect a computer to a camera and program it to process visual information—color, shape, size, and so on. The human brain does the same, but in addition, we report a subjective experience of those visual properties. This subjective experience is not always present. A great deal of visual information enters the eyes, is processed by the brain and even influences our behavior through priming effects, without ever arriving in awareness. Flash something green in the corner of vision and ask people to name the first color that comes to mind, and they may be more likely to say “green” without even knowing why. But some proportion of the time we also claim, “I have a subjective visual experience. I see that thing with my conscious mind. Seeing feels like something.”

In both cases, there is complicated information processing that brings about some behavior or implements some function. But only in the human case is the information processing sometimes associated with subjective experience. In one popular locution, there is something that it is like to be a human seeing green. Here is David Chalmers (1995) on other states that there is “something it is like” to be in:

When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is [conscious] experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them.

Because “consciousness” can refer to many things in different contexts (self-awareness, free will, higher cognition), philosophers often use the term phenomenal consciousness to refer to this “what it is like”, subjective-experience phenomenon. And “the phenomenal” is a locution used to refer more generally to states and properties involving phenomenal consciousness. In what follows, by “consciousness” I will mean “phenomenal consciousness” and by “experience” I will mean “phenomenally conscious experience”.

In this terminology, some of your brain states are conscious, like the ones described above. But not all of them: there’s not something it’s like for you when your brain controls your organs or regulates hormones.

So states can be referred to as conscious or not conscious; so can creatures or systems as a whole. Systems that are conscious include humans and (almost certainly) pigs; systems that are not include motor engines and clocks; and for many systems, like fish and bees and future advanced AI systems, we are not entirely sure.

So a natural way of thinking of the problem of AI consciousness, or animal consciousness, or digital upload consciousness, is that we are unsure which of these systems have consciousness, and what their experiences (if any) are like.

The hard problem and the pretty hard problem

The hard problem of consciousness

One of many reasons it can be hard to dive into consciousness questions is that there is a large, intricate, and centuries-old philosophical debate about the fundamental nature of the relationship between the physical world and consciousness. The “hard problem of consciousness” is the question of why physical things and physical processes (“the physical”) are sometimes associated with consciousness. Is consciousness ultimately reducible to the physical? Can physical explanations explain consciousness, or do they leave something out?

Many people have the intuitive sense, and many philosophers have argued, that consciousness does not fit easily within the physical realm and is not amenable to purely physical explanation. Some philosophers maintain that purely physical explanations, no matter how detailed and sophisticated, are simply not the kind of thing that could explain why some brain states are accompanied by these qualitative experiences (e.g. the redness of red), rather than some other experiences (e.g. the greenness of green), or by no experience at all. It is in this philosophical literature that people deploy thought experiments such as Mary the super-scientist, color spectrum inversion, or p-zombies, which are meant to draw our attention to the alleged gap between physical explanations and consciousness. I won’t rehash the arguments or these thought experiments here, but note that this literature concerns the question of the metaphysical relationship between the physical and the phenomenal. It is as responses to the hard problem that we get these oft-debated metaphysical views:

Physicalism: fundamentally there are only physical properties / things / processes. Consciousness just is identical to, or grounded in, the physical.

Dualism: fundamentally there are both physical and phenomenal properties / things / processes; consciousness is distinct from the physical.

Panpsychism: the intrinsic nature of matter is phenomenal – so the basic building blocks of reality are, in some sense, both physical and phenomenal.

The pretty hard problem of consciousness

Fortunately for the interested reader, these positions and the millennia-old, intricate disputes between them can be (largely) set aside when we ask, “Which physical states are associated with consciousness, and which are not?” This question is what Scott Aaronson (2014) has dubbed the “Pretty Hard Problem”; David Chalmers notes its distinctness from the hard problem:

An answer to the Pretty Hard Problem so construed will be a universal psychophysical principle, one which assigns a state of consciousness (possibly a null state) to any physical state…it’s still easier than the original hard problem at least in the sense that it needn’t tell us why consciousness exists in the first place, and it can be neutral on some of the philosophical issues that divide solutions to the hard problem.

Most philosophers and scientists should accept that the Pretty Hard Problem is at least a meaningful problem with better and worse answers. As long as one accepts that consciousness is real, one should accept that there are facts (no matter how hard to discover) about which systems have which sort of consciousness….

One way to see how the hard problem and the pretty hard problem are indeed distinct questions is to note that the pretty hard problem arises as a further question for all of the metaphysical positions we saw above.

Physicalism’s version of the pretty hard problem:

Which physical states or processes are identical to, or ground, consciousness?
