Why Is Consciousness Perplexing?
views expressed dated: 2023-09
If Shakespeare had Hamlet say "I think therefore I am" would that prove to us that Hamlet exists? Should it prove that to Hamlet, and if so what is such a proof worth? Could not any proof be written into a work of fiction and be presented by one of the characters, perhaps one named "Descartes"?
-- Robert Nozick, Fiction
How does the water of the brain turn into the wine of consciousness?
-- David Chalmers
The self has the power to come into being at the moment it has the power to reflect itself. This should not be taken as an antireductionist position. It just implies that a reductionist explanation of a mind, in order to be comprehensible, must bring in "soft" concepts such as levels, mappings, and meanings.
-- Douglas Hofstadter, Gödel, Escher, Bach
Intro
Perhaps the most controversial and perplexing problem in the philosophy of mind is how to account for phenomenal consciousness. Phenomenal consciousness is typically defined as the subjective character of experience: the taste of an apple or the sensation of stubbing one's toe. The problem can be captured in a simple question: "How does a bunch of brain stuff give rise to my phenomenal consciousness?" or simply "How does my phenomenal consciousness arise at all?". As simple as the question is to formulate, finding an answer without tying oneself in a conceptual knot proves difficult. This problem is known as the "hard problem" of consciousness. For many, even the seemingly simple definition of 'phenomenal consciousness' is tenuous. The definition I will follow here, and I believe the correct one at present, is Eric Schwitzgebel's in "Phenomenal Consciousness, Defined and Defended as Innocently as I Can Manage".
Phenomenal consciousness can be conceptualized innocently enough that its existence should be accepted even by philosophers who wish to avoid dubious epistemic and metaphysical commitments such as dualism, infallibilism, privacy, inexplicability, or intrinsic simplicity. Definition by example allows us this innocence. Positive examples include sensory experiences, imagery experiences, vivid emotions, and dreams. Negative examples include growth hormone release, dispositional knowledge, standing intentions, and sensory reactivity to masked visual displays.
I believe this definition is much more theoretically innocent than "What it is like" or explanation via "having qualia".
This is our mystery: "This is my consciousness" we think as we presumably ostend inwardly, "and it seems some way. How can anything physical in my brain make it so?" I believe that whatever philosophical or scientific views we have, confronting this problem is not optional in a complete view of consciousness, if for no other reason than that it has a powerful capacity to baffle people who think about it.
In this note, I hope to propose a better question: "Why does the nature of conscious experience seem perplexing to us at all?" I will also propose an answer to this question which explains why we can avoid a true "hard problem" while explaining (but not brushing aside) our perplexity. I will argue that consciousness may always seem perplexing to us and, finally, that this lack of an intuitive nail in the coffin is not itself enough for us to reject purely physical explanations of conscious experience.
The Metaphysical Nature of Consciousness
Is consciousness entirely physical? If not, what is it then? The materialist will say something to the effect of: your experience “just is” this or that brain process. The idealist will say there is nothing but consciousness outright. The dualist will say that your experience is some nonmaterial thing or nonmaterial property of a material thing. The dualist then faces the difficult problem of explaining how this nonmaterial thing or property interacts or plays an explanatory role, the idealist faces the challenge of explaining exactly how "everything can be consciousness", and the materialist faces the challenge of explaining how "This experience" can really be that brain activity.
I believe we should resist the many temptations against a materialist persuasion. My reason for this is simple: I do not think that our epistemic access to our own minds gives us any incorrigible access to their metaphysical nature. Merely having an experience does not grant us any knowledge about its metaphysical status. When Descartes arrived at the conclusion "I think, therefore I am", he only concluded that he "was" as a "thinking thing", the nature of which was wrapped up later in the Meditations. Like a Descartes whose Meditations stopped there, we should remain framework-agnostic about the metaphysical nature of experience.
Many anti-materialist rejections of some brand of materialist identity theory rest on the intuition that one's conscious experience just doesn't seem like it is a brain process or a physical thing. This is a critical conceptual mistake. How would your experience seem if it "were" (in the identity sense) something else? Unless we can answer that question with more or less confidence depending on the experience-substrate, this is simply not a valid epistemic move. How can we claim that an experience seeming a certain way is evidence for its metaphysical status being some way when we cannot say how that seeming might differ in any way if its metaphysical status were different?
So why should we lean towards materialism then? Because so many of our commitments to the materialist view are in good order, and so few commitments with any real cash value attach to any other metaphysical view. Given the data point of our subjective experience, barring convincing evidence or arguments to the contrary, our default view should be materialist.
Many philosophers of both the anti-materialist and materialist persuasions are convinced that there is something special about consciousness with regard to seeming, though. These philosophers argue that we are licensed to regard our own experiences with special epistemic status. Thomas Nagel, for example, argues that experience cannot be reduced in the same way we reduce water to H2O:
Experience itself, however, does not seem to fit the pattern. The idea of moving from appearance to reality seems to make no sense here. What is the analogue in this case to pursuing a more objective understanding of the same phenomena by abandoning the initial subjective viewpoint toward them in favour of another that is more objective but concerns the same thing?
Similarly, John Searle claims that:
Where consciousness is concerned, appearance is reality.
(Searle believes that consciousness is physical, but that it is "ontologically subjective".)
Nagel and Searle want to claim that in some sense the appearance is consciousness and that is our ultimate reality. They argue that we cannot come at it from any other angle. While I disagree with that epistemic claim, I agree that there is certainly more to this relationship than the "H2O to water" relationship. Experience itself is something real and in the world, but I argue that this has NO ontological implications about what the constituents of that experience could or could not be.
Experience not seeming like it could be this way or that is not sufficient epistemic cash to buy the conclusion that it is not physical or not objective. In fact, it is irrelevant to its metaphysical nature.
What Is Consciousness: Complex Information Processing That Is Most Probably in a Physical Medium
Ultimately, whatever makes up experience, in my view, it must be completely representable in information. Part of any definition of information is that it is necessarily a difference in something. If the world were composed of some entirely homogeneous-in-every-possible-way substance, information would be impossible, along with a great many other things.
A coin, when flipped, gives us one bit of information: heads or tails, because it has two possible states. Imagine we had a unary device (a one-sided coin, perhaps) which only returned the same exact signal no matter the conditions. That device gives no information. There are two strong reasons to believe experience is entirely information-bound. First, something not expressible in information would have to be made of a sort of homogeneous-in-every-possible-way substance. That seems totally out of sync with the rest of reality and poses an even wilder interaction problem than vanilla dualism does. Second, almost everything we know in our universe can be expressed in information, so our prior credence in the belief that experience follows the same pattern ought to be very high.
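The coin-flip point can be made precise with Shannon's measure of information. Here is a minimal sketch (my own illustration, using only the standard entropy formula, not anything from this essay's sources): a fair two-state coin carries one bit, while a one-state device carries none.

```python
from math import log2

def entropy_bits(probabilities):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# A fair coin has two equally likely states: exactly one bit of information.
print(entropy_bits([0.5, 0.5]))  # 1.0

# A "one-sided coin" has a single state with probability 1: no possible
# differences in outcome, and therefore zero bits of information.
print(entropy_bits([1.0]) == 0.0)  # True
```

Information just is discriminable difference; where there is only one possible state, the entropy formula bottoms out at zero.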
Furthermore, we should believe that the information-substrate for experience is physical. Our best and most well-integrated understandings of the world are physical. Therefore, without strong reasons to add more metaphysical elements to the mix, why isn't the most likely explanation that consciousness is entirely physical, just not yet understood? Even the most hesitant materialist, in my view, ought to believe that we should at least exhaust the simpler hypotheses first. This is a likelihood argument in simplicity-form.
A Better Question
Assuming our only data point to be our subjective experience (which I think is false, but it is generous to the dualist and idealist) we should remain metaphysically agnostic about the mind. Starting from that position, and assuming that our physical ideas about the world are not wildly wrong, the likelihood that the mind is physical is higher than the alternatives.
If we accept that phenomenal consciousness, innocently defined, is fully expressible in information and that it is physical, this does not remove the mystery. It moves it. I therefore propose a new question to better target the location of the mystery we should be focused on. We are acquainted with plenty of things that are entirely physical and expressible in information. So why is this one perplexing?
New Question:
"Why does the nature of conscious experience seem perplexing to us at all?"

A Note On "Illusionism"
A similar reframing of the "hard problem" is made by Illusionists such as Daniel Dennett, Francois Kammerer, and Keith Frankish. These "Illusionists" claim that we should not ask "How does phenomenal consciousness arise?" but "Why does it seem like we have phenomenal consciousness?" In Frankish's own words:
Illusionism replaces the hard problem with the illusion problem — the problem of explaining how the illusion of phenomenality arises and why it is so powerful. This problem is not easy but not impossibly hard either. The method is to form hypotheses about the underlying cognitive mechanisms and their bases in neurophysiology and neuroanatomy, drawing on evidence from across the cognitive sciences.
Illusionists of this brand deny that phenomenal properties or phenomenal consciousness exist. My proposed alternative question is different than theirs both methodologically and in its claims.
How My Position Is Different:
If illusionists claim that phenomenal properties do not exist and what they mean is that no non-physical properties of experience exist, then perhaps I agree. Even if it is only terminological, they go a step further to say that those properties only "seem to exist" when they actually are something else. I believe this is a step too far and a mistake, if only a semantic one. Methodologically, this seems flawed because even the title "illusionism" seems to be rage-bait for the other side of the aisle. Conceptually, it also seems flawed because we would not try to explain the thermodynamic reduction of heat to someone who had never heard of it by saying "Heat does not exist". Rather, we might say "Heat seems like something more, but really it is the kinetic energy associated with the motion of atoms and molecules." To whatever degree I actually agree with illusionists, I have come to reject the title.
How My Question Is Different:
The illusionist's question, "Why does it seem like we have phenomenal consciousness?", implies a more ontologically closed mind and I believe adds needless confusion to the debate. My question leaves more doors open to the types of answers we might be looking for and is more open to a paradigm shift.
What Kind of Mystery Is It?
We have a new question to chase down the mystery of conscious experience and with it the hope of more clarity, but I would like to make a quick pit-stop to ask what kind of mystery the hard problem of consciousness is in the first place. A mystery, most often, takes the form of a question or a statement of fact with an implied question. A mystery can also be a conjunction of observations and hypotheses that don't make sense together.
Here is a mystery:
Joe FaceTimes me from Florida,
I see palm trees and beaches in the background.
Minutes later, Joe knocks on my door in Colorado with a pizza and wants to hang out.
We can state the mystery as a question: "How did Joe get to Colorado in 5 minutes?" or as an observation with some background assumptions which imply some missing understanding: "Joe was in FL 5 minutes ago. Now he is in CO. There is no known way of traveling so quickly." For simplicity, I will assume here that all mysteries can be stated in question form equally.
Let's distinguish between three rough types of mystery-questions. Mystery-questions are meaningful questions to which the answer is either:
- Unknowable in principle
- Such a mystery might be "Which video game would Hamlet like best?" or "What do noumena look like?". These questions are largely worthless in the pursuit of knowledge due to their unanswerable nature.
- Unknowable in practice
- Such mysteries might be "What would happen if we piled all of the footballs on earth up in Iowa?", "Exactly what position/velocity would these water molecules have to be in for them to collectively cause a phase-change to gas?", or, on a physical view of consciousness, "How could we simulate your entire life of conscious experiences?".
- Knowable, but intuitively unsatisfactory
- Such mysteries are questions like "Why does the wave function collapse resulting in diffraction patterns even when one electron was observed to go through the slit?" or "How can an infinite number of guests stay at an infinite hotel, Hilbert's Hotel, that is already full?" (they can).
To these knowable-but-unintuitive mysteries, we can give an explanation for the facts, but the conclusion seems to lack some intuitive thrust. I can read over and over again what happens in the double slit experiment and why. I can even tell you what would happen in a similar experiment, but it somehow rings hollow. Similarly, I can deduce from some undeniable steps that Hilbert's Hotel can in fact accommodate infinitely many more guests, but the deduction lacks that click that makes it a closed case to our intuitions.
The "hard problem of consciousness" is a knowable-but-unintuitive mystery, and I will argue that it may always be one. But our new question, "Why does the nature of conscious experience seem perplexing to us at all?", need not be a mystery at all. I think we can answer this question and, in doing so, tip our hats to our own confusion, owing nothing more to it.
Are answers that leave knowable-but-unintuitive mysteries epistemically inferior? I can see no objective reason to think so. Provided that we are rationally convinced of all the steps to the conclusion, what is the epistemic worth of that missing intuition-nail in the coffin? Do we disbelieve Hilbert because his Hotel "seems like bullshit"? Certainly not. He can show us, with steps we cannot reject, that his Hotel can keep filling up. Do we reject the claims of physicists when they tell us the diffraction pattern really does form on the other side of the double slit? Likewise, no. They can hold us by the hand to the precipice of understanding till the very last step. There is a difference between saying some challenge is yet unmet and acknowledging that the question is a knowable-but-unintuitive mystery to which even the best answer will leave us somewhat unsatisfied.
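The Hilbert's Hotel step, in fact, can be checked mechanically even when it refuses to feel right. Here is a toy sketch of the room-shifting argument (my own illustration, not from any cited source), verifying its two load-bearing properties on a finite window of rooms:

```python
# Hilbert's Hotel: every room is occupied, yet a new guest can be housed.
# Move the guest in room n to room n + 1. This works because the map
# n -> n + 1 is one-to-one on the naturals (nobody collides) and it
# leaves room 1 empty for the newcomer.
def reassign(room):
    return room + 1

occupied = range(1, 10001)  # a finite window of the infinite hotel, all full
new_rooms = [reassign(r) for r in occupied]

# One-to-one: no two guests end up in the same room.
assert len(set(new_rooms)) == len(new_rooms)
# Room 1 is now free for the new guest.
assert 1 not in new_rooms
print("every step checks out")
```

Each step is undeniable, yet the conclusion still fails to click for most of us; that gap between verification and intuition is exactly the feature of knowable-but-unintuitive mysteries at issue.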
Is This Mysterianism?
Mysterians (most at least) believe that the "explanatory gap" between a science of consciousness and consciousness itself is not closable by the human mind. For thinkers like Colin McGinn, Steven Pinker, Noam Chomsky and many more, consciousness cannot explain itself to itself from within consciousness.
I do not subscribe to this view. I think examples like Hilbert's Hotel and the double slit experiment show that it is possible to have veridical explanations for phenomena and to still not feel that the solution clicks for us. I therefore think it is very early in the game to claim that solely based on a mysterious feeling we will never have a veridical explanation.
The Answer: Why We Are Perplexed
I have argued that the perplexing nature of consciousness is a knowable-but-unintuitive mystery: one which we may have an answer for, but which will likely remain mysterious or unintuitive to us. With that clarified, we are ready to answer the question of why it seems, and may always seem, perplexing to us. This is our new question.
Many organisms, including humans, are complex adaptive information processing systems. Compressing the complex information in our environments is an essential tool for our survival. We cannot afford a bit-for-bit representation of nearly anything; that would be incredibly wasteful. I touched on this theme as well in a discussion of evolutionary theories of perception. Conscious creatures have evolved not only to compress information to represent their environments but also to compress information to represent their own representations. While this feature alone is not an exhaustive account of consciousness, it is certainly a necessary condition for it. Our own conscious experiences, I have argued, are that representational content as information in some physical substrate, namely our brains and bodies.

Given that we cannot represent many complex things in a prima facie intuitively-closed way, or even in close to as much detail as our tools and technology tell us they have, why should we expect our representations of our own representations to be any different? This expectation, I claim, is the answer to the question of why our conscious experience seems perplexing to us. Consciousness is some pattern of information, but its meta-representational nature virtually guarantees that we do not perceive it in a way that will ever click. Why not? I argue that for it to be complete in an intuitively-closed sense, it would have to be so immensely detailed that it would no longer be a useful meta-representation.
Stated succinctly:
Any information processing system that can represent its own processes recognizably to itself and represent those very representations (via compression of information) is highly unlikely to represent them in a way that allows for an intuitive reduction of those representations to their constituents on any lower representational level. The reason for this will typically be that it is inefficient and not necessary for the survival or objective function of the system.

An Example: MARY-9000
Perhaps an example will help. Let's say we have the following class in Python to house our super fancy new conscious AI: MARY-9000. This simple Python code will just 'wrap' the more powerful software invocations that will really make up the meat and potatoes of MARY-9000's mind. Classes in Python, declared with the keyword class, are used to easily reuse common functions that belong to some "class" of thing. In this case, MARY-9000 is an Agent with the abilities (methods of a class, written in the class with def in Python) to perceive(), synthesizeInformation(), predictFuture(), and, most importantly for this example, representSelf().
# Defines a class called Agent; MARY-9000 will be an instance of it
class Agent:
    # A method that our Agent, MARY-9000, uses to collect data from its senses
    def perceive(self, inputStream):
        pass  # but actually some crazy GPU-powered matrix algebra AI stuff

    # A method that our Agent uses to compare past information to present and arrive at cheap conclusions
    def synthesizeInformation(self, pastInformation, presentInformationStream):
        pass  # but actually do super cool AI stuff

    # A method that our Agent uses to predict the future based on past data and present data streams
    def predictFuture(self, pastInformation, presentInformationStream):
        pass  # but actually do super cool AI stuff

    # A method called representSelf() that prints and returns the code that defines this class
    def representSelf(self):
        # open the file that MARY-9000 runs this class from and read its contents
        with open('./MARY-9000.py', 'r') as file:
            source_code = file.read()
        print(source_code)
        return source_code  # returned as well, so it can be fed back into perceive()
This is the code for MARY-9000.py (the file), and if we write the two lines below, the above is also the output.
# Creates an instance/object of the Agent class to be MARY9000
MARY9000 = Agent()
# Calls the representSelf() method on the agent instance, MARY-9000, causing MARY-9000 to print all of its own code
MARY9000.representSelf()
We want our AI to be able to tell us how it works in more ways than one. representSelf() is just one of those ways.
Let's say MARY-9000 calls perceive() while her visual systems are pointed at a mirror. She will intake a datastream which represents her visual form. This is a form of partial self-representation. Likewise, she can call predictFuture() or synthesizeInformation() with her own behavior as input.
What is MARY-9000's analogue for our conscious experience? perceive(representSelf()) is what we are after, and here is the punchline. MARY-9000 will marvel at the fact that she has conscious experiences that are made up of nothing but fancy hardware and software, because when she calls perceive(representSelf()) she only perceives a compressed, loosely-translated version of what really goes on when the computer executes MARY-9000.py.
# MARY9000 "perceives" what her own code represents AS her own code
MARY9000.perceive(MARY9000.representSelf())
For the meta-representation to be useful at all, it must be compressive and predictive; it must be a shortcut to some understanding. If it is not, then it is either too close to a bit-for-bit recounting of every bit that flipped (from Python all the way down to machine code and transistors) when we called some method in our MARY-9000.py script, or it is a too-high-level abstraction that will always leave room for more questions. This is why the "hard problem" is possible.
So, some meta-representation is not intuitively complete because that would preclude it from being predictive and efficient for the goals of a complex adaptive information processing system. The nature of perceive(MARY9000.representSelf()) for MARY-9000, or of our conscious experience for us, is compressive: it exists to quickly meta-represent as it pertains to the system's goals, not to be intuitively complete. Sprinkling that understanding on top of the representation costs extra bits and therefore extra energy.
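The gulf between the bit-for-bit recounting and the useful self-report can be made concrete. The sketch below is my own hypothetical illustration, not part of MARY-9000's code: a program records every internal function call it makes (a crude stand-in for the full low-level story) and then describes itself with a one-line summary. The summary is useful precisely because it throws almost everything away.

```python
import sys

events = []

def tracer(frame, event, arg):
    # record the name of every Python function call made while tracing is on
    if event == "call":
        events.append(frame.f_code.co_name)
    return tracer

def low_level_step():
    pass  # stands in for the "transistor-level" detail of the system

def run():
    for _ in range(1000):
        low_level_step()

sys.settrace(tracer)
run()
sys.settrace(None)

# The "bit-for-bit" recounting versus the compressed meta-representation:
full_trace = events
summary = f"I ran {len(full_trace)} internal steps"

print(summary)  # a few dozen characters standing in for ~1000 recorded events
```

The full trace answers every "how" question but is useless as a self-model; the summary guides action but leaves all the "how" questions open. A self-representing system that must stay cheap will always live on the summary end of that trade-off.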
An Empirical Prediction?
At some point, we might design a conscious system that can self-meta-represent in an intuitively complete way. From the perspective of this system, our mysteries of consciousness would not exist. It might be able to simultaneously have conscious experiences and be thinking "Ahh yes, it seems this way because {insert insanely complex chain of computations and events here}".
Crazyism
Eric Schwitzgebel reaches a conclusion similar to the one in this note in his fascinating paper "The Crazyist Metaphysics of Mind":
... Some metaphysical theory of this sort must be true - that is, either some form of materialism, dualism, or idealism must be true or some sort of rejection or compromise approach must be true. So something crazy must be among the core truths of the metaphysics of mind.
Schwitzgebel arrives at the same point from different methods. He argues that it is clear that something must be wrong about the folk-metaphysics of mind (common sense) and that we are not, in Schwitzgebel's words, "epistemically compelled" to believe the true conclusion, whatever it may be. He defends this latter point with reference to
... peer disagreement, lack of compelling method for resolving metaphysical disputes about the mind, and the dubiousness of the general cosmological claims with which the metaphysics of mind are entangled.
I largely agree with his conclusions and methods.
My goal in this note is separate. I want to propose a sensible reason as to why we do not feel compelled by whatever true conclusion there is and why peer disagreement might exist. Additionally, I want to say that our lack of "epistemic compulsion", in the case of a step-by-step exhaustive physical account of the mind (should that ever exist), is likely not actually epistemic but rather just a non-epistemic expression of perplexity. In other words, I am arguing that there is a high likelihood that the "explanatory gap" is really just an "intuitive closure" gap.
I do not believe, and Schwitzgebel likely doesn't either, that all "crazy" conclusions about the metaphysics of mind are equally crazy. I have tried to show that the materialist crazyisms are the least crazy and indeed much more could be said about that.
Conclusion
I have argued that our conscious experiences are expressible in information which is most likely physically instantiated. The idea that anything like conscious experience could fail to be expressible in information is incoherent. Given an agnostic standpoint that takes subjective experience as the only data point, the likelihood of conscious experience being, at root, a physical thing seems far higher than the unwarranted metaphysical leaps of faith offered by dualism and idealism.
Whether or not we accept these ideas, I argue that a better methodological question for the mystery of consciousness than many previously proposed is: "Why does the nature of conscious experience seem perplexing to us at all?". Even for the anti-materialist, this should be a better question. For many dualists and idealists, our conscious experiences are infallibly and incorrigibly known, so it seems natural to ask why there should be any mystery at all. Why is it before our faces but not crystal clear?
I have tried to make clear what kind of mystery the hard problem of consciousness is. I argue that it is a knowable-but-unintuitive mystery: a question whose answer is knowable but intuitively unsatisfactory. Its proper answer lacks an intuitive nail in the coffin. This claim about the mysterious status of conscious experience would be incomplete without an explanation of why even a "complete" explanation might seem unsatisfactory.
This note has defended the perplexing nature of consciousness via its role as a meta-representational faculty. Such faculties, whether evolved or designed, are likely to be intuitively incomplete to their havers, because intuitive completeness is costly and serves no further purpose for most evolutionary or design ends. This need not be the case. We may someday create or encounter intelligent systems that do not have this efficient, yet perplexing, limitation. If such systems are demonstrably conscious and do not experience perplexity, then I have perhaps hit on something right here.
If these conclusions make me a "knowable-but-unintuitive-type mysterian", then I will happily adopt the title. I assign high credence to the notion that consciousness may always seem perplexing to us (as unmodified humans), even if we get an exhaustive physical account of it.
I will end with a parting shot for the yet unconvinced anti-materialist. If the perplexing nature of consciousness is a knowable-but-unintuitive mystery, and today, or someday soon, neuroscience can lead us by the hand to show how our experiences arise, would we be warranted in rejecting, for lack of the aforementioned intuitive nail in the coffin, the claim that these processes are our experiences? Imagine someone who nods along with every step of an explanation of how heat really is the kinetic energy of atoms and molecules yet remains intuitively unconvinced. All their avowals are those of someone who understands the theory and understands its comprehensiveness, yet they react to the conclusion that heat just is the movement of particles with "epistemic disgust". It just cannot be! I contend that, should we reach some exhaustive physical account of phenomenal consciousness or be sufficiently convinced one is possible, the anti-materialist's epistemic position is the same as this heat-denier's. The epistemic weight of the missing nail in the coffin might be nil, equivalent to saying one 'buys' the theory but it just seems wrong.