
OLP

Alice: "I don't know what you mean by 'glory.'"

Humpty Dumpty: "Of course you don't -- till I tell you. I meant 'there's a nice knock-down argument for you!'"

Alice: "But 'glory' doesn't mean 'a nice knock-down argument.'"

Humpty Dumpty: "When I use a word, it means just what I choose it to mean -- neither more nor less."

Alice: "The question is whether you can make words mean so many different things."

-- Lewis Carroll, Through the Looking-Glass

Introduction: What Is Ordinary Language Philosophy?

I take Ordinary Language Philosophy (OLP) to be any philosophical program that claims that the problems of philosophy are best solved (or, in some cases, completely solved) by paying attention to ordinary language. As the Wittgenstein of Philosophical Investigations says:

Philosophical problems arise when language goes on holiday.

(Though Wittgenstein spawned an era of OLP, it is arguable whether he himself was a through-and-through OLP philosopher.)

An example of tackling a philosophical problem using OLP might be as follows. We might wonder "What is triangularity?" Sure, we see triangles, but we don't "see triangularity". In comes the OLP philosopher to insist that looking for a "thing that we see" in place of triangularity is a mistake. We should, they will say, pay attention to how we use the word in ordinary language, and we will see that we have made what Gilbert Ryle would call a category error in assuming that our singular term "triangularity" should refer to something the way "that balloon" does. They will say that we can avoid being so misled by sticking to our ordinary language guns and not letting language "go on holiday".

In this short note, I hope to outline where I think OLP goes wrong and why, and also what it gets right. I will argue that OLP is correct on the following points:

  • You can't get semantics before pragmatics. (meaning is use)

  • Theoretical meanings have to be related somehow to pragmatic meanings or they are useless.

  • If meaning comes from use, then meanings will be an imprecise hodge-podge because "doing" and therefore "use" is an imprecise hodge-podge. This is why "conceptual analysis" has never successfully analyzed a concept in the history of philosophy.

  • If meanings come from uses, then they must follow rules. For communication to be possible, it must be normative.


and I will argue that OLP is incorrect and should be polished up on the following points:

  • Ordinary use has no theoretical primacy. A term can grow to encompass more than its ordinary use.

  • What claims ordinary language sentences should/shouldn't, do/don't commit us to are not known implicitly. They must be accounted for theoretically and explicitly.

  • Not all philosophical problems are dissolved in ordinary language as it evolves and keeps up with science.


I think OLP is an immensely powerful tool and, like all powerful tools, it is prone to overuse and misapplication. Although I once believed it, I have long since abandoned the idea that all philosophical problems can be resolved by appealing to ordinary language.

What Does Ordinary Language Philosophy Get Right?

You can't get semantics before pragmatics. Meaning is use.

We cannot pick the meanings of our words out of thin air; they depend on learning sets of concepts. So much the worse for Carroll's Humpty Dumpty. It often seems like philosophers are doing exactly this when particular philosophical puzzles arise. An example that tends to bother me is when philosophers fall prey to the notion that we can sort of "mean things at will". They do so when they say, for example, "Maybe the red I see is what you see when I see blue!" How exactly do we mean "red" in that sentence? In the same way that we learned how to use the word "red"! And, if that is the case, then it seems like I cannot be meaning it "in some other way at the same time" that presumably only I could understand. Or if I am, then that way of meaning just sits flaccidly alongside the pragmatically efficacious one (the one whose meaning has cash-value in our actions) and isn't really a part of communicating with others.

It is as if when I uttered the word I cast a sidelong glance at the private sensation, as it were in order to say to myself: I know all right what I mean by it.

-- L. Wittgenstein, Philosophical Investigations §274

I don't have preconceptual, in this case prelinguistic, knowledge of what "red" is, so how could I possibly mean something by it that is directly out of accord with the pragmatics of that concept (how I learned to apply it)? I might as well say, "Maybe the feeling I get when I take a bath is the same as the one you get when you have a toothache".

OLP is correct to steer us away from these illusory meanings and the knotty philosophical problems that come with them. I think it is correct, not by virtue of the primacy of ordinary language, but by focusing on what it is that gives our expressions meaning in the first place.

When we are giving some kind of theory (predictive, descriptive, synoptic, or prescriptive), the theory will have no use to anyone if it is not tied to a concept that has pragmatic value to us. I think the free will issue is illustrative of this kind of divorce of theoretical meaning from pragmatic meaning. My typical argument with the determinist who acts like they are the first person to discover that the physical world of science is causally closed goes something like:

Determinist: "The physical universe is a closed system and every event has a cause, so no one ever makes a choice!"

Me: "So if you signed a contract today, you wouldn't be responsible for that contract?"

Determinist: "No because I never chose that. The particles [etc] just caused my hand to write like this and that and my upbringing caused me to be thus and so [etc etc etc]"

Me: "So no one REALLY ever makes a choice? Really we are just misapplying the concept of 'choice'?"

Determinist: "Yes! There are no REAL choices."


It should seem peculiar to us that "real choices", on the determinist's view, turn out to be just the sort we don't have. The pragmatics of what is and isn't a "choice" have gone entirely out the window, and with them the notion that what we are talking about has any material consequence. And so the conversation will circle back to "What is a choice?", and the determinist and compatibilist will disagree on that. But why should we care about this particular determinist's definition of choice? It seems like they have a big "So what?" to answer if we accept it. When we divorce theoretical "choice" from the practices and actions we take in knowing and deploying the word "choice" in real life, what importance does the theory really have?

OLP is correct to keep us on a slightly tighter leash than we sometimes want to be on when philosophizing. We need to stay somewhat tethered to common-sense concepts if we want our philosophizing to pertain to them.

If meaning comes from use, then meanings will be an imprecise hodge-podge because "doing" and therefore "use" is an imprecise hodge-podge. This is why "conceptual analysis" has never successfully analyzed a concept in the history of philosophy.

We should assign an extremely low probability to the possibility of 'fully analyzing a concept'. In an effort to make the concept precise and immune to counterexamples, we will confine the concept to a tiny playpen and thereby commit the mistake of divorcing the concept from the vast landscape of actions and practices that give it meaning at all. We either have a chaotic hodge-podge of a concept, a fragile one to counter-examples, or one so surgically defined that it simply has nothing to do with the concept we sought to elucidate in the first place. I believe accepting the hodge-podge is the best play here.

That doesn't mean we can never have precision in more tightly bounded domains. It only means that precision across the board, in every practice and context, of the sort the determinist above thinks we can have with "choice", will likely be impossible and should not be expected. If meaning is determined by what we do, and what we do changes, how can we expect to ever catch up?

If meanings come from uses, then they must follow rules. For communication to be possible, some normativity must be assumed.

As Wittgenstein (under some interpretations) and Kripke point out in their rule-following paradoxes, applying a concept properly, learning a word, or following a rule cannot be something that is done all by oneself in a vacuum. Kripke asks us to imagine a rule for a mathematical function called quus. Quus, ⊕, is just like plus, +, except:

x ⊕ y = x + y

if x, y < 57

but

x ⊕ y = 5

otherwise

Suppose I have been teaching you to add all the way up until we have gotten to:

57 + 57

What should you answer? 5? Or 114? The point of the strange example is that nothing about how I have taught you to add so far has determined whether I meant quus or plus. There is no fact of the matter. The same skeptical challenge could be applied to any rule, and so it seems that without reaching some bedrock where the spade is turned, the demand for justification could go on ad infinitum. We reach an Agrippan trilemma of "rule following". It seems no rule can ever be justified merely by example or stipulation of how the rule should be followed, because any action can be made to fit the rule and any rule made to fit my past actions. If we need rules for how to follow the rules, then how could a rule ever ground a practice? Wouldn't we then need "rules for following rules for following rules"?
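The divergence is easy to make concrete. In this sketch (the cutoff value 57 is Kripke's own; the function names and the practice examples are mine), every sum a student has practiced so far is consistent with both rules, yet the rules come apart at 57 + 57:

```python
def plus(x, y):
    """Ordinary addition."""
    return x + y

def quus(x, y):
    """Kripke's quus: behaves exactly like addition below the cutoff."""
    if x < 57 and y < 57:
        return x + y
    return 5  # ...and returns 5 otherwise

# Every example the student has seen is consistent with BOTH rules:
practiced = [(2, 3), (10, 20), (40, 16)]
assert all(plus(x, y) == quus(x, y) for x, y in practiced)

# Yet nothing in that finite practice settles which rule was meant:
print(plus(57, 57))  # 114
print(quus(57, 57))  # 5
```

No finite set of worked examples distinguishes the two functions, which is exactly the skeptic's point.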

I think the rule-following paradox terminates in doing. We don't learn rules by painstakingly memorizing instructions or "rules for following rules".

Someone says to me: "Show the children a game." I teach them gambling with dice, and the other says "I didn't mean that sort of game." Must the exclusion of the game with dice have come before his mind when he gave me the order?

-- L. Wittgenstein, Philosophical Investigations §70

In this silly example, when the concerned parent says "Not that kind of game!" he didn't consult an inner rule or mental lookup-table of any kind in order to exclude gambling from the kind of games he meant. The meaning of the sentence and the rule that we were to follow was grounded in practice, in the pragmatic "doing of stuff", in habits and the innate nature of us as human beings.

Ordinary language is rough around the edges, does a million more things than state facts about the world, constantly changes, and is a reflection of what we do and what we fundamentally believe must be done. There is no playing the game of philosophy without language or concepts of some kind which set their roots in practice. Therefore, we have to philosophize while keeping in mind that those words and concepts are constructions of the practices and actions that give them meaning in the first place.

What Does Ordinary Language Philosophy Get Wrong?

Ordinary use has no theoretical primacy. A term can grow to encompass more than its ordinary use.

I am often frustrated reading OLP philosophers say things like "This is a misuse of the word thought". I am all for taking the semantics as following the pragmatics. We cannot just mean whatever we want by words without getting the ball rolling through some kind of action in practice. But if I want "thought" to mean "words I speak internally", then I am certainly allowed to adopt that as a technical term. It may not relate very well to the ordinary use of the word or concept, but maybe that isn't my goal in whatever particular mode of philosophizing I am doing.

OLP philosophers have attacked neuroscientists' use of sentences like "The brain thinks X". They claim this is a misuse of "to think", as only persons can think. That's where the pragmatics come from, they claim, and I agree: we learn to say "think" of someone based on how they are acting, and so on. However, I think it is silly to claim that this kind of metaphorical talk is "nonsense" full stop. We very frequently use intentional language as a proxy to quickly understand complex systems, and that practice should stand on its own theoretical merit, not be judged by the stipulated authority of ordinary language.

What claims ordinary language sentences should/shouldn't, do/don't commit us to are not known implicitly. They must be accounted for theoretically.

There seems to be a sentiment in OLP that all philosophy should do is describe how terms are used. However, in reading some of these philosophers, Ryle particularly, I often notice that they seem to be doing a lot more than that. More specifically, they often argue for, or take for granted, certain theories of how we should account for a given way of speaking.

These supposed "mere descriptions" from OLP philosophers are often just expedient theories of how something in the world works. Sometimes they are empirical and need evidence, sometimes they are empirical and don't need evidence (because they are so obvious), and sometimes they are synoptic/prescriptive accounts of how language should be viewed to work. Here's an example: Quine (another philosopher who puts pragmatics before semantics) gives the example of "sakes" as a word that takes the grammatical form of a singular term but does not refer to anything. There are no "sakes" to be accounted for. Did we automatically know that there weren't "sakes" by virtue of knowing how to say "for Pete's sake"? Is that question "nonsense"? Is Quine giving a theory of how our words should be viewed? I think he is, and so is every OLP philosopher who prompts us to focus on the ordinary use of a word to circumvent a philosophical problem. A theory is still a theory whether we got it from observations about ordinary language or from philosophical accounts after the fact. There are no theory-free observations of language that philosophy is entitled to. But don't get me wrong. I'm all in favor of eschewing "accounts of" something when the ordinary concepts will do just fine, as long as we acknowledge that doing so is itself a theoretical choice.

We don't know all the implications of our claims just by virtue of making or being disposed to make them. Additionally, we don't always know how the ordinary concepts we use hang together, or how they relate to one another. We can be masters of the concepts of "inference" and "belief" without ever having thought about how they relate.

The pragmatic move is a good move, but it does not instantaneously give us a correct and satisfactory account of how our terms work, how their use can be more technically refined, or how they do/should relate to the world as science tells us it exists.

Not all philosophical problems are dissolved in ordinary language as it evolves and keeps up with science.

Even in OLP, with its strong adherence to not extending ordinary terms beyond their common use, it isn't clear to me why all philosophizing should be off limits by virtue of its not having some property that ordinary language has. I'd agree with Wittgenstein that lots of philosophical problems arise from our "bewitchment by language" (non-referring singular terms being a good example), but it is far from clear to me that all of them do.

Philosopher David Enoch has a very fun example to illustrate this. In Pulp Fiction, Vincent Vega and Jules Winnfield are having a little back-and-forth about why Jules doesn't eat pork.

So by that rationale, if a pig had a better personality, he would cease to be a filthy animal. Is that true?

Enoch points out that both speakers are clearly philosophizing. It seems wrong to me to say that they just crossed the precipice of sense and were talking nonsense to each other even though they thought they weren't. On the other hand, I do think it is possible to philosophize in vain by playing games with words that are too far gone from their practical application for the inferences we might validly make about them to have any relevance at all. I take philosophizing to be a general extension of the basic human discursive practice of making our commitments to certain claims clear, whether to ourselves or to one another. That's what Jules is doing, and that is what I am doing now.

fractal

A fractal is a way of seeing infinity

-- Benoît Mandelbrot

Nothing is built on stone; all is built on sand, but we must build as if the sand were stone.

-- Jorge Luis Borges

Can You Measure a Coastline?

In 2006, the Congressional Research Service (CRS) of the United States estimated the coast of Maryland to be 31 miles long. Strangely, the National Oceanic and Atmospheric Administration (NOAA) estimated it to be 3,190 miles long. This staggering difference was not a clerical error. These values were the actual reported measurements from each study of the Maryland coastline.

Maryland's is not the first coast in history that cartographers have separately measured to wildly different lengths. The phenomenon is known as the "Coastline Paradox". When estimating the length of any fractal curve via straight lines, the actual length is indeterminate. Suppose I measured the coastline of Maryland with a yardstick, and you with a foot-long ruler. My answer will be far smaller than yours. Without getting into the math: the smaller your unit of measurement, the longer a fractal curve will measure.

coastline

This measurement problem is known as the Richardson Effect and has been studied in detail by Benoît Mandelbrot. Mandelbrot famously described shapes with infinitely embedded and repeating complexity with very simple equations that describe what he called "fractals".
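A toy calculation (my own illustration, not from Richardson or Mandelbrot) shows the effect exactly for the idealized Koch curve: each time the ruler shrinks by a factor of 3, the divider method counts 4 times as many segments, so the measured length grows by a factor of 4/3 and increases without bound.

```python
# Measuring the Koch curve with rulers of shrinking length.
# At refinement level n the natural ruler length is (1/3)^n, the divider
# method counts 4^n segments, and the measured length is (4/3)^n.
for n in range(6):
    ruler = (1 / 3) ** n
    measured = (4 / 3) ** n
    print(f"ruler = {ruler:.5f}  ->  measured length = {measured:.3f}")
```

Real coastlines are not exact Koch curves, but over a wide range of scales they show the same qualitative behavior: no single "true" length, only a length relative to a chosen ruler.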

I will not go through the exact definition of a fractal here, but we can simply say for now that a fractal curve is defined as one whose perceived complexity changes with measurement scale.

In Mandelbrot's own words a fractal is, loosely:

a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole

When measuring any fractal shape, real or abstract, we face this coastline problem. Part of what it is to measure is to agree on something. It is on the basis of this embedded self-similarity of fractals that the "Coastline Paradox" is claimed to be a "Paradox". It is claimed that a coastline has no real determinate length, yet we can measure it to have some length.

Is there really a paradox here? I think in a trivial sense yes. I argue that real objects that exhibit fractal curves only have a determinate length insofar as we can agree on a measurement schema. Do we stop at the grain of sand? The Angstrom? The inch? The best answer will be determined by our goals, but that does not mean it will always be easily attained. It has been pointed out recently, on account of this goal-based agreement, that perhaps a coastline is not really fractal in the Mandelbrot sense.

A Strange Philosophical Question

My purpose in this note is not mathematical or cartographic. I want to consider the "Coastline Paradox" as a model for our understanding of reality and pose some philosophical questions based on that comparison.

Mandelbrot considered his fractals to be related to a "theory of roughness". The relationship between a fractal edge and our concept of 'roughness' is easy to see in a physical object. Imagine zooming in on an aerial view of a coastline. The closer we get, the more the shape changes, and embedded patterns are revealed that are self-similar to the overall shape (in some sense, perhaps not exactly as in the Koch Snowflake GIF at the start of this note).

The philosophical question I want to ask is this: what if our relationship to reality is analogous? What if the deeper we dig into some phenomena or some realms of understanding, the more embedded complexities we discover ad infinitum? This question decomposes into the questions of whether there is some infinite nature to the depth of inquiry and whether that infinite depth would exhibit some self-similarity the deeper it went. I will focus mostly on the question of the potentially infinite depth of inquiry, where depth refers to gaining more knowledge that is not merely about the knowledge that one already has.

I will argue that the answer to the question "Could inquiry be infinite?" highlights the goal-dependent nature of inquiry and of understanding generally, and that if there is a Richardson Effect to finding more facts, then goal-dependent and more pragmatist frameworks for metaphysics and epistemology are the better ones. Just as the real Coastline "Paradox" resolves when we agree on a measurement criterion, so the philosophical skepticism about how we could truly know anything, or be acquainted with any reality, if reality is in some deep sense fractal, can be sidestepped when we share goals in understanding the world.

Clarification

Admittedly, the general philosophical question of this note is, so far, unclear.

What if the relationship of inquiry to reality is fractal, like the relationship of a measured length to a coastline?

Let's clarify it a bit. There are two ways to interpret this question.

  • Metaphysical Fractality

    Reality IS such that infinitely more detailed descriptions of phenomena will always be true of it.

This would mean that, whether we can know it or not, there is no rock-bottom to inquiry. No matter how many facts we were to discover, there would always be more facts to discover. It is important to note that, in order for this thesis to be interesting at all, these additional facts cannot merely be "facts about the facts we already have"; that much is trivially true: the more facts we get, the more facts about facts there are. Fractal 'roughness' is not merely recursion.

Perhaps an example will help this point. Suppose physicists discover G-ons (some particle that explains all the other higher-level phenomena we know about) and the behavior of everything else in the universe now makes sense. But wait! If Metaphysical Fractality is true, then we will likely subsequently discover that G-ons are composed of H-ons and I-ons, and so on, and the process of discovery only moves 'downward' infinitely.

  • Epistemological Fractality

    Our knowledge of reality is (and will be) such that a more detailed account will always avail itself (eventually) whenever a new less detailed one does.

Epistemological Fractality would mean that there is something, perhaps something necessary, about the nature of our knowledge and its relation to the world that makes it never able to reach rock-bottom, but always peels back another layer, revealing new knowledge infinitely. Again, to be clear, it is trivially true that the more knowledge we get, the more knowledge about that knowledge there is. That is not what is meant by Epistemological Fractality.

With either Epistemological Fractality or Metaphysical Fractality being true, a "coastline problem" emerges for our knowledge of the world. If Metaphysical Fractality is true, no matter what the nature of our inquiry is, there will never be an absolute fact that is not an infinite composite of deeper facts. If Epistemological Fractality is true, something about knowledge, but maybe not reality, yields the same effect.

Setting our metaphysical and epistemological baggage down for a minute, these two theories will be identical in most if not all of their predictions. Also, depending on one's account of truth, these two theories might sound identical altogether. To avoid confusion and stick to the point, I will therefore treat them as the same here.

I will cut a lot of philosophical corners here and revise our question to this:

Is either reality itself, or any possible knowledge we could have about it, fractal in nature? Is inquiry of infinite depth?

(Again, I will not address the question of self-similarity which is an important feature of fractals)

Perhaps to answer in the affirmative is just to adopt a certain disposition towards inquiry.

mandelbrot
Zooming-in animation of the Mandelbrot set by Mathigon

How Can We Answer?

How might we answer the above question? Perhaps empirically? We can rephrase our question above as an empirical hypothesis. But is it actually testable? Its infinite nature muddles the meaning of its testability. Alternatively, maybe a rule that we firmly believe in guides us to the conclusion that reality is fractal?

Empirically?

Theory: The more we discover about reality, the more there will be to discover about it. (where those new discoveries are not just some recursive description of the already discovered parts)


Suppose, to continue with our mock-physics example, that we go on to discover Z-ons, A₂-ons, B₂-ons, and eventually Z₂-ons... all the way down to Z₁₀₀₁-ons.

P(Reality is fractal | We find Z₁₀₀₁-ons)

=

P(Reality only goes as deep as Z₁₀₀₁-ons | We find Z₁₀₀₁-ons)


The evidence always confers the same probability, at some observation n, on "infinite observations" as it does on "n observations". So is it impossible to decide empirically? In the case of adding more observations, yes. However, in the case of observations completely ceasing, it seems not. We could imagine a case where we discovered Z₁₀₀₁-ons and that just tied everything up. That's it. Understanding is complete at the discovery of the Z₁₀₀₁-on. Surely some will find this view absurd, but it is certainly possible unless we take a rather hardline commitment to Epistemological Fractality (or to Metaphysical Fractality plus our being ignorant of reality for some compelling reason).

If we admit that such a hard stop is possible and if we never observe such a hard stop, which we have not yet, then we have increasing evidence for reality being fractal with each layer of reality we peel back (but we do not have more evidence for it being fractal than for it only going as far as our present depth). This may or may not be significantly dependent on what prior probability we assign to reality being fractal in this way.
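One way to see why peeling back layer after layer without hitting a hard stop counts as (weak) evidence is a toy Bayesian model. Everything here is my own illustrative construction, not from the text: finite-depth hypotheses "reality bottoms out at depth d" share the prior with a single "fractal" (infinite-depth) hypothesis, and observing an nth layer of structure simply falsifies every finite hypothesis shallower than n.

```python
# Toy model (illustrative assumptions): depths d = 1..100 share 99% of
# the prior; the "fractal" (infinite-depth) hypothesis gets 1%.
FINITE_DEPTHS = range(1, 101)
PRIOR_FRACTAL = 0.01
PRIOR_EACH_FINITE = (1 - PRIOR_FRACTAL) / 100

def posterior_fractal(layers_observed):
    """P(fractal | we have observed this many layers of structure)."""
    # Observing layer n falsifies every hypothesis with d < n; the
    # survivors and the fractal hypothesis predicted the data equally.
    survivors = sum(1 for d in FINITE_DEPTHS if d >= layers_observed)
    total = PRIOR_FRACTAL + PRIOR_EACH_FINITE * survivors
    return PRIOR_FRACTAL / total

for n in (1, 50, 90, 100):
    print(n, round(posterior_fractal(n), 4))
```

The posterior on fractality climbs with each layer, but at the deepest observed layer it only draws even with the last surviving finite hypothesis, mirroring the parenthetical above.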

A Rule?

The Fibonacci sequence is infinite. How do I know that? I know it because the Fibonacci numbers are the sequence of numbers

{Fₙ}, for n = 1, 2, 3, ...

where

Fₙ = Fₙ₋₁ + Fₙ₋₂

with F₁ = F₂ = 1, and conventionally defining F₀ = 0.

I know the rule to get the next number. Take the last two numbers, add them together and I get the next.

Given 0, 1, 1, 2, I add 1 + 2 and get 3. It seems that I know that this can never end because I know a rule that, by virtue of my following it, necessarily gives me a new Fibonacci number.
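That the rule hands over a fresh number every time you turn the crank can be made vivid with a three-line generator:

```python
from itertools import islice

def fib():
    """Follow the rule forever: each number is the sum of the last two."""
    a, b = 0, 1
    while True:
        yield a          # hand back the next Fibonacci number...
        a, b = b, a + b  # ...then apply the rule to advance

print(list(islice(fib(), 8)))  # [0, 1, 1, 2, 3, 5, 8, 13]
```

The generator never exhausts itself; only our patience does.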

Do we know such a rule for inquiries about reality generally: that they will always give us more facts or more questions left open? I do not think so (again, not in the sense of more 'facts about the facts', for that is trivially true). It seems necessarily true that the Fibonacci sequence would never terminate if I sat down and cranked out numbers for an infinite amount of time. It does not, in any salient way, seem necessarily true that one discovery unearths at least one more. However, if reality's fractality is not necessarily false, it follows as a simple truth of modal logic that it is possibly true.

An Answer

Unless some rule emerges that conclusively shows us that reality is or is not fractal, or some hard stop or rock-bottom is observed that proves it is not, it seems at present that we have to accept that it is possible that reality is fractal in the sense suggested above.

The cash-value of believing that reality is possibly fractal with respect to inquiry is a stance of preparedness for endless complexity and a disregard for the importance of ultimate ontological questions. For, if reality is possibly fractal, then what there is, in reality, has no determinate answer unless we agree on a criterion, as in the coastline problem, for what we care about.

I think this highlights an already common-sense view. Who cares what the 'ultimate truth' of the coastline's length is if we are just trying to sail a boat around the coast of England in some finite amount of time? Who cares what the 'ultimate constituents of reality' are if we are just trying to survive, do good, launch a rocket, cure diseases, do the laundry, help future humans survive and flourish, and so on? Only within these inquiries would it be worthwhile and possible to have an agreed-upon criterion of measurement (supposing reality is of this infinite fractal nature). Once we have such a criterion of measurement, we no longer face indeterminacy about the answers.

This subject has in no way been treated with exhaustive clarity here, and much more could be said about it. For the time being, I find it a fascinating lens for contemplating our relation, as creatures of finite attention and computational power, to a world that may be intractably complex.


julia

armchair

A scientist who ceases for a moment to try to solve his questions in order to inquire instead why he poses them or whether they are the right questions to pose ceases for the time to be a scientist and becomes a philosopher.

-- Gilbert Ryle, Philosophical Arguments

Intro

I have too frequently heard skepticism expressed toward the usefulness (or worse: the epistemic viability) of armchair theorizing. The term "armchair" is usually meant in a pejorative sense, to indicate that no real work could be done that way. I choose to just lean in to the term. Whether or not we take armchair theorizing to include mathematics, computer science, and logic, this skepticism is unwarranted. A few subclasses of armchair theorizing are usually in the crosshairs of those eager to criticize it: ethics, cosmology, analytic philosophy, theoretical physics, and much more. Given the amount of silly theorizing that goes on, the existence of criticism is unsurprising. However, I still see most of this criticism fall flat, even when the theorizing it targets is itself not valid or useful.

Here I will argue that if we are going to accept any type of pure conceptual analysis as necessarily "Garbage in, Garbage out", which we should, then we are committed to accepting it as necessarily "Gold in, Gold out". If we are committed to "Gold in, Gold out", then it follows that armchair knowledge can be generated, and that should not be surprising.

Additionally, I will argue that the armchair toolkit consists of many useful tools that are not exhaustively described by 'listing facts', but nevertheless are indispensable in the furtherance of knowledge.

Naive Pure Empiricism

Can we really generate knowledge from the armchair? How is that possible? I have myself been struck with a mysterious feeling around this question. If knowledge is about the world in some sense, how could I just sit here and come up with some? This may be an intuition only shared by those whom William James referred to as "hard-nosed empiricists" and they are mostly the target of the arguments here.

Philosopher David Chapman holds such a "hard-nosed" view, with mathematics being the only field allowed to generate armchair knowledge:


If it were only this tweet where I had heard this sentiment expressed, I would likely just ignore it and move on. Indeed I would like to believe this position to be a straw-man I have invented, but sadly I have met many individuals who share this view, some deeply and some only on a superficial intuitive level.

I would like to interpret this charitably as "Don't speculate or try to reason abstractly too much about something that is best left to the domain of empirical research," or maybe "We need SOME empirical content somewhere along the line to generate knowledge." I mostly agree with both of those, and the second is something even most die-hard rationalists might accept. But Chapman's words here invite a much stronger interpretation, even if they are perhaps hyperbole. They espouse what I'll call Pure Naive Empiricism.

You have to poke things and see what happens.

Other than maybe in math, you can’t figure anything out by just thinking about it.


Let's slightly reformulate these claims, while sticking to their cash-value, as the formulation of Pure Naive Empiricism:

Pure Naive Empiricism: Knowledge is only attainable via empirical experiments, except in mathematics *
(* We could charitably assume that by "mathematics" Chapman also includes other abstract fields of knowledge.)

This claim is obviously false unless we adopt a rather extreme interpretation of what "empirical experiments" are. Pure Naive Empiricism appears to be a popular position among those not fond of abstraction that impinges too closely on reality. It would be a simple world if math and observations were all we needed to further knowledge.

Why is Pure Naive Empiricism entitled to stop at mathematics (even if it is "mathematics plus some other stuff")? If mathematics is an "acceptable" epistemic pursuit, then why is formal reasoning about cosmology, ethics, metaphysics, or mind not? I suspect the Pure Naive Empiricist does not have a satisfactory answer to this question. We don't need to go poke anything to find that out, either.

I will argue that there is no viable place to draw the line between "acceptable" and "not acceptable" analysis and so the distinction is bogus. Either all conceptual analysis of any kind (including mathematics) is capable of generating knowledge, if the premises are true and the rules are followed, or none of it is.

'Garbage In, Garbage Out' Implies 'Gold In, Gold Out'

In learning elementary symbolic logic and my first programming languages I was taught a simple maxim of any deduction: "Garbage in, Garbage out." I believe this should uncontroversially apply to any formal language and indeed to everyday human languages in formulating and assessing arguments. Consider the following argument.

  1. All Blorgs are Schmorgs.

  2. Skrump is a Blorg.

  3. Therefore, Skrump is a Schmorg.


This is a valid argument but not a sound one. Nobody will ever care that Skrump is a Schmorg because it is a Blorg. We fed garbage in and we got garbage out.
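Set-theoretically, the validity of this pattern can be sketched in a few lines of Python, treating each predicate as a set (the names, of course, are still garbage):

```python
# Model the predicates as sets: "All Blorgs are Schmorgs" becomes a
# subset relation, and "Skrump is a Blorg" becomes set membership.
blorgs = {"Skrump"}
schmorgs = {"Skrump", "Morp"}  # here, every Blorg happens to be a Schmorg

assert blorgs <= schmorgs      # premise 1: All Blorgs are Schmorgs
assert "Skrump" in blorgs      # premise 2: Skrump is a Blorg
assert "Skrump" in schmorgs    # conclusion: Skrump is a Schmorg
```

The asserts all pass, but nothing of value was learned: the form preserved truth, and garbage premises delivered a garbage conclusion.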

What was not taught to me alongside this was "Gold in, Gold out". We usually don't have good reason to care about this one because it is so obviously true. If we are writing a simple Python script that converts Celsius to Fahrenheit, we already know, trivially, that if it is 35 °C outside and my script says that converts to 95 °F, then this fact applies to the real temperature outside right now. We usually only need to be reminded of "Garbage in, Garbage out" when our analytical processes have generated an absurdity that we have become convinced is true.
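Such a script might look like this minimal sketch:

```python
def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature in degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

print(celsius_to_fahrenheit(35))  # 95.0
```

Feed it a true reading of the thermometer and the output is an equally true claim about the world: gold in, gold out.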

If the falsehood of at least one premise guarantees that a valid argument is not sound, then the truth of all the premises guarantees that the argument is sound. We cannot rationally hold "Garbage in, Garbage out" without holding "Gold in, Gold out". If we have to hold both, then we cannot deny that the products of sound analysis are true. So, straightforwardly, if something is true and we have good reasons to believe it, without other defeating reasons, we have knowledge.

What If Armchair Knowledge Were Not Possible?

Pure Naive Empiricism has extremely bizarre implications. For example, if all knowledge were generated solely by empirical experiments then we would have to know all of the conclusions entailed by our current beliefs, both the actual and conditional implications. We clearly don't know those. So either we claim that those somehow "are not knowledge" or we admit that Pure Naive Empiricism is a ridiculous view.

If armchair knowledge were not possible, how would we know what questions to ask to guide empirical inquiry? Should we be looking for consciousness in the brain? Do thoughts have a location? Can actual infinities exist? Could the universe be fundamentally random? Does it follow from space-time theory that space and time are not ontologically distinct? Is math real? Is everything we believe false because we only evolved to survive? Are these good questions at all? To find out if these questions are worth pursuing, we need armchair theorizing, even if it is speculative. We need to know that "this is the right/wrong question". The alternative is literally conducting an experiment to test every absurd hypothesis that we can come up with. Armchair theory, even speculative theory, can and does save us eons of unneeded experimentation.

Additionally, it is often overlooked by the Pure Naive Empiricist that not all knowledge-pursuing consists in the gathering of facts, deductively generated or discovered. We need to know the right questions and the right way to think about them to have success in any inquiry. Some ant colonies are complex systems whose behavior is best understood emergently. No listing of simple facts about these ants by itself, without the analytical armchair toolkit, would give us this insight.

What Armchair Theorizing Gets Wrong

Some armchair theorizing is not worth the mental energy spent on it. If armchair knowledge can be knowledge at all, then it can be true, but what kind of armchair knowledge is useful? We should answer this question partially with reference to the two maxims above. I can say that I know that if "All Blorgs are Schmorgs and Skrump is a Blorg" then "Skrump is a Schmorg", but that knowledge is clearly of no use whatsoever. The concepts we reason about, if we want to generate not only knowledge but knowledge that matters, must be clear and serve some end.

If we have some valid reasoning about minds, it is only as useful as the argument's concept of "minds" is related to the one(s) that we actually use. If we say minds are in rocks and atoms as well as in brains, then are we still talking about the same content anymore or have we made a move that makes our terms devoid of any of the predictive or explanatory power that made them useful? Perhaps in the future we will have some other theoretical reasons to shift the semantic ground away from the folk-concept of "minds".

Generally, we should be skeptical of armchair theorizing that overextends concepts beyond the scope of their reasonable usefulness. When that limit has been overreached is itself often a tricky philosophical question. Some philosophers believe that the Einsteinian notion of space-time should be scrapped because of analytic arguments about how time must be ontologically distinct from space. Though I cannot rule out these conclusions as false with a hand wave, we should be skeptical when armchair thinking has purportedly overturned a useful empirical framework.

What Can Armchair Theorizing Do?

Not every implication of our current beliefs, conceptual definitions, and strategies of inquiry is laid bare before us just by our having them. If it were, we would be supercomputers. That space needs to be mapped out, and the edges of the map keep expanding. Useful armchair theory is not all pure deduction; sometimes it takes the form of reframing questions, guiding inquiry, or using the power of analogy to deepen understanding or see new possibilities. All of these modes, and many more, are indispensable to science and to good human lives.

Generally, useful armchair theory comes in the form of deductive arguments, clarifications of existing concepts, demonstrating the incompatibility of certain beliefs, the dispelling of illusory problems, the creation of new questions, meta-inquiry, and the discovery of unforeseen implications of currently known truths.

Conclusion

I hope I have made it clear that outright denial of the possibility of non-experimental knowledge or 'armchair knowledge' is absurd. I have argued that it is irrational to reject the ability of analytical reasoning to generate knowledge and that it is wrong to limit the scope of the furtherance of knowledge to mere 'fact collection'. I think that the intuitions to the contrary are motivated by putting empirical inquiry on a pedestal while taking for granted the foundations that support it.

The analytic toolkit that can be accessed from the armchair cannot reach out and interface with the physical world, of course, but it simply does not need to in order to be useful, generate truth, and make lives better.

· 3 min read
lookingbackdeath

Death is nothing to us, since when we are, death has not come, and when death has come, we are not.

-- Epicurus

Perhaps the greatest contradiction of our lives, the hardest to handle, is the knowledge "There was a time when I was not alive, and there will come a time when I am not alive."

-- Douglas Hofstadter, Gödel, Escher, Bach

Sadness about what happens after one dies makes sense when it is about the experiences of others, as in 'How will my grandchildren feel when I am dead?' Does it make sense in reference to one's self?

There are enigmatic assertions and feelings regarding death that initially seem to be about one's own experience yet evidently cannot be upon further inspection. Clearly, 'How will I feel once I have died?' is a nonsensical question. It has always somewhat confused me that people express sadness toward the fact that at some point they 'will have died' when that sadness is supposed to be on their own behalf.

Most adult humans have had a thought like 'At some point, I will have died, and isn't that sad!' This thought is only able to make us sad because it presents us with a convincing illusion. We imagine ourselves standing beyond our own death, still somehow having experiences, and thinking back with a feeling of loss and nostalgia at our own lives.

This feeling of loss, I will argue, evaporates under closer inspection. The following argument shows that sadness about one's own death on behalf of one's self is irrational or actually about others. I hope it lifts in the reader the convincing illusion that compels us to feel a false sense of loss on behalf of our future selves.

  1. Possible future retrospective sadness is only rational when it is about a state that it is possible to be in.
  2. At no point can a person have experiences once dead.
  3. An individual's being dead cannot cause that individual to have any negative experiences, since they are not capable of having experiences.
  4. Therefore, being sad about possible future retrospective sadness one might have after having died, on behalf of oneself, is irrational or is actually about the possible future retrospective sadness of others.

When you are dead, I believe, everything that could meaningfully be called 'you' is gone. Until some such possibility as uploading one's consciousness to a computer or molecule-for-molecule clones, the existence of the self beyond death is only for the realm of thought experiments and science-fiction.

This conclusion should not be interpreted as bleak, however. Instead, it should focus our attention on the things that actually do matter after we die, which will be for those then living to experience. I am not a solipsist, and I think some things matter even when we are gone. We should not dwell on the illusion of how our 'dead selves' will feel; instead, we should think about how we will leave the world for those who are still here when we are gone.

· 6 min read

Why A Successful Ethics Has to Apply to 'Beings' Generally

cosmos_mind

What 'beings'?

May all beings look at me with a friendly eye, may I do likewise, and may we look at each other with the eyes of a friend.

-- from the Yajurveda, 1000 B.C.

Many ethical theories unjustifiably limit their application exclusively to human beings or very human-like beings. As I have read more and more ethics I have continually been surprised by this. To me, it has always seemed like a very serious pre-ethical question: What makes a being "count" ethically?

Is it that it has states that are "sentient" or "hedonic" in the utilitarian sense (I am developing my own views on that), or that it can have preferences, or rights, or that it can accept contracts or be virtuous? This 'cutoff' for ethical-entity membership is a very important one, however we choose to carve the cake. It will only grow more important as our species advances, and membership criteria that are too anthropocentric, or too unrecognizable, could have dire consequences. For the purpose of these short thoughts, I will call beings that make the ethical-entity membership cutoff "ethically relevant beings".

We will likely soon live in a world where artificial intelligence will meet or surpass our intellectual capabilities, where 'trans-human' adaptations might occur, or where we might encounter life from another world. In all of these cases, ethical theories that are too tightly bound to a conception of ethically relevant entities being "human-like" will get us into trouble.

While this may sound like science fiction for now, it is the kind of thinking that we would hope a more advanced being than us, whom we had yet to encounter, would be doing. Think of the collisions of civilizations at differing degrees of technological advancement in human history. The result is usually very bad for the less-advanced group. Since the impact of such an encounter would be so large for the worse-off party, it is well worth asking whether our theories account for ethically relevant beings that are "lower down the ethical totem pole" than us, and why. How might we want to be treated by a being that viewed us as "lower down the ethical totem pole"? This is why I think that a successful ethical theory, from the basement to the roof, has to account for cosmological scale and a vast variety of ethically relevant beings.

The right ethical theories have to apply to beings generally and not make the cutoff too high or too low. Here are some thoughts on the risks of "too high" and "too low":

Setting the Bar Too High

If the cutoff is too high for being ethically relevant, as many current ethical theories have it, I think that could potentially be very harmful. A small example is our horrific treatment of animals. Many people believe that this treatment is justified by the sort of high-bar reasoning like "chickens don't have rights" or "fish aren't sentient". Surely we would not want a vastly superior being (however we countenance ethically relevant beings) to view us that way. That would be nothing short of a disaster for humanity.

So what is the upper bound? It cannot be personhood. Maybe the ability to 'feel'? Many respond quickly with: "Well, the cutoff is obviously consciousness!" To me, this view is dangerous and very un-Copernican. Imagine a mental or functional state (construe that however you like) called "Schmonsciousness".

Schmonsciousness is consciousness³.
schmon

Humans cannot even fathom what it is like to be Schmonscious, just as a worm cannot imagine having the depth of experience we have. Would it be OK for Schmonscious beings to eradicate us? For them to use us as a means? If there are many other intelligent beings in this universe (biological or artificial), then it would be rational to believe that something like Schmonsciousness is not that far-fetched. Under the assumption that there are many intelligent beings in the universe, we should reason as if we are a randomly selected member of the set of all of them, via the "self-sampling assumption".

The importance of this goofy example is that when we set the bar erroneously high, everyone below it loses, so we need to be very certain that the chosen property is the right one, if there is such a property at all. Consciousness, as currently understood, is not that property.

Setting the Bar Too Low

If the cutoff for being an ethically relevant being is too low then we might end up with absurd conclusions which force us to countenance inanimate objects or automata as ethically relevant beings. This type of theory may end up having negative consequences as well if it casts the net so wide that it becomes an absurdity to truly follow.

Ahimsa is the Buddhist/Jainist principle of non-violence. According to another ancient text, the Atharvaveda, "Ahimsa is not causing pain to any living being at any time through the actions of one's mind, speech or body." For many adherents, this means not even walking on grass. Maybe this view gets something right, but it is hard to see how we would feed 8 billion people without harming some life form. To the ethical intuitions of most, starving millions is worse than hurting many plants. Unless we can someday bioengineer ourselves to photosynthesize, this approach is not reasonable at this point in time. And that is to say nothing of why we should suppose that all living things are ethically relevant beings in the first place. What about bacteria?

If it is discovered that there is no hard line between life and non-life (artificial or natural, whatever that means on a cosmic scale), setting the bar too low may become even more absurd. Calling anything that can do something like 'avoid' an ethically relevant being would lead us to admit that a world where all the magnets are touching at opposite poles would be worse. That is surely absurd.

magnets

What should the criteria for ethically relevant beings be?

This is a problem I am fascinated by but do not yet have a satisfactory answer for. As I was considering above, maybe the ability to 'feel' is a good cutoff? Even that is difficult to pin down in a way we could apply to all beings. I am reminded of David Lewis's Mad Pain and Martian Pain.

Here are some quickly assembled thoughts on where to start:

An ethically relevant being is:


  • Causally efficacious (it can cause and be caused)

  • Spatiotemporally contiguous (it isn't made up of random unconnected things in space and time)

  • Made of the right machinery: it has information-processing, mental states, and/or functional states


It seems like the third bullet is probably the most important and most difficult to establish. My reasons for picking these are a bit beyond the scope of this post and more could be said about this topic. It is something I would love to think more deeply about.

· 3 min read
time_travel

It has always seemed odd to me to hear people say "Wow. That was 10 years ago." or "Time flies so fast." While I empathize with the feeling, my inner response is usually something like "Every second of your life took place across one second, no?" So why does it feel like time moves so fast in retrospect?

The reason is that human memory uses a form of data compression. There is even some evidence to suggest that consciousness itself is an efficient compression method. All memories are partial; they are made up of fewer bits than their initial cognitive representations required. If that were not the case, recalling an experience would be virtually identical to having that experience again.

Here's a quick example.

Say we have this data:

Original data: "AAAABBBCCDAA"

In run-length encoding, each run of consecutive repeated characters is replaced with the count of its repetitions followed by the character itself.

Compressed data: "4A3B2C1D2A"


Assuming we are using ASCII encoding, where each character is represented by 8 bits (1 byte):

Number of bits in Original data: 12 characters * 8 bits/character = 96 bits

Number of bits in Compressed data: 10 characters * 8 bits/character = 80 bits
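A minimal run-length encoder in Python (using `itertools.groupby`) makes the scheme concrete:

```python
from itertools import groupby

def run_length_encode(data: str) -> str:
    """Replace each run of a repeated character with its count
    followed by the character itself."""
    return "".join(f"{len(list(run))}{char}" for char, run in groupby(data))

compressed = run_length_encode("AAAABBBCCDAA")
print(compressed)           # 4A3B2C1D2A
print(len(compressed) * 8)  # 80 bits, down from 96
```

Like memory, this compression is lossy about surface detail you might care about (the scheme here happens to be reversible, but our memories are not): what you store is a smaller summary, not the original experience.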

To be clear, I am not merely making the obvious claim that "time passes at a constant rate". Instead, I am saying that because our memory compresses information, the more time we are alive, the more 'compressed memory' we will have, and the 'faster' time will appear to fly by. Our actual present experience is always full, and our recollection of past experiences is always partial. This creates the illusion that we should fear our actual present experiences 'slipping away faster and faster'.

Realizing this simple truth always helps assuage the negative feelings I get when contemplating that "time moves quickly". It helps me be present. The sense that time moves any quicker than it actually took place, as one accrues memories, is an illusion. Maybe that thought can help others too.

· One min read

Here we go...