
Excerpts from a Correspondence on A.I.

nove_owl · 9 min read

The following is part of a correspondence on the topic of A.I. and whether it is actually intelligent, actually reasons and so on.

Question

Could AI feel anything like the satisfaction of giving just the right gift?

Answer

The topic of AI has definitely occupied a lot of my philosophical bandwidth recently. The article was an interesting read! I somewhat agree with the sentiment, though I am perhaps not as optimistic. I have a thousand thoughts on this topic, and I will doubtless fail to keep them concise here, so I'll take a swing at answering your question and attach whatever mini-essay comes flying out as a post-script.

Could AI feel anything like the satisfaction of giving just the right gift? No. That is a satisfaction reserved for a being who mutually recognizes another in a normative sense (as a receiver of gifts in a deeply personal sense). Additionally, that recognition presupposes an ability to be involved in an ongoing transaction with a modally complex world as a product of selectional forces. These things virtually require a conceptual status that is almost identical with social biological life.

What's the big difference between that and a convincing illusion of the feeling? (Perhaps ChatGPT could write a nice poem or essay about the feeling?) What makes one the "real deal"? I think we lend these models a great deal of credibility when we ascribe feelings, agency, etc. to them. The difference is that the real deal is deeply counterfactually robust and subject to ongoing normative correction. If some system could only engage in the practice of "counting" when the countables were bowling pins, and was baffled by counting anything else, we should rightly hesitate to call this "counting".

In the face of what I see as a lot of AI hubris, I like to consider the fact that we cannot even synthesize a robust, simple organism. Take algae. Even an alga is quite robust, because it is the product of constant interaction with a complex world, with modular parts that all fight to survive and the ability to adapt. Not to say this is synthetically impossible, but I don't think we are anywhere close.

Can we shortcut nature's path to "intelligence" in the same way we can shortcut nature's path to flying? A bit apples-to-oranges, but I think the answer should be no.

Extended P.S.

My little critique here only applies to AI in its current iteration. I think anything that can be a Universal Turing Machine could—in principle and with the right guidance—house intelligence. So I am not committed to the claim that something like an LLM could never be intelligent. It just seems nearly physically impossible to me at this point.

I use A.I. at work every day, and I think the public sentiment about it is very overblown. It's good at spitting out simple code; it cannot engineer anything. I have built a few cool tools with it on my own. One example might help segue into what I think the biggest problem with "artificial intelligence" in its current iteration is. I vectorized the entire Stanford Encyclopedia of Philosophy and was using it to generate reading lists. I built a tool that plugged it straight into my Claude client so I could ASK the entire encyclopedia questions—pretty fun stuff. I offered it to the Stanford SEP to see if they would want to host it, but obviously they didn't want to claim responsibility for whatever it spewed out, having carefully curated the encyclopedia themselves.
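For concreteness, the retrieval half of a tool like that looks roughly like the sketch below. The library, embedding model, and example chunks are just stand-ins for illustration, not a description of what I actually built:

```python
# A minimal retrieval sketch: embed text chunks once, then surface the chunks
# nearest to a question so they can be handed to a chat model as context.
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder (a stand-in)

def build_index(chunks: list[str]) -> np.ndarray:
    """Embed every chunk; rows come back unit-normalized."""
    return np.asarray(model.encode(chunks, normalize_embeddings=True))

def retrieve(question: str, chunks: list[str], index: np.ndarray, k: int = 3) -> list[str]:
    """Return the k chunks most similar to the question (cosine similarity)."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = index @ q  # dot product equals cosine similarity, since rows are normalized
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

# Hypothetical usage: the retrieved chunks get prepended to the question and
# sent along to the chat model (e.g. exposed to a Claude client as a tool).
chunks = ["Dewey treats experience as a transaction with the world...",
          "Kant's faculty of Reason binds judgments to universal rules..."]
index = build_index(chunks)
print(retrieve("What is experience, for Dewey?", chunks, index, k=1))
```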

Philosophically speaking, I think there is a sense among AI folks that a kind of "functionalist-essentialism" is true about what it means to "really think," "really feel," or "really reason". Folks of that persuasion seem to believe that there is some pattern of information processing that is identical with feeling, thinking, and so on. This view is what makes plausible the kind of mystical thinking about AI that props up the pseudo-marketing scare tactics. If AI can "all of a sudden" become sentient, maybe it can take over? Maybe it can overrun humanity? Maybe it will take your job? Etc., etc... I find these views bizarre at best. I think that "thinking," "reasoning," and "feeling" are all deeply normative concepts, and trying to just pick them out like natural kinds is a radical mistake. You might try to pick out some disjunctive set of natural phenomena to fit the bill (brain states and so on), but that set will appear wildly gerrymandered. I think that is because the approach is already barking up the wrong tree, looking to the natural when it should be looking to the social-normative.

I think it's also a mistake to view intelligence or feeling as extra-natural metaphysical properties that somehow inhere in our experiences and mental goings-on, but I'll avoid that one for now.

When people talk about AI, I feel like I always see a few very philosophically distinct concepts run together under the heading of "intelligence":

  1. Information processing (thinking, information compression),
  2. Phenomenal consciousness (feels), and
  3. Reasoning (discursivity, giving and asking for reasons)

I view AI as doubtless capable of 1, but I am extremely hesitant to attribute to it the ability to reason in a full-blooded sense or the ability to feel. Why draw a distinction between 1 & 3? If we don't, we run the risk of claiming that thermostats think or understand temperature, or that parrots understand "RED!" when trained to respond to red stimuli. In order to actually capital-R Reason, I think we need to be able to robustly give and ask for reasons. To do that, I think we need something like the following:

  • Social engagement, which gives us the ability to:
    • Be normatively constrained in a kind of "space of reasons": accept and argue reasons for/against.
    • Universalize cases (by holding each other accountable to rules)
  • Embeddedness in a modally complex environment, which gives us the ability to:
    • Be robust/flexible to the right kinds of changes.
      • Robust under arbitrary changes, flexible under others.
      • Machine learning algorithms can be rigorously trained to recognize school buses, but then you flip one upside down and they utterly fail (a rough way to probe this is sketched just after this list).
      • To BECOME robust, I think, requires iterative adaptation with a complex environment.
    • Have embodied memory and modularity
      • DNA, mitochondria, etc. are all crystallized successes that took billions of iterations.
      • Our brains constantly "offload" work to our environments.
      • No doubt nature doesn't take the fastest path by necessity, but it's pretty damn efficient.
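The school-bus point above is easy to probe for yourself. Here is a rough sketch of such a test; the model choice and file name are placeholders, and whether a given model actually changes its label will vary, though the general brittleness under unusual poses is well documented:

```python
# A rough probe of the brittleness described above: compare a pretrained
# classifier's label for an image with its label for the same image upside down.
# Assumes: pip install torch torchvision pillow, and a local "school_bus.jpg".
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the resize/crop/normalize pipeline the model expects

def top_label(img: Image.Image) -> str:
    """Return the model's highest-scoring ImageNet class name for the image."""
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return weights.meta["categories"][int(logits.argmax())]

img = Image.open("school_bus.jpg").convert("RGB")   # any photo of a school bus
flipped = img.transpose(Image.Transpose.FLIP_TOP_BOTTOM)
print("upright:", top_label(img))
print("flipped:", top_label(flipped))               # often a different, odd label
```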

I think the embeddedness matters because, in order to have what should count as a genuine ability to generalize and adapt, we need information processing that is calibrated to the things in the environment that matter. This is just baked into what it is to be the product of natural selection, but in the case of artificial intelligence it needs to be said explicitly. AI struggles with the modal structure (the counterfactual dependences) that even simple, ordinary empirical knowledge requires, e.g. "a well-made match lights when struck" (but not in a Faraday cage, but not if it's wet, but not if it's a stage prop, but not if it's been struck already, and so on...). This counterfactually robust information is always already there in our environment, but it has to be codified somehow for something like an LLM.
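To make that codification burden vivid, here is a toy sketch of the match example. Every predicate name is made up; the point is that the list of defeaters has no natural end:

```python
# A toy illustration of how fast "a well-made match lights when struck"
# balloons once its counterfactual conditions are spelled out by hand.
def lights_when_struck(match: dict) -> bool:
    return (
        match.get("well_made", False)
        and not match.get("wet", False)
        and not match.get("already_struck", False)
        and not match.get("stage_prop", False)
        and not match.get("in_faraday_cage_vacuum", False)  # stand-in for "no oxygen"
        # ...and so on, for an open-ended list of further exceptions
    )

print(lights_when_struck({"well_made": True}))               # True
print(lights_when_struck({"well_made": True, "wet": True}))  # False
```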

To be able to Reason requires having something like Kant's faculty of "Reason". That is the ability to make generalizations universal, to bind ourselves to rules that normatively apply to "all cases". To the extent that AI has this normative element, it is entirely parasitic on "humans in the loop". In a world with only a bunch of LLMs talking to each other, I would not say they are intelligible as talking about anything.

A great example of both universalizing and embeddedness breaking down catastrophically for AI is their inability to do simple puzzles when the parameters change slightly. See this recent Apple paper where LLMs fail Tower of Hanoi problems when parameters like the number of rings change (Apple Tower of Hanoi paper). Also, Google AI engineer Francois Chollet has a famous set of problems, ARC-AGI, that LLMs absolutely fail to solve but a human child could tackle easily. There is a roughly million-dollar prize for high benchmark scores there, and almost none of the viable (>50% correct) solutions are LLMs (see the leaderboard).
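For contrast, the classical solution is a handful of lines that are indifferent to the number of rings, which is exactly the kind of parameter-invariance at issue (generalizing the number of pegs is a genuinely different puzzle, the Frame-Stewart problem, so this sketch keeps the standard three):

```python
# The classic 3-peg Tower of Hanoi solver: the same few lines work unchanged
# for any number of rings.
def hanoi(n: int, source: str = "A", target: str = "C", spare: str = "B") -> list[tuple[str, str]]:
    """Return the sequence of (from_peg, to_peg) moves that solves n rings."""
    if n == 0:
        return []
    moves = hanoi(n - 1, source, spare, target)   # park the n-1 smaller rings on the spare peg
    moves.append((source, target))                # move the largest ring to the target
    moves += hanoi(n - 1, spare, target, source)  # restack the smaller rings on top of it
    return moves

for n in (3, 7, 12):
    print(f"{n} rings -> {len(hanoi(n))} moves")  # always 2**n - 1
```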

As for 2, feeling, that's the one we seem the MOST hesitant to attribute to LLMs. I think the reason for that is twofold. First, having real feelings seems to require an ongoing transaction with the world (I like how Dewey defines experience as necessarily a transaction with the world). Second, there is a normative element there as well; we learn what feelings are (how to identify them and what to do about them) in a deeply social way. We need two things: the capacities/dispositions to respond (biology) and the normative network with others who respond likewise.

I think it's important to notice—as Chiang and Hofstadter do here—that even information compression or information processing are normative notions, if they are important at all. If I compress some information that comprises a JPEG file and it cannot be "unzipped" into a visually intelligible image, I have failed. I have done it wrong. If I process information in some arbitrary way, that may be "processing", but who cares if it's not processing right.
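That normativity is easy to make concrete for lossless compression, where "done right" just means the round trip gives the original back exactly; for a lossy format like JPEG the norm (visual intelligibility) is even harder to pin down in code. A tiny illustration, with the success condition made explicit in the final assertion:

```python
# Lossless compression wears its norm on its sleeve: the round trip must
# recover the original exactly, or the compression was done wrong.
import zlib

original = b"counterfactually robust " * 1000
compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

print(f"{len(original)} bytes -> {len(compressed)} bytes")
assert restored == original, "compression done wrong: the original is not recoverable"
```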

So, revisiting 1, 2, and 3 with some more philosophical clarity (or less? haha): can LLMs:

  1. Think/Process information/Compress information? Only in a sense that is non-autonomous and parasitic on US for its meaning.
  2. Feel feelings, have emotions? No. Not world-embedded enough and not sufficiently social for the concepts to apply correctly.
  3. Reason? No. They can fake it well, or do a convincing version of it for many purposes, but they do not appear to actually be generalizing universally or actually participating in a space of reasons because their behavior is too brittle.

I think 2 & 3 can exist on a spectrum, once up and running. On the one end are easily-tricked and fragile systems, and on the other are the systems that can generalize to all cases and subject themselves and others to proper normative correction (again, LLMs alone cannot do this).

I suppose all of this is just a nuanced way to provisionally agree with the article's stance: “You could say they are thinking, just in a somewhat alien way.”