One of the areas I want to explore in Inner Horizon is the infinite limit of computational power, which immediately brings up intelligence and consciousness. The recent discussions of Large Language Models (LLMs) in the context of e.g. GPT-4 have made me realize two things: 1. people are bad at scientific reasoning, and 2. whatever GPT-4 is, it’s not verging on “sentience”. I am not claiming anything that follows to be original1 — I am just collecting thoughts to structure a sci-fi novel.
Scientifically, if GPT-4 can do X, but you don’t know how it arrived at X and things like X are in the training data, we haven’t really shown the novelty we generally associate with intelligence. In addition to a couple rants about blind testing, I came up with a thought experiment on Twitter:


That part is just basic “bad science” of the kind where you don’t have a good control group or well-defined conditions (Feynman’s “cargo cult science” as I talk about here on my econ substack).
Most of the tricks these LLMs are doing can be classified under “very good compression of the training data” in the sense of Shannon’s original conception of information theory. In his original paper, Shannon shows a series of approximations to English — random letters, random letters with the correct single-letter frequencies, random letters with the correct two-letter frequencies, etc. As you move up the order of approximation, the “resemblance to ordinary English … increases quite noticeably”.
Through their training, LLMs have worked out the probabilities not only of entire sentences, but of paragraphs and responses to particular sentences — even code and images. Over the years, their resemblance to a human response has increased quite noticeably. GPT-4 is good at something, but what is this capability?
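To make that ladder of approximations concrete, here is a minimal sketch (my own illustration in Python, not Shannon’s procedure and not anything from an LLM paper) that generates text from order-k character statistics counted from a corpus. The filename is hypothetical; any large plain-text file of English will do.

```python
import random
from collections import Counter, defaultdict

def train(corpus: str, k: int):
    """Count which character follows each k-character context."""
    counts = defaultdict(Counter)
    for i in range(len(corpus) - k):
        context, nxt = corpus[i:i + k], corpus[i + k]
        counts[context][nxt] += 1
    return counts

def generate(counts, k: int, length: int = 300) -> str:
    """Sample text one character at a time from the order-k statistics."""
    out = random.choice(list(counts))          # start from a random context
    for _ in range(length):
        followers = counts.get(out[-k:])
        if not followers:                      # dead end: restart the context
            out += random.choice(list(counts))
            continue
        chars, weights = zip(*followers.items())
        out += random.choices(chars, weights=weights)[0]
    return out

corpus = open("english_text.txt").read()       # hypothetical training file
for k in (1, 2, 3):                            # higher k = higher-order approximation
    print(f"--- order {k} ---")
    print(generate(train(corpus, k), k))
```

Even at order 2 or 3 the output starts to “look like” English in the way Shannon described; an LLM is, very loosely, this idea scaled up to enormous contexts with the count tables replaced by a learned, compressed representation.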
Do we want to categorize this capability as “intelligence”? It’s basically pure combinatorics, probability, and data. Without going into the whole “Chinese room” thought experiment (which my tweeted example above is a play on) or the definition of qualia, this definition strikes me as insufficient for the weird ETO-like conception of LLMs as a kind of alien proto-superbeing that you might just want to align yourself with2.
However, I am fine with chucking the term “intelligence” over to the AGI dogs — not just to let marketing ruin it, but also out of spite for the whole IQ-centric view of humanity. Intelligence as a D&D stat that can be measured with tests a good little automaton student performs well on — cool; not what I’m interested in.
It’s entirely possible that our human brains are just giant ML algorithms, and that once we reach a high enough point on the intelligence metric, with enough relationships between different pieces of information, we develop an integrated description of the universe that we call consciousness. We are philosophical zombies, but with an effective description as conscious beings. I don’t see the view of consciousness as an evolutionary byproduct as terribly different from this either.
None of the “we’re not really conscious; we only think we are” views are particularly satisfying — in part because it kind of gives up on a hard problem by saying it doesn’t exist. This can be fun sometimes (or even correct), but the best course of action is to acknowledge the possibility in order to keep yourself humble and then go back to thinking about the hard problem qua hard problem.
After reading a couple of LLM papers, I did a thought experiment the other day while I went about my regular work — what things was I doing that would be difficult to accomplish if I were a philosophical zombie? Well, reading those LLM papers in order to be able to regurgitate the information in another forum would be fairly easy. Getting lunch? Easy to see myself as a trained automaton seeking calories. Writing a computer program to calculate something? I mean GPT-4 can do that, right?
What does seem like it would have been hard: explaining the design goals to another team on one project, teaching a mentee about an electronic component, and composing an email saying someone hadn’t done their proposal work on time (so I ended up having to do it) without throwing them under the bus, since that project would likely never be funded because the client didn’t really understand what was involved. All social interactions.
Much like the other thought experiments I’ve been doing on how many dimensions you would evolve to perceive in empty space (1? 0?) or near a black hole (2+1?), the conclusion seems to be that a life form that evolved without sociality would not necessarily be conscious in the way we think of it. I mean, our consciousness shuts down for several hours per day during which we’re still living, possibly processing learned information, and still able to mount a fight or flight response — we’re just not doing social things. I see consciousness as an evolutionary adaptation for social life forms. It seems beneficial3 for many social tasks — teaching others, dealing with social hierarchies, or leadership.
As a side note, while some have considered that consciousness might arise to allow a being to discern between illusory sensory perception and reality, or to handle the possibility of hidden motives and deception, I “argue” against this through the thought experiment of the Uutaruu. They are a telepathic species that is unable to intuitively grasp the concept of deception due to the fact that, during their evolution, every thought was generally available to every other member of their species. However, they are conscious beings4. Teaching seems like a more essential function of consciousness. While some tasks can be learned by imitation alone, many complex tasks involving tools and intermediate goals would seem to benefit from a teacher and student capable of experiencing qualia and conceiving of the other also experiencing them. Most of the animals capable of passing the self-awareness mirror test for consciousness are not only social animals, but social animals that pass knowledge on.
I’d also like to say that it’s entirely possible that consciousness is even more rudimentary — that it is primarily an aspect of social species without genetically or biochemically determined social roles (i.e. not ants or bees). It didn’t evolve to enable teaching, but rather to enable dynamic social status and relationships. Meerkats, pigeons and wolves might be less intelligent than humans, but just as conscious.
[ETA: I do want to point out that human babies start to learn essentially as soon as they are born, but may or may not be conscious beings until a few months or even years later (though the science on this is far from definitive). Consciousness booting up later than intelligence (as we’ve defined it here) is a possible lever for distinguishing the two qualities.]
Because GPT-4 is not truly a social being, it is intelligent (remember, we’re giving them this word), but not conscious. Intelligence is at its heart raw combinatorial power, and it is fully captured by the Chinese room. Consciousness is the experience of self in relation to others in a social system; it’s specifically what’s missing in the Chinese room. To this I want to add sapience as the peculiar combination of the two that we humans experience — sufficient intelligence coupled with consciousness that enables us to build abstraction and symbolism. GPT-4 is intelligent but not conscious, so it is not sapient. A pigeon is conscious [pdf] but not intelligent, so it is not sapient. Humans (and possibly e.g. elephants and dolphins) are both intelligent and conscious, so would be considered sapient.
Now I am saying these things as if that’s the way they are, but remember this essay is notes for a work of fiction — science fiction — where I want to raise questions about consciousness. In order to make that interesting I have to propose some kind of answer5. I really have no idea, but my intuition says there's something more to consciousness than an LLM which is just a few more steps along Shannon's ladder of approximations to language.
Boltzmann Brains
Our universe appears to have a positive cosmological constant, which implies it is a de Sitter (dS) universe. An issue with the possibility of “eternal inflation” in such a universe is that since space and time are infinite, there is infinite space and time to realize even minuscule probabilities — such as a brain fluctuating out of the vacuum because of quantum mechanics. These are the so-called Boltzmann brains, though personally I would’ve preferred a conscious mattress to be the subject of the thought experiment, as Douglas Adams captured the true ridiculousness of probability in an infinite multiverse. The thing is, depending on how you measure it, most observers should be Boltzmann brains.
Aside from that being the original inspiration for the artwork at the top of this post, there are some papers [pdf] that try to take this issue on — attempting to find a “cutoff” for the infinite sums involved. However, as the linked paper shows, a great deal of philosophy goes into defining a “minimal” brain, which in turn goes into defining the probability.
The view of consciousness as a separate social faculty, distinct from pure intelligence taken as a measure of processing and storage, might help with these calculations — a genuinely conscious (sapient) observer might need more than just a single brain popping into existence from the vacuum. In fact, a full conscious experience could require the vacuum fluctuations to produce an entire waking day for a human, which would necessarily involve other people, the Earth, and the sun. It could even have to produce an entire conscious being’s personal history.
That is to say, Boltzmann brains are far more likely to be philosophical zombies — possibly intelligent but not conscious per the view above (the view I plan to take in Inner Horizon). If that’s right, and it takes a lot more than a bare brain to get a conscious observer, then the relevant production rate is estimated (per the linked paper) as:

Γ ~ exp(−10⁸⁵)

which can be small enough to make it at least reasonably likely for observers to be “normal” (evolved after slow-roll inflation and reheating) instead of Boltzmann brains.
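For a sense of scale, here is a back-of-envelope version of why these exponents come out so absurdly large. It is just the textbook thermal suppression factor exp(−E/kT) evaluated at the de Sitter horizon temperature for the rest energy of a ~1 kg brain; this is my own illustration and not the linked paper’s calculation (the 10⁸⁵ above is the paper’s estimate). Even this crude version lands near 10⁶⁹, and requiring an entire consistent environment for a conscious observer only makes the exponent larger.

```python
import math

# Illustrative Boltzmann suppression for fluctuating an energy E out of the
# de Sitter vacuum at the Gibbons-Hawking temperature T_dS = hbar*H/(2*pi*k_B).
# Back-of-envelope only; not the measure-dependent estimate in the linked paper.
hbar = 1.055e-34    # J s
k_B  = 1.381e-23    # J / K
c    = 2.998e8      # m / s
H0   = 2.2e-18      # 1 / s, roughly the observed Hubble rate (assumed value)

T_dS = hbar * H0 / (2 * math.pi * k_B)   # ~ 3e-30 K
E_brain = 1.0 * c**2                     # rest energy of a ~1 kg brain, in J

exponent = E_brain / (k_B * T_dS)        # the cost in "gamma ~ exp(-exponent)"
print(f"T_dS = {T_dS:.1e} K, exponent = {exponent:.1e}")   # ~ 2e69
```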
My general approach to the vast scientific and philosophical literature on consciousness and intelligence is: i ain’t reading all that; i’m happy for u tho; or sorry that happened. You, dear reader, probably should likewise ignore this blog post — only read it if you are entertained. Again, it’s notes for a not-yet-existing work of genre fiction.
The one drawback is existential dread.
This isn’t so much an argument as a declaration — in the world of Inner Horizon, it’s not deception that consciousness evolved to enable (rather, it’s the other way around: consciousness enabled deception in humans).
In-universe, computational power density is a detectable quantity — the limit being a black hole. Sapient social species are effectively just another mechanism for aggregating computational power density.
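As a rough anchor for that last note, here is a sketch of the Margolus–Levitin / Lloyd-style bound on how many elementary operations per second a given amount of energy can support (Lloyd’s “ultimate laptop” estimate; my illustration, not something from the novel’s world-building). The black-hole limit is, roughly, about packing that energy and the corresponding information into the smallest possible region.

```python
import math

# Margolus-Levitin style bound: a system with average energy E above its ground
# state can perform at most ~ 2E/(pi*hbar) elementary operations per second.
# Using the full rest energy of 1 kg reproduces Lloyd's "ultimate laptop" number.
hbar = 1.055e-34    # J s
c    = 2.998e8      # m / s

E = 1.0 * c**2                            # rest energy of 1 kg, in J
ops_per_second = 2 * E / (math.pi * hbar)
print(f"{ops_per_second:.1e} ops/s")      # ~ 5e50
```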