Four arguments for the irreducibility of mind

One often reads about ambitions to explain our minds in terms of chemistry and physics, or of algorithmic equations, and about the corollary aspirations to create artificial intelligence that would achieve the kind of consciousness and reason that human beings possess. Whether such a thing is possible has been the topic of many philosophical debates and discussions. I wanted to share here a preliminary list of four arguments for why I don’t believe that human reason can be reduced to algorithmic or natural processes. I call it “preliminary” because I am more convinced by some of these arguments than by others, and an argument or two could be added or dropped in the future, so this list should be taken as an intermediate step in my thinking. I’ll list them from weakest to strongest in my opinion (although I find all of them worthwhile enough to make this list), but I encourage using them as a launching point for your own thoughts rather than as a definitive conclusion on the matter.

1. The Turing Machine

A Turing machine is an abstract mathematical construct that can simulate any algorithmic information processor, past, present, or future. And so, if the human mind is, at the end of the day, simply a very complex information processor, we should expect findings about Turing machines to apply to it as well. Yet one of the implications that seems to emerge from the study of Turing machines is that such a machine can only produce information that is already in its input or in the structure of the mechanism itself. As mathematician John Lennox sums up the work of his colleague, “Gregory Chaitin has established that you cannot prove that a specific sequence of numbers has a complexity greater than the program required to generate it.” [1] In other words, one cannot prove that an algorithm or a program has produced information more complex than the program itself.

Now this is a negative proposition in itself, yet it coheres well with the positive assertion that an algorithm cannot generate new information. And that assertion matches our experience: we do not expect our computers or smartphones to generate phrases or sentences that were not built in by the developer or typed in by ourselves. While this is more of a conjecture than a proof (again, Chaitin’s result is that one cannot prove that an informational sequence is more complex than the algorithm that generated it), I think it points towards the conclusion supported by our common sense. We human beings can generate novel informational sequences (computer code, poems, or philosophical works), which would indicate that we cannot be fully modeled by a Turing machine, and hence by any algorithm, however complex. [2]
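For readers who want the claim in more standard terms: Chaitin’s result is usually stated using Kolmogorov complexity, the length of the shortest program that outputs a given string. The sketch below is my paraphrase of that standard statement, not Lennox’s or Chaitin’s own wording.

```latex
% Kolmogorov complexity of a string s (relative to a fixed universal machine U):
%   the length of the shortest program p that prints s and halts.
% Chaitin's incompleteness theorem (paraphrased): for any consistent formal
% system F whose axioms can be listed by a program, there is a constant c_F,
% roughly the size of F itself, such that F cannot prove of any particular
% string s that its complexity exceeds c_F,
\[
  K(s) = \min \{\, |p| : U(p) = s \,\}, \qquad
  \exists\, c_F \ \forall s :\ F \nvdash \bigl( K(s) > c_F \bigr),
\]
% even though all but finitely many strings do in fact satisfy K(s) > c_F.
```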

[1] Lennox, John. God’s Undertaker: Has Science Buried God?, p. 162.

[2] One could argue that we do not really create information, but simply transmit it: our eyes take in visual information from what we see, our ears receive sound signals, and so on, and we then use that “sense-data” to construct poems and novels. However, the Turing machine argument concerns coded information, not information in that more general sense. It seems to me, then, that even if one argued that our brains store and encode the “sense-data” that comes in from outside, the conversion of this “sense-data” into coded information would itself be subject to testing by this argument. Could it be that our very ability to create new words shows that our brains can increase in informational complexity on their own, contra what a Turing machine would suggest? A machine, by contrast, could only create words in accordance with built-in algorithms.

2. Godel’s Incompleteness Theorem

Gödel’s incompleteness theorem was a paradigm shift in mathematics, with an elegant proof that had drastic implications for the field. To put it in simple terms, Gödel showed that if you have a consistent system of (mathematical) rules or axioms rich enough to express arithmetic, which you can use to prove further theorems, there will always be a statement that is true for that system but that cannot be proven by the rules of that system. An important point is that you can see that this unprovable statement is true by virtue of trusting the rules (i.e. the same logic that convinces you that those rules hold also shows you that the unprovable statement is true), even though you cannot prove it by using the rules. [1] One non-obvious but fascinating question follows: how do you know that a given unprovable statement is true, if not by the algorithmic process of applying rules and axioms? Sir Roger Penrose, a physicist and mathematician who recently won the Nobel Prize for his work on black holes, believes that this demonstrates that the act of knowing is not an algorithmic process but something else. Objections have been raised to his argument, and I again leave it to the reader to go into more depth and make up her own mind. [2]
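For reference, here is the theorem in compressed form (my summary, not Penrose’s wording):

```latex
% First incompleteness theorem (standard form, paraphrased): let F be any
% consistent, effectively axiomatized formal system strong enough to express
% elementary arithmetic. Then there is a sentence G_F, informally saying
% "this sentence is not provable in F", such that
\[
  F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F .
\]
% Yet anyone who trusts that F is consistent can see, from outside the
% system, that G_F is true; this "seeing" is the step Penrose argues cannot
% itself be the output of F's rules.
```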

[1] https://www.youtube.com/watch?v=GX10mR_N0Vs is where you can see mathematician Roger Penrose explaining Gödel’s theorem and its implications for consciousness at a level that I and Joe Rogan (presumably) can understand.

[2] https://iep.utm.edu/lp-argue/#SH3a is where you can learn more about this particular argument.

3. Polanyi’s Insight on the Irreducibility of Information

Michael Polanyi was a brilliant chemist and polymath who made contributions in fields beyond his own, such as economics. His intellect can be indirectly estimated from the fact that two of his pupils, as well as his son, won Nobel Prizes. One of his greatest insights came, in my view, in his article on the topic of biological life [1]. Part of Polanyi’s argument was that the origin of the informational code (DNA/RNA) in the structure of the cell cannot be explained purely in terms of chemistry and physics, but his argument applies to any informational code in general. It can be presented as follows: suppose that you are writing a letter to a friend. The only condition is that every time you write the letter “a”, it has to be followed by “b”, every time you write “b”, it has to be followed by “c”, and so on until you get to “z”, after which you have to follow up with “a” again. How much information would you be able to store in such a chain of letters?

The answer is zero: since the order of the letters is predetermined, the chain carries no information beyond the trivial choice of its first letter. There is no uncertainty, no room for variation, which is precisely why there is no capacity to store information. As Polanyi writes (regarding DNA),

“It must be as physically indeterminate as the sequence of words is on a printed page. As the arrangement of a printed page is extraneous to the chemistry of the printed page, so is the base sequence in a DNA molecule extraneous to the chemical forces at work in the DNA molecule. It is this physical indeterminacy of the sequence that produces the improbability of occurrence of any particular sequence and thereby enables it to have a meaning – a meaning that has a mathematically determinate information content equal to the numerical improbability of the arrangement.”
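In modern information-theoretic terms (my gloss, not Polanyi’s notation), the “information content equal to the numerical improbability of the arrangement” in the last sentence is the surprisal of the sequence:

```latex
% Surprisal (self-information) of a particular sequence s with probability P(s):
\[
  I(s) = -\log_2 P(s) \ \text{bits}.
\]
% A sequence whose order is forced by chemical or physical law has P(s) = 1
% and therefore I(s) = 0; only a sequence that could have been otherwise,
% with P(s) < 1, can carry information.
```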

In other words, any informational chain, be it a DNA molecule or printed text on a page, could not carry information if the sequence of its letters were dictated by chemical and physical laws, which work very much like the algorithmic, deterministic rules by which a computer operates. Because there is no uncertainty in the outcome of a given set of inputs and algorithms, there is very little possibility of creating new informational code purely from those inputs and algorithms; and yet we human beings are evidently capable of creating new information in the form of literary or functional codes. Notice also how well this coheres with the first argument, based on the Turing machine: two different routes of thinking seem to bring us to a similar conclusion. Polanyi’s insight goes even further, suggesting that the creation of new information is not only a non-algorithmic but also a non-deterministic, and therefore (insofar as we accept determinism in Nature) supernatural, process. [2]
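The letter-chain example can also be made concrete with a small sketch. It is purely illustrative (the function name, the 26-letter alphabet, and the chain length are my own choices, not Polanyi’s); it measures how much choice remains for each letter once the previous one is known:

```python
import math
from collections import Counter

def conditional_entropy(text: str) -> float:
    """H(next letter | current letter) in bits: the average amount of
    choice that remains for a letter once the previous letter is known."""
    pair_counts = Counter(zip(text, text[1:]))
    prev_counts = Counter(text[:-1])
    n = len(text) - 1
    entropy = 0.0
    for (prev, _next_char), count in pair_counts.items():
        p_pair = count / n                            # P(prev, next)
        p_next_given_prev = count / prev_counts[prev]
        entropy -= p_pair * math.log2(p_next_given_prev)
    return entropy

# Polanyi's constrained chain: "a" must be followed by "b", "b" by "c", ...,
# and "z" by "a" again, so each letter is fixed by its predecessor.
constrained = "".join(chr(ord("a") + i % 26) for i in range(10_000))
print(conditional_entropy(constrained))  # 0.0 bits: no choice, no information

# For comparison, a letter chosen freely from 26 possibilities can carry
# up to log2(26), about 4.70 bits.
print(math.log2(26))
```

For the constrained chain the answer is zero bits per letter, exactly as Polanyi’s argument predicts; a freely chosen sequence of the same length could carry nearly five bits per letter.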

[1] Polanyi, Michael. “Life’s Irreducible Structure.” Science, vol. 160, no. 3834 (1968). Freely accessible at https://www.informationphilosopher.com/solutions/scientists/polanyi/Polanyi_Life_Structures.pdf

[2] One could argue that while deterministic processes cannot create information, randomness (on which quantum mechanics may conceivably operate) is not deterministic; hence there is still the possibility of creating information as a byproduct of chance, though not as a consequence of a deterministic algorithmic process. But are we really prepared to say that the sentences, poems, literature, and computer code that we create are made through chance? In the absence of actual proof of such a thing, our immediate senses and intuition revolt against that notion. Furthermore, how much information can chance create anyway? How long would a monkey have to type on a typewriter before a Shakespearean sonnet came out?
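To put a rough number on that last question (a back-of-the-envelope estimate of my own, not part of the footnote above): a Shakespearean sonnet runs to roughly 600 characters, and a typewriter with 26 letters plus a space bar offers 27 equally likely keys, so the chance that one random 600-character attempt reproduces a given sonnet is about

```latex
\[
  P \approx 27^{-600} \approx 10^{-859},
\]
% so even a monkey producing a billion 600-character attempts every second
% would need on the order of 10^{850} seconds, against the roughly
% 4 x 10^{17} seconds the universe has existed.
```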

4. Lewis’ Argument from Reason

The last argument, like Polanyi’s, was not directed at artificial intelligence, but its implications are still critical. It appeared in a much-underappreciated short book from 1947 by C. S. Lewis called “Miracles”. Lewis’ argument was for the existence of the supernatural (in the strict sense of that word, not ghosts and spooky stuff), and his main thrust concerned the existence of human reason. The relevant chapter (“The Cardinal Difficulty of Naturalism”) is short enough for my reader to investigate on her own, but I will summarize it here. Lewis writes, with regard to the weakness of Naturalism, that

“The easiest way of exhibiting this is to notice the two senses of the word because. We can say, ‘Grandfather is ill today because he ate lobster yesterday.’ We can also say, ‘Grandfather must be ill today because he hasn’t got up yet (and we know he is an invariably early riser when he is well).’ In the first sentence ‘because’ indicates the relation of Cause and Effect: The eating made him ill. In the second, it indicates the relation of what logicians call Ground and Consequent. The old man’s late rising is not the cause of his disorder but the reason why we believe him to be disordered.”

This difference between the logical “because” and the cause-and-effect “because” is crucial, for our reasoning is valid only if our thoughts follow one another according to the logical “because”, while events in Nature follow one another according to the cause-and-effect “because”. If our reasoning is to be both logically reliable and a natural phenomenon, it has to satisfy both conditions, to fall into both systems. But these two systems are wholly distinct. Lewis notes that

“to be caused is not to be proved. Wishful thinkings, prejudices, and the delusions of madness, are all caused, but they are ungrounded.”

In fact, we often attribute someone’s words to a cause precisely in order to undermine their reasoning, as when we say, “You only say this because you are a woman”. We take it that there is no reason to consider the rational grounds of a belief or statement that can be wholly explained without them. Lewis continues, however, that if our thoughts are natural phenomena, then

“even if [rational] grounds do exist, what exactly have they got to do with the actual occurrence of the belief as a psychological event? If it is an event, it must be caused. It must in fact be simply one link in a causal chain which stretches back to the beginning and forward to the end of time. How could such a trifle as lack of logical grounds prevent the belief’s occurrence or how could the existence of grounds promote it?”

Needless to say, if Lewis’ argument is sound, human reasoning cannot be reduced to the natural world. It cannot simply be another part of that large system of cause and effect that is Nature, precisely because of its freedom to choose and follow a rational chain of thought that is not merely another interaction of atoms. Human reason ends up being, ironically for some, the most immediately available hint of the existence of the supernatural. [1]

[1] Of course, there are many objections and counter-objections that could be raised against Lewis’ argument, which is why I again leave it to the reader to see what Lewis wrote and what the resulting philosophical discussion has been like.
