Sentience: Predictive Coding in the Human Mind vs. AI, LLMs

Guest Post by David Stephen

There is a recent paper, Evidence of a predictive coding hierarchy in the human brain listening to speech, which states: “Together, these results support predictive coding theories, whereby the brain continually predicts sensory inputs, compares these predictions to the truth and updates its internal model accordingly. Our study further clarifies this general framework.

Not only does the brain predict sensory inputs but each region of the cortical hierarchy is organized to predict different temporal scopes and different levels of representations. However, the link between hierarchical constructs in syntax and functional hierarchy in the cortex and in the model is a major question to explore.  This computational organization is at odds with current language algorithms, which are mostly trained to make adjacent and word-level predictions.”

“Predictive coding theory offers a potential explanation to these shortcomings; while deep language models are mostly tuned to predict the very next word, this framework suggests that the human brain makes predictions over multiple timescales and levels of representations across the cortical hierarchy.”
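The loop the quoted passage describes, predict the sensory input, compare it to the truth, update the internal model, can be sketched minimally. This is a toy illustration of the general framework only, not the paper's model; the function name, the scalar "model" and the learning rate are all hypothetical.

```python
# Minimal predictive-coding sketch (hypothetical, for illustration only):
# predict the next input, compare with what arrives, and update the
# internal estimate in proportion to the prediction error.

def predictive_coding(inputs, learning_rate=0.5):
    estimate = 0.0                         # internal model (a single value here)
    errors = []
    for x in inputs:
        prediction = estimate              # predict the next sensory input
        error = x - prediction             # compare prediction to the truth
        estimate += learning_rate * error  # update the internal model
        errors.append(error)
    return estimate, errors

# On a steady input, the prediction error shrinks as the model adapts.
final, errors = predictive_coding([1.0, 1.0, 1.0, 1.0])
print(round(final, 4), [round(e, 4) for e in errors])  # → 0.9375 [1.0, 0.5, 0.25, 0.125]
```

The shrinking error on repeated input is the point of contrast with next-word-only language models: the same compare-and-update step could, in principle, run at several timescales at once.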

Though the distinction with the mind may seem fuzzy, the human brain does not make predictions. There is no text or image in the brain. In speech, while several biological systems are involved, the content [or memory] belongs to the mind. Predictive coding theory does not explain how the components of the mind determine what the theory defines as a prediction.

Is the label prediction an accurate description of what the mind does, or does the way the mind functions merely give rise to what observers define as prediction? How does the human mind process what is within into outputs such as language or communication?

Mind and Prediction

When someone is communicating: typing, speaking or signing, and what comes next is prepared in mind, how does it work? What is the difference between how this works and what happens when someone holds something—like numbers—in mind, briefly? What is also the difference between this and when someone is in some situation and there are thoughts of several possibilities within a second? In the components of the human mind, what happens, what is the process like?

For what seems like prediction of sensory input, how does the ‘brain’ do it? Entering a room and knowing what to expect, what is the pathway? Where does the brain meet the mind such that a chair, desk, painting or vase is identified, or its absence noticed, in the room?

Conceptually, the human mind consists of quantities [dots] and properties [fairly static shapes]. Quantities relay to acquire properties to determine what becomes of any experience, in that moment. The mind uses the same mechanism for everything it processes, including labels of memory, feelings, emotions, action, thoughts and reactions.

In the brain, sensory inputs are mostly collected at the thalamus, except for smell, which is collected at the olfactory bulb. There they are processed or integrated before relay to the cerebral cortex for interpretation.

It is postulated that sensory processing or integration produces a uniform unit, a quantity or identity, which is thought or in the form of thought. This is how the brain and the mind meet. Interpretation in the cerebral cortex is postulated to be knowing, feeling and reactions, as the beam of properties. Knowing is memory, which supervises the others.

Quantities have three major features: early-splits, or go-before; old and new sequences; and prioritized and pre-prioritized phases. Properties have theirs. These include bounce points, where a quantity can hit before going on to another specific property, which is used in grouping, or what is defined as associative memory. Properties have thin and thick shapes. They also have a principal spot where just one quantity can go and have the most domination.

The feature of quantities that mostly applies to prediction is early-splits, or go-before. As quantities emerge from sensory processing, they split, and some go on to acquire properties ahead, as before. For example, on seeing the initial letters of a word, its likely completion is quickly acquired before the rest of the word is read; if the acquisition matches, there is no further processing, but if it does not, the quantity goes to the right property. This mismatch is what is called prediction error.
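The early-split account above, where a completion is acquired ahead and corrected only on mismatch, can be sketched as a toy routine. The word list, names and matching rule here are hypothetical conveniences, not a model of the brain.

```python
# Toy sketch of the early-split idea (hypothetical, for illustration only):
# from the initial letters, a completion is acquired in advance; if the
# rest of the word matches, no further processing occurs; on a mismatch,
# a "prediction error" triggers a corrective lookup.

KNOWN_WORDS = ["prediction", "predictive", "property", "process"]

def early_split(prefix, actual_word):
    # Acquire a completion ahead of reading the rest of the word.
    guess = next((w for w in KNOWN_WORDS if w.startswith(prefix)), None)
    if guess == actual_word:
        return guess, False              # match: no further processing
    # Mismatch (prediction error): go to the right property (re-lookup).
    corrected = next((w for w in KNOWN_WORDS if w == actual_word), actual_word)
    return corrected, True

print(early_split("pred", "prediction"))   # → ("prediction", False)
print(early_split("pred", "predictive"))   # → ("predictive", True)
```

The second call is the interesting one: the advance guess is wrong, so a correction is made, which is the analogue of prediction error in this sketch.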

The initial split uses an old sequence. The correction may also use an old sequence, but sometimes, because of the immediate change, it may use a new one. These can happen in prioritization, within seconds. It is this same early-split mechanism that is used to know what to say or type next, or to hold something in mind.

It is in the form of thought, a quantity bearing an acquired property, that versions, equivalents or representations of whatever is external become available to the mind. Between properties, quantities also split, and are often splitting.

This is the mechanism behind what is referred to as prediction in predictive coding, processing and error. The mind does not make predictions, but has functions that seem like it. Simulating prediction alone gave LLMs seeming reasoning or intelligence, which is a parameter of memory and a qualification of sentience or consciousness.
