What Constitutes the Philosophy of Mind?

This blog will summarize articles, papers, and material I have gone through that touch on the subject of The Computational Theory of Mind.

One article that touches on this subject notes that the computational theory of mind promises that a machine can emulate reasoning, decision-making, problem solving, perception, linguistic comprehension, and other mental processes.

Table Of Contents

  1. The Implications of Human Beings As Conscious Automata and The Definition of Consciousness
    1. Consciousness As The Fundamental Property of Nature
    2. Consciousness As A Weak, Strong, or Normal Emergence
    3. The Primal Instincts vs. The Unknown
      1. So What Does It Mean By Having Conscious Thoughts in A Purely Physical/Materialistic Sense?
    4. Theories That Address The Mind-Body Problem
  2. Computationalism and The Computational Theory of Mind
    1. A Turing Style Computational System
  3. The Computational vs The Representational Theory of Mind
    1. Computationalism vs Functionalism
    2. The Functionality and Usability of Affective Values
    3. What Are Syntactic Underpinnings?
  4. Tying Everything Together and Connecting The Dots
    1. In A Nutshell

The Implications of Human Beings As Conscious Automata and The Definition of Consciousness

Advances in computing raise the prospect known as the Computational Theory of Mind (CTM). Computationalists understand this paradigm as the guiding principle of cognitive science (Rescorla 2020). Two schools of thought later challenged this orthodox position.

One pertains to the neurological properties of the mind and body, and the other emphasizes representational mental states. For a better overall understanding, we first have to know where these premises in the realm of CTM stemmed from. Below are some of the most important and relevant theories in cognitive science.

Consciousness As The Fundamental Property of Nature

Everything started from the centuries-old debate about what consciousness and mental representations really are.

In answering the question of why we need consciousness, one of the proposed answers is that consciousness is a fundamental property of nature. It's added that,

“We know that a theory of consciousness requires the addition of something fundamental to our ontology, as everything in physical theory is compatible with the absence of consciousness.

We might add some entirely new nonphysical feature, from which experience can be derived […] we will take experience itself as a fundamental feature of the world, alongside mass, charge, and space-time”.

This is also known as "fundamental property dualism", which, in David Chalmers' definition, regards conscious mental properties as,

"basic constituents of reality on a par with fundamental physical properties such as electromagnetic charge. They may interact in causal and law-like ways with other fundamental properties such as those of physics, but ontologically their existence is not dependent upon nor derivative from any other properties".

According to this Medium article, this idea is also referred to as panpsychism, which "holds that mind or a mind-like aspect is a fundamental and ubiquitous feature of reality". It is also described as a theory that "the mind is a fundamental feature of the world which exists throughout the universe."

Consciousness As A Weak, Strong, or Normal Emergence

Each of these is a hypothesis about the emergence and origin of consciousness; below is a more detailed breakdown.

A Strong Emergence: This is the position often dismissed as too good to be true, since it is not very plausible that causation runs downward, with the emergence of consciousness causing the microscopic and molecular changes happening at a lower level.

The only justification for this position is that it opens up possibilities for free will, as explained by the physicist Sean Carroll.

Downward causation is one manifestation of this strong-emergence attitude. It’s the idea that what happens at lower levels can be directly influenced (causally acted upon) by what is happening at the higher levels.

The idea, in other words, that you can’t really understand the microscopic behavior without knowing something about the macroscopic.

In contrast,

A Weak Emergence: Rather than a strong emergence, consciousness might in reality just be a weak emergence, as proposed below.

The so-called “weak” emergence suggests that higher-level notions like the fluidity or solidity of a material substance emerge out of the properties of its microscopic constituents.

In principle, if not in practice, the microscopic description is absolutely complete and comprehensive. A “strong” form of emergence would suggest that something truly new comes into being at the higher levels, something that just isn’t there in the microscopic description.

The example used was that when we talk about atoms and other physical properties, it might not be suitable to talk about free will, but in philosophical discussions about human consciousness, motivation, and behavior, it may be otherwise.

ℹ️ In that case, to explain what a weak emergence is, we may ask: if we take the view that consciousness does affect our thoughts and behavior, is that view applicable at the level of atoms and particles, or only at the level appropriate for describing human psychology?

In answering this question, the article used a different analogy; consider the mechanism that regulates the constriction and dilation of our pupils according to the intensity of the light reaching them.

We could say that there's a sort of sensor in our eyes, or in the visual processing area of the brain, that causes our pupils to behave that way. Yet at the fundamental level of physics, knowledge of such a sensory mechanism is not needed to predict (in principle) the behavior of all the particles composing the eye as they interact among themselves and with all the external particles affecting them. Still, we wouldn't say that the sensor is merely a byproduct that accomplishes nothing.

Therefore, instead of settling on either extreme end of the spectrum, the following may help us construe the matter and reach a conclusion.

A Normal Emergence: The visual sensor is obviously affecting the behavior of the pupil, yet it doesn't follow that it's affecting the behavior of the underlying atoms in a way that contradicts or is incompatible with our understanding of how atoms behave.

Without the sensor, the atoms wouldn't behave as they do, and in that sense the sensor is necessary for the pupil to constrict and dilate; but it doesn't follow that knowing of the sensor's existence is necessary to predict what the atoms of the eye will do next.

In the domain that deals with how the eye works, it's useful and practical to speak of some sort of sensory mechanism; in the domain that deals with how atoms behave, it isn't - it isn't even necessary. Perhaps we could understand consciousness in the same way.

The Primal Instincts vs. The Unknown

However, it seems the importance of consciousness might have been exaggerated overall. A few primal physical responses indicate that humans could operate as automata without these familiar yet mysterious qualia.

For example, humans can drive while talking, have fight-or-flight responses without being aware of them, and exhibit blindsight: responding to visual stimuli without actually experiencing them visually.

Lastly, on to the discussion of pain. The presence of pain seems superfluous: we need not experience pain to understand the risk of injury and avoid danger.

So What Does It Mean By Having Conscious Thoughts in A Purely Physical/Materialistic Sense?

The prominent English biologist Thomas Henry Huxley (1825-1895) believed that sensations and feelings are mere byproducts of the mechanics of the brain: an epiphenomenon that is not the cause of any behavior. In his essay "On the Hypothesis that Animals Are Automata, and Its History", he wrote:

The consciousness of brutes would appear to be related to the mechanism of their body simply as a collateral product of its working, and to be as completely without any power of modifying that working as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery.

Their volition, if they have any, is an emotion indicative of physical changes, not a cause of such changes. […]

It is quite true that, to the best of my judgment, the argumentation which applies to brutes holds equally good of men; and, therefore, that all states of consciousness in us, as in them, are immediately caused by molecular changes of the brain-substance. It seems to me that in men, as in brutes, there is no proof that any state of consciousness is the cause of change in the motion of the matter of the organism.

In the same light, the renowned philosopher J.J.C. Smart presented his theories on the question "Is consciousness a brain process?" In "Sensations and Brain Processes", he argues against positing what he calls "nomological danglers" (a term he credits to Herbert Feigl's "The 'Mental' and the 'Physical'"): occurrences of something that does not fit into the system of established laws.

He argues against the dualist and epiphenomenalist positions on the mind-body dichotomy and for a monistic view, interchangeably called physicalism, in which the mind is a product of, or identical to, physical processes. By Occam's razor, he states, treating a mental process like consciousness as purely physical removes the hassle of being unable to explain, within a more scientific and established system, what these brain processes actually are.

To paraphrase him more specifically, he found it absurd that everything could be explained by the laws of physics except consciousness. Here, he identifies consciousness with the broad term "sensations".

Below is a summary of theories that tackle the mind-body/consciousness problem and reflect the assumptions explained above.

Theories That Address The Mind-Body Problem

Here's a list of various theories in the philosophy of mind, along with a brief contrast regarding their approaches to addressing consciousness:

  1. Type vs. Token Identity Theory: Identity theory proposes that consciousness and other mental states are identical to physical states or processes in the brain: type identity theory identifies kinds of mental states with kinds of brain states, while token identity theory identifies each individual mental state with some particular physical state. Either way, subjective experiences and consciousness are ultimately reducible to patterns of neural activity.

  2. Eliminative Materialism: Eliminative materialism suggests that current folk psychology and common-sense understandings of mental states, including consciousness, are fundamentally flawed and may be eliminated or revised in light of future scientific understanding.

  3. Functionalism: Functionalism defines consciousness in terms of functional roles within a system, emphasizing the causal relations between inputs, outputs, and other mental states. Consciousness is seen as arising from the functional organization of the brain.

  4. Neutral Monism: Neutral monism proposes that consciousness and physical phenomena are different manifestations of a neutral substance or property underlying reality. Consciousness is neither purely mental nor purely physical but emerges from a more fundamental neutral substrate.

  5. Mind-Body Dualism: Mind-body dualism typically posits that consciousness is a non-physical or immaterial aspect of reality. It suggests that consciousness exists independently of physical processes and may have properties that cannot be fully explained in terms of material phenomena.

These theories represent diverse perspectives on the nature of consciousness and its relationship to the mind and body. They offer contrasting approaches and frameworks for understanding one of the most complex and fundamental aspects of human experience and pave the way for designing novel and inspirational cognitive systems.

Computationalism and The Computational Theory of Mind

In a similar light, CTM or the Computational Theory of Mind was established in accordance with functionalism without addressing the qualitative or subjective aspects of mental processes.

According to CTM, the mind is a computational system similar in important respects to a Turing machine, and core mental processes (e.g., reasoning, decision-making, and problem solving) are computations similar in important respects to computations executed by a Turing machine.

First, CTM is better formulated by describing the mind as a “computing system” or a “computational system” rather than a “computer”. As David Chalmers - an Australian philosopher and cognitive scientist specializing in the areas of philosophy of mind and philosophy of language - (2011) notes, describing a system as a “computer” strongly suggests that the system is programmable. As Chalmers also notes, one need not claim that the mind is programmable simply because one regards it as a Turing-style computational system.

Second, CTM is not intended metaphorically. CTM does not simply hold that the mind is like a computing system. CTM holds that the mind literally is a computing system. The most familiar artificial computing systems are made from silicon chips or similar materials, whereas the human body is made from flesh and blood. But CTM holds that this difference disguises a more fundamental similarity, which we can capture through a Turing-style computational model.

A Turing Style Computational System

  1. memory locations: assuming an idealized, infinitely long linear structure, this is where symbols are kept. (In real-life scenarios, an implementation cannot assume infinite storage, so there are always ways to optimize memory locations and storage space, including techniques like hashing.)

  2. a central processor, which has a limited number of machine states it can enter.

  3. the central processor's basic actions on symbols, which include writing and deleting symbols as well as accessing the next memory position in the linear array (i.e., moving left or right on the tape).

  4. two key inputs that drive each processing step: the symbol stored at the current memory address, and the scanner's own current machine state.

  5. a machine table, which, based on the central processor's present machine state and the symbol it is currently accessing, determines which elementary operation it will carry out. It also determines how those same circumstances affect the machine state of the central processor.

  6. human cognitive constraints - lastly, because our cognition operates under constraints that admit only a finite number of possibilities, a Turing-style symbolic system could effectively duplicate it.
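The components above can be sketched as a minimal Turing-style machine in a few lines of Python. This is a toy illustration rather than a full formalization; the machine table shown (one that inverts a binary string and halts on a blank) is an invented example.

```python
from collections import defaultdict

# A minimal Turing-style machine: a tape (memory locations), a head with a
# machine state (the central processor), and a machine table mapping
# (state, current symbol) -> (symbol to write, head move, next state).

def run_turing_machine(table, tape, state="start", halt="halt", max_steps=10_000):
    cells = defaultdict(lambda: "_")          # "_" stands for a blank cell
    for i, sym in enumerate(tape):
        cells[i] = sym
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        # The step depends on exactly two things: the symbol at the current
        # memory location and the processor's current machine state.
        write, move, state = table[(state, cells[head])]
        cells[head] = write                   # elementary op: write a symbol
        head += {"L": -1, "R": 1}[move]       # elementary op: move left/right
    used = sorted(k for k, v in cells.items() if v != "_")
    return "".join(cells[i] for i in used)

# Toy machine table: invert each bit while moving right; halt on a blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(invert, "10110"))    # -> 01001
```

The machine table is the whole "program": swapping in a different dictionary yields a different machine, which is the intuition behind the Universal Turing Machine discussed below.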

One important breakthrough invention during the computationalism era was the Logic Theorist computer program (Newell and Simon 1956) which proved 38 of the first 52 theorems from Principia Mathematica (Whitehead and Russell 1925).

Given the discrete nature of the system, one concern is whether it is competent to model the continuous nature of human cognition.

Turing also proved the existence of a Universal Turing Machine (UTM), which ultimately led to the development of modern computer logic and systems. More importantly, a personal computer can mimic any Turing machine until it exhausts its limited memory supply.

The Computational vs The Representational Theory of Mind

As opposed to type-identity theory, where mental states are brain states, Putnam proposed a different view that mental states are multiply realizable: the same mental state can be realized by diverse physical systems, including not only terrestrial creatures but also hypothetical creatures.

Functionalism therefore is tailor-made to accommodate multiple realizability.

He stresses the importance of probabilistic automata, stating that mental states are the machine states of the automaton's central processor.

In summary, functionalism was introduced to address the functional aspects of mental processes (the functional role of an automaton's central processor, for example to manipulate symbols); the computational theory of mind provides a computational framework to describe such a process (namely, the manipulation of symbols by the central processor is considered a "computational" mental process).

Lastly, on the type vs. token distinction applied to our cognitive processes: just as in a natural language (where CAT is a type and every appearing instance of the word CAT is a token), Mentalese defines the mental representations that the central processor is able to manipulate, operate on, and make computations with.

Computationalism vs Functionalism

In offering such a model, we prescind from physical details. We attain an abstract computational description that could be physically implemented in diverse ways (e.g., through silicon chips, or neurons, or pulleys and levers). CTM holds that a suitable abstract computational model offers a literally true description of core mental processes.

By definition, functionalism holds that the mind is the functional organization of the brain, whereas computationalism argues that the functional organization of the brain is computable, and hence that the mind is computable.

However, this line of thinking posed a serious challenge to the early development of AI and nearly sent the field into a dead end and a period of stagnation known as the AI winter. The weakness of this view lies in the fact that human cognition has limitations and cannot entertain an infinite number of propositions and rules, which means the view reflects poorly the productivity of these mental processes. On top of that, there are some other major limitations:

Symbol Grounding Problem: CTM struggles to explain how abstract symbols in the mind acquire meaning. It doesn't account for how symbols connect to the external world.

Qualia and Consciousness: CTM has difficulty explaining subjective experiences (qualia) and consciousness. It tends to focus on functional aspects without addressing the qualitative nature of mental states.

Flexibility and Adaptability: CTM often assumes rigid rule-based processing, which may not fully capture the flexible and adaptive nature of human cognition.

Coming back to the important discussion around consciousness, we could now ask:

ℹ️ Why isn't this indication enough for such a phenomenon to explain what these "nomologically dangling" ideas are? And what exactly is the purpose of consciousness, or the importance of qualia?

One answer to these questions argues that such experiences are not merely physical: nature bestowed them upon us along with free will. Pain serves as a warning sign, and what follows is a trade-off handed over to our free will: we experience discomfort caused by external stimulation and are then given the option to consciously decide what to do about it.

The Functionality and Usability of Affective Values

Another objection to the byproduct hypothesis is this: if pleasure and discomfort had no consequences, there would seem to be no reason why we couldn't detest the feelings brought on by necessary actions or relish those generated by harmful ones.

Therefore, if epiphenomenalism were accurate, it would be necessary to provide a special justification for the harmonious relationship between our feelings' affective value and their usefulness in our daily lives. Nevertheless, this alignment could not have a true explanation based on epiphenomenalist presumptions.

Affective valuation would have no behavioral implications whether or not it was aligned with the usefulness of the causes of the evaluated sensations. Therefore, the felicitous alignment could not have been selected for.

Epiphenomenalists and physicalists would only be left with the option of accepting a crude and unscientific understanding of the pre-existing harmony of affective appraisal of feelings and the usefulness of their sources.

Later came a renaissance period, with the blooming of connectionism grounded in the Representational Theory of Mind (RTM), which was developed to address these limitations. RTM did not shy away from traditional rationalist or symbolicist ideals; rather, it incorporated aspects of them while addressing these limitations:

Emphasis on Representations: RTM places a central focus on mental representations, acknowledging that cognitive processes involve manipulating and interpreting these representations.

Connection to the External World: RTM attempts to tackle the Symbol Grounding Problem by emphasizing the connection between mental representations and the external world. It explores how representations acquire meaning through their relationship to real-world objects or events.

Integration of Qualitative Aspects: RTM accommodates the qualitative aspects of consciousness and subjective experiences by acknowledging the role of mental representations in shaping our perception of the world.

Flexibility in Cognitive Processing: RTM allows for more flexibility in cognitive processing by recognizing that mental representations can be dynamic and context-dependent.

RTM is able to accomplish the above by assuming,

Productivity:

From a finite set of symbols, as in natural language, the device can entertain an infinite number of thoughts using a finite set of rules and elements.

Systematicity:

That there are inherent systematic relations between basic cognitive constituents.

More importantly, RTM came with a shift in the representation of information toward a sub-symbolic, distributed approach. While the specific mention is about solving a philosophical conundrum related to representations of meaning, we can consider how this shift might also address the potential conundrum faced by classicists: the syntactic underpinnings. Here are some ways in which this approach could be relevant:

Capturing Syntactic Patterns: While distributional representations are primarily designed to capture semantic similarities, they also implicitly encode syntactic information. Words that often appear in similar syntactic structures tend to have similar distributional representations. For example, verbs that frequently take similar types of objects will have similar distributional representations.

Contextual Embeddings: Modern distributional representation models, such as transformer-based models like BERT and GPT, generate contextual embeddings that capture both semantic and syntactic information. These models are trained on large amounts of text data and learn to represent words in context, thereby capturing syntactic dependencies between words within sentences.

Syntax-Aware Pretraining: Some recent research has focused on developing distributional representation models that are explicitly trained to capture syntactic information. These models incorporate linguistic knowledge about syntax into the training process, resulting in embeddings that are specifically tailored to represent syntactic structures.

Syntactic Analysis: Distributional representations can also be used as features for downstream syntactic analysis tasks, such as parsing and part-of-speech tagging. By leveraging the syntactic information implicitly encoded in these representations, AI models can improve their performance on these tasks.

The shift toward a sub-symbolic, universally distributed representation seems to open up possibilities for addressing challenges related to syntactic representation and processing. By embracing a more dynamic and contextually adaptive approach, this mechanism may provide a framework for capturing syntactic information in a manner that aligns with the overall goals of the system.
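The distributional idea behind these points can be made concrete with a toy sketch. The words, context features, and co-occurrence counts below are invented for illustration, not drawn from any real corpus or model: words that occur in similar (syntactic and semantic) contexts end up with similar vectors, and cosine similarity makes that measurable.

```python
import math

# Hypothetical co-occurrence counts of each word with three context features
# (e.g., subject-of-verb, object-of-verb, modified-by-adjective).
# Real models learn such vectors from large corpora; these are toy numbers.
vectors = {
    "eat":   [12, 30, 1],   # verbs taking similar objects get similar vectors
    "drink": [10, 28, 2],
    "table": [2, 3, 25],    # a noun patterns differently
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(vectors["eat"], vectors["drink"]))  # high (near 1.0)
print(cosine(vectors["eat"], vectors["table"]))  # much lower
```

Contextual models such as BERT and GPT refine this picture by computing a different vector for each occurrence of a word in context, but the underlying similarity geometry is the same.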

What Are Syntactic Underpinnings?

To recall, the syntactic underpinnings refer to the foundational principles and structures that dictate the formation of sentences and phrases in a language. These include rules for word order, phrase structure, grammatical categories, syntactic dependencies and hierarchies.

A few strategies along these lines could be employed to directly address syntactic underpinnings. Such approaches directly target the study and understanding of syntactic structures and rules within a language, aiming to develop systems capable of accurately representing and processing the syntactic aspects of a natural language.

Tying Everything Together and Connecting The Dots

Philosophers first became interested in connectionism because it offers an alternative to the classical computational theory of mind.

Connectionism is a movement put forward in the field of cognitive science in the hope of explaining the mental processes of the brain. It consists of a simplified emulation of the brain's actual cognitive composition: connected units (the analogs of neurons) are assigned weights that measure how strong the connections between units are. We could picture these weights as replicas of the synapses that link neurons to one another in our actual biological brain.
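The picture of weighted units can be sketched in a few lines of Python. This is a toy illustration, not any particular model from the literature; the weights and inputs are arbitrary numbers chosen for the example.

```python
import math

# A minimal connectionist "unit": incoming activations are scaled by weights
# (the analog of synaptic strengths), summed, and squashed through a sigmoid.

def unit_activation(inputs, weights, bias=0.0):
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))      # sigmoid squashing function

# A tiny two-layer network: two hidden units feed one output unit.
def tiny_network(inputs, hidden_weights, output_weights):
    hidden = [unit_activation(inputs, w) for w in hidden_weights]
    return unit_activation(hidden, output_weights)

out = tiny_network([1.0, 0.0],
                   hidden_weights=[[2.0, -1.0], [-1.5, 3.0]],
                   output_weights=[1.0, -1.0])
print(out)  # some activation strictly between 0 and 1
```

Learning, in this framework, consists of nudging the weight values so the output activations better match desired ones; nothing symbolic is stored in any single unit.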

The point of departure between the classicist and connectionist views is that many connectionists regard cognitive processes as analog in nature, while most classicists would agree that they are digital.

Many connectionists argue that weighted units are dynamic, continuous, and analog in nature, and therefore better reflect the way the human brain processes information through different mental states.

On the other hand, classicists view human information processing as similar to a computational system, where information is stored symbolically and processed systematically by automata, and where storage resembles a computer's, with "memory locations" for information access and retrieval.

However, many connectionists seek to bridge these two paradigms instead of arguing for one over the other. One school - the implementational connectionists - seeks a solution that accommodates both hypotheses.

Radical connectionists, by contrast, argue that the traditional, symbolic treatment of human cognitive information processing should be eliminated because of how poorly it reflects dynamics of human cognition that could otherwise be materialized through connectionism:

Graceful degradation of function, holistic representation of data, spontaneous generalization, appreciation of context, etc.

Yet hybrid mechanisms have been put forward by different teams to mediate such clashes. For example, some papers drew inspiration from the classical style of information processing, such as dedicating one module to act as a memory location, or implementing variable-binding techniques for symbolic processing that reflect the classical computational way of thinking (i.e., Turing-style information processing and storage).

One interesting discovery about connectionist representations may also give away how the brain actually processes information: instead of a local representation of an individual or specific informational unit, what is learned is distributed universally across different (hidden) units.

To top it off, the sub-symbolic nature of this new way of representing information - where no intrinsic properties of representations determine their relationships with other symbols - might solve a philosophical conundrum in the representation of meaning.

In essence, a distributed representation preserves patterns across the net, allowing comparison and good preservation even when parts of the network go amok.
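That robustness can be shown with a toy sketch (all the patterns below are invented for illustration): a concept stored as a distributed pattern over many units remains recognizable when a few units fail, whereas a local, one-unit representation is lost entirely.

```python
import math

# Graceful degradation: compare a distributed pattern with a local (one-hot)
# representation of the same "concept" after knocking out a couple of units.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0       # zero vector -> no similarity

distributed = [0.9, 0.2, 0.7, 0.4, 0.8, 0.1, 0.6, 0.3]
local = [0, 0, 0, 1, 0, 0, 0, 0]             # one dedicated unit per concept

def knock_out(pattern, dead_units):
    """Simulate damage by zeroing the activations of some units."""
    return [0.0 if i in dead_units else x for i, x in enumerate(pattern)]

# Damage units 1 and 3 in each representation.
print(cosine(distributed, knock_out(distributed, {1, 3})))  # still high
print(cosine(local, knock_out(local, {1, 3})))              # drops to 0.0
```

The distributed pattern stays close to its undamaged self, while the local representation fails all at once as soon as its dedicated unit dies.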

In A Nutshell

In this blog, we explored the Computational Theory of Mind (CTM) and its implications. We discussed consciousness as a fundamental property of nature and various hypotheses on the emergence and origin of consciousness. We also addressed the mind-body problem and theories such as type vs. token identity theory, functionalism, and mind-body dualism. We then introduced CTM as a Turing-style computational system and compared it with the Representational Theory of Mind, touching on limitations of CTM, including the Symbol Grounding Problem and its difficulty in explaining consciousness. Finally, connectionism, grounded in the Representational Theory of Mind (RTM), was introduced as an alternative approach, emphasizing representations, contextual embeddings, and flexibility in cognitive processing.

Namely, connectionism involves a simplified emulation of the brain's cognitive composition through interconnected units akin to neurons. Connectionists view cognitive processes as analog and argue for dynamic, continuous weighted units reflecting the brain's mechanisms. Classicists see human information processing as computational, with symbolic storage and systematic processing. Some connectionists aim to bridge these paradigms, while radical connectionists advocate eliminating the symbolic treatment altogether. Hybrid mechanisms drawing inspiration from both classical and connectionist approaches have been proposed to mediate these clashes. The distributed, sub-symbolic style of representation might offer insights into how the brain actually processes information.

References

Rescorla, Michael. "The Computational Theory of Mind." The Stanford Encyclopedia of Philosophy (Fall 2020 Edition), edited by Edward N. Zalta.

Rumelhart, D. E., McClelland, J. L., & the PDP Research Group. (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition.

Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and Cognitive Architecture: A Critical Analysis. Cognition, 28(1-2), 3-71.

Bechtel, W., & Graham, G. (Eds.). (1998). A Companion to Cognitive Science.

Horgan, T., & Tienson, J. (1996). Connectionism and the Philosophy of Psychology.

Clark, A. (2001). Mindware: An Introduction to the Philosophy of Cognitive Science.