Philosophy of Mind

Cognitive Science and The Philosophy of Mind

Q What is the focus of this blog?

A This blog summarizes articles, papers, and materials I have gone through that touch on the Philosophy of Mind and how it lays an important foundation for the development of general artificial intelligence.

The blog covers the following topics:

  • What Constitutes The Philosophy of Mind
Read More

Contemporary NLP

Introduction to Contemporary NLP

Q What is the importance of psychological concepts in NLP?

A To understand modern natural language processing (NLP), it’s essential to draw inferences from crucial psychological concepts like the Language of Thought Hypothesis and the Representational Theory of Mind. These concepts help explain how our brain processes and produces language and mental representations, which are foundational for NLP.

Language of Thought Hypothesis (LOTH)

Q What does the Language of Thought Hypothesis (LOTH) propose?

A LOTH proposes that our brain has a schema for producing language of thought, known as Mentalese. It suggests that mental states and thoughts have a structured, language-like format, which facilitates reasoning, problem-solving, and decision-making.
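As an illustrative analogy only (not a claim from the LOTH literature), a structured, language-like mental representation can be sketched as a small data structure, where a thought is composed from discrete, recombinable constituents:

```python
from dataclasses import dataclass

# Hypothetical sketch: a "Mentalese" thought as a structured, compositional
# object. The names (Thought, subject, predicate, obj) are illustrative.
@dataclass(frozen=True)
class Thought:
    subject: str
    predicate: str
    obj: str

    def describe(self) -> str:
        return f"{self.subject} {self.predicate} {self.obj}"

# The same constituents recombine systematically -- the kind of
# compositionality LOTH cites as evidence for a language-like format.
t1 = Thought("John", "loves", "Mary")
t2 = Thought("Mary", "loves", "John")
print(t1.describe())  # John loves Mary
print(t2.describe())  # Mary loves John
```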

Q What are propositional attitudes in LOTH?

Read More

The GPT Architecture

Summary and breakdown of the code that forms the Generative Pre-trained Transformer architecture, continued

Let’s break down the code snippet line by line to understand what each step does in the context of creating positional encodings for a Transformer model using PyTorch.

Code Snippet

div_term = torch.exp(torch.arange(0, d_model, 2).float() *
                     (-math.log(10000.0) / d_model))
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0).transpose(0, 1)
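The snippet above assumes `pe`, `position`, and `d_model` were defined earlier in the post. The underlying sinusoidal formula can also be checked in plain Python with no PyTorch required (a sketch with an assumed toy `d_model` of 4 and 3 positions):

```python
import math

# Assumed toy sizes for illustration only.
d_model, max_len = 4, 3

# pe[pos][i] follows the sinusoidal positional-encoding formula:
# even dims get sin(pos / 10000^(i/d_model)), odd dims get the matching cos.
pe = [[0.0] * d_model for _ in range(max_len)]
for pos in range(max_len):
    for i in range(0, d_model, 2):
        div_term = math.exp(i * (-math.log(10000.0) / d_model))
        pe[pos][i] = math.sin(pos * div_term)
        pe[pos][i + 1] = math.cos(pos * div_term)

print(pe[0])  # position 0: alternating sin(0)=0.0 and cos(0)=1.0
```

Position 0 always encodes to alternating 0.0 and 1.0, and higher dimensions oscillate more slowly, which is what lets the model distinguish token positions.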

Explanation

1. div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))

Purpose: Calculate the denominator for the sine and cosine functions in the positional encoding formula.
Breakdown:

Read More

Intro to Determinism

Introduction to Determinism in Philosophy and Mathematics

Determinism is the philosophical idea that every event or state of affairs, including human decisions, is the consequence of preceding events according to fixed laws of nature. In its purest form, determinism implies that if we had perfect knowledge of the current state of the universe, we could predict all future events with certainty.

In contrast, non-determinism implies the possibility of multiple potential outcomes for any given situation, introducing elements of uncertainty, chance, or randomness.

In mathematics and computer science, determinism refers to systems or processes where outcomes are strictly determined by initial states and inputs, leading to predictable results. Non-determinism, on the other hand, allows for multiple potential outcomes given the same initial state and inputs, introducing randomness or unpredictability.
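As a minimal illustration (the function names are my own), the distinction can be expressed in a few lines of Python: a deterministic function maps the same input to the same output every time, while a non-deterministic process admits several outcomes:

```python
import random

def deterministic_step(x: int) -> int:
    # Same input, same output, every time: the result is fully
    # determined by the initial state (x) and the fixed rule.
    return 2 * x + 1

def nondeterministic_step(x: int) -> int:
    # Same input, but the outcome also depends on chance:
    # several results are possible for the same initial state.
    return 2 * x + random.choice([0, 1])

print(deterministic_step(3))     # always 7
print(nondeterministic_step(3))  # 6 or 7, unpredictably
```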


Philosophical Aspects of Determinism

From a philosophical perspective, determinism is often discussed in relation to:

Read More

Naturalizing Intentions

The Hard Problem in Naturalizing Intentionality

The Hard Problem of Intentionality mirrors the broader Hard Problem of Consciousness, focusing on a specific aspect of mental life: intentionality—the mind’s ability to represent, think about, or be “about” things. Both problems grapple with the difficulty of explaining subjective mental phenomena in purely naturalistic or physical terms.

Let’s break this down into key concepts and challenges in a comprehensive way:

1. What is Intentionality?

Intentionality is the capacity of the mind to be directed toward objects, states, or events. When we think, perceive, believe, or desire, our mental states are always about something:

  • Thinking about a tree.
Read More

Understanding Qualia

This blog post aims to build a comprehensive understanding of Qualia.

Qualia (pronounced KWAL-ee-uh) are the subjective, internal experiences that each individual perceives when interacting with the world. These experiences refer to the way things feel to an individual, rather than their physical or scientific properties. Examples include:

  • The specific redness of a rose as perceived by an individual.
  • The feeling of pain from a stubbed toe.
  • The taste of chocolate or the sound of a piano.
Read More

Compositionality

Dealing with Compositionality

This blog will introduce research in syntax that addresses compositionality. One of the connectionist natural language processing papers I have read touches on the government and binding (GB) theory proposed by Chomsky; the paper models the movement from d-structure to s-structure in GB theory through a non-overlap constraint and a chain map combined with neural networks.

A demonstration of the non-overlap constraint is shown below.

Non-Overlap Constraint Explained

The non-overlap constraint is a rule in cognitive models or neural networks that prevents overlapping activations of units in a chain map. This ensures that no two units representing the same syntactic marker can be active simultaneously, which helps maintain clear and distinct representations.
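A toy sketch of how such a constraint might be enforced (my own illustration, not the paper's implementation): among units competing to represent the same syntactic marker, only the most active one is allowed to stay on.

```python
def enforce_non_overlap(activations):
    """Keep only the single most active unit in a competing group and
    zero the rest, so no two units representing the same syntactic
    marker are active at once. Purely illustrative winner-take-all rule."""
    winner = max(range(len(activations)), key=lambda i: activations[i])
    return [a if i == winner else 0.0 for i, a in enumerate(activations)]

# Three units compete to represent one marker; only one survives.
print(enforce_non_overlap([0.2, 0.9, 0.4]))  # [0.0, 0.9, 0.0]
```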

Diagram Breakdown

Components:
Chain Map (Green Text):

Read More

Quantifying Beliefs

This Blog Will Go Through Some Fun Things About Modeling Beliefs with Bayesian Methods

The relationship between Bayesian conditional probability and theological discussions, particularly the existence or understanding of God, has been a subject of philosophical and theological debate for centuries. I will first explain the Bayesian conditional probability equation in a simple way and then explore how it might relate to theological concepts like belief in God.

Bayesian Conditional Probability

At its core, Bayes’ Theorem helps us update our beliefs when we receive new evidence. It tells us how likely something is to be true (the posterior probability) given what we already believe (the prior probability) and the new evidence we observe.

Bayes’ Theorem is expressed as:

$$
P(A | B) = \frac{P(B | A) \cdot P(A)}{P(B)}
$$

Where:

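To make the update concrete, here is a worked numeric sketch; the prior, likelihood, and evidence values below are invented for illustration.

```python
# Hypothetical numbers: prior belief P(A) = 0.01, likelihood of the
# evidence if A is true P(B|A) = 0.9, and if A is false P(B|~A) = 0.05.
p_a = 0.01
p_b_given_a = 0.9
p_b_given_not_a = 0.05

# Total probability of the evidence: P(B) = P(B|A)P(A) + P(B|~A)P(~A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)
posterior = p_b_given_a * p_a / p_b
print(round(posterior, 4))  # 0.1538: the evidence only nudges a low prior
```

Even fairly strong evidence leaves the posterior modest when the prior is low, which is exactly the kind of belief-updating dynamic the post goes on to discuss.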
Read More

Measuring Subjectivity

Understanding the VAE from a different angle

In a different blog, we briefly introduced the mechanism behind the variational autoencoder (VAE) algorithm: it essentially takes complex input as numeric values, finds a lower-dimensional, structured, and meaningful representation of the input data, and returns output that facilitates generative modeling and reconstruction.
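As a minimal sketch of that pipeline (the dimensions and the encoder/decoder rules here are toy values, not a trained model), the encode → sample → decode round trip can be written in plain Python using the reparameterization trick:

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def encode(x):
    # Toy "encoder": map the input to the mean and log-variance of a
    # lower-dimensional latent Gaussian (here, 4 dims -> 2 dims).
    mu = [sum(x) / len(x), max(x) - min(x)]
    log_var = [0.0, 0.0]  # unit variance for simplicity
    return mu, log_var

def reparameterize(mu, log_var):
    # z = mu + sigma * eps, with eps ~ N(0, 1): this is what lets
    # gradients flow through the sampling step in a real VAE.
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def decode(z):
    # Toy "decoder": expand the 2-dim latent code back to 4 values.
    return [z[0] - z[1], z[0], z[0] + z[1], z[0] + 2 * z[1]]

x = [1.0, 2.0, 3.0, 4.0]
mu, log_var = encode(x)
z = reparameterize(mu, log_var)
x_hat = decode(z)
print(len(z), len(x_hat))  # compress to 2 latent dims, reconstruct 4
```

A real VAE learns the encoder and decoder weights by optimizing reconstruction loss plus a KL term; the sketch only shows the shape of the computation.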