Annotations
The mental state attribution question in the philosophy of mind asks: which mental states, if any, can humans coherently attribute to non-human entities? The question applies to a wide range of entities and mental states. For example, philosophers of mind ask whether species further away from humans in the phylogenetic tree, like oaks and bees, or closer to us, like apes and dolphins, have the capacity for consciousness, thought, rationality, emotions, or pain.
The question is not limited to living beings, however. In 1950, Alan Turing published a seminal paper, “Computing Machinery and Intelligence”, which generated discussion of whether mental states, such as intelligence, could be attributed to machines.
For example, in a March 2023 report, OpenAI reported that GPT-4 had successfully persuaded a TaskRabbit worker to solve a CAPTCHA for it.
LLMs pass the Turing Test, but they are subject to the criticism of strong artificial intelligence that Searle generates with his Chinese Room thought experiment.
Turing holds that if we cannot tell a machine’s conversational responses apart from a human’s, we should attribute intelligence to the machine. So if, blinded to the physical structure of what we are interacting with, we cannot tell whether our interlocutor is an LLM or a human, we ought to judge that the LLM is intelligent.
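To make Turing’s criterion concrete, here is a minimal toy sketch of the imitation game in Python; the setup, function names, and canned replies are my own illustrative assumptions, not Turing’s formalism. The judge sees only text, and when the judge’s accuracy over many trials hovers near chance, the criterion says we should attribute intelligence to the machine.

```python
import random

def run_imitation_game(judge_guess, human_reply, machine_reply, questions):
    """Toy version of the imitation game (an assumed setup for
    illustration, not Turing's own formalism). A hidden interlocutor
    is drawn at random; the judge sees only the text transcript."""
    identity, reply = random.choice(
        [("human", human_reply), ("machine", machine_reply)]
    )
    transcript = [(q, reply(q)) for q in questions]
    guess = judge_guess(transcript)  # the judge never sees the body behind the text
    return guess == identity         # True iff the judge identified the interlocutor

# If the replies are indistinguishable, the judge can do no better than
# guessing, so accuracy over many trials sits near chance (about 0.5).
trials = [
    run_imitation_game(
        judge_guess=lambda transcript: random.choice(["human", "machine"]),
        human_reply=lambda q: "I think so, yes.",
        machine_reply=lambda q: "I think so, yes.",  # same outward behavior
        questions=["Do you enjoy poetry?", "Could you write me a sonnet?"],
    )
    for _ in range(10_000)
]
print(sum(trials) / len(trials))  # ~0.5: the judge cannot tell the difference
```

On this toy reading, the printed accuracy near 0.5 is exactly the situation in which Turing says the attribution of intelligence is warranted.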
In his Chinese Room thought experiment, Searle challenges the idea of machine “intelligence” by inviting us to consider a person who does not understand Chinese but is asked to answer questions posed in Chinese by matching the symbols against a table that pairs Chinese questions with appropriate Chinese answers.
Searle asks us: given the way the person goes about answering, does the person, or the system as a whole, understand Chinese? Many people have the intuition that neither the person nor the system understands: after all, Searle stipulates that the person does not understand Chinese, and if the person is part of a system none of whose other parts could contribute to understanding, then the system does not understand either.
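The structure of the room can be made concrete as a lookup table. The sketch below is my own illustration, with placeholder rulebook entries rather than Searle’s examples; the point it encodes is that the program consults only the shapes of the symbols, never their meanings.

```python
# The rulebook below is a placeholder of my own invention, standing in for
# Searle's table: it pairs strings of Chinese characters with other strings.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Can you speak Chinese?" -> "Of course."
}

def chinese_room(question: str) -> str:
    """Answer a question by pure symbol matching. Nothing in this function
    consults meaning: the operator (and the program) manipulates shapes,
    which is exactly the sense in which Searle denies understanding."""
    return RULEBOOK.get(question, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # fluent-looking output with zero comprehension
```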
We can distinguish between two classes of mental state terms. Class A: think, assert, understand, and know. Class B: think*, assert*, understand*, and know*. The difference is that the terms in Class A require knowledge of meaning, while their starred counterparts in Class B do not. With this distinction, we can capture the insight of Searle’s thought experiment while denying that it forces us to say that LLMs have no mental states at all.