The Chinese Room, Turing Test and Communication (#1443)



Related Documents:
(#1437) The Chinese Room, Experience, Knowledge and Communication
Also see other excerpts from my discussions with the Society for Scientific Exploration.


Re: Why the "Virtual Mind" is Not a Refutation of the Chinese Room
> The Chinese Room
> thought experiment was designed to show the flaws of
> the Turing Test, which is based only on externally
> observed behavior.

I agree that the Turing test is an inadequate test for sentience. It hinges mostly on communication ability - hence some sentient beings may fail the test due to incomplete communication functionality, and some non-sentient processes may pass it if their internal algorithm is sufficient to create the impression of coherent communication in particular contexts.

An example of a sentient being failing the test would be the linguistics majors described by Bob Marks (and also the Chinese Room example). After an in-depth conversation one would realise that although the responses were grammatically correct, they would not always make sense in context. Language consists of more than grammatical rules; it also encodes an enormous amount of common-sense knowledge about the objects referred to, their properties, and the ways in which they combine.

E.g.
"There is a cup on the table - please pass the cup."
makes sense, but:
"There is a stain on the table - please pass the stain."
does not make sense, because a 'stain' is a different type of object from a cup and has different existential properties. The system would need detailed experiential knowledge and a rich associative network of meanings before it could really speak coherently.
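
To make the point concrete, here is a minimal sketch (in Python, using a made-up toy ontology - not any real NLP system) of the kind of type knowledge involved. A grammar-only check accepts both requests, while even a crude ontology of object properties rejects the second:

    # Toy illustration: grammar alone accepts "pass the stain";
    # a crude ontology of object types is needed to reject it.

    # Hypothetical mini-ontology: which nouns denote discrete,
    # portable physical objects (the kind of thing one can pass).
    ONTOLOGY = {
        "cup":   {"discrete": True,  "portable": True},
        "table": {"discrete": True,  "portable": False},
        "stain": {"discrete": False, "portable": False},  # a surface condition, not a thing
    }

    def grammatical(request: str) -> bool:
        """Grammar-only check: 'please pass the <noun>' is well formed
        for any noun, so it cannot distinguish cup from stain."""
        words = request.lower().rstrip("?.!").split()
        return words[:3] == ["please", "pass", "the"] and len(words) == 4

    def sensible(request: str) -> bool:
        """Adds a selectional restriction: the object of 'pass'
        must be a discrete, portable object."""
        if not grammatical(request):
            return False
        noun = request.lower().rstrip("?.!").split()[-1]
        props = ONTOLOGY.get(noun, {})
        return props.get("discrete", False) and props.get("portable", False)

    for req in ["please pass the cup", "please pass the stain"]:
        print(req, "| grammatical:", grammatical(req), "| sensible:", sensible(req))
    # grammatical: True for both; sensible: True only for the cup.

A real system would of course need vastly more than three nouns and two boolean properties; the point is only that the missing ingredient is knowledge about objects, not grammar.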

An example of a non-sentient process passing the test is ELIZA (Joseph Weizenbaum's program whose DOCTOR script mimics a psychotherapist's consultation), which managed to convince some people that they were speaking to a real therapist. In carefully contrived situations that are highly ritualised and well defined, a direct program can successfully mimic meaningful communication, but it would not succeed in a general-knowledge conversation. It would need meta-processes and a rich associative network to be able to do that.
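
For reference, the core of an ELIZA-style program is little more than keyword matching and template substitution. The following toy sketch (illustrative Python in that spirit, not Weizenbaum's actual script) shows how a ritualised consultation can be mimicked without any associative network of meanings:

    import re
    import random

    # Toy ELIZA-style rules: (pattern, response templates).
    # Echoing the user's own words back creates an impression
    # of understanding where none is present.
    RULES = [
        (r"i feel (.*)",   ["Why do you feel {0}?",
                            "How long have you felt {0}?"]),
        (r"i am (.*)",     ["Why do you say you are {0}?"]),
        (r"my (.*) hurts", ["Tell me more about your {0}."]),
        (r"(.*)",          ["Please go on.",
                            "Can you elaborate on that?"]),  # catch-all
    ]

    def respond(utterance: str) -> str:
        text = utterance.lower().strip(".!? ")
        for pattern, templates in RULES:
            match = re.fullmatch(pattern, text)
            if match:
                return random.choice(templates).format(*match.groups())
        return "I see."

    print(respond("I feel anxious about work"))
    # e.g. "Why do you feel anxious about work?"
    print(respond("My shoulder hurts"))
    # "Tell me more about your shoulder."

The catch-all rule is what sustains the illusion: when no keyword matches, a content-free prompt keeps the ritual of consultation going, which is precisely why the trick fails once the conversation leaves its contrived context.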

There is no absolute means of proving sentience other than one's own experience of one's own sentience. One cannot prove this to another person, since all communication is an external process that gives access to the outer aspect of a system (its observable state) but not to its inner aspect (its state of awareness). The only direct way of discerning another's sentience would be through "discriminative participation", as it is called in yoga. Through meditation upon some other system one can enter into a state of identification with that system and experience 'being' that system. One then has access to the inner aspect of that system and can experience its inner experiences. This is possible because ultimately the cosmos is a field of unified consciousness - whilst on some levels it seems fragmented into isolated objects, on a deeper level there is an underlying unity through which we can operate.

The real question is not whether a system can communicate coherently but whether it exhibits complex self-awareness and responsiveness to its environment and to itself. But this can only be discerned through the signals that the system gives out - hence if the system cannot communicate coherently, we cannot discern its level of sentience.

Searle's example just shows the limitations of a single-level direct process; it does not take into consideration what arises when many levels of meta-processes create higher-level awareness. The Chinese Room example analyses only the direct symbol-processing level, but because the human being is included as a sub-system, the total system also possesses a higher-level meta-awareness. So the total system is sentient even though its communication functionality is incomplete.
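
As a purely schematic illustration of this layering (hypothetical Python class names, a sketch of the distinction rather than a claim about how minds work), one can contrast a single-level lookup process with a system that additionally runs a meta-process observing its own lower-level activity:

    # Schematic contrast between a single-level process (the rule book
    # alone) and a system that adds a meta-level observing its own
    # lower-level processing.

    class RuleBook:
        """Single-level direct process: blind symbol-to-symbol lookup."""
        def __init__(self, rules):
            self.rules = rules

        def process(self, symbol):
            return self.rules.get(symbol, "?")

    class MetaSystem:
        """Wraps the base process and also monitors it, building a
        record of its own activity - a crude stand-in for the kind of
        higher-level self-observation the text calls meta-awareness."""
        def __init__(self, base):
            self.base = base
            self.history = []  # the system's model of what it has done

        def process(self, symbol):
            out = self.base.process(symbol)
            self.history.append((symbol, out))  # observe the lower level
            return out

        def reflect(self):
            return f"I have processed {len(self.history)} symbols: {self.history}"

    base = RuleBook({"ni hao": "ni hao ma"})
    system = MetaSystem(base)
    system.process("ni hao")
    print(system.reflect())
    # The base level alone has no such reflect() capability.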

> If the system is asked this in Chinese, it will not
> "know" what it is saying in answer.

If the total system has only the symbol input interface by which to experience the outside world, then due to its incomplete communication functionality (the lack of an associative network) one cannot access the majority of the system's inner functionality. For example, because of the human sub-system the total system has full self-awareness, but unless one could call out through the walls of the room and speak to the human inside, one could not discern this fact. If one can only communicate via the symbol input interface then one can only interact with the system via its incomplete language functionality, hence one cannot meaningfully communicate with it. This is similar to people who have totally clear and coherent minds but whose bodies are so debilitated that they can neither perceive external stimuli nor express any response. They are fully sentient but cannot communicate this to an outside observer.

Hence neither the Turing Test nor the Chinese Room really touches upon the issue of sentience; they merely touch upon the issue of communication functionality. The Chinese Room system is in fact fully sentient but is unable to communicate this because of its limited communication functionality. Sentience itself is an internal phenomenon that does not necessarily correspond to language ability.

In some sense every system is 'aware', but only those systems with complex meta-systems and associative networks can be aware of their awareness and come to know that they know. This holds regardless of whether they are able to communicate the fact. Hence communication ability is not an adequate measure of sentience. It would be more meaningful to find indirect ways of testing for the presence of self-awareness. Exactly how, I'm not sure...

In the case of exhaustive Turing tests, with complex conversations across a broad range of general-knowledge subjects, a system would need a great deal to pass: a high degree of self-awareness, a concept of itself as a being in the world, a rich experiential idiom of ideas, general knowledge about the meanings of those ideas, and a rich associative network and communication functionality. Thus a system that passed an exhaustive Turing test would most likely need to be sentient and to have a history of experiences as a sentient being in the world. But a system that failed such a test would not necessarily be non-sentient; it might simply have inadequate communication functionality with which to express its sentience. Thus the test could indicate (but not prove) sentience, but it cannot indicate non-sentience.

www.Anandavala.info