Could a computer have a mind? A philosophical and practical rethinking.
The Matrix trilogy depicts a virtual world in which machines with emotionless minds compete against human minds. That sci-fi adventure leaves us with plenty of curiosity about artificial intelligence, chiefly whether it is possible for computers to have minds at all, emotions aside. Other films such as A.I. and Bicentennial Man take the story even further, creating robots whose minds have emotions, particularly love.
After deliberate reflection, the question of whether a computer could have a mind comes down to two questions:
1. How do we define that a computer has a mind?
2. If a computer does have a mind, how can we identify or prove that it has one?
According to dualism, the mind is non-physical, and mind or intelligence is therefore purely spiritual; consequently it is impossible to understand or imitate human minds in purely physical terms. Functionalism, on the other hand, provides a foundation for thinking about the mind as analogous to a computer. We can argue that minds, like computers, are information-processing machines: they take information provided by the sensory organs and by our other mental states, process it, and produce new behaviours and mental states. In philosophical theory, then, it is possible that a computer could have a mind. The question becomes whether it is also possible, in physical practice, for a computer to have a mind.
In his landmark paper ‘Computing Machinery and Intelligence’ (1950), Alan Turing proposed the imitation game thought experiment as a response to those questions. Turing ingeniously replaced the ambiguous question “Could a computer have a mind?” with “Are there imaginable digital computers which would do well in the imitation game?” By doing so, we avoid the first question, that of defining mind or intelligence. Turing’s answer to the second question is that if a computer responds as intelligently as a human being, we have thereby shown that the computer has a mind.
John Searle’s Chinese Room thought experiment (1980) later challenged Turing’s proposal. It supposes that there is a program that gives a computer the ability to carry on an intelligent conversation in written Chinese. If the program is given to someone who speaks only English, who executes its instructions by hand, then in theory the English speaker would also be able to carry on a conversation in written Chinese. However, the English speaker would not understand the conversation, and, Searle concludes, a computer executing the program would not understand it either. In other versions of the experiment, the program is replaced by an English-Chinese dictionary, and the English speaker carries on the conversation in written Chinese using that dictionary.
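To make the setup concrete, here is a minimal sketch of the Chinese Room as pure symbol lookup, written in Python. The rule table and the sample exchange are hypothetical illustrations, not part of Searle's paper; the point is only that the lookup never consults a meaning.

```python
# The "rule book": maps an input string of symbols to an output string.
# The operator needs no knowledge of what either string means.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "今天天气好吗？": "今天天气很好。",  # "Is the weather good?" -> "The weather is fine."
}

def chinese_room(message: str) -> str:
    """Reply by purely syntactic lookup; no meaning is ever consulted."""
    return RULE_BOOK[message]

print(chinese_room("你好吗？"))  # fluent output, zero understanding
```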
The experiment highlights the difference between the syntactic and semantic properties of symbols. Syntactic properties are the physical properties of symbols, while semantic properties are their meanings. In the experiment, only the syntactic properties of written Chinese are perceived and manipulated through the program, while the English speaker has no knowledge of the semantic properties of written Chinese at all. The experiment concludes that a program cannot give a computer a “mind”, “understanding” or “consciousness”, regardless of how intelligently it may make the computer behave. That is, a computer that merely manipulates the syntactic properties of symbols cannot be shown to understand their semantic properties, which is a requirement for “having a mind”.
Yet the Chinese Room experiment has a flaw: the dictionary must exist before the experiment begins. When new terms are created there must be some way to update the dictionary; otherwise, if the questioner asks about something outside the dictionary, the experiment fails. It is not practical to assume a Chinese dictionary that includes everything. To be exhaustive, its content would have to be infinite, covering every possible meaningful expression, since the number of entries grows with creation and innovation until the end of time, and it is of no practical interest to discuss something theoretically infinite in time. So either the Chinese dictionary does not include every possibility, or it is rendered meaningless by its infinity. Either way, the English speaker could not carry on the conversation: the Chinese expression he reads on paper may not be in the dictionary, or it may take infinitely long to search the dictionary for it, and thus the Chinese Room setup fails the Turing test.
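In the lookup sketch above, this flaw is visible the moment the questioner steps outside the rule book (again a hypothetical illustration):

```python
# Any expression coined after the rule book was written is simply absent:
chinese_room("量子计算是什么？")  # raises KeyError: "What is quantum computing?" has no entry
```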
In practice, machines work on the syntactic properties of symbols rather than the semantic ones: instructions are executed in order. Machines possess the semantic properties, the meanings, of symbols only after information input by us human beings has been archived in their memories. By analogy, either we human beings are programmed to retrieve the semantic properties of symbols stored in a DNA archive, or our minds have particular parts or functions that come to understand semantic properties.
No evidence so far supports the first proposal, and the storage of the human brain is physically finite, so we tend to accept the second. To explain how that understanding arises, we turn to the process of learning, described below.
The syntactic properties of symbols come from the sensory organs, while their semantic properties come from the learning process.
Learning is the process of acquiring new, or modifying existing, knowledge, behaviours, skills, values or preferences, and may involve synthesizing different types of information. In other words, learning is a set of methods for establishing the semantic properties that correspond to syntactic properties. The perception process, which involves both, works as follows: syntactic properties allow minds, whether human or artificial, to recognize particular symbols, and semantic properties allow minds to retrieve the meanings corresponding to those symbols from an archive.
If the corresponding semantic properties are not in the brain’s archive, we cannot perceive that symbol. This happens often, for example when we read articles written in languages we do not know, and when it happens we have to use the learning process to establish the connection. We must learn a language, in particular by building a vocabulary, which is in fact the process of establishing connections between the syntactic and semantic properties of symbols, before we can read articles written in that language.
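A minimal sketch of this two-step perception process, with a hypothetical semantic archive, might look as follows; an unlearned symbol is recognized syntactically but cannot be perceived:

```python
# Hypothetical semantic archive: connections already established by learning.
SEMANTIC_ARCHIVE = {
    "dog": "a domesticated canine animal",
    "tree": "a tall woody perennial plant",
}

def perceive(symbol: str):
    # Step 1 (syntactic): recognize the symbol as a well-formed token.
    token = symbol.strip().lower()
    # Step 2 (semantic): retrieve its meaning from the archive, if present.
    return SEMANTIC_ARCHIVE.get(token)  # None means recognized but not perceived

print(perceive("dog"))   # 'a domesticated canine animal'
print(perceive("Baum"))  # None -- a word from a language we have not learned
```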
The learning process, then, is this: minds, whether human or artificial, establish connections between semantic and syntactic properties according to a set of commands or methods. The new information so created can either be archived for retrieval during perception, or be used to update the learning process’s own set of commands or methods. The latter is evolution, and it resembles a program that can rewrite itself.
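Under the assumption that those “commands or methods” can be represented as a replaceable function, a minimal sketch of such a learner, one that archives new connections and can rewrite its own learning method, might look like this (all names and data are hypothetical):

```python
class Learner:
    def __init__(self):
        self.archive = {}                  # established symbol -> meaning connections
        self.learn = self._learn_verbatim  # the current learning method

    def _learn_verbatim(self, symbol, meaning):
        # Baseline method: archive the connection exactly as given.
        self.archive[symbol] = meaning

    def _learn_normalized(self, symbol, meaning):
        # An improved method: normalize symbols before archiving.
        self.archive[symbol.lower()] = meaning

    def evolve(self, new_method):
        # Evolution step: the learning process updates its own method,
        # i.e. the program rewrites part of itself.
        self.learn = new_method

learner = Learner()
learner.learn("Hund", "dog")               # archived for later perception
learner.evolve(learner._learn_normalized)  # adopt the improved method
learner.learn("BAUM", "tree")
print(learner.archive)                     # {'Hund': 'dog', 'baum': 'tree'}
```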
That requires it to be possible to construct, or program, sets of commands or execution orders that realize the learning process. The question “Could a computer have a mind?” therefore becomes “Could we program a set of execution orders that achieves the learning process?”
We all have the feeling, or the experience, of making unconscious decisions. Hubert Dreyfus argued that human intelligence and expertise depend primarily on unconscious instincts rather than conscious symbolic manipulation, and further argued that these unconscious skills could never be captured in formal rules. If that were the case, it would be impossible to construct or program sets of commands or execution orders that realize the learning process.
Turing argued in response that, just because we do not know the rules that govern a complex behaviour, this does not mean that no such rules exist. He wrote: “we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, ‘We have searched enough. There are no such laws.’” Recent advances in science and technology have partially supported Turing’s argument, since progress has been made towards discovering the “rules” that govern unconscious reasoning.
Neurobiologists believe that the problems concerning the learning process, and the set of execution orders needed to achieve it, will be solved as we begin to identify the neural correlates of consciousness: the actual relationship between the machinery in our heads and its collective properties, such as mind, experience and understanding. Other related questions, such as whether a machine can have emotions, be self-aware, be original or creative, or have a soul, then depend only on the complexity of the learning process and its self-improving evolution.
We are now able to answer the question of whether a computer could have a mind. The answer is yes: a computer could have a mind, or could be as intelligent as a human, if it achieves artificial intelligence involving a self-learning and self-improving process. And so far, the evidence from advances in science and technology supports the possibility of realizing such a process.