Do you discuss the connectionist approach to computation?
Wonderful, clear lecture on some of the major ideas surrounding this question. Thanks for posting this!
I have an objection to the conclusion of the Chinese Room thought experiment. I do not think it is fair to assert that someone who can reliably converse with someone else does not understand the language. Turing's point about solipsism applies directly here: if the only acceptable proof of understanding is private inner experience, then we can never credit anyone with understanding except ourselves.

What is language if not a collection of symbols that carry different meanings for each individual? Just because I can speak a language with some grasp of what each word means does not mean I understand it any more or less than someone else who can converse in the same language. When we listen to someone talk, our brains are doing exactly what the thought experiment describes: we convert input into output. Mouth noises have no literal translations; we construct meanings for them in our heads. Perhaps a computer processes those same symbols in a different manner, but how can you say that is not simply a different form of understanding?

When I think of the word "cat" I may have a completely different concept than a biologist does, but nobody would say that only the biologist understands what "cat" means. We both understand it, in different ways. By the same token, this Chinese-speaking Turing machine may simply understand Chinese in a different way than the Chinese speakers who read the symbols and interpret them in their heads. This is not even far off from many conversations people have with each other: it is almost expected that people listen to another person's words not to understand, but to respond. That kind of input-output processing, the very mechanism of a Turing machine, is already commonplace in humans (see the sketch below).
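To make the input-output picture concrete, here is a minimal sketch in Python of the room's rulebook as a bare lookup table. The rule entries are invented placeholders, not anything from the lecture or from Searle; the point is only that the lookup matches symbol shapes to responses without ever consulting meaning.

```python
# A minimal sketch of the Chinese Room's rulebook as pure symbol
# manipulation: the "room" maps input strings to output strings with
# no access to what either string means. The entries below are
# invented placeholders for illustration only.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小明。",    # "What's your name?" -> "My name is Xiaoming."
}

def chinese_room(symbols: str) -> str:
    """Return whatever response the rulebook pairs with the input.

    The lookup matches only the shape of the symbols, never their
    meaning, which is exactly the distinction the thought experiment
    turns on (syntax versus semantics).
    """
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # prints: 我很好，谢谢。
```

Whether you read this table-following as "a different form of understanding" or as no understanding at all is precisely where the commenter and Searle part ways.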