Originally Posted by TScott
Sure, LISP syntax has become a standard in AI, but you must understand that LISP uses what it calls "atoms", which are representative tokens, in its simulation of semantics.
KC said: Seems like code for "it's not how human brains do it so it's not real".
Not real AI.
Or, more to the point, not Strong AI.
Artificial intelligence has different levels. An example of the lowest level would be the thermostat in your home. It "wants" (because you programmed it) to keep the temperature around its sensor at a level specified by you. It senses the temperature, and if it is too low or too high it sends a signal to the HVAC equipment to heat the area or remove heat from it. When it senses that the temperature is back at the specified level, it signals the equipment to stop.
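To put that lowest level in concrete terms, here is a minimal Python sketch of the sense-and-signal rule just described; the setpoint and deadband numbers are made up for illustration, not taken from any real thermostat:

```python
# Rough sketch of the thermostat's decision rule described above.
# The setpoint and deadband values are invented for illustration.
SETPOINT = 21.0   # temperature the user asked for, in degrees C
DEADBAND = 0.5    # tolerance so the equipment isn't switched on and off constantly

def control_step(current_temp):
    """Decide what signal to send to the HVAC equipment for one sensor reading."""
    if current_temp < SETPOINT - DEADBAND:
        return "HEAT"   # too cold: tell the equipment to add heat
    if current_temp > SETPOINT + DEADBAND:
        return "COOL"   # too warm: tell the equipment to remove heat
    return "OFF"        # close enough to the setpoint: tell the equipment to stop

print(control_step(18.0))  # HEAT
print(control_step(21.2))  # OFF
```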
Artificial intelligence has been a quest since the first computer was invented. Science fiction writers have long imagined machines that could think just like humans, and somewhere along the way this concept was dubbed Strong AI.
In 1950, Alan Turing devised a thought experiment, a test for Strong AI, with three participants: two humans and one computer. A human judge sat in one room with a terminal through which she could communicate with both the other human and the computer. She could ask them questions through her keyboard, and they would answer through her printer. If she could not tell from this conversation which respondent was the computer, then, according to Turing, the computer would pass as Strong AI. For decades this remained the standard, and LISP and other programs were developed along these lines (although, as I said in the other thread, LISP became something of a standard template for language-recognition programs).
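The setup is easier to see as a tiny protocol sketch. Everything below (the function names, the canned machine answer) is my own placeholder rather than anything Turing specified; the only point is that the judge's questions are routed to two unidentified respondents:

```python
import random

def machine_reply(question):
    # Hypothetical stand-in for the program being tested; a real contestant
    # would be far more elaborate than one canned sentence.
    return "That is an interesting question. Let me think about it."

def human_reply(question):
    # The hidden human participant types an answer at the other terminal.
    return input(f"(hidden human, answer this) {question}\n> ")

def imitation_game(questions):
    """Route the judge's questions to respondents A and B without revealing which is which."""
    respondents = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:                      # shuffle who gets which label
        respondents = {"A": human_reply, "B": machine_reply}
    for q in questions:
        print(f"Judge asks: {q}")
        for label, reply in respondents.items():
            print(f"  {label}: {reply(q)}")
    return input("Judge, which respondent was the computer, A or B? ")

# imitation_game(["Do you enjoy poetry?", "What is 2 + 2?"])
```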
About 30 years later, a philosopher named John Searle challenged the Turing Test with what became known as the Chinese Room thought experiment. In this experiment, a human sits in a room into which messages can be passed through a slot. The messages are written in Chinese, and the human does not understand any Chinese at all. The human has a look-up table against which he can compare the Chinese writing; it gives answers that he can copy onto a piece of paper and pass back out through the slot. The human is answering the questions without really knowing what is transpiring. Searle believes this is what computer programs do: they are given words, and groups of words in a certain syntax, which they compare against arrays in their memories, and through simple "if-then" logic they formulate answers.
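Here is a toy version of that lookup-table behaviour in Python. The Chinese phrases and replies are invented examples; the point is that the program produces sensible-looking answers while nothing in it understands Chinese:

```python
# Toy version of the Chinese Room: the "rulebook" is nothing but a lookup table
# from incoming symbols to outgoing symbols.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "The weather is nice."
}

def pass_through_slot(message):
    """Match the symbols on the incoming slip against the table and copy out the reply."""
    return RULEBOOK.get(message, "对不起，我不明白。")  # stock fallback: "Sorry, I don't understand."

print(pass_through_slot("你好吗？"))  # fluent-looking answer, zero understanding
```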
Searle's experiment was attacked by the computationalists in the AI field because of its impracticality, and he defended it by distilling his argument into three axioms: computer programs are based on syntax; human mentality is based on semantics; syntax is insufficient for semantics. From those axioms his conclusion follows: running a program, by itself, is not enough to produce a mind.
IMO Strong AI is becoming more and more a reality, but not through the computationalist school; it will come instead through the connectionist school. IOW, Strong AI will not be accomplished with programming alone, but through hardware design such as neural networks, a relatively new field on which, coupled with fuzzy-logic programming, much of our industrial robotics is based.
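For a flavour of what that shift means, here is a tiny sketch of one artificial neuron and one fuzzy-membership function. The weights and thresholds are invented and nothing here is trained; it only shows numeric weights and graded values standing in where if-then rules would otherwise go:

```python
import math

# Sketch of the connectionist idea: behaviour comes from numeric weights
# rather than hand-written if-then rules. These weights are invented for
# illustration; in practice they would be learned from examples.
def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs passed through a squashing function."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # sigmoid output between 0 and 1

# And a taste of fuzzy logic: graded membership instead of hard true/false categories.
def warm_membership(temp_c):
    """Degree (0..1) to which a temperature counts as 'warm', rather than a yes/no answer."""
    return max(0.0, min(1.0, (temp_c - 15.0) / 10.0))   # ramps up between 15 C and 25 C

print(neuron([0.2, 0.7], weights=[1.5, -0.8], bias=0.1))  # roughly 0.46
print(warm_membership(22.0))                              # 0.7: partly warm
```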
Will a robot ever be able to achieve true human-level AI, where it can enjoy subjectivity and be truly creative? I guess my feeling is: why would we develop such a thing?