11/22/2013
Alan Turing, "Computing Machinery and Intelligence", Mind, Vol 59 No 236, Oct 1950
This paper examines the question of whether machines can think. Since it was written when "machines" was still a broad term that could refer equally to a typewriter or an electronic computer, much of the work of the paper is clarifying and exploring what kind of machine could fit the bill for being able to learn and, perhaps, think. Author re-phrases the question as whether a machine (from here on, also "computer") can succeed at the "imitation game" as well as a man could. Here is the baseline game:
Interrogator (C) uses (probably) a keyboard and monitor to communicate with a separate, isolated man (A) and woman (B). Both A and B are trying to convince C that each is a woman; in other words, the man is trying to trick C into believing he is a woman. At some point, C makes an assessment of the gender of A and B. It is expected that C will have some degree of success at this.
Author's argument is that if a computer can take A's place and trick C as much of the time as A can, then the computer has played the imitation game as well as a man could, and was thus doing whatever a man was doing just as well. What is interesting is that author spends only a little time discussing the appropriateness of using this kind of game as an alternative to the "can machines think" question (pg435).
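As an aid to the summary only, here is a minimal sketch of the game's structure in Python. Nothing below is from the paper: play_round, deception_rate, and the judge/respond functions are hypothetical stand-ins, meant only to show that C sees nothing but text and that the machine is scored against the human baseline.

def play_round(questions, respond_a, respond_b, judge):
    # C questions hidden witnesses A and B over a text-only channel,
    # then names the one it believes is the woman (later: the human).
    transcript_a = [(q, respond_a(q)) for q in questions]
    transcript_b = [(q, respond_b(q)) for q in questions]
    return judge(transcript_a, transcript_b)  # C's verdict: "A" or "B"

def deception_rate(n_rounds, questions, deceiver, truthful, judge):
    # Fraction of rounds in which C is fooled by the deceiver in seat A.
    # The re-phrased question: put a machine in that seat; does this
    # rate stay as high as it was with a man?
    fooled = sum(
        play_round(questions, deceiver, truthful, judge) == "A"
        for _ in range(n_rounds)
    )
    return fooled / n_rounds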
After discussing what kind of machine is envisioned (pg435-442), author considers objections (pg443-454):
1. The Theological objection: only humans with souls can think. Response: we're just making another vessel for a soul.
2. The 'Heads in the Sand' objection: if computers thought, humans would not be as special. Response: maybe so, but that doesn't make it any less possible, no?
3. The Mathematical objection: there are limits to the computing power of machines, cf. Gödel's proof. Response: and there aren't limits to the computing power of humans?
4. The Argument from Consciousness: thinking is part of understanding, which is part of consciousness, which machines do not have. Response: unless you're charitable, nobody else but you has consciousness. Thus, why not also be charitable about a machine that does what you can do equally well?
5. Arguments from Various Disabilities: machines can't do so many things that humans can do. Response: computers are getting better and better, and furthermore, arguments from disabilities don't hold up when comparing humans to each other. There is an interesting discussion on what kinds of mistakes computers can make (pg448-9) in this section. Author uses a distinction between "errors of functioning" and "errors of conclusion". You can program errors of functioning, but you may want to take that frailty away from computers to the extent possible. However, there is nothing that says a computer can't (and won't, often) make an error of conclusion, i.e. come up with the wrong answer through a flawless method. Author uses the example of inductive reasoning getting an outcome wrong even when used perfectly (see the toy sketch after this list).
6. Lady Lovelace's Objection: machines cannot learn and cannot come up with anything new. Response: yes, but it isn't hard to see how they will be able to, someday. Furthermore, is novelty the mark of thinking? If so, even humans might not think very much.
7. Argument from Continuity in the Nervous System: the nervous system and a computer are too different, fundamentally. Response: this should not affect the imitation game, or the validity of conclusions, no matter which medium they were reasoned from.
8. The Argument from Informality of Behavior: there is no feasible way to index all the appropriate behaviors for every situation for a computer, and since humans do know the appropriate behaviors for every situation, humans cannot be the same as computers. Response: a computer can know what to do if given the right education and programming. There is also an interesting discussion on the difference between "rules of conduct" and "laws of behavior" here (pg452).
9. The Argument from Extra-Sensory Perception: humans have ESP. Response: since this may be true, restrict the experiment to a "telepathy-proof room".
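On point 5, a toy sketch (Python; nothing here is from the paper, and the example is mine) of how a machine can commit an error of conclusion with no error of functioning: the routine below runs exactly as written and applies enumerative induction perfectly, yet the generalization it reaches is false.

def is_prime(n):
    # Correct, mechanical primality check: no error of functioning here.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Induction, applied flawlessly: every odd number examined so far is prime...
samples = [3, 5, 7]
hypothesis_holds = all(is_prime(n) for n in samples)   # True

# ...so the machine concludes "all odd numbers greater than 1 are prime".
# The method executed perfectly; the conclusion is still wrong:
print(hypothesis_holds, is_prime(9))   # True False -> an error of conclusion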
The remainder of the paper largely deals with what it would be like (theoretically) to create a learning computer. It includes an interesting discussion (pg454-5) about how the "real mind" is elusive, using the metaphor of the "skin of an onion".