Sunday, February 19, 2006



In October 1950 the philosophical quarterly Mind published a paper by A. M. Turing under the title "Computing Machinery and Intelligence". The first sentence of that paper read, "I propose to consider the question, 'Can machines think?'". (1) Six years later the paper was reprinted in an anthology, The World of Mathematics, edited by James Newman, under the title "Can a Machine Think?" Ever since, there has been a torrent of publications around that question, and it has given rise to what is known as the Artificial Intelligence project. Now, fifty years after that epochal reprint, Mark Halpern has published a judicious study of the whole issue (2). Halpern blasts the claims of Artificial Intelligence enthusiasts and questions their right to pose as descendants of Alan Turing. To Mark Halpern I owe the incitement to offer the following thoughts.
For reasons that will become evident in the course of this paper, my treatment of the question is tangential to the Turing Test and to the questions it bred and the discussions it incited. I will readily concede it is not inconceivable that we may make a thinking entity or even an entity that loves and hates and composes symphonies and creates original poetry. My contention is that even after conceding that, there would remain questions that we have to be clear about.
Turing, having said it would be absurd to decide the question by examining how the terms "machine" and "think" are commonly used, proposes that the question be decided by an experiment which he calls the Imitation Game but which has come to be known as the Turing Test. The idea of the test is simple: to set questions to a computer and a human being, both hidden from the questioner. If the questioner is unable to decide which answers issue from the computer and which from the human being, we conclude that the computer was thinking.
Turing expected computers to earn the description 'thinking machines' not on the basis of problem-solving capabilities but on the basis of demonstrating the capacity for answering questions in a human-like manner. That, as far as it goes, is sensible. We today have computers that perform in seconds mathematical operations that would take a team of mathematicians much longer to perform; this in itself does not bring those computers any nearer to being human-like. And yet Turing's sensible proviso does not remedy the error inbuilt in the very idea of the test. In proposing to decide the question on the basis of objectively observable criteria, we remove all consideration of subjectivity and thus empty the question of all philosophical significance.
As often happens with questions that look simple, the question "Can a machine think?" is not a single question but is a conglomeration that can be separated into numerous questions which might receive different answers. To think clearly we need to separate these different questions.
In what sense can the Turing Test determine whether a computer is thinking? The answer to this question of course depends in the first place on how we define 'thinking'. But I do not intend to pursue the question in that direction. I think it is not unreasonable to say that however we define 'thinking' it will be possible sooner or later to programme a computer so that it will 'think' in the sense of the elected definition. But this would leave open what I regard as the more important question: Can the Turing Test determine whether a computer has subjectivity?
Again, whether or not we find the Turing Test providing a criterion for subjectivity, we would yet be left with a still more important question: What is subjectivity? For supposing we can devise a computer of such complexity as to have its (her?, his?) own whims and moods and initiative, that 'computer' would be in the same position as a cloned human being — its subjectivity would be an 'emergent' reality not reducible to either the hardware or the software that went to the making of the computer-person.
(I use the term 'emergent' hesitantly since it has been loaded with interpretations I cannot accept.)
What I am concerned to emphasize is that regardless of the process by which a person comes to be a person, it is the subjectivity of the person that is the locus of reality and value.
Approaching the question from a different angle, if or when neuroscientists succeed in completely mapping and artificially reproducing all the workings of a human brain (never mind the untechnicality of my language; I make no pretence to scientific knowledge; this does not vitiate my position), I would still maintain that the achieved autonomy and subjectivity would be creative in a double sense: (1) it would be an instance of the creativity of all process in nature ('natural process' would be needlessly ambiguous), bringing into being a reality that was not there before, an original reality; (2) the 'emergent' entity would fulfil itself, assert its reality, in creative activity, in thoughts and deeds that bring into being what was not there before.
Marginally: supposing we made a fully functional brain of an intelligence equal to that of an Einstein, the being to which that brain pertains would not have human feelings, human emotions, human desires, unless it were integrated with a body of flesh and blood with the same hormones and enzymes and whatnot as any one of us. But this is neither here nor there, for there is nothing to prevent there being 'persons' constituted differently from us that would experience feelings and emotions other than those experienced by us.
From the start and throughout Turing's paper it is evident that he has no doubt as to what the answer should be. The test was obviously not devised to help us find an answer to the general question "Can machines think?" but to calibrate particular computers to decide which one or ones come up to the specified standard of thinking. And yet Turing's answer to his own question comes as frustratingly anticlimactic:
"The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."
If the question is reduced to one of determining the conventional usage of words, it becomes of little philosophical importance. Halpern points to a "glaring contradiction in Turing's position" since at the beginning of his paper he held that to seek an answer to the question "in a statistical survey such as a Gallup poll" would be absurd.
Halpern quotes psychologist Epstein as saying that "the sentient computer is inevitable." Clearly Epstein understands sentience in behaviourist terms. With the advance of technology we can have computers that imitate human responses and human behaviour with more and more sophistication. But the question for a philosopher does not turn round what computers can or cannot do but round what computers do or do not experience.
Moreover, factually, by the criterion of returning original responses, as Mark Halpern remarks, "no computer, however sophisticated, has come anywhere near real thinking."
Lucretius's tumbling atoms do not remain tumbling atoms: they become Goethe and Heine and Shakespeare and Wordsworth. The question philosophy should answer is this: Which has the better claim to the title 'real', the dust that was Goethe or the living fire that even today sings,
Alles Vergängliche
Ist nur ein Gleichnis;
Das Unzulängliche,
Hier wird's Ereignis;
Das Unbeschreibliche,
Hier ist es gethan;
Das Ewig-Weibliche
Zieht uns hinan — ? (3)
Plato had an answer to that question. I think it is the one answer that makes sense of human life.


(1) Alan Turing's paper is accessible at numerous online sources.
(2) Mark Halpern, "The Trouble with the Turing Test", The New Atlantis, Number 11, Winter 2006, pp. 42-63, available online; a more detailed version can be found on his website.
(3) The closing lines of Faust, Part Two.

