May 17, 2011 12:14:00 PM
You say you are a human. Now, prove it. Wait, wait -- it's too easy to point to your face or to perform a tap dance as you sing "Bicycle Built for Two." That will not do at all. You must, instead, at your computer terminal type in your part of a conversation that will show the other conversationalist that you are not yourself a computer. And you will be competing with computers that have been programmed to try to prove that they are humans. This is the basis for the Loebner Prize, a controversial annual competition within the artificial intelligence community.
A panel of judges has a series of five-minute-long conversations via screen and keyboard; at the other end of the conversation might be a computer programmed to pretend to be a human, or it might be a human trying to persuade the judges that they are not typing to a computer. The judges, of course, don't know beforehand who is who (or, I suppose, what is what), and vote for the conversations that seem most human to them. The Most Human Computer Award, a research grant, goes to the programmers of the best computer conversationalist.
But oddly, there is a Most Human Human award for the human who did the best job of making the judges think they were typing to a human. In 2009, Brian Christian won the award, and he has written about it in "The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive" (Doubleday). It is a curious look into the history and potential of artificial intelligence, and a brilliant comparison between artificial intelligence and our natural variety. Christian may have won a prize demonstrating his humanness, but he confirms his victory in this humane, humorous and thought-provoking book. "In a sense," he tells us, "this is a book about artificial intelligence, the story of its history and of my own personal involvement, in my own small way, in that history. But at the core, it's a book about living life."
The Loebner Prize grew out of the Turing Test. Alan Turing was a brilliant British mathematician and codebreaker who was thinking deeply about what computers could do long before there was anything we would recognize as a computer. In his 1950 paper "Computing Machinery and Intelligence," he proposed that it was difficult to define what thinking was, so we could bypass the question of whether machines could think by using an imitation game.
An interrogator would interview two players via teleprinter (no LCD monitors then) and decide which was the human and which was the computer. Turing further proposed the alarming idea that a machine that could win the game was doing everything that its human competitor was doing, that is, thinking. Turing had astonishing foresight; computing was primitive in his time. (Christian reminds us that "computers" at the time were not machines but people who computed numbers, and that in the 1940s artificial intelligence pioneer Claude Shannon fell in love with and married a computer.) But he predicted that it would be but 50 years before a computer could play the imitation game so well that the average interrogator could not tell it from a human. He was overoptimistic; programs competing for the Loebner Prize are doing better and better, and although they are not yet conversing as well as humans, to read Christian's book is to be convinced that someday it is going to happen. It has also made me doubt whether, as Turing told us, such a conversing program is really thinking.
There are manuals to tell programmers how best to make conversation realistic, but Christian discovers there are no such guides to tell humans how to show themselves human. He talks with former competitors (and seems to have a collegial relationship with the humans who were in the tests with him) to get advice. "Just be yourself" was the most common bit of advice, and on the face of it, this ought to be easy; Christian is, after all, human. But what does "Be yourself" mean? (Christian doesn't give us the joke rejoinder "Unless you are a jerk. If so, be someone else.") And if he is so casual as to just be himself during the competition, does this have a good chance of bringing him victory?
Much of the book involves his interviews with linguists, information theorists, philosophers and even lawyers about what the Turing Test means, and thereby what it means to be human, and the best ways to show it. Way back in Plato's time people were thinking about what makes humans unique, comparing ourselves to other animal species. Eventually, we learned that while we may be the best at, say, using tools or using language, we weren't the only animals that did such things. It is clear to anyone who has a dog that dogs have some sort of mind and some sort of thought, so we are not the only ones who think. Nowadays the comparisons not only are between us and animals, but between us and computers.
And whatever it is that computers do, it is not thinking like we do. For instance, there is a conversational program called Cleverbot, which has been awarded prizes in the competition. It has a website, and not only can humans visit it and engage in conversation, but Cleverbot also borrows from what they tell it. It takes samples of these conversations and from the samples it makes its own answers and remarks. (Translation programs via Google do the same sort of thing.)
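The general technique at work here can be sketched as a toy retrieval bot: store prompt-and-reply pairs from past conversations, then answer a new prompt with the reply that followed the most similar stored prompt. This is a minimal illustration under simple assumptions (a word-overlap similarity measure, a made-up RetrievalBot class), not Cleverbot's actual code, which is surely far more elaborate.

```python
import string

def tokenize(text):
    # Lowercase, split on whitespace, strip surrounding punctuation.
    return {w.strip(string.punctuation) for w in text.lower().split()}

def overlap(a, b):
    # Jaccard similarity between two token sets (0.0 to 1.0).
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

class RetrievalBot:
    def __init__(self):
        self.memory = []  # (prompt, reply) pairs harvested from past chats

    def learn(self, prompt, reply):
        self.memory.append((prompt, reply))

    def respond(self, prompt):
        # Reuse the reply that followed the most similar stored prompt.
        if not self.memory:
            return "I don't know."
        tokens = tokenize(prompt)
        best = max(self.memory, key=lambda pair: overlap(tokens, tokenize(pair[0])))
        return best[1]

bot = RetrievalBot()
bot.learn("How are you?", "I'm fine, thanks.")
bot.learn("Can computers think?", "No, they can't.")
print(bot.respond("How are you today?"))  # → I'm fine, thanks.
```

The bot never models meaning; it only matches surface words, which is why such systems can sound fluent on familiar questions yet stumble on anything their stored conversations never covered.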
Conversing with Cleverbot can be convincing, if you ask it "How are you?" or "What is two squared?" or "Can computers think?" (I tried that one. Cleverbot assured me, "No, they can't.") But since Cleverbot is an amalgamation of conversations, even though it can crunch a huge database of words and phrases actually used by humans, it doesn't do too well with even the most basic of conversation starters. "Where are you from?" I asked, and it said, "I don't know."
That's a true answer, of course! None of the computer programs comes close to knowing anything. Christian often asks us to look at an example of successful artificial intelligence, Deep Blue, which defeated Garry Kasparov in chess in 1997. There is no doubt that the computer was playing chess. It might even be said to be planning moves or playing aggressively. But it had no idea what it was doing; it could not tell you what a pawn was, nor could it feel any thrill of victory. No conversation programs have any idea what they are doing, either; they are all simulating conversation. Some of the conversational give-and-takes reproduced here are just clunkers, remarks no human would make, but there are others that are surprisingly lifelike.
They are really conversations, just like Deep Blue was really playing chess, although the conversational computers are not nearly so good at their job as Deep Blue was at its job. It is comforting, in a way, that computers are so bad at something we take for granted, just chatting. Christian wants to call attention to how special we are, and his book is a success, showing that, among other things, humans can take into account context, allusion, and metaphor, which computers cannot. Even more important, when humans don't understand what has been said, they don't have to risk saying something stupid in response; they can ask questions to aid understanding, but computers have no understanding to be aided. It would be so fascinating to hear what Turing would say about these machines, or about the next generation of them that really is going to be able to converse with some sort of naturalness.
What would Turing think, for instance, if Cleverbot turned really clever and sampled its huge database of conversations so well that it really was a good conversation partner? It's hard to believe that Turing would think that such successful sampling would actually be thinking. We will have reliable conversational computers sometime fairly soon; I predict that at that point, we will still be asking if computers are ever going to be able to think.
Rob Hardy is a local psychiatrist who reviews books for a hobby. His e-mail address is [email protected]