
Barack Obama passes the Turing test, too

The famous computer science pioneer Alan Turing decided to define "artificial intelligence" as a machine's ability to converse in such a way that it fools people into thinking that he or she or it is an actual human being. I don't think that this definition of intelligence is deep – this will be discussed later.



Barack Obama and his Japanese friend

But let's first cover the story. As the chatbot's namesake Eugene S told us, the media have been full of hype about a chatbot pretending to be a 13-year-old Ukrainian boy, Eugene Goostman (see his or her or its website where you may chat with Eugene), which has tricked 1/3 of a London committee into believing its words were produced by a human. The programmer of the chatbot remained modest and would probably agree that his program isn't dramatically more advanced than Eliza, which was created half a century ago.

(I still remember my encounter with a 130-cm robot who came to me and shook my hand at the Rutgers Busch Campus Cafeteria sometime in 1999. The discussion with this robot – about Czechia, Werner Heisenberg, and other things – was much more inspiring than similar talks one may have with 99% of the people. For a day or so, I was stunned: had artificial intelligence improved so much? Beware spoilers: After that day, I assured myself that the robot had cameras, microphones, and speakers converting a human voice to a funny robotic noise, and this "artificial personality" was controlled remotely from a location about 50 meters away.)

Here's my interview with another one that has tricked almost all Americans, and people around the world, into believing that his sentences are genuine human creations rather than decorated rhetorical patterns invented by semi-automatic politically correct speechwriters.

Motl: Did you know about the policy of selective targeting of conservative groups by the Internal Revenue Service?

Obama: Let me make sure that I answer your specific question. I can assure you that I certainly did not know anything about the IG report before the IG report had been leaked through the press.

Motl: But that wasn't my question. I was asking generally about the harassment of right-wingers, not about a report of yours.

Obama: Let me be clear. Now, could you tell me where you live?

Motl: Hmm. What about the relationships with Eastern Europe? Don't you think that America should support the independently working prosperity of countries such as Poland instead of their obsession with permanently viewing Russia as the culprit behind all their failures?

Obama: Let me make sure: Poland is one of our strongest and closest allies. Using a phrase from boxing, Poland punches above its weight. ;-D




Motl: My question wasn't specifically about Poland, it could have been about Estonia or any other country in the region. But I am asking about the principle and about the real and desired American attitude.

Obama: Let me make sure: Estonia is one of our strongest and closest allies. Using a phrase from boxing, Estonia punches above its weight. ;-D




Motl: That's a disappointing response. Maybe you are just misunderstanding my Pilsner, nearly Hungarian, accent. But it seems that it was easier for a chatbot pretending to be a politically correct politician to pass the Turing test because people got used to such politicians' absence of nontrivial ideas and, instead, their constant repetition of dull clichés. Millions of people are not only willing to accept the human origin of these clichés but even to vote for politicians who don't say anything else. Did you understand that my question was focusing on America and its states?

Obama: Now, make no mistake: Over the last fifteen months, we've traveled to every corner of the United States. I've now been to fifty-seven states. So, could you tell me where you live?

Motl: Let me return to the frustrating events in Ukraine. Is the situation already improving?

Obama: Let me be clear. Change isn't easy. It won't happen overnight. There will be setbacks and false starts. So much of America needs to be rebuilt. We have crumbling roads and bridges.

Motl: I have already heard it somewhere. At least you're not repeating that most of the girls in your class are either fat or pimpled.

OK, let me stop this silly conversation, which was inspired by Scott Aaronson's virtual dialogue with another chatbot. I didn't vote for Obama and (probably) wouldn't vote for Obama. But most American voters did – both in 2008 and 2012 – so Obama has passed the Turing test.

Instead of continuing the conversation that isn't too fruitful, let me offer you some serious words.

Summary

I think that a human judge who is not sufficiently attentive and clever may easily get fooled into believing that a computer program is controlled by a human. After all, humans and their masses get manipulated all the time and they are often forced to believe things that are much less likely than the existence of a computer program fully indistinguishable from a human.

A cleverer human judge will be able to manipulate any existing program into corners where its differences from regular human behavior are amplified. As Scott's example shows, the simplest questions involving some everyday life experience or kindergarten knowledge are enough to unmask the artificial origin of (almost?) all existing programs that emulate humans.

Concerning Eugene Goostman, it was much easier for the program to fool a committee because committees are stupider than human beings – and perhaps stupider than most chatbots in the world, too. The program has some really easily fixable defects that should have been repaired a long time ago. In particular, its verbatim repetition of long phrases is utterly inhuman (the pimpled girls are the best example here). Humans sometimes repeat things verbatim, too, but those segments are usually shorter and humans soon get bored by the perfect repetition.

Also, the program's decision to speak about topics that almost certainly cannot be relevant to the question – because the chatbot misunderstood some detail of the question – deviates from the expected behavior of humans. (Eugene the chatbot began to talk about wealth and architecture in Russia after he or it was asked a clearly unrelated question involving the post-Soviet realm.)

If a human misunderstands a question, he or she either gives up and makes it clear, or he or she tries to comprehend what the question meant. The latter approach is much harder, of course: in it you may see the human potential to learn and ultimately understand whatever needs to be understood. The existing computer programs really lack this ability. You can easily predict that you won't be able to teach them everything that would have to be taught to understand a certain question, and that's how you identify that they're not human beings, at least not intelligent ones.

In other words, computer programs of the usual type only have the potential to exhibit some behavior within a certain "class of responses" that is already envisioned when the program is written down. On the other hand, intelligent humans have the potential to increasingly deepen, filter, and crystallize their knowledge and to offer complicated responses whose content and organization weren't clear when Nature wrote the self-improving program for the first time (which was really when it created the first RNA/DNA/protein molecule!).
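
To make the "class of responses" point concrete, here is a minimal, hypothetical Eliza-style sketch in Python. It is not the actual code of Eliza or Eugene Goostman, just an illustration of the same pattern-matching principle: every reply the program can ever produce already sits in the RULES and FALLBACKS tables that the programmer wrote down in advance.

import random
import re

# Hypothetical rules: each pattern maps to a handful of canned reply templates.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
]

# Generic evasions used when no pattern matches.
FALLBACKS = ["Please go on.", "I see. And what does that suggest to you?"]

def reply(sentence: str) -> str:
    """Return a canned response; nothing outside RULES/FALLBACKS can ever appear."""
    for pattern, templates in RULES:
        match = pattern.search(sentence)
        if match:
            fragments = [g.rstrip(".?!") for g in match.groups()]
            return random.choice(templates).format(*fragments)
    return random.choice(FALLBACKS)

print(reply("I need a straight answer about Poland."))
print(reply("You keep repeating yourself."))

No matter how long you talk to such a program, it can never say anything outside the classes of responses its author anticipated, which is exactly the limitation described above.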

Scott Aaronson asked whether the excessive hype about the Eugene chatbot boils down to a defect of the Turing test as a profound paradigm; or just to the journalists' misinterpretation of the deep ideas that Alan Turing brought us.

Nothing against Alan Turing, but I think it is the former. Turing was trying to make the concept of "artificial intelligence" better defined. He made it slightly better defined – whether the ability to imitate human beings is present or not can be decided more clearly, by an operational procedure – but the price he paid was that the content of "artificial intelligence" became shallower at the same moment.

It is simply not hard to fool many people – and perhaps most people. Politicians know it, much like the authors of various spam e-mail messages pretending that the writer is someone who needs your help, chatbots enhanced by videos of nude women who are supposedly waiting for sex with you on the web servers, and so on. I think that computer programs are already able to emulate the behavior of some stupider organisms and perhaps stupider human beings, too. An artificial insect may fly. Spambots sometimes fill physics blogs with incoherent, worthless, repetitive rubbish about the unfalsifiability of a theory, or any theory. And some human beings add their own comments because these individuals are exactly as stupid as the spambots – and as obnoxious as some insects, too.

The real problem is for a machine to imitate an intelligent human being, one that has the capacity to learn new things and deepen his or her understanding of a subject matter – to the depth and breadth that isn't incorporated or envisioned or pre-planned or thought about at the very beginning, at the moment of conception (or programming). And this ability to deepen the knowledge and especially the coherence, structure, and inner organization of the knowledge is what makes intelligent people intelligent.

This ability – the real artificial intelligence – has very little to do with the much more superficial ability to fool human judges or committees who may themselves be insufficiently attentive or clever.

Everyone understands what the adjective "artificial" is supposed to mean: the behavior doesn't result from the activity of DNA-powered biological neural (and other) cells. The hard part of the phrase "artificial intelligence" is "intelligence", and that's exactly the weakness of the Turing test as a criterion, too. Human intelligence is a wonderful thing – but only when it's deep enough. The Turing test rates programs according to average human judges, and because average humans are probably stupider than they were 50 years ago (or at least, to be certain, not significantly smarter), it shouldn't be shocking that the programs that pass the Turing test in 2014 may be stupider than (or at least not much more advanced than) the programs that passed the same test (with different judges, however) half a century ago.

A definition of "artificial intelligence" that is more valuable must be independent of the quality and depth of intelligence of undefined groups of people. Human-like intelligence may look remarkable, but if you look sufficiently closely, even many – if not most – people are really shallow, repetitive, dumb, and uninteresting, which is why the programs emulating their behavior are inevitably uninteresting, too! In fact, really average humans may be emulated by recording and copying terabytes of generic human responses and dialogues and choosing the most appropriate one in a given context. That strategy – exploiting the fact that an artificial agent's memory may be larger than a human's – may be sort of enough from many judges' viewpoint.
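
As a rough illustration of that record-and-retrieve strategy, here is a hypothetical sketch in Python (not the implementation of any particular chatbot, with a toy three-line corpus standing in for the terabytes): score the stored human replies by word overlap with the judge's question and return the best match.

import re
from collections import Counter

# Hypothetical corpus of recorded human dialogue turns: (prompt, human reply).
CORPUS = [
    ("where do you live", "I live in a small town; nothing special, really."),
    ("what do you think about the weather", "It has been raining all week, I'm afraid."),
    ("is the situation improving", "Change isn't easy, it won't happen overnight."),
]

def tokens(text: str) -> Counter:
    """Lowercase word counts, ignoring punctuation."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def overlap(a: str, b: str) -> int:
    """Crude relevance score: the number of words shared by two sentences."""
    return sum((tokens(a) & tokens(b)).values())

def retrieve_reply(question: str) -> str:
    """Pick the recorded human reply whose prompt overlaps most with the question."""
    _, best_reply = max(CORPUS, key=lambda pair: overlap(pair[0], question))
    return best_reply

print(retrieve_reply("So, could you tell me where you live?"))

With a large enough corpus and a less crude scoring function, such a copy-and-choose machine may satisfy many judges, yet it never deepens or reorganizes anything it knows.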

The fascinating challenge that remains largely open – and may remain open for quite some time, if not forever – is for a program to emulate some of the most creative, intelligent humans in history. Computers and people are converging – computers are getting smarter while people are getting stupider. But only the former component of the convergence process may impress us.

The general character of the human, self-improving algorithms differs from that of the usual classical computer algorithms, the kind also behind Eliza or Eugene Goostman. But I believe that biological material is in no way necessary for this biological-like intelligence to arise, and at some moment, silicon-based engines using the same fuzzy, self-improving algorithms to become more intelligent will be produced and programmed, too.