Over the past weekend, many “credible” media outlets falsely reported that a “supercomputer” had convinced about 30 percent of judges that it was a 13-year-old boy and had therefore passed the elusive Turing test. Within days, several experts pointed out that these reports were false.
The University of Reading sponsored the event. A computer program called “Eugene Goostman” entered the contest pretending to be a young boy from Ukraine. Apparently, “Eugene Goostman” convinced 10 out of 30 judges that it was in fact human during a series of back-and-forth text conversations lasting about five minutes each.
It’s a shame that many media outlets reported these test results as “passed.”
For starters, the 30% passing threshold was never actually set by Alan Turing. Turing said the test would be considered passed if and only if “the interrogator decides wrongly as often when the game is played between a computer and a human as he does when the game is played by a man and a woman.”
The 30% figure was a pass rate Turing suggested might be achievable by the year 2000, not a general rule. Turing also never said that a five-minute test would count as “achieving human-level AI,” which would require conversations much longer than five minutes.
The reported “super computer” is not really a supercomputer at all, but a simple chatbot. A chatbot scans the text entered into a field for keywords and pulls canned responses from a predefined database. Chatbots marketed as “AI” should not be mistaken for actual artificial intelligence, and the technique is not new.
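To see how little machinery such a bot needs, here is a minimal sketch of the keyword-matching approach described above. This is a hypothetical illustration, not Eugene Goostman’s actual code; the keywords and canned responses are invented for the example.

```python
import random
import re

# Toy keyword-to-response database. A real chatbot's tables are larger,
# but the mechanism is the same: match a keyword, emit a canned line.
RESPONSES = {
    "name": ["My name is Eugene.", "You can call me Eugene!"],
    "from": ["I live in Odessa, Ukraine.", "Ukraine. Ever heard of it?"],
    "old": ["I am thirteen years old.", "Thirteen. Why do you ask?"],
}
# When nothing matches, deflect with a generic filler.
FALLBACK = ["Interesting... tell me more.", "Why do you say that?"]

def reply(message: str) -> str:
    """Scan the input for known keywords and return a canned response."""
    words = re.findall(r"[a-z]+", message.lower())
    for keyword, canned in RESPONSES.items():
        if keyword in words:
            return random.choice(canned)
    return random.choice(FALLBACK)
```

Note that the program never understands anything: it either pattern-matches or deflects, and a persona with limited English conveniently excuses the deflections.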
As early as 1972, a “sophisticated” chatbot called PARRY fooled nearly 48% of psychiatrists into believing that it was a real person suffering from severe schizophrenia.
The designers of Eugene Goostman also bent the rules from the start by creating a 13-year-old persona whose age conveniently excuses its limited communication abilities. I doubt this is what Turing had in mind.