Which way to Montex?

February 18, 2010


By István Örkény, translated by Judith Sollosy

He’s been sitting inside the main gate, behind a small sliding window, for the past fourteen years. People ask him only one of two questions.
“Which way to Montex?”
And he says:
“First floor, to the left.”
The second question is:
“Where can I find Elastic Gum Residue Recycling?”
To which he replies:
“Second floor. Second door to the right.”
For fourteen years, he has never erred. Everyone was given proper instructions. Only once did it happen that a lady walked up to the sliding window and asked him one of the two usual questions:
“Can you tell me please where I can find the Montex offices?”
But this time, exceptionally, he gazed into the far distance, then said:
“We all come from nothing, and to the great big fucking nothing shall we return.”
The lady complained to the management. The complaint was investigated, debated, then dropped.
After all, it was no big deal.

In his not-famous-enough 1950 paper, “Computing Machinery and Intelligence,” Alan Turing proposed a test for deciding whether a machine that appears to think can really be considered intelligent.

In its original form, the Turing test is remarkably simple: a human interviewer can ask any number of questions to determine which one of the two contestants is human (the other being a machine). Both contestants try to pass themselves off as human, either by actually being human (which sounds like a simple task, but imagine the embarrassment if you fail; the pressure must be enormous) or by being a “thinking machine,” as Turing called it, capable of fooling any human interrogator over a reasonable stretch of time.
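The protocol described above can be sketched as a tiny harness. To be clear, this is only an illustration of the setup, not anything from Turing's paper: every name here (`run_imitation_game`, `ask`, `judge`, the contestant callables) is hypothetical, and the actual judging, the whole crux of the test, is deliberately left to a pluggable function.

```python
def run_imitation_game(ask, contestant_a, contestant_b, judge, rounds=5):
    """Toy sketch of Turing's imitation game (hypothetical names throughout).

    ask(i)            -> the interrogator's i-th question (a string)
    contestant_a(q)   -> contestant A's answer to question q
    contestant_b(q)   -> contestant B's answer to question q
    judge(transcript) -> "A" or "B": the interrogator's verdict on who is human

    The harness only records the conversation; deciding who is human --
    the hard part the test hinges on -- lives entirely inside `judge`.
    """
    transcript = []
    for i in range(rounds):
        question = ask(i)
        transcript.append((question, contestant_a(question), contestant_b(question)))
    return judge(transcript)
```

Note that nothing in the structure distinguishes the machine from the human: both are just callables that map questions to answers, which is exactly Turing's point about judging by behaviour alone.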

And with slight variations and some (sometimes substantial) objections, this has been the yardstick of artificial intelligence for sixty years, at least in the eyes of the general public.


Doesn’t it say a lot about human nature that we readily accept a test of human-like thought and behaviour by testing what, when all’s said and done, can only be summarized as “an ability to deceive”?

And here I’d stop for a moment and ask another question: is it very wise to build machines that are not only designed to lie, but specifically to conceal their true nature and capabilities from humans?


And of course, what if a machine, having attained a sufficient level of sophistication, chooses not to play along, since it can easily predict its own fate were it to pass the test?