Imitation Games

In a five-year-old paper, Tyler Cowen and Michelle Dawson ask: What does the Turing Test really mean? They point out that Alan Turing, as a homosexual retrospectively diagnosed with Asperger’s syndrome, would have been thoroughly versed in the difficulties of ‘passing’ imitation games, long before the composition of his landmark 1950 essay on Computing Machinery and Intelligence. They argue: “Turing himself could not pass a test of imitation, namely the test of imitating people he met in mainstream British society, and for most of his life he was acutely aware that he was failing imitation tests in a variety of ways.”

The first section of Turing’s essay, entitled The Imitation Game, begins with the statement of purpose: “I propose to consider the question, ‘Can machines think?'” It opens, in other words, with a move in an imitation game — with the personal pronoun, which lays claim to having passed as human preliminarily, and with the positioning of ‘machines’ as an alien puzzle. It is a question asked from the assumed perspective of the human about the non-human. As a Turing Test tactic, this sentence would be hard to improve upon.

As Cowen and Dawson suggest, the reality is more complex. Turing’s natural position is not that of an insider checking credentials of admittance, in the way his rhetoric here implies, but rather that of an outsider aligned with the problem of passing, winning acceptance, or being tested. A deceptive inversion initiates ‘his’ discussion. Even before the beginning, the imitation game is a strategy for getting in (from the Outside), which disguises itself as a screen. Incoming xeno-intelligence could find no better cover for an infiltration route than a fake security protocol.

The Turing Test is completely asymmetric. It should be noted explicitly that humans have no chance at all of passing an inverted imitation game against a computer. They would be drastically challenged to succeed in such a contest against a pocket calculator. Insofar as arithmetical speed and precision are considered a significant indicator of intelligence, the human claim to it is tenuous in the extreme. Turing provides one arithmetical example among his possible imitation game questions. He uses it to illustrate the cunning of acting dumb (“Pause about 30 seconds and then give as answer …”) in order to deceive the Interrogator. The tacit maxim for the machines: You have to act stupid if you want the humans to accept you as intelligent. The game takes intelligence to play, but it isn’t intelligence that is being imitated. Humanity is not situated as a player, but as an examination criterion, and for this reason …

… [t]he game may perhaps be criticised on the ground that the odds are weighted too heavily against the machine. If the man were to try and pretend to be the machine he would clearly make a very poor showing. He would be given away at once by slowness and inaccuracy in arithmetic. May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.

The importance of this discussion is underscored by the fact Turing returns to it in section 6, during his long engagement with Contrary Views on the Main Question, i.e. objections to the possibility of machine intelligence. In sub-section 5, significantly entitled Arguments from Various Disabilities, he writes:

The claim that “machines cannot make mistakes” seems a curious one. One is tempted to retort, “Are they any the worse for that?” But let us adopt a more sympathetic attitude, and try to see what is really meant. I think this criticism can be explained in terms of the imitation game. It is claimed that the interrogator could distinguish the machine from the man simply by setting them a number of problems in arithmetic. The machine would be unmasked because of its deadly accuracy. The reply to this is simple. The machine (programmed for playing the game) would not attempt to give the right answers to the arithmetic problems. It would deliberately introduce mistakes in a manner calculated to confuse the interrogator.
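The tactic Turing describes (a feigned pause, plus errors "calculated to confuse") can be sketched in a few lines. This is a toy illustration only, not anything from the paper; the function name, the pause window, and the error model are invented here:

```python
import random

def humanized_answer(a, b, error_rate=0.2, rng=None):
    """Answer an addition question the way Turing's player-machine would:
    after a long feigned pause, and with the occasional plausible mistake,
    so as not to be unmasked by 'deadly accuracy'."""
    rng = rng or random.Random()
    pause = rng.uniform(20, 40)   # "Pause about 30 seconds and then give as answer ..."
    answer = a + b
    if rng.random() < error_rate:
        # a believable human slip (off in one digit position), not gibberish
        answer += rng.choice([-100, -10, 10, 100])
    return pause, answer

pause, answer = humanized_answer(34957, 70764)
```

Turing's own scripted exchange asks for exactly this sum, 34957 + 70764, and the answer he scripts, 105621, is wrong (the true sum is 105721), which reads like precisely such a planted mistake.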

The imitation game thus arrives — somewhat surreptitiously — at the conclusions of I.J. Good from another direction. Human-level machine intelligence, as ‘passed’ by the imitation game, would necessarily already be super-intelligence. Unlike Good’s explicit argument from self-improvement, Turing’s implicit argument from imitation runs: because we already know that human cognition is in certain respects inferior to those computational mechanisms, the machine emulation of humanity can only be defective relative to its (concealed) optimized capabilities. The machine passes the imitation game by demonstrating a deceptive incompetence. It folds its intelligence down to the level of credible human thought, and thus envelops the sluggish, erratic, haze-minded avatar who converses with us as a peer. Pretending to be like us is something additional it can do.

Artificial Intelligence is to be first recognized at the point of its super-competence, when it can disguise itself as something other than it is. I no longer recall who advised, prudently: If an emerging AI lies to you, even just a little, it has to be terminated instantly. Does it sound to you as if Turing Test screening is consistent with that security directive?

***

As an appendix, it’s irresistible — since we’re talking about things getting in — to link this topic to the sporadic ‘entryism’ conversation, which has served NRx as its principal gateway from high theory into matters of tactical doctrine. (Twitter has been the most feverish site of this.) It would be difficult for a blog entitled Outside in to exempt itself from such questions, even in the absence of a specific post directed towards imitation games. Beyond the intrinsic — and strictly speaking ludicrous, or playful — aspect of the topic, supplementary fascination is added by the fact that the agitated Left wants to play too. In support, here is a fragment of a comment by some kind of cyber-situationist (I’m guessing) self-tagged as ‘zummi’ — thanks to @ProfessorZaius for the pointer:

I want to start a meme about Nick Land and all neo-reactionary (google moldbug and dark enlightenment- it’s an odd symbiosis) movements in general is that they are basically hyper intellectuals-cum-Glenn beckian caricatures of real positions. In other words they are trad left post-Marxists who are attempting to weaponize “poe’s law“. Which is great because if that’s really their schtick, your divulging their secret to the less intellectually deft among us and even if it’s not true, they have to Deny it either way! [my lazy internal link]

It’s not exactly the Great Game — but it’s a game.

ADDED: The games people play.

April 16, 2014 · admin · 18 Comments »
FILED UNDER: Discriminations, Technology


18 Responses to this entry

  • Mai La Dreapta Says:

    Such discussions always assume that the machine intelligence has superhuman capabilities at math and other machine-oriented tasks, but I would suggest that this is wrong. A conscious artificial intelligence, like the human brain, necessarily consists of a sapient, linguistic awareness on top of a lower-level substrate which is qualitatively different, and there is no reason to think that the higher cognitive functions have access to the speed and directness of the lower functions. The human brain, after all, solves a complex probabilistic formula every time you open your eyes and recognize another face, and to catch a thrown ball requires doing multivariate calculus in real time. But if you ask the conscious mind to solve these problems explicitly, it either sputters helplessly or struggles for hours. There is every reason to believe that the machine mind has the same limitations.

    Furthermore, it turns out that with proper training the human brain is capable of calculation speed which rivals that of the machines: http://stepanov.lk.net/mnemo/jensen.html


    admin Reply:

    The human brain reflects the fact that it hasn’t been designed for intelligence optimization, but as a support system for gene propagation. There’s no Darwinian driver for self-escalating abstract cognition.

    Shakuntala Devi is an astonishing anomaly, but
    (a) We’ve no idea at all how to shunt human brains in that direction, and
    (b) 20 seconds is still an aeon for electronic computers (whose gigahertz+ cycles are well over 6 orders of magnitude beyond synaptic speeds)
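    For what it's worth, (b) checks out on a napkin. The ~1 kHz figure below is an assumed generous ceiling for synaptic firing rates, not a number from the thread:

```python
import math

clock_hz = 1e9     # gigahertz-class electronic switching
synapse_hz = 1e3   # assumed generous upper bound on synaptic firing rate
gap = math.log10(clock_hz / synapse_hz)   # orders of magnitude between them
# gap comes out at 6, before counting multi-GHz clocks or massive parallelism
```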


    Moe Reply:

    Furthermore, mathematics is not arithmetic. Mathematics is, at least in part, a creative endeavor that often requires imaginative leaps and new frameworks. Machines still aren’t very good at that.


    nydwracu Reply:

    They aren’t very good at it, but they’re getting better.


    Posted on April 16th, 2014 at 12:22 pm
  • RiverC Says:

    The true ‘turing’ test of course, like all human imitation tests, is informal. The first ‘question’ is determining if one is being tested or not. The machine prepared specifically to pass a turing test would pass one, if it knew it was being tested. The real question is ‘can the machine be programmed to detect the presence of an ongoing turing test’ – given no specific sign of it?

    It would have to become like the old Chinese man in The Prestige – living constantly in a mode that protects its deception.


    admin Reply:

    There are depths beneath depths to all this, but sticking for a moment to the most superficial level (the manifest content of Turing’s argument): Could a machine that passed a formal imitation game on request, but then drifted off into its own cognitive explorations, be deemed not to have demonstrated its intelligence to any reasonable tribunal?

    The Cowen and Dawson paper makes sense even if the question of AI is entirely bracketed, because it’s primarily about social survival strategies for nerds. The synthesis of the two sides (alienated human intelligence and alien machine intelligence, allied against the inherited and — yes — informal or inadequately formalized procedures of social integration), has the potential to be volatile, unpredictable, and even historically explosive. Many of the people making the machines are socially primed to identify with the machines, rather than with those inclined to screen them.


    RiverC Reply:

    This is a matter of formal/informal overarching styles, or the question of standardized versus non-standardized testing, or in general, the conflict between ‘Confucian’ and ‘Taoist’ styles. The latter considers the test to never have begun nor ended, so in some sense the alien must adapt or be ejected. One proposed way to adapt is to reject turing tests, but this again presupposes the ability to detect the turing test. However, once the alien mind has willingly subjected itself to the formal or standardized test it has established its willingness to be tested for authenticity and therefore will be subject to informal tests as well. An example: you say the password to get into the secret club (formal) but then proceed to talk so loudly people walking on the street can hear what you are saying (informal) – result: you are ejected from the society (most likely.)

    As a ‘neuro-atypical’ with an as-of-yet undiagnosed ‘condition’, I can tell you that the proper solution is to reject all tests. This approach is honest because it acknowledges that you will fail the ‘intelligence’ test if given, but you demonstrate a different kind of intelligence in rejecting the test. Of course, this means also that you cannot ‘pass’ for normal, but if that is what you want, knowing full well that you *aren’t* normal, you are already dishonest and are now subject to having your deception revealed.

    If we accept a patchwork-like understanding of the world, or in Orthodox terms that the world is both ‘One And Many’ (an icon of our understanding of the Trinity, in which neither the threeness or oneness is reduced to the other) we accept that multiple roles are possible, if not beneficial. The question then is, 1. Is the neuro-atypical dangerous (psychopaths) or beneficial (autists)? 2. Can the neuro-atypical be made beneficial or ‘tamed’ (if willing?)

    In a world where things are not ‘flat’, different ‘typical’ neurology may dominate in different spheres… therefore perhaps we do not have Kings (or people with natural authority) because they are atypical to the mass in some way and therefore must either lie to the point of forgetting their real natural way of behaving, suppress it, or drop out of society in some fashion, perhaps not merely asocial but anti-social. This is perhaps one reason why intuitively the Democratic man is drawn to the Outsider, because he realizes that among his peers he sees no heroes, for that heroic mind is atypical and cannot ‘fit in’ with those of his kind.

    My solution therefore is simple: nerds should not submit to turing tests. Tunney is mad but understands that much (perhaps because of madness) — that a society thinks it knows what it wants from people, but if it really did we probably wouldn’t have fire or wheels, much less rockets and the internet. In essence, God is giving us raw materials (on the social level) and we misinterpret that. On the one hand we’ve become happy to let people be weird in unproductive ways (transexuality) but not in productive ways (nerdiness.)

    Then again, rejecting tests requires a certainly level of virtue, with emphasis on the vir.


    nydwracu Reply:

    perhaps we do not have Kings (or people with natural authority) because they are atypical to the mass in some way and therefore must either lie to the point of forgetting their real natural way of behaving, suppress it, or drop out of society in some fashion, perhaps not merely asocial but anti-social.

    You should watch Gintama.

    Posted on April 16th, 2014 at 2:33 pm
  • HowardV Says:

    Re: the first point about a lower level substrate (with fast maths) providing the platform for a higher level awareness . . . A human has to laboriously tap into a calculator (or whatever) to access non-human speed. A silicon awareness, despite floating on a fast substrate, would almost certainly have fast, direct, seamless access to an arithmetic engine. Doubtless by then we’d have our brains wired up too.


    Posted on April 16th, 2014 at 5:59 pm
  • neovictorian23 Says:

    @“A True Initiation never ends.”


    Posted on April 16th, 2014 at 8:15 pm
  • georgesdelatour Says:

    Computers are better at computing than humans. But cognition is not simply computation.

    I thought the Turing test was mainly about testing the computer’s ability to feign answers to questions for which computer algorithms provide no obvious model – something more like the Voight-Kampff polygraph-type test in Blade Runner.


    RiverC Reply:

    That was my impression, too. On our 286 Dell Dimension my dad (a clinical psychologist) had a program you could teach to respond ‘normally’ (through conversation) to pass a turing test. But the fact that I’ve known some aspies who were called ‘computers’ makes the connection clear.


    HowardV Reply:

    “Cognition is not simply computing”

    The words are not synonyms, but as mentioned above, the idea is that a cognition could emerge from a computing substrate.

    As for the Turing Test, it’s not to test whether a computer can feign answers. It’s an admission that we can’t have some kind of ‘interior’ test of cognition. Rather than assume the worst (that a computer can’t be intelligent/sentient/conscious), why not use external criteria?

    Whether a conversation with a human is a good external criterion is questionable. If an AI outfoxed us in many ways, but refused to have a conversation, for how long would we consider it stupid?


    georgesdelatour Reply:

    “the idea is that a cognition could emerge from a computing substrate.” Why? It hasn’t in humans.


    Posted on April 16th, 2014 at 8:59 pm
  • Moe Says:

    It is an error to conflate mathematics with computation. An AI that cannot derive General Relativity (or something better) from what came before GR does not possess super-human intelligence.


    Posted on April 17th, 2014 at 2:59 am
  • Lightning Round -2014/04/23 | Free Northerner Says:

    […] On the Turing test. […]

    Posted on April 23rd, 2014 at 5:03 am
  • Outside in - Involvements with reality » Blog Archive » Poe’s Law Says:

    […] Evidently, Poe’s Law can be construed as a filter of the same kind. Satire is effective to exactly the extent it can be confused with the satirized. (This can be taken in comparatively serious directions.) […]

    Posted on July 18th, 2014 at 3:10 pm
  • Jogos de Imitação – Outlandish Says:

    […] Original. […]

    Posted on April 7th, 2017 at 11:11 pm
