Uncanny Valley

State-of-the-art in Japanese android design. (Thanks to @existoon for the pointer.)

It’s not really — or even remotely — an AI demonstration, but it’s a demonstration of something (probably several things).


Wikipedia provides some ‘Uncanny Valley’ background and links. The creepiness of The Polar Express (2004) seems to have been the trigger for the concept going mainstream.

From the level of human body simulation achieved already, it’s looking as if the climb out to the far side of the valley is close to complete. Sure, this android behaves like an idiot, but we’re used to idiots.

ADDED: Some hints on how the inside-out approach is going (and some speculations).

July 8, 2014 · admin · 20 Comments »
FILED UNDER: Technology


20 Responses to this entry

  • HowardV Says:

    The Mitrailleuse article is weak.

    It rests on Dennett's idea of consciousness, which has many problems.

    admin Reply:

    The highest-level intellectual commitments on this topic, from all sides, are entirely faith-based as far as I can see. I’ve yet to come across a productive argument about it.

    Alrenous Reply:

    I take exception to that.

    First-order mental objects are not physics, strictly due to the law of identity. If you think you're thinking of a red apple, that is what makes it true that you're thinking of a red apple. This has the wrong epistemic qualities to be a physical object: it is not objective. If you change your mind and think instead about a blue box, that causes you to be thinking about a blue box, and also makes you right that you're thinking about a blue box. It's impossible to be mistaken; my point is that subjectivity is real.

    For contrast, there's no physical, objective test that can determine what you're thinking. All reports and measurements can be misleading or mistaken. The only way to be sure is to be having the same thought, which would merge the two minds at that point of contact. If thoughts are interconnected, it would merge the minds at all points.

    Aeroguy Reply:

    Subjectiveness is an emergent property of awareness: the more complex the awareness, the more complex the subjectivities. Subjectivities can be abstract, they can be physical like what you describe, and they can be feelings. I'm not sure to what extent animals have a mind's eye, but they definitely have feelings; they can be happy, scared, and so on. Go down the scale of complexity: at what point does the feeling not exist? Do ants know fear like we do when they scurry? To what extent, if any, does a worm feel happiness? Does it experience qualia? There must exist some quantum of subjectivity in order to make this determination. Is there a particular property that enables subjectivity or qualia, a quantum of soul, or is it merely a property of complexity? Why do we need to invent non-material things like the soul when complexity does just fine? Why is complexity inadequate?

    Alrenous Reply:

    Emergent properties don’t real.

    If my consciousness machine is on the right track, probably most neurons have a spark of consciousness. But the thing to do is build one and then go look for analogues.

    Antisthenes Reply:

    @Alrenous

    You can very well be mistaken. You might think you’re thinking of a caja azul when it might actually be a blue box you’re thinking of.

    Alrenous Reply:

    Yeah, there's thinking of 'thing-called-blue-box' and 'thing-I-call-blue-box', which may be different things. Call them thing blue and thing azul for now. Here the mistake isn't in the first-order mental entities; it's in a belief about the relationship between those entities, namely that it's the same in other people's minds. Keeping count, we have four first-order mental entities, but one of them is partially about an external relationship. You can mistakenly think others call it azul, but you can't be mistaken in thinking that you think others call it azul.
    I wonder if there are already names for those two properties.

    admin Reply:

    “I take exception to that.” — My point is techno-materialist, which is to say: what people are doing (catallactically) far exceeds what they think they’re doing, have arguments for, express through academic disciplines, or make a matter of articulate belief. Therefore, I couldn’t care less about a philosophical argument saying “Yes, really, there can be artificial intelligence” — at least, not when compared to the techno-commercial programs that are in fact implementing artificial intelligence (or, I suppose, not). If arguments against the possibility of artificial intelligence began to drain resources from the tech-industry base that is making things happen (or not), then it would matter. Insofar as it has no discernible impact whatsoever, it’s an amusement at most. It’s the dynamism of capitalism that decides on the course and speed of AI, not scholastic conceptual debate about its possibility.

    Alrenous Reply:

    @admin

    The theory matters because it shows they won’t succeed in making artificial consciousness. They won’t attempt it directly because they don’t know how and nothing they’re trying to do will attempt it accidentally, and therefore the theory predicts the continual failure of their expectations. Without consciousness, machines will never be able to compete with human brains.

    That is: Deep Blue did not defeat Kasparov. A team of computer programmers, through the medium of Deep Blue, defeated Kasparov. Essentially they made it possible to stretch chess turns across several hours and several people. The only amazing thing is how far they had to stretch it before the non-grandmasters could, sometimes, defeat a grandmaster.

    It also shows they're not in the habit of questioning their assumptions, which means they are trapped in their current paradigm unless serendipity frees them. (See: floating soap, chocolate chips.) While what they're able to catallactically wring from their paradigm is impressive, it is ultimately self-limited.

    From another angle: I've been reading researchers who think they understand human cognition and don't. The thing about chimps being better at game theory than humans. It is to laugh.

    Most likely, it is unfeasible for a human brain to model a human brain, for the obvious overhead reason. This equally means it is impossible for a human to make a machine model a human. The idea that machines can enhance intelligence per se, and not merely intellectual productivity, is probably just false. Which would mean the singularity is not a possible outcome before advanced genetic engineering, if even then.

    The only thing that even threatens to outstrip human ingenuity is the same thing that created human ingenuity: evolution. But evolutionary algorithms – I call it chaos tech – are not used to design much of anything. (I suspect due to liability: by definition they wouldn’t be fully predictable and would be hard to service.)
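    For concreteness, a minimal evolutionary ("chaos tech") loop can be sketched like this; the bit-string target, population size, and mutation rate are all invented toy choices for illustration, not anything from the thread:

```python
import random

TARGET = [1] * 20  # arbitrary toy goal: evolve an all-ones bit-string

def fitness(genome):
    # Count bits matching the target; higher is better.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the fitter half unchanged
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

    Note where the human judgment sits: the loop needs an explicit fitness function, which is exactly where "no objective measure" bites for open-ended traits like the ability to learn.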

    Can you imagine the status wizardry that’s necessary? “Eh, screw deep networks. We’re going with the ‘fucked-if-I-know’ theory. Oh and by the way, there’s no objective measure for ‘able to learn’ so it’s all selected by human judgment!” It’ll never happen. The paradigm is explicitly opposed to real progress.

    Alrenous Reply:

    Oh, and even with chaos tech, since consciousness can’t be implemented in pure software, it’s horribly possible they would overlook the necessary components in the genome.

    Posted on July 8th, 2014 at 7:02 pm
  • Uncanny Valley | Reaction Times Says:

    […] Source: Outside In […]

    Posted on July 8th, 2014 at 8:13 pm
  • Kgaard Says:

    Yeah, I also was not thrilled with the Mitrailleuse article. I work in finance and, while it's true that Wall Street is very heavy into machines that can analyze data and words extremely fast, I don't know that this is going to lead to anything like a conscious machine as we normally think of it. What the Japanese are doing with androids seems a lot more interesting. They are working on more human-scale problems … i.e. getting to the crux of what makes up real human interactions.

    Wilhelm von Überlieferung Reply:

    He probably read the novel Accelerando and found it convincing enough to be realistic. It's certainly not outside the realm of possibility.

    Posted on July 8th, 2014 at 9:01 pm
  • Antisthenes Says:

    Androids are not conscious because they don’t have homeostatic systems. Derek Denton’s work on this subject is essential reading. Also see Damasio.

    Wilhelm von Überlieferung Reply:

    And why can’t androids have homeostatic emotions? This seems to be somewhat outmoded thinking.

    From a more abstract perspective, homeostasis is a universal property of thermodynamic/information systems. It's something that is unambiguously defined in mathematical terms; you can measure and identify it in the real world.

    Since artificial minds are information systems, they can be programmed or imbued with homeostatic mechanisms, similar to how evolution has programmed us to have such primordial emotions or driving motivations.

    Consciousness is a continuum. Things can be more or less conscious.

    The reason present-day attempts at building anthropomorphic robots result in failed, lifeless automatons without much consciousness isn't because they lack homeostatic behaviors (oh, they have them, quite rigid ones in fact). It's that their simplistic minds lack the complexity of our own.

    If you were to compare the complexity of the most advanced software systems available today to our own brains, the distance is many orders of magnitude. It's astronomical. In fact, it's near impossible to ever approach the complexity of our minds with conventional register machines based on the von Neumann architecture without consuming vast amounts of computational resources.

    The Human Brain Project, and the related effort in the US, are attempting to simulate a human brain using such conventional computers. But what the academics who run those projects aren't telling the masses is that in order to scale the simulation upwards, they're using simpler models of how the mind performs computation at each progressively higher level of scale; they're compressing the model and losing information in the process.

    Now, that said, there are alternative computer architectures on the horizon that will be able to do it. And I'm not talking about quantum computers. There are more powerful architectures than that. And they aren't far off either.
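    To make the claim about programmable homeostasis concrete, here is a toy homeostatic drive as a control loop; the set-point, decay, and gain are invented numbers, not a model of any real system:

```python
# Toy homeostatic drive: an internal variable drifts away from a
# set-point each step, and the agent acts to pull it back, the way
# hunger or thermoregulation works. All numbers are illustrative.

class HomeostaticDrive:
    def __init__(self, setpoint, decay, gain):
        self.setpoint = setpoint  # target internal level
        self.level = setpoint     # current internal level
        self.decay = decay        # drift away from the set-point per step
        self.gain = gain          # strength of the corrective response

    def error(self):
        return self.setpoint - self.level

    def step(self):
        self.level -= self.decay           # the world pushes the level down
        action = self.gain * self.error()  # drive grows with the deviation
        self.level += action               # corrective behaviour
        return action

drive = HomeostaticDrive(setpoint=1.0, decay=0.1, gain=0.5)
actions = [drive.step() for _ in range(50)]
```

    The corrective action settles at exactly the rate of drift, which is the minimal sense in which such a system "wants" its set-point.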

    Posted on July 9th, 2014 at 12:21 am
  • Bill Says:

    I was in an art history seminar about early modern sculpture, and we read an article about an Italian church, probably from the 1700s although I can’t remember precisely. At this church, when a parishioner died they would make a wax death mask, and then recreate the natural colors of the face with paint, put a wig on its head, and some clothing on its shoulders. They had these things hung in the church from floor to ceiling. That church must have been worse than any horror film ever made, thinking about it makes the hair on my arms stand on end. Unfortunately the masks are no longer with us, I think they melted in a fire. If I have time tomorrow I’ll look for the article.

    Uncanniness (something dead appearing alive) is the second-best Freudian idea, second only to jokes being the expression of hostility. Having read most of Freud's pop books, those are the two ideas I actually experience.

    Freud used the word unheimlich, and it's translated as uncanny. Unheimlich, I've been told, means "away from home", but I'm sure some of you Deutsch sprechers can correct me. Creepy German GNON voice: Auf Wiedersehen, meine kleinen Automaten.

    Posted on July 9th, 2014 at 6:00 am
  • Wilhelm von Überlieferung Says:

    @Aeroguy
    Complexity is a measurement or aspect of some amount of information. And information is non-material. It is substrate independent.

    admin Reply:

    Independent of any specific substrate, but always in some way implemented.

    Posted on July 9th, 2014 at 1:28 pm
  • Howard Vaan Says:

    There is a link between the Mitrailleuse article and the Uncanny Valley.

    I posit that something looking sort-of-conscious, that emerges from banking (or as likely other commercial) technology, will have an uncanny quality.

    Posted on July 10th, 2014 at 5:14 pm
  • Rasputin Says:

    The Uncanny Valley extends all the way to deepest, darkest Peru…

    http://www.dailymail.co.uk/news/article-2703103/Its-Paddington-Scare-Creepy-reworkings-childhood-favourite-wake-new-films-CGI-version.html

    And the Tumblr…

    http://creepypaddington.tumblr.com

    Posted on July 23rd, 2014 at 11:47 pm

Leave a comment