<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Uncanny Valley</title>
	<atom:link href="http://www.xenosystems.net/uncanny-valley/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.xenosystems.net/uncanny-valley/</link>
	<description>Involvements with reality</description>
	<lastBuildDate>Thu, 05 Feb 2015 06:56:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1</generator>
	<item>
		<title>By: Rasputin</title>
		<link>http://www.xenosystems.net/uncanny-valley/#comment-83446</link>
		<dc:creator><![CDATA[Rasputin]]></dc:creator>
		<pubDate>Wed, 23 Jul 2014 23:47:51 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3006#comment-83446</guid>
		<description><![CDATA[The Uncanny Valley extends all the way to deepest, darkest Peru...

http://www.dailymail.co.uk/news/article-2703103/Its-Paddington-Scare-Creepy-reworkings-childhood-favourite-wake-new-films-CGI-version.html

And the Tumblr...

http://creepypaddington.tumblr.com]]></description>
		<content:encoded><![CDATA[<p>The Uncanny Valley extends all the way to deepest, darkest Peru&#8230;</p>
<p><a href="http://www.dailymail.co.uk/news/article-2703103/Its-Paddington-Scare-Creepy-reworkings-childhood-favourite-wake-new-films-CGI-version.html" rel="nofollow">http://www.dailymail.co.uk/news/article-2703103/Its-Paddington-Scare-Creepy-reworkings-childhood-favourite-wake-new-films-CGI-version.html</a></p>
<p>And the Tumblr&#8230;</p>
<p><a href="http://creepypaddington.tumblr.com" rel="nofollow">http://creepypaddington.tumblr.com</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Howard Vaan</title>
		<link>http://www.xenosystems.net/uncanny-valley/#comment-78102</link>
		<dc:creator><![CDATA[Howard Vaan]]></dc:creator>
		<pubDate>Thu, 10 Jul 2014 17:14:29 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3006#comment-78102</guid>
		<description><![CDATA[There is a link between the Mitrailleuse article and the Uncanny Valley.

I posit that something looking sort-of-conscious that emerges from banking (or, as likely, other commercial) technology will have an uncanny quality.]]></description>
		<content:encoded><![CDATA[<p>There is a link between the Mitrailleuse article and the Uncanny Valley.</p>
<p>I posit that something looking sort-of-conscious that emerges from banking (or, as likely, other commercial) technology will have an uncanny quality.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Alrenous</title>
		<link>http://www.xenosystems.net/uncanny-valley/#comment-77680</link>
		<dc:creator><![CDATA[Alrenous]]></dc:creator>
		<pubDate>Wed, 09 Jul 2014 17:04:40 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3006#comment-77680</guid>
		<description><![CDATA[Oh, and even with chaos tech, since consciousness can&#039;t be implemented in pure software, it&#039;s horribly possible they would overlook the necessary components in the genome.]]></description>
		<content:encoded><![CDATA[<p>Oh, and even with chaos tech, since consciousness can&#8217;t be implemented in pure software, it&#8217;s horribly possible they would overlook the necessary components in the genome.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Alrenous</title>
		<link>http://www.xenosystems.net/uncanny-valley/#comment-77679</link>
		<dc:creator><![CDATA[Alrenous]]></dc:creator>
		<pubDate>Wed, 09 Jul 2014 17:00:34 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3006#comment-77679</guid>
		<description><![CDATA[@admin

The theory matters because it shows they won&#039;t succeed in making artificial consciousness. They won&#039;t attempt it directly because they don&#039;t know how and nothing they&#039;re trying to do will attempt it accidentally, and therefore the theory predicts the continual failure of their expectations. Without consciousness, machines will never be able to compete with human brains. 

That is: Deep Blue did not defeat Kasparov. A team of computer programmers, through the medium of Deep Blue, defeated Kasparov. Essentially they made it possible to stretch chess turns across several hours and several people. The only amazing thing is how far they had to stretch it before the non-grandmasters could, sometimes, defeat a grandmaster. 

It also shows they&#039;re not in the habit of questioning their assumptions, which means they are trapped in their current paradigm unless serendipity frees them. (See: floating soap, chocolate chips.) While what they&#039;re able to catallactically wring from their paradigm is impressive, it is ultimately self-limited. 

--

From another angle: been reading researchers who think they understand human cognition and don&#039;t. The thing about chimps being better at game theory than humans. It is to laugh. 

Most likely, it is infeasible for a human brain to model a human brain, for the obvious overhead reason. This equally means it is impossible for a human to make a machine model a human. The idea that machines can enhance intelligence per se and not intellectual productivity is probably just false. Which would mean the singularity is not a possible outcome before advanced genetic engineering. If even then. 

The only thing that even threatens to outstrip human ingenuity is the same thing that created human ingenuity: evolution. But evolutionary algorithms - I call it &lt;a href=&quot;http://www.damninteresting.com/on-the-origin-of-circuits/&quot; rel=&quot;nofollow&quot;&gt;chaos tech&lt;/a&gt; - are not used to design much of anything. (I suspect due to liability: by definition they wouldn&#039;t be fully predictable and would be hard to service.) 

Can you imagine the status wizardry that&#039;s necessary? &quot;Eh, screw deep networks. We&#039;re going with the &#039;fucked-if-I-know&#039; theory. Oh and by the way, there&#039;s no objective measure for &#039;able to learn&#039; so it&#039;s all selected by human judgment!&quot; It&#039;ll never happen. The paradigm is &lt;i&gt;explicitly&lt;/i&gt; opposed to real progress.]]></description>
		<content:encoded><![CDATA[<p>@admin</p>
<p>The theory matters because it shows they won&#8217;t succeed in making artificial consciousness. They won&#8217;t attempt it directly because they don&#8217;t know how and nothing they&#8217;re trying to do will attempt it accidentally, and therefore the theory predicts the continual failure of their expectations. Without consciousness, machines will never be able to compete with human brains. </p>
<p>That is: Deep Blue did not defeat Kasparov. A team of computer programmers, through the medium of Deep Blue, defeated Kasparov. Essentially they made it possible to stretch chess turns across several hours and several people. The only amazing thing is how far they had to stretch it before the non-grandmasters could, sometimes, defeat a grandmaster. </p>
<p>It also shows they&#8217;re not in the habit of questioning their assumptions, which means they are trapped in their current paradigm unless serendipity frees them. (See: floating soap, chocolate chips.) While what they&#8217;re able to catallactically wring from their paradigm is impressive, it is ultimately self-limited. </p>
<p>&#8212;</p>
<p>From another angle: been reading researchers who think they understand human cognition and don&#8217;t. The thing about chimps being better at game theory than humans. It is to laugh. </p>
<p>Most likely, it is infeasible for a human brain to model a human brain, for the obvious overhead reason. This equally means it is impossible for a human to make a machine model a human. The idea that machines can enhance intelligence per se and not intellectual productivity is probably just false. Which would mean the singularity is not a possible outcome before advanced genetic engineering. If even then. </p>
<p>The only thing that even threatens to outstrip human ingenuity is the same thing that created human ingenuity: evolution. But evolutionary algorithms &#8211; I call it <a href="http://www.damninteresting.com/on-the-origin-of-circuits/" rel="nofollow">chaos tech</a> &#8211; are not used to design much of anything. (I suspect due to liability: by definition they wouldn&#8217;t be fully predictable and would be hard to service.) </p>
<p>Can you imagine the status wizardry that&#8217;s necessary? &#8220;Eh, screw deep networks. We&#8217;re going with the &#8216;fucked-if-I-know&#8217; theory. Oh and by the way, there&#8217;s no objective measure for &#8216;able to learn&#8217; so it&#8217;s all selected by human judgment!&#8221; It&#8217;ll never happen. The paradigm is <i>explicitly</i> opposed to real progress.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: admin</title>
		<link>http://www.xenosystems.net/uncanny-valley/#comment-77661</link>
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 09 Jul 2014 16:05:45 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3006#comment-77661</guid>
		<description><![CDATA[Specific substrate independent, but always in some way implemented.]]></description>
		<content:encoded><![CDATA[<p>Specific substrate independent, but always in some way implemented.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: admin</title>
		<link>http://www.xenosystems.net/uncanny-valley/#comment-77660</link>
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 09 Jul 2014 16:03:47 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3006#comment-77660</guid>
		<description><![CDATA[&quot;I take exception to that.&quot; -- My point is techno-materialist, which is to say: what people are doing (catallactically) far exceeds what they think they&#039;re doing, have arguments for, express through academic disciplines, or make a matter of articulate belief. Therefore, I couldn&#039;t care less about a philosophical argument saying &quot;Yes, really, there can be artificial intelligence&quot; -- at least, not when compared to the techno-commercial programs that are in fact implementing artificial intelligence (or, I suppose, not). If arguments against the possibility of artificial intelligence began to drain resources from the tech-industry base that is making things happen (or not), then it would matter. Insofar as it has no discernible impact whatsoever, it&#039;s an amusement at most. It&#039;s the dynamism of capitalism that decides on the course and speed of AI, not scholastic conceptual debate about its possibility.]]></description>
		<content:encoded><![CDATA[<p>&#8220;I take exception to that.&#8221; &#8212; My point is techno-materialist, which is to say: what people are doing (catallactically) far exceeds what they think they&#8217;re doing, have arguments for, express through academic disciplines, or make a matter of articulate belief. Therefore, I couldn&#8217;t care less about a philosophical argument saying &#8220;Yes, really, there can be artificial intelligence&#8221; &#8212; at least, not when compared to the techno-commercial programs that are in fact implementing artificial intelligence (or, I suppose, not). If arguments against the possibility of artificial intelligence began to drain resources from the tech-industry base that is making things happen (or not), then it would matter. Insofar as it has no discernible impact whatsoever, it&#8217;s an amusement at most. It&#8217;s the dynamism of capitalism that decides on the course and speed of AI, not scholastic conceptual debate about its possibility.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Alrenous</title>
		<link>http://www.xenosystems.net/uncanny-valley/#comment-77616</link>
		<dc:creator><![CDATA[Alrenous]]></dc:creator>
		<pubDate>Wed, 09 Jul 2014 14:34:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3006#comment-77616</guid>
		<description><![CDATA[Yeah there&#039;s thinking of &#039;thing-called-blue-box&#039; and &#039;thing-&lt;i&gt;I&lt;/i&gt;-call-blue-box&#039; which may be different things. Call them thing blue and thing azul for now. Here the mistake isn&#039;t in the first-order mental entities, it&#039;s in a belief about the relationship between those entities - namely that it&#039;s the same in other people&#039;s minds. Keeping count, we have four first-order mental entities, but one of them is partially about an external relationship. You can mistakenly think others call it azul, but can&#039;t be mistaken in thinking you think others call it azul.
I wonder if there are already names for those two properties.]]></description>
		<content:encoded><![CDATA[<p>Yeah there&#8217;s thinking of &#8216;thing-called-blue-box&#8217; and &#8216;thing-<i>I</i>-call-blue-box&#8217; which may be different things. Call them thing blue and thing azul for now. Here the mistake isn&#8217;t in the first-order mental entities, it&#8217;s in a belief about the relationship between those entities &#8211; namely that it&#8217;s the same in other people&#8217;s minds. Keeping count, we have four first-order mental entities, but one of them is partially about an external relationship. You can mistakenly think others call it azul, but can&#8217;t be mistaken in thinking you think others call it azul.<br />
I wonder if there are already names for those two properties.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Wilhelm von Überlieferung</title>
		<link>http://www.xenosystems.net/uncanny-valley/#comment-77585</link>
		<dc:creator><![CDATA[Wilhelm von Überlieferung]]></dc:creator>
		<pubDate>Wed, 09 Jul 2014 13:33:42 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3006#comment-77585</guid>
		<description><![CDATA[He probably read the novel &lt;a href=&quot;http://www.antipope.org/charlie/blog-static/fiction/accelerando/accelerando-intro.html&quot; rel=&quot;nofollow&quot;&gt;Accelerando&lt;/a&gt; and found it convincing enough to be realistic. It&#039;s certainly not outside the realm of possibility.]]></description>
		<content:encoded><![CDATA[<p>He probably read the novel <a href="http://www.antipope.org/charlie/blog-static/fiction/accelerando/accelerando-intro.html" rel="nofollow">Accelerando</a> and found it convincing enough to be realistic. It&#8217;s certainly not outside the realm of possibility.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Wilhelm von Überlieferung</title>
		<link>http://www.xenosystems.net/uncanny-valley/#comment-77584</link>
		<dc:creator><![CDATA[Wilhelm von Überlieferung]]></dc:creator>
		<pubDate>Wed, 09 Jul 2014 13:28:41 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3006#comment-77584</guid>
		<description><![CDATA[&lt;strong&gt;@Aeroguy&lt;/strong&gt;
Complexity is a measurement or aspect of some amount of information. And information &lt;b&gt;is&lt;/b&gt; non-material. It is substrate independent.]]></description>
		<content:encoded><![CDATA[<p><strong>@Aeroguy</strong><br />
Complexity is a measurement or aspect of some amount of information. And information <b>is</b> non-material. It is substrate independent.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Wilhelm von Überlieferung</title>
		<link>http://www.xenosystems.net/uncanny-valley/#comment-77578</link>
		<dc:creator><![CDATA[Wilhelm von Überlieferung]]></dc:creator>
		<pubDate>Wed, 09 Jul 2014 13:22:54 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3006#comment-77578</guid>
		<description><![CDATA[And why can&#039;t androids have homeostatic emotions? This seems to be somewhat outmoded thinking.

From a more abstract perspective, homeostasis is a universal aspect or property of thermodynamic/information systems. It&#039;s something that is unambiguously defined in mathematical terms--you can measure and identify it in the real world.

Since artificial minds are information systems, they can be programmed or imbued with homeostatic mechanisms. Similar to how evolution has &lt;i&gt;programmed&lt;/i&gt; us to have such primordial emotions or driving motivations.

Consciousness is a continuum. Things can be more or less conscious.

The reason present-day attempts at building anthropomorphic robots result in failed, lifeless automatons without much consciousness isn&#039;t because they lack homeostatic behaviors--oh, they have them, quite rigid ones in fact. It&#039;s that their simplistic minds lack the complexity of our own.

If you were to compare the complexity of the most advanced software systems available today to our own brains, the distance is many orders of magnitude. It&#039;s astronomical. In fact, it&#039;s near impossible to ever approach the complexity of our minds with conventional register machines based on the von Neumann architecture, without consuming vast amounts of computational resources.

The Human Brain Project, and the related effort in the US, are attempting to simulate a human brain using such conventional computers. But what the academics that run those projects aren&#039;t telling the masses is that in order to scale the simulation upwards, they&#039;re using simpler models of how the mind performs computation at each progressively higher level of scale--they&#039;re compressing the model and losing information in the process.

Now that said, there are alternative computer architectures on the horizon that will be able to do it. And I&#039;m not talking about quantum computers. There are more powerful architectures than that. And they aren&#039;t far off either.]]></description>
		<content:encoded><![CDATA[<p>And why can&#8217;t androids have homeostatic emotions? This seems to be somewhat outmoded thinking.</p>
<p>From a more abstract perspective, homeostasis is a universal aspect or property of thermodynamic/information systems. It&#8217;s something that is unambiguously defined in mathematical terms&#8211;you can measure and identify it in the real world.</p>
<p>Since artificial minds are information systems, they can be programmed or imbued with homeostatic mechanisms. Similar to how evolution has <i>programmed</i> us to have such primordial emotions or driving motivations.</p>
<p>Consciousness is a continuum. Things can be more or less conscious.</p>
<p>The reason present-day attempts at building anthropomorphic robots result in failed, lifeless automatons without much consciousness isn&#8217;t because they lack homeostatic behaviors&#8211;oh, they have them, quite rigid ones in fact. It&#8217;s that their simplistic minds lack the complexity of our own.</p>
<p>If you were to compare the complexity of the most advanced software systems available today to our own brains, the distance is many orders of magnitude. It&#8217;s astronomical. In fact, it&#8217;s near impossible to ever approach the complexity of our minds with conventional register machines based on the von Neumann architecture, without consuming vast amounts of computational resources.</p>
<p>The Human Brain Project, and the related effort in the US, are attempting to simulate a human brain using such conventional computers. But what the academics that run those projects aren&#8217;t telling the masses is that in order to scale the simulation upwards, they&#8217;re using simpler models of how the mind performs computation at each progressively higher level of scale&#8211;they&#8217;re compressing the model and losing information in the process.</p>
<p>Now that said, there are alternative computer architectures on the horizon that will be able to do it. And I&#8217;m not talking about quantum computers. There are more powerful architectures than that. And they aren&#8217;t far off either.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
