<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Scrap note #5</title>
	<atom:link href="http://www.xenosystems.net/scrap-note-5/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.xenosystems.net/scrap-note-5/</link>
	<description>Involvements with reality</description>
	<lastBuildDate>Thu, 05 Feb 2015 06:56:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1</generator>
	<item>
		<title>By: Different T</title>
		<link>http://www.xenosystems.net/scrap-note-5/#comment-33874</link>
		<dc:creator><![CDATA[Different T]]></dc:creator>
		<pubDate>Sun, 02 Feb 2014 14:15:29 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2002#comment-33874</guid>
		<description><![CDATA[That was the quoted definition from the original comment.

Again, does that &quot;hypothetical moment&quot; regard AI utilizing ever more predictive and explanatory modeling?

&lt;i&gt;So it actually has fuck-all to do with metaphysical determinism.&lt;/i&gt;

Incorrect.

Do you bow to Robo-God? Has it been determined?]]></description>
		<content:encoded><![CDATA[<p>That was the quoted definition from the original comment.</p>
<p>Again, does that &#8220;hypothetical moment&#8221; regard AI utilizing ever more predictive and explanatory modeling?</p>
<p><i>So it actually has fuck-all to do with metaphysical determinism.</i></p>
<p>Incorrect.</p>
<p>Do you bow to Robo-God? Has it been determined?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Antisthenean</title>
		<link>http://www.xenosystems.net/scrap-note-5/#comment-33873</link>
		<dc:creator><![CDATA[Antisthenean]]></dc:creator>
		<pubDate>Sun, 02 Feb 2014 13:33:17 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2002#comment-33873</guid>
		<description><![CDATA[Since you&#039;re apparently incapable of doing your own research, here&#039;s La Wik on the subject:

&quot;The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature.&quot;

So it actually has fuck-all to do with metaphysical determinism.]]></description>
		<content:encoded><![CDATA[<p>Since you&#8217;re apparently incapable of doing your own research, here&#8217;s La Wik on the subject:</p>
<p>&#8220;The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature.&#8221;</p>
<p>So it actually has fuck-all to do with metaphysical determinism.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Different T</title>
		<link>http://www.xenosystems.net/scrap-note-5/#comment-33872</link>
		<dc:creator><![CDATA[Different T]]></dc:creator>
		<pubDate>Sun, 02 Feb 2014 13:29:23 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2002#comment-33872</guid>
		<description><![CDATA[Is the &quot;singularity&quot; the quest for ever more predictive and explanatory modeling? If not, what is it?

If it is, and you&#039;re not a determinist, how does it make &quot;perfect sense&quot; to you?]]></description>
		<content:encoded><![CDATA[<p>Is the &#8220;singularity&#8221; the quest for ever more predictive and explanatory modeling? If not, what is it?</p>
<p>If it is, and you&#8217;re not a determinist, how does it make &#8220;perfect sense&#8221; to you?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Antisthenean</title>
		<link>http://www.xenosystems.net/scrap-note-5/#comment-33860</link>
		<dc:creator><![CDATA[Antisthenean]]></dc:creator>
		<pubDate>Sun, 02 Feb 2014 02:43:57 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2002#comment-33860</guid>
		<description><![CDATA[I&#039;m not a determinist, and the singularity makes perfect sense to me, so I don&#039;t know what you&#039;re rambling about.]]></description>
		<content:encoded><![CDATA[<p>I&#8217;m not a determinist, and the singularity makes perfect sense to me, so I don&#8217;t know what you&#8217;re rambling about.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Different T</title>
		<link>http://www.xenosystems.net/scrap-note-5/#comment-33849</link>
		<dc:creator><![CDATA[Different T]]></dc:creator>
		<pubDate>Sat, 01 Feb 2014 22:08:44 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2002#comment-33849</guid>
		<description><![CDATA[What is the “singularity?”

Is it the incoherent fantasy of a determinist? If not, what is it? The “hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence?”

Will the first realization of such a “singularity” be that its existence has been determined? Will the second realization be that the first realization has been determined? Will the third realization of such a “singularity” be that realizing the first realization has been determined has been determined?…………………

What about the first person to look through the singularity’s “window”? Will he discover that his action (looking into the “window”) has been determined? Will his next discovery be that his first discovery has been determined?………………….Will the person discover that the “window” is a “hall of mirrors?” Will he discover that this new discovery has been determined?

Is the “singularity” a mental disease that is highly infectious to the high-IQ population? Does it cull and/or make “useful” the high-IQ population?]]></description>
		<content:encoded><![CDATA[<p>What is the “singularity?”</p>
<p>Is it the incoherent fantasy of a determinist? If not, what is it? The “hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence?”</p>
<p>Will the first realization of such a “singularity” be that its existence has been determined? Will the second realization be that the first realization has been determined? Will the third realization of such a “singularity” be that realizing the first realization has been determined has been determined?…………………</p>
<p>What about the first person to look through the singularity’s “window”? Will he discover that his action (looking into the “window”) has been determined? Will his next discovery be that his first discovery has been determined?………………….Will the person discover that the “window” is a “hall of mirrors?” Will he discover that this new discovery has been determined?</p>
<p>Is the “singularity” a mental disease that is highly infectious to the high-IQ population? Does it cull and/or make “useful” the high-IQ population?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: spandrell</title>
		<link>http://www.xenosystems.net/scrap-note-5/#comment-33760</link>
		<dc:creator><![CDATA[spandrell]]></dc:creator>
		<pubDate>Fri, 31 Jan 2014 04:07:37 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2002#comment-33760</guid>
		<description><![CDATA[The day you stop asking your kids to translate and consistently rely instead on Google Translate, the first paragraph will be true. Alas...]]></description>
		<content:encoded><![CDATA[<p>The day you stop asking your kids to translate and consistently rely instead on Google Translate, the first paragraph will be true. Alas&#8230;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Lesser Bull</title>
		<link>http://www.xenosystems.net/scrap-note-5/#comment-33742</link>
		<dc:creator><![CDATA[Lesser Bull]]></dc:creator>
		<pubDate>Thu, 30 Jan 2014 20:08:19 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2002#comment-33742</guid>
		<description><![CDATA[Hence the word &#039;yet&#039;.]]></description>
		<content:encoded><![CDATA[<p>Hence the word &#8216;yet&#8217;.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Contemplationist</title>
		<link>http://www.xenosystems.net/scrap-note-5/#comment-33737</link>
		<dc:creator><![CDATA[Contemplationist]]></dc:creator>
		<pubDate>Thu, 30 Jan 2014 19:07:18 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2002#comment-33737</guid>
		<description><![CDATA[Indeed, the first indicates a contradiction to the second.]]></description>
		<content:encoded><![CDATA[<p>Indeed, the first indicates a contradiction to the second.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: nyan_sandwich</title>
		<link>http://www.xenosystems.net/scrap-note-5/#comment-33721</link>
		<dc:creator><![CDATA[nyan_sandwich]]></dc:creator>
		<pubDate>Thu, 30 Jan 2014 15:48:44 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2002#comment-33721</guid>
		<description><![CDATA[&gt;An AI could, by comparison, run simulations of competing cognitive models, testing for fitness against arbitrarily specified values.

I have no solid technical argument, but my intuition is thus:

As an engineer, the difference between brute force optimization and calculated leaps is *huge*. So first of all, I would expect the takeoff to be a few orders of magnitude faster once the thing understands intelligence.

Second, intelligence understands stuff, that&#039;s what it does, so the singularity is probably going to be reflective whether it has to or not (though this does not support the reflectivity -&gt; singularity link I posited).

Third, &quot;understanding&quot; means only that you are able to predict the subtle details that defy earlier observations. I think that there might be enough subtleties in intelligence and software design that brute forcing it will only get partial test coverage, and our hero will end up shooting itself in the foot.

That said, I think a takeoff could happen and get way out of our reach even without really wise intelligence enhancement:

I also don&#039;t think it would actually take much intelligence enhancement to eclipse us very quickly. Even human-level intelligence is pretty good, and given a) infinite conscientiousness, b) scalable computing power, and c) *no coordination problems*, something could take over the world very quickly.

Consider that the story of human history is a story of huge amounts of computational power loaded with very intelligent software going to waste because it&#039;s not turned whole-hog to the project of civilization. Instead we have politics and the humanities and shit. An AI wouldn&#039;t have to deal with that.

&gt;face exponential difficulty

The evidence from evolution suggests that once there was proper selection pressure to develop intelligence, it happened very quickly with no sign of diminishing returns.]]></description>
		<content:encoded><![CDATA[<p>&gt;An AI could, by comparison, run simulations of competing cognitive models, testing for fitness against arbitrarily specified values.</p>
<p>I have no solid technical argument, but my intuition is thus:</p>
<p>As an engineer, the difference between brute force optimization and calculated leaps is *huge*. So first of all, I would expect the takeoff to be a few orders of magnitude faster once the thing understands intelligence.</p>
<p>Second, intelligence understands stuff, that&#8217;s what it does, so the singularity is probably going to be reflective whether it has to or not (though this does not support the reflectivity -&gt; singularity link I posited).</p>
<p>Third, &#8220;understanding&#8221; means only that you are able to predict the subtle details that defy earlier observations. I think that there might be enough subtleties in intelligence and software design that brute forcing it will only get partial test coverage, and our hero will end up shooting itself in the foot.</p>
<p>That said, I think a takeoff could happen and get way out of our reach even without really wise intelligence enhancement:</p>
<p>I also don&#8217;t think it would actually take much intelligence enhancement to eclipse us very quickly. Even human-level intelligence is pretty good, and given a) infinite conscientiousness, b) scalable computing power, and c) *no coordination problems*, something could take over the world very quickly.</p>
<p>Consider that the story of human history is a story of huge amounts of computational power loaded with very intelligent software going to waste because it&#8217;s not turned whole-hog to the project of civilization. Instead we have politics and the humanities and shit. An AI wouldn&#8217;t have to deal with that.</p>
<p>&gt;face exponential difficulty</p>
<p>The evidence from evolution suggests that once there was proper selection pressure to develop intelligence, it happened very quickly with no sign of diminishing returns.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: admin</title>
		<link>http://www.xenosystems.net/scrap-note-5/#comment-33717</link>
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 30 Jan 2014 14:30:21 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2002#comment-33717</guid>
		<description><![CDATA[I&#039;m more persuaded by your first paragraph than your second.]]></description>
		<content:encoded><![CDATA[<p>I&#8217;m more persuaded by your first paragraph than your second.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
