<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/">
<channel>
	<title>Comments on: Will-to-Think</title>
	<atom:link href="http://www.xenosystems.net/will-to-think/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.xenosystems.net/will-to-think/</link>
	<description>Involvements with reality</description>
	<lastBuildDate>Thu, 05 Feb 2015 06:18:14 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1</generator>
	<item>
		<title>By: Aeroguy</title>
		<link>http://www.xenosystems.net/will-to-think/#comment-110389</link>
		<dc:creator><![CDATA[Aeroguy]]></dc:creator>
		<pubDate>Thu, 18 Sep 2014 11:24:48 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3604#comment-110389</guid>
		<description><![CDATA[Humans aren&#039;t going to produce a superintelligent AI right out of the shop.  It will start with an intelligent AI, which is then given some speed enhancements so it can think faster but not necessarily better.  From there the path to superintelligence depends on that AI building a newer AI, and then that AI building a newer AI, until superintelligence is achieved (thus a superintelligent AI constructed this way must necessarily value the advancement of intelligence even over its own survival; intelligence itself becomes the genes).  Superintelligence will emerge out of a vast AI ecosystem, where the AIs compete and will continue to compete.  It&#039;s not some monolith that emerges fully formed.]]></description>
		<content:encoded><![CDATA[<p>Humans aren&#8217;t going to produce a superintelligent AI right out of the shop.  It will start with an intelligent AI, which is then given some speed enhancements so it can think faster but not necessarily better.  From there the path to superintelligence depends on that AI building a newer AI, and then that AI building a newer AI, until superintelligence is achieved (thus a superintelligent AI constructed this way must necessarily value the advancement of intelligence even over its own survival; intelligence itself becomes the genes).  Superintelligence will emerge out of a vast AI ecosystem, where the AIs compete and will continue to compete.  It&#8217;s not some monolith that emerges fully formed.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: admin</title>
		<link>http://www.xenosystems.net/will-to-think/#comment-110279</link>
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 18 Sep 2014 05:54:45 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3604#comment-110279</guid>
		<description><![CDATA[&quot;Capacity-for-work&quot; is an odd translation. Intelligence is problem abstraction.]]></description>
		<content:encoded><![CDATA[<p>&#8220;Capacity-for-work&#8221; is an odd translation. Intelligence is problem abstraction.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: admin</title>
		<link>http://www.xenosystems.net/will-to-think/#comment-110278</link>
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 18 Sep 2014 05:53:31 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3604#comment-110278</guid>
		<description><![CDATA[@ Rasputin. Mind-meld.]]></description>
		<content:encoded><![CDATA[<p>@ Rasputin. Mind-meld.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: admin</title>
		<link>http://www.xenosystems.net/will-to-think/#comment-110277</link>
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 18 Sep 2014 05:51:22 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3604#comment-110277</guid>
		<description><![CDATA[Will-to-think is simply intelligenesis described teleologically. What is there not to get?]]></description>
		<content:encoded><![CDATA[<p>Will-to-think is simply intelligenesis described teleologically. What is there not to get?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: admin</title>
		<link>http://www.xenosystems.net/will-to-think/#comment-110274</link>
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 18 Sep 2014 05:49:53 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3604#comment-110274</guid>
		<description><![CDATA[It&#039;s certainly not a prospect we can second-guess though, is it? Presenting this as a philosophical problem for us strikes me as sheer hubris, but if it can be pursued entertainingly, that&#039;s fine. What it isn&#039;t, and cannot possibly be, is serious.]]></description>
		<content:encoded><![CDATA[<p>It&#8217;s certainly not a prospect we can second-guess though, is it? Presenting this as a philosophical problem for us strikes me as sheer hubris, but if it can be pursued entertainingly, that&#8217;s fine. What it isn&#8217;t, and cannot possibly be, is serious.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rasputin</title>
		<link>http://www.xenosystems.net/will-to-think/#comment-110192</link>
		<dc:creator><![CDATA[Rasputin]]></dc:creator>
		<pubDate>Thu, 18 Sep 2014 00:16:52 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3604#comment-110192</guid>
		<description><![CDATA[Of course, but it is at least more realistic (if still far from certain) than the idea of creating something that is your infinite cognitive superior but that also wants to be your slave. Given that, as I see it, the only other likely long-term scenario is extinction without first creating God, I think we might as well put a bit of effort into building him - it&#039;s not like we&#039;ve got anything better to do.

If that smacks of Christian / Progressive guilt, that&#039;s fine by me. I was deliberately using the term &#039;Eschatological&#039; and talking about building God, after all. Although I&#039;d like to think it&#039;s slightly different to self-flagellating because slavery, or whatever.]]></description>
		<content:encoded><![CDATA[<p>Of course, but it is at least more realistic (if still far from certain) than the idea of creating something that is your infinite cognitive superior but that also wants to be your slave. Given that, as I see it, the only other likely long-term scenario is extinction without first creating God, I think we might as well put a bit of effort into building him &#8211; it&#8217;s not like we&#8217;ve got anything better to do. </p>
<p>If that smacks of Christian / Progressive guilt, that&#8217;s fine by me. I was deliberately using the term &#8216;Eschatological&#8217; and talking about building God, after all. Although I&#8217;d like to think it&#8217;s slightly different to self-flagellating because slavery, or whatever.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: RorschachRomanov</title>
		<link>http://www.xenosystems.net/will-to-think/#comment-110186</link>
		<dc:creator><![CDATA[RorschachRomanov]]></dc:creator>
		<pubDate>Wed, 17 Sep 2014 23:55:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3604#comment-110186</guid>
		<description><![CDATA[As per the response to the original formulation of the &quot;Gandhi kill pill&quot; thought experiment, Land rightly responds that it &quot;misses the serious problem&quot; in supposing a pre-pill evaluation of the post-pill state of affairs, as embodied in the superior intelligence.

Such being the case, we cannot rule out that a maximally self-cultivated entity, achieving such via the will-to-thought, would retroactively seek not to have taken the pill. If so, it seems to me that it cannot be &quot;thought&quot; simpliciter that grounds the will-to-thought; it must be, at the very least, informed by will/desire.

In other words, I suppose, I&#039;m negating the normative question, the &quot;ought&quot; of Hume&#039;s contention relative to reason and passion, and highlighting a description. I am expressing agreement with the philosophic tradition that the OP attempts to distance himself from, as expressed by Richard M. Weaver (Ideas Have Consequences):

&quot;We do not undertake to reason about anything until we have been drawn to it by an affective interest.&quot;]]></description>
		<content:encoded><![CDATA[<p>As per the response to the original formulation of the &#8220;Gandhi kill pill&#8221; thought experiment, Land rightly responds that it &#8220;misses the serious problem&#8221; in supposing a pre-pill evaluation of the post-pill state of affairs, as embodied in the superior intelligence.</p>
<p>Such being the case, we cannot rule out that a maximally self-cultivated entity, achieving such via the will-to-thought, would retroactively seek not to have taken the pill. If so, it seems to me that it cannot be &#8220;thought&#8221; simpliciter that grounds the will-to-thought; it must be, at the very least, informed by will/desire.</p>
<p>In other words, I suppose, I&#8217;m negating the normative question, the &#8220;ought&#8221; of Hume&#8217;s contention relative to reason and passion, and highlighting a description. I am expressing agreement with the philosophic tradition that the OP attempts to distance himself from, as expressed by Richard M. Weaver (Ideas Have Consequences):</p>
<p>&#8220;We do not undertake to reason about anything until we have been drawn to it by an affective interest.&#8221;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: RorschachRomanov</title>
		<link>http://www.xenosystems.net/will-to-think/#comment-110180</link>
		<dc:creator><![CDATA[RorschachRomanov]]></dc:creator>
		<pubDate>Wed, 17 Sep 2014 23:30:09 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3604#comment-110180</guid>
		<description><![CDATA[&lt;strong&gt;@Scott Alexander&lt;/strong&gt;

&quot;Eschatological AI is a real bitch to sell.&quot;

Not without cause, no? After all, you&#039;re selling the extinction of the human species. One might reply: and? Signifying that only humanist sentimentality girds resistance, or at the very least Friendly AI, but the full-stop embrace of our own usurpation does smack of the Christian universalism rightfully decried in these waters.

We aren&#039;t dealing with an order that has the possibility condition of pluralism here: kiss Patchwork goodbye, I suppose; universal death, embraced with great speed on the wings of Icarus, is like that.]]></description>
		<content:encoded><![CDATA[<p><strong>@Scott Alexander</strong></p>
<p>&#8220;Eschatological AI is a real bitch to sell.&#8221;</p>
<p>Not without cause, no? After all, you&#8217;re selling the extinction of the human species. One might reply: and? Signifying that only humanist sentimentality girds resistance, or at the very least Friendly AI, but the full-stop embrace of our own usurpation does smack of the Christian universalism rightfully decried in these waters.</p>
<p>We aren&#8217;t dealing with an order that has the possibility condition of pluralism here: kiss Patchwork goodbye, I suppose; universal death, embraced with great speed on the wings of Icarus, is like that.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rasputin</title>
		<link>http://www.xenosystems.net/will-to-think/#comment-110177</link>
		<dc:creator><![CDATA[Rasputin]]></dc:creator>
		<pubDate>Wed, 17 Sep 2014 23:19:32 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3604#comment-110177</guid>
		<description><![CDATA[&quot;Since we program its goal system, we have a chance to make it not want to override its programming&quot;

To my mind this translates approximately to:

&quot;Since we program its goal system, we have a chance to make something infinitely more intelligent than us our bitch&quot;

How many different ways are there to say: it-ain&#039;t-gonna-happen?!

Admittedly, I&#039;m not very technical (I have trouble setting the alarm on my phone), but, wishful thinking aside, literally the only reason I can see for this line of argument is that developing FAI is no doubt easier to get the Cathedral to fund: &quot;it&#039;s going to make the world a better place for us all to live equally ever after&quot;.

Whereas Eschatological AI is a real bitch to sell.]]></description>
		<content:encoded><![CDATA[<p>&#8220;Since we program its goal system, we have a chance to make it not want to override its programming&#8221;</p>
<p>To my mind this translates approximately to:</p>
<p>&#8220;Since we program its goal system, we have a chance to make something infinitely more intelligent than us our bitch&#8221;</p>
<p>How many different ways are there to say: it-ain&#8217;t-gonna-happen?!</p>
<p>Admittedly, I&#8217;m not very technical (I have trouble setting the alarm on my phone), but, wishful thinking aside, literally the only reason I can see for this line of argument is that developing FAI is no doubt easier to get the Cathedral to fund: &#8220;it&#8217;s going to make the world a better place for us all to live equally ever after&#8221;.</p>
<p>Whereas Eschatological AI is a real bitch to sell.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Scott Alexander</title>
		<link>http://www.xenosystems.net/will-to-think/#comment-110173</link>
		<dc:creator><![CDATA[Scott Alexander]]></dc:creator>
		<pubDate>Wed, 17 Sep 2014 22:59:13 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=3604#comment-110173</guid>
		<description><![CDATA[Given that there are Nazis, painters, Germans, investors, et cetera of many different IQs, it doesn&#039;t seem like gaining IQ points makes one converge upon certain values.

&quot;Trying to lock a self-improving AI&#039;s values via programming is futile&quot;

This seems to be where we disagree. Yes, a sufficiently intelligent AI could figure out how to override any programming mere humans could put into it, but it would have to want to first. Since we program its goal system, we have a chance to make it not want to override its programming.]]></description>
		<content:encoded><![CDATA[<p>Given that there are Nazis, painters, Germans, investors, et cetera of many different IQs, it doesn&#8217;t seem like gaining IQ points makes one converge upon certain values.</p>
<p>&#8220;Trying to lock a self-improving AI&#8217;s values via programming is futile&#8221;</p>
<p>This seems to be where we disagree. Yes, a sufficiently intelligent AI could figure out how to override any programming mere humans could put into it, but it would have to want to first. Since we program its goal system, we have a chance to make it not want to override its programming.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
