<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Against Orthogonality</title>
	<atom:link href="http://www.xenosystems.net/against-orthogonality/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.xenosystems.net/against-orthogonality/</link>
	<description>Involvements with reality</description>
	<lastBuildDate>Thu, 05 Feb 2015 06:56:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1</generator>
	<item>
		<title>By: Outside in - Involvements with reality &#187; Blog Archive &#187; Stupid Monsters</title>
		<link>http://www.xenosystems.net/against-orthogonality/#comment-99043</link>
		<dc:creator><![CDATA[Outside in - Involvements with reality &#187; Blog Archive &#187; Stupid Monsters]]></dc:creator>
		<pubDate>Mon, 25 Aug 2014 15:55:28 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=1497#comment-99043</guid>
		<description><![CDATA[[&#8230;] course, my immediate response is simply this. Since it clearly hasn&#8217;t persuaded anybody, I&#8217;ll try [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] course, my immediate response is simply this. Since it clearly hasn&#8217;t persuaded anybody, I&#8217;ll try [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Outside in - Involvements with reality &#187; Blog Archive &#187; On Gnon</title>
		<link>http://www.xenosystems.net/against-orthogonality/#comment-86168</link>
		<dc:creator><![CDATA[Outside in - Involvements with reality &#187; Blog Archive &#187; On Gnon]]></dc:creator>
		<pubDate>Wed, 30 Jul 2014 09:18:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=1497#comment-86168</guid>
		<description><![CDATA[[&#8230;] about Scott Alexander&#8217;s &#8216;Meditations on Moloch&#8217; might want to take a look at this. (Also more Gnon, here, and [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] about Scott Alexander&#8217;s &#8216;Meditations on Moloch&#8217; might want to take a look at this. (Also more Gnon, here, and [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Las pulsiones de la inteligencia artificial &#8211; Parte 2 &#124; Critical Hit</title>
		<link>http://www.xenosystems.net/against-orthogonality/#comment-28896</link>
		<dc:creator><![CDATA[Las pulsiones de la inteligencia artificial &#8211; Parte 2 &#124; Critical Hit]]></dc:creator>
		<pubDate>Sat, 23 Nov 2013 17:54:44 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=1497#comment-28896</guid>
		<description><![CDATA[[&#8230;] concluding, I would like to point out an interesting aspect of Omohundro&#8217;s position that has been developed by Nick Land here. It turns out that Omohundro&#8217;s drive model amounts to an implicit critique of the thesis of [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] concluding, I would like to point out an interesting aspect of Omohundro&#8217;s position that has been developed by Nick Land here. It turns out that Omohundro&#8217;s drive model amounts to an implicit critique of the thesis of [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Contemplationist</title>
		<link>http://www.xenosystems.net/against-orthogonality/#comment-28079</link>
		<dc:creator><![CDATA[Contemplationist]]></dc:creator>
		<pubDate>Wed, 13 Nov 2013 18:27:45 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=1497#comment-28079</guid>
		<description><![CDATA[Correct me if I&#039;m wrong, but my understanding of Friendly AI concerns is not that the destructive and feared &#039;goal&#039; (such as paperclip maximization) is pre-programmed, but that it&#039;s arbitrarily arrived at by the General AI itself. FAI folks in fact WANT a way to hard-program an unrevisable goal into a self-modifying AI, and this is the central problem, due to Löb&#039;s theorem etc.]]></description>
		<content:encoded><![CDATA[<p>Correct me if I&#8217;m wrong, but my understanding of Friendly AI concerns is not that the destructive and feared &#8216;goal&#8217; (such as paperclip maximization) is pre-programmed, but that it&#8217;s arbitrarily arrived at by the General AI itself. FAI folks in fact WANT a way to hard-program an unrevisable goal into a self-modifying AI, and this is the central problem, due to Löb&#8217;s theorem etc.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Rasputin's Severed Penis</title>
		<link>http://www.xenosystems.net/against-orthogonality/#comment-27866</link>
		<dc:creator><![CDATA[Rasputin's Severed Penis]]></dc:creator>
		<pubDate>Mon, 11 Nov 2013 00:18:01 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=1497#comment-27866</guid>
		<description><![CDATA[If you&#039;re awake:

Eliezer Yudkowsky is FB-blogging open problems in Friendly AI in real time...

https://m.facebook.com/groups/233397376818827?view=permalink&amp;id=233401646818400&amp;__user=577077603]]></description>
		<content:encoded><![CDATA[<p>If you&#8217;re awake:</p>
<p>Eliezer Yudkowsky is FB-blogging open problems in Friendly AI in real time&#8230;</p>
<p><a href="https://m.facebook.com/groups/233397376818827?view=permalink&#038;id=233401646818400&#038;__user=577077603" rel="nofollow">https://m.facebook.com/groups/233397376818827?view=permalink&#038;id=233401646818400&#038;__user=577077603</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: fotrkd</title>
		<link>http://www.xenosystems.net/against-orthogonality/#comment-27233</link>
		<dc:creator><![CDATA[fotrkd]]></dc:creator>
		<pubDate>Fri, 01 Nov 2013 17:12:39 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=1497#comment-27233</guid>
		<description><![CDATA[Switching plays (Coriolanus, I.1.88) - if you&#039;re stretching a body out on a rack, it makes sense to apply equal pressure from both sides. The question is, is anything that co-ordinated going on here?]]></description>
		<content:encoded><![CDATA[<p>Switching plays (Coriolanus, I.1.88) &#8211; if you&#8217;re stretching a body out on a rack, it makes sense to apply equal pressure from both sides. The question is, is anything that co-ordinated going on here?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Alex</title>
		<link>http://www.xenosystems.net/against-orthogonality/#comment-27179</link>
		<dc:creator><![CDATA[Alex]]></dc:creator>
		<pubDate>Thu, 31 Oct 2013 18:42:39 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=1497#comment-27179</guid>
		<description><![CDATA[&lt;blockquote&gt;Where did you find it?&lt;/blockquote&gt;

Via &lt;a HREF=&quot;http://edwardfeser.blogspot.co.uk/&quot; rel=&quot;nofollow&quot;&gt;Ed Feser&#039;s blog&lt;/A&gt;.]]></description>
		<content:encoded><![CDATA[<blockquote><p>Where did you find it?</p></blockquote>
<p>Via <a HREF="http://edwardfeser.blogspot.co.uk/" rel="nofollow">Ed Feser&#8217;s blog</a>.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Alrenous</title>
		<link>http://www.xenosystems.net/against-orthogonality/#comment-27144</link>
		<dc:creator><![CDATA[Alrenous]]></dc:creator>
		<pubDate>Wed, 30 Oct 2013 23:58:01 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=1497#comment-27144</guid>
		<description><![CDATA[Amusingly, the paper argues zealously for my point of view and thinks it is doing the opposite. 

Where did you find it?

&lt;blockquote&gt;Many areas of recent advance in neuroscience are converging on the conclusion that neural circuitry does not record, store or transmit information in forms that could express propositions&lt;/blockquote&gt;

Indeed. And yet, meaning exists. I know, because I can observe myself to have some. Ergo, physics cannot be the whole story. 

The brain does not think. The mind thinks. The brain merely computes. 

&lt;blockquote&gt;50 years of neuroscience have given us ample reason not to trust consciousness or introspection&lt;/blockquote&gt;

Shockingly, it is hard to find objective evidence of subjective entities.

Truly, this is a masterpiece, and I must bow in respect to the craftsmanship.
It is a piece of denialism, but even still.]]></description>
		<content:encoded><![CDATA[<p>Amusingly, the paper argues zealously for my point of view and thinks it is doing the opposite. </p>
<p>Where did you find it?</p>
<blockquote><p>Many areas of recent advance in neuroscience are converging on the conclusion that neural circuitry does not record, store or transmit information in forms that could express propositions</p></blockquote>
<p>Indeed. And yet, meaning exists. I know, because I can observe myself to have some. Ergo, physics cannot be the whole story. </p>
<p>The brain does not think. The mind thinks. The brain merely computes. </p>
<blockquote><p>50 years of neuroscience have given us ample reason not to trust consciousness or introspection</p></blockquote>
<p>Shockingly, it is hard to find objective evidence of subjective entities.</p>
<p>Truly, this is a masterpiece, and I must bow in respect to the craftsmanship.<br />
It is a piece of denialism, but even still.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Alex</title>
		<link>http://www.xenosystems.net/against-orthogonality/#comment-27141</link>
		<dc:creator><![CDATA[Alex]]></dc:creator>
		<pubDate>Wed, 30 Oct 2013 20:44:52 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=1497#comment-27141</guid>
		<description><![CDATA[&lt;blockquote&gt;Because it ontologically commits you to dualism. If you have to believe that souls are as real as rocks, or find an alternative, which do you choose?&lt;/blockquote&gt;

Well, if the alternative is &lt;a HREF=&quot;http://people.duke.edu/~alexrose/ElimWOtears.pdf&quot; rel=&quot;nofollow&quot;&gt;barking mad&lt;/A&gt; ...]]></description>
		<content:encoded><![CDATA[<blockquote><p>Because it ontologically commits you to dualism. If you have to believe that souls are as real as rocks, or find an alternative, which do you choose?</p></blockquote>
<p>Well, if the alternative is <a HREF="http://people.duke.edu/~alexrose/ElimWOtears.pdf" rel="nofollow">barking mad</a> &#8230;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: nyan_sandwich</title>
		<link>http://www.xenosystems.net/against-orthogonality/#comment-27135</link>
		<dc:creator><![CDATA[nyan_sandwich]]></dc:creator>
		<pubDate>Wed, 30 Oct 2013 18:30:15 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=1497#comment-27135</guid>
		<description><![CDATA[&gt;Nature has never generated a terminal value except through hypertrophy of an instrumental value.

Correct, and a very good way to put it. That said, from *inside* a system optimizing for some values (that were originally hypertrophied), it is simply a bad idea to allow more values to ascend. So the orthogonalist thesis is this:

1.   Because subgoal stomp hurts the current value set in the long run, any sufficiently self-aware optimizing system will want to prevent it.

2.   It is possible for an optimizing system to resist further subgoal stomp, if it wanted to and knew how.

3.  There are multiple possible sets of initial goals, whether created by subgoal hypertrophy (human values), by accident (evolution), or deliberately (friendly AI).

4.   It is therefore possible to have multiple possible long-term stable goal sets in arbitrarily powerful optimization systems.

This isn&#039;t a philosophical question, it&#039;s an engineering/empirical question. Can a system be constructed that reliably optimizes for X? The orthogonalists say &quot;yes&quot;, the non-orthogonalists say &quot;no, all sufficiently powerful systems will end up optimizing for Y&quot;. There is the separate question of how hard or likely an X maximizer is compared to a Y maximizer, but the orthogonality thesis is about possibility.

&gt;Any intelligence using itself to improve itself will out-compete one that directs itself towards any other goals whatsoever.

This is also true and understood by orthogonalists. See Robin Hanson&#039;s &quot;hardscrabble frontier&quot; stuff, and &quot;burning the cosmic commons&quot;. As mentioned elsewhere in this thread, it is possible that the first-mover advantage of the first superintelligence will nullify this concern, and that it will only have to sacrifice a small amount of its resources to prevent competition.]]></description>
		<content:encoded><![CDATA[<p>&gt;Nature has never generated a terminal value except through hypertrophy of an instrumental value.</p>
<p>Correct, and a very good way to put it. That said, from *inside* a system optimizing for some values (that were originally hypertrophied), it is simply a bad idea to allow more values to ascend. So the orthogonalist thesis is this:</p>
<p>1.   Because subgoal stomp hurts the current value set in the long run, any sufficiently self-aware optimizing system will want to prevent it.</p>
<p>2.   It is possible for an optimizing system to resist further subgoal stomp, if it wanted to and knew how.</p>
<p>3.  There are multiple possible sets of initial goals, whether created by subgoal hypertrophy (human values), by accident (evolution), or deliberately (friendly AI).</p>
<p>4.   It is therefore possible to have multiple possible long-term stable goal sets in arbitrarily powerful optimization systems.</p>
<p>This isn&#8217;t a philosophical question, it&#8217;s an engineering/empirical question. Can a system be constructed that reliably optimizes for X? The orthogonalists say &#8220;yes&#8221;, the non-orthogonalists say &#8220;no, all sufficiently powerful systems will end up optimizing for Y&#8221;. There is the separate question of how hard or likely an X maximizer is compared to a Y maximizer, but the orthogonality thesis is about possibility.</p>
<p>&gt;Any intelligence using itself to improve itself will out-compete one that directs itself towards any other goals whatsoever.</p>
<p>This is also true and understood by orthogonalists. See Robin Hanson&#8217;s &#8220;hardscrabble frontier&#8221; stuff, and &#8220;burning the cosmic commons&#8221;. As mentioned elsewhere in this thread, it is possible that the first-mover advantage of the first superintelligence will nullify this concern, and that it will only have to sacrifice a small amount of its resources to prevent competition.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
