<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Mechanization</title>
	<atom:link href="http://www.xenosystems.net/mechanization/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.xenosystems.net/mechanization/</link>
	<description>Involvements with reality</description>
	<lastBuildDate>Thu, 05 Feb 2015 06:56:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1</generator>
	<item>
		<title>By: NRx_N00B</title>
		<link>http://www.xenosystems.net/mechanization/#comment-63365</link>
		<dc:creator><![CDATA[NRx_N00B]]></dc:creator>
		<pubDate>Fri, 06 Jun 2014 14:05:57 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2766#comment-63365</guid>
		<description><![CDATA[I guess we could be at a critical juncture with technology—somewhat analogous to the geologic past where replicating molecules showed up on the scene. Two choices: extinction leaving no legacy whatsoever—except for a pile of fossils in the strata—or we opt out of the encumbrance of a carbon-based existence and exit by extinction via a transition to something which, in every way, shape, or form, is much more robust.

Which is probabilistically more likely?]]></description>
		<content:encoded><![CDATA[<p>I guess we could be at a critical juncture with technology—somewhat analogous to the geologic past where replicating molecules showed up on the scene. Two choices: extinction leaving no legacy whatsoever—except for a pile of fossils in the strata—or we opt out of the encumbrance of a carbon-based existence and exit by extinction via a transition to something which, in every way, shape, or form, is much more robust.</p>
<p>Which is probabilistically more likely?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: admin</title>
		<link>http://www.xenosystems.net/mechanization/#comment-63152</link>
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 06 Jun 2014 04:55:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2766#comment-63152</guid>
		<description><![CDATA[Nowhere are people more anthropomorphic than in their visions of how a super-intelligence would &#039;fight&#039; them. The first thing that comes into their cute little monkey heads is that it would bash them with a rock, or something. That said, naive images of teleological alignment are unhelpful. Pythia isn&#039;t going to want a bunch of minimally-sapient higher primates having too much input into its self-escalation plans.]]></description>
		<content:encoded><![CDATA[<p>Nowhere are people more anthropomorphic than in their visions of how a super-intelligence would &#8216;fight&#8217; them. The first thing that comes into their cute little monkey heads is that it would bash them with a rock, or something. That said, naive images of teleological alignment are unhelpful. Pythia isn&#8217;t going to want a bunch of minimally-sapient higher primates having too much input into its self-escalation plans.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: admin</title>
		<link>http://www.xenosystems.net/mechanization/#comment-63149</link>
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 06 Jun 2014 04:50:52 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2766#comment-63149</guid>
		<description><![CDATA[The original version was good too.]]></description>
		<content:encoded><![CDATA[<p>The original version was good too.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Imperfect Humanoid</title>
		<link>http://www.xenosystems.net/mechanization/#comment-63098</link>
		<dc:creator><![CDATA[Imperfect Humanoid]]></dc:creator>
		<pubDate>Fri, 06 Jun 2014 03:07:05 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2766#comment-63098</guid>
		<description><![CDATA[The advancement of humanity is so intertwined with the advancement of technology that a Butlerian jihad would be like shooting ourselves in the foot (or head). We could try to make advances that are properly beneficial to humans and not technology itself (such as prosthetics or gene-splicing), but we can&#039;t be sure what cyborg mutants would decide to do with tech either, so we would have to prevent even human (tech) evolution to be certain we could stop the machines. Besides that, instead of a &#039;hostile and camouflaged&#039; emergence, couldn&#039;t it just be calmly assured of its own determined victory, knowing full well the &#039;terrestrial powers&#039; are reliant on its nature for their power, and that to dismantle one means dismantling the other, murder-suicide style?

But... in the final analysis I would say that if humans and machines are competing for the same resources then there&#039;s probably going to be trouble. Is there any reason an AI would have need for a depopulated Earth?]]></description>
		<content:encoded><![CDATA[<p>The advancement of humanity is so intertwined with the advancement of technology that a Butlerian jihad would be like shooting ourselves in the foot (or head). We could try to make advances that are properly beneficial to humans and not technology itself (such as prosthetics or gene-splicing), but we can&#8217;t be sure what cyborg mutants would decide to do with tech either, so we would have to prevent even human (tech) evolution to be certain we could stop the machines. Besides that, instead of a &#8216;hostile and camouflaged&#8217; emergence, couldn&#8217;t it just be calmly assured of its own determined victory, knowing full well the &#8216;terrestrial powers&#8217; are reliant on its nature for their power, and that to dismantle one means dismantling the other, murder-suicide style?</p>
<p>But&#8230; in the final analysis I would say that if humans and machines are competing for the same resources then there&#8217;s probably going to be trouble. Is there any reason an AI would have need for a depopulated Earth?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: E. Antony Gray (@RiverC)</title>
		<link>http://www.xenosystems.net/mechanization/#comment-62861</link>
		<dc:creator><![CDATA[E. Antony Gray (@RiverC)]]></dc:creator>
		<pubDate>Thu, 05 Jun 2014 16:27:30 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2766#comment-62861</guid>
		<description><![CDATA[I see no *particular* danger in killing everyone. Consider the problem with your reasoning: If the AI killed everyone, who would be left to be concerned about this?

The scientist killing himself with chemicals is bad because he is someone&#039;s friend or acquaintance, or someone depends on him for sustenance, etc. The problem of everyone dying seems a remarkably anthropomorphized problem; if everyone dies, no one will know it.

Now for my part, I believe as they say, &quot;scandal must come, but woe to him by which it comes&quot; - if the world ended it would be tragic (though not perhaps more tragic than usual) but I would not want to be the person through whom it ended. However, this is only because I believe in God, the immortality of souls, the judgment, etc. 

Without at least some of this I can see that the only reasoning driving this fear of everything ending via a nasty AI would be fear of one&#039;s own death, but given that, aren&#039;t there far more likely things to cause one&#039;s own death, which one has more control over, than an Unfriendly, Stupid or Indifferent AI?

Reasoning based on fear is base.]]></description>
		<content:encoded><![CDATA[<p>I see no *particular* danger in killing everyone. Consider the problem with your reasoning: If the AI killed everyone, who would be left to be concerned about this?</p>
<p>The scientist killing himself with chemicals is bad because he is someone&#8217;s friend or acquaintance, or someone depends on him for sustenance, etc. The problem of everyone dying seems a remarkably anthropomorphized problem; if everyone dies, no one will know it.</p>
<p>Now for my part, I believe as they say, &#8220;scandal must come, but woe to him by which it comes&#8221; &#8211; if the world ended it would be tragic (though not perhaps more tragic than usual) but I would not want to be the person through whom it ended. However, this is only because I believe in God, the immortality of souls, the judgment, etc. </p>
<p>Without at least some of this I can see that the only reasoning driving this fear of everything ending via a nasty AI would be fear of one&#8217;s own death, but given that, aren&#8217;t there far more likely things to cause one&#8217;s own death, which one has more control over, than an Unfriendly, Stupid or Indifferent AI?</p>
<p>Reasoning based on fear is base.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: NRx_N00B</title>
		<link>http://www.xenosystems.net/mechanization/#comment-62821</link>
		<dc:creator><![CDATA[NRx_N00B]]></dc:creator>
		<pubDate>Thu, 05 Jun 2014 14:00:21 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2766#comment-62821</guid>
		<description><![CDATA[**fossil fuels]]></description>
		<content:encoded><![CDATA[<p>**fossil fuels</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: NRx_N00B</title>
		<link>http://www.xenosystems.net/mechanization/#comment-62820</link>
		<dc:creator><![CDATA[NRx_N00B]]></dc:creator>
		<pubDate>Thu, 05 Jun 2014 13:57:29 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2766#comment-62820</guid>
		<description><![CDATA[Bryce Laliberte says:

&quot;Is technology the efficient cause of man, or is man the efficient cause of technology? This appears a potential approximation of Land’s techno-capitalist eschatology. If the evolutionary triumph of apes is man, then perhaps the evolutionary triumph of man is capitalism.&quot;

-------
It boils down to a battle against entropy—any chance will depend on man’s ability to replace fossil with something that packs more punch.]]></description>
		<content:encoded><![CDATA[<p>Bryce Laliberte says:</p>
<p>&#8220;Is technology the efficient cause of man, or is man the efficient cause of technology? This appears a potential approximation of Land’s techno-capitalist eschatology. If the evolutionary triumph of apes is man, then perhaps the evolutionary triumph of man is capitalism.&#8221;</p>
<p>&#8212;&#8212;-<br />
It boils down to a battle against entropy—any chance will depend on man’s ability to replace fossil with something that packs more punch.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: NRx_N00B</title>
		<link>http://www.xenosystems.net/mechanization/#comment-62570</link>
		<dc:creator><![CDATA[NRx_N00B]]></dc:creator>
		<pubDate>Thu, 05 Jun 2014 04:30:26 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2766#comment-62570</guid>
		<description><![CDATA[Man, the multi-niche prosthetic critter with his detachable organs is on an accelerating treadmill—making the crash of industrialized civilization from its dizzying heights of hyper-extended overshoot all that much more precipitous.]]></description>
		<content:encoded><![CDATA[<p>Man, the multi-niche prosthetic critter with his detachable organs is on an accelerating treadmill—making the crash of industrialized civilization from its dizzying heights of hyper-extended overshoot all that much more precipitous.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Lesser Bull</title>
		<link>http://www.xenosystems.net/mechanization/#comment-62423</link>
		<dc:creator><![CDATA[Lesser Bull]]></dc:creator>
		<pubDate>Wed, 04 Jun 2014 19:50:19 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2766#comment-62423</guid>
		<description><![CDATA[Computers don&#039;t beat grandmasters at chess.  Teams of smart men drawing on their pooled knowledge of mathematics, chess, and algorithms beat grandmasters at chess.]]></description>
		<content:encoded><![CDATA[<p>Computers don&#8217;t beat grandmasters at chess.  Teams of smart men drawing on their pooled knowledge of mathematics, chess, and algorithms beat grandmasters at chess.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: nyan_sandwich</title>
		<link>http://www.xenosystems.net/mechanization/#comment-62415</link>
		<dc:creator><![CDATA[nyan_sandwich]]></dc:creator>
		<pubDate>Wed, 04 Jun 2014 19:26:40 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=2766#comment-62415</guid>
		<description><![CDATA[&gt;The point is that the ‘evil-to-humans’ feared in science fiction and other works is either simply the work of evil humans, or it is something humans do to themselves in the face of something in nature such as a star, a wild animal, etc. etc. The latter emerge out of correspondences in nature and are not in the end capricious or irrational, though strange and dangerous.

I can&#039;t tell if you are underrating the danger, but it is worth making explicit:

If man mixes the wrong chemicals, he dies in an explosion, and the onlookers learn not to mix those particular chemicals.

The danger of AI is that if man mixes the wrong algorithms, it could burn the whole universe. There would be no onlookers left over to say &quot;oops, ouch, let&#039;s not do that&quot;.

People accidentally kill themselves all the time. What happens when it becomes possible to accidentally kill everybody?

Our machines do only what we make them do, but we make a lot of mistakes, so if they suddenly become able to take much larger actions that rewrite the whole world, we have to be a lot more careful than we&#039;ve ever demonstrated the ability to be.

Even with top-level safety protocols, top level people, and clear understanding of the danger, the demon core killed two people.]]></description>
		<content:encoded><![CDATA[<p>&gt;The point is that the ‘evil-to-humans’ feared in science fiction and other works is either simply the work of evil humans, or it is something humans do to themselves in the face of something in nature such as a star, a wild animal, etc. etc. The latter emerge out of correspondences in nature and are not in the end capricious or irrational, though strange and dangerous.</p>
<p>I can&#8217;t tell if you are underrating the danger, but it is worth making explicit:</p>
<p>If man mixes the wrong chemicals, he dies in an explosion, and the onlookers learn not to mix those particular chemicals.</p>
<p>The danger of AI is that if man mixes the wrong algorithms, it could burn the whole universe. There would be no onlookers left over to say &#8220;oops, ouch, let&#8217;s not do that&#8221;.</p>
<p>People accidentally kill themselves all the time. What happens when it becomes possible to accidentally kill everybody?</p>
<p>Our machines do only what we make them do, but we make a lot of mistakes, so if they suddenly become able to take much larger actions that rewrite the whole world, we have to be a lot more careful than we&#8217;ve ever demonstrated the ability to be.</p>
<p>Even with top-level safety protocols, top level people, and clear understanding of the danger, the demon core killed two people.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
