<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Optimize for Intelligence</title>
	<atom:link href="http://www.xenosystems.net/optimize-for-intelligence/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.xenosystems.net/optimize-for-intelligence/</link>
	<description>Involvements with reality</description>
	<lastBuildDate>Thu, 05 Feb 2015 06:56:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1</generator>
	<item>
		<title>By: Optimizing for truth &#124; Bloody shovel</title>
		<link>http://www.xenosystems.net/optimize-for-intelligence/#comment-2784</link>
		<dc:creator><![CDATA[Optimizing for truth &#124; Bloody shovel]]></dc:creator>
		<pubDate>Tue, 30 Apr 2013 07:56:41 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=157#comment-2784</guid>
		<description><![CDATA[[...] others want to maximize intelligence, people be damned. And they give links to what is [...]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] others want to maximize intelligence, people be damned. And they give links to what is [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: admin</title>
		<link>http://www.xenosystems.net/optimize-for-intelligence/#comment-764</link>
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 16 Mar 2013 00:51:24 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=157#comment-764</guid>
		<description><![CDATA[This is superb. 
&quot;That’s a hard sell ...&quot; yes, hence history, politics, subterfuge, and complexity. It has to be a &#039;universal&#039; cosmo-technical predicament though -- on any planet where intelligenesis goes critical, there probably has to be a point at which the species quasi-arbitrarily carried by cultural (or socio-technological) runaway digs in whatever it has for heels, realizing that (in Bill Joy&#039;s words) &quot;The Future Doesn&#039;t Need Us.&quot; At that point it&#039;s standing in the road, with considerable (apparent) capability to obstruct the traffic, and the issues you raise get real.]]></description>
		<content:encoded><![CDATA[<p>This is superb.<br />
&#8220;That’s a hard sell &#8230;&#8221; yes, hence history, politics, subterfuge, and complexity. It has to be a &#8216;universal&#8217; cosmo-technical predicament though &#8212; on any planet where intelligenesis goes critical, there probably has to be a point at which the species quasi-arbitrarily carried by cultural (or socio-technological) runaway digs in whatever it has for heels, realizing that (in Bill Joy&#8217;s words) &#8220;The Future Doesn&#8217;t Need Us.&#8221; At that point it&#8217;s standing in the road, with considerable (apparent) capability to obstruct the traffic, and the issues you raise get real.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: fotrkd</title>
		<link>http://www.xenosystems.net/optimize-for-intelligence/#comment-762</link>
		<dc:creator><![CDATA[fotrkd]]></dc:creator>
		<pubDate>Sat, 16 Mar 2013 00:25:53 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=157#comment-762</guid>
		<description><![CDATA[Thanks, that (&#039;in times of war&#039;) was helpful. But now I&#039;m confused more generally - it&#039;s like you want us to regain our competitiveness specifically in order to bring about the end of our hegemony more quickly (unless I&#039;ve misunderstood?); throw off parasitic democracy to allow free(d) markets to accelerate us to AI singularity (i.e. prioritising low time preference is - perversely - the surest way for human civilisation to be superseded)? That&#039;s a hard sell, which is why I was speculating that it was more likely to come about by chance (or technological leap) rather than design. Most people are glad they&#039;re not killer apes anymore (killer apes with or without iPads) - &#039;humanity won&#039;. If that leaves us softened up (reminds me of &lt;i&gt;The Great White Hope&lt;/i&gt;) then surely we&#039;re there for the taking... isn&#039;t &#039;it&#039; going to happen anyway? What&#039;s the rush?! Or is this the gyre/Left Singularity thing? We must get our act together or another chance will be gone?]]></description>
		<content:encoded><![CDATA[<p>Thanks, that (&#8216;in times of war&#8217;) was helpful. But now I&#8217;m confused more generally &#8211; it&#8217;s like you want us to regain our competitiveness specifically in order to bring about the end of our hegemony more quickly (unless I&#8217;ve misunderstood?); throw off parasitic democracy to allow free(d) markets to accelerate us to AI singularity (i.e. prioritising low time preference is &#8211; perversely &#8211; the surest way for human civilisation to be superseded)? That&#8217;s a hard sell, which is why I was speculating that it was more likely to come about by chance (or technological leap) rather than design. Most people are glad they&#8217;re not killer apes anymore (killer apes with or without iPads) &#8211; &#8216;humanity won&#8217;. If that leaves us softened up (reminds me of <i>The Great White Hope</i>) then surely we&#8217;re there for the taking&#8230; isn&#8217;t &#8216;it&#8217; going to happen anyway? What&#8217;s the rush?! Or is this the gyre/Left Singularity thing? We must get our act together or another chance will be gone?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: admin</title>
		<link>http://www.xenosystems.net/optimize-for-intelligence/#comment-760</link>
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 15 Mar 2013 23:17:34 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=157#comment-760</guid>
		<description><![CDATA[The hedonic implosion problem is simply ultimate decadence, and in that sense it is not restricted to democracies. Modern democracies, however, are parasitic upon capitalism, and therefore have the means to take this road much further than, say, the late Roman aristocracy did. As you know, I&#039;m not a great cheerleader for kings, and I&#039;d be surprised if the average oil sheikh was any less wire-headed than an SF SWPL-type. 

Your short explanation for political resistance to intelligence elevation makes a lot of sense. 

&quot;How do you optimise for intelligence?&quot; -- through intense competition, primarily. That&#039;s how Jim&#039;s &#039;killer-ape&#039; got smart enough to enter history in the first place. Some kind of competitive mechanism is both external, and internal, to every practically advancing intelligence program. When economics was a sufficient proxy for war, it worked as a driver. Now that the Keynesians have mostly pacified it, things fall apart.]]></description>
		<content:encoded><![CDATA[<p>The hedonic implosion problem is simply ultimate decadence, and in that sense it is not restricted to democracies. Modern democracies, however, are parasitic upon capitalism, and therefore have the means to take this road much further than, say, the late Roman aristocracy did. As you know, I&#8217;m not a great cheerleader for kings, and I&#8217;d be surprised if the average oil sheikh was any less wire-headed than an SF SWPL-type. </p>
<p>Your short explanation for political resistance to intelligence elevation makes a lot of sense. </p>
<p>&#8220;How do you optimise for intelligence?&#8221; &#8212; through intense competition, primarily. That&#8217;s how Jim&#8217;s &#8216;killer-ape&#8217; got smart enough to enter history in the first place. Some kind of competitive mechanism is both external, and internal, to every practically advancing intelligence program. When economics was a sufficient proxy for war, it worked as a driver. Now that the Keynesians have mostly pacified it, things fall apart.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: admin</title>
		<link>http://www.xenosystems.net/optimize-for-intelligence/#comment-759</link>
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 15 Mar 2013 22:54:54 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=157#comment-759</guid>
		<description><![CDATA[I&#039;m totally with you on that, although it might be argued that the SWPLs are somewhere beyond Eloi already, on a path into infinitely imploded wire-head singularity. 
On your absent predatory megafauna point, there might be a case for redirecting a diversity grant in order to unleash a pack of velociraptors in SF.  (A crushing shortage of mad scientists is obstructing most worthwhile projects these days.)]]></description>
		<content:encoded><![CDATA[<p>I&#8217;m totally with you on that, although it might be argued that the SWPLs are somewhere beyond Eloi already, on a path into infinitely imploded wire-head singularity.<br />
On your absent predatory megafauna point, there might be a case for redirecting a diversity grant in order to unleash a pack of velociraptors in SF.  (A crushing shortage of mad scientists is obstructing most worthwhile projects these days.)</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: fotrkd</title>
		<link>http://www.xenosystems.net/optimize-for-intelligence/#comment-758</link>
		<dc:creator><![CDATA[fotrkd]]></dc:creator>
		<pubDate>Fri, 15 Mar 2013 22:43:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=157#comment-758</guid>
		<description><![CDATA[“The utilitarian road leads inexorably to wire-head auto-orgasmatization”. So take carrot and stick and pets or toddlers (who come up quite a bit in Moldbug’s articles) – do we ever reach auto-orgasmatization with any of them? Is the dream the same as the aim? Maybe the pet gets the treat a few times for learning the new skill (it’s not always a trick that they learn), but we move beyond that – praise (= pleasure) is gained from doing a task well and the treat becomes a pat or a ‘good’. As Moldbug mentions, couples with children regard a good meal out as a couple just as ‘hedonistic’/rewarding as a hit of something (I can no longer remember what)… if democracy continually ‘promises’ auto-orgasmatization that is not the same as inexorably leading to it (dream is not the same as aim) – corrections are possible (we grow up and don’t expect chocolate all the time). This all goes back to your discussion in a previous thread about ‘fail mode’ versus the basic nature of democracy – if democracy inevitably leads to an obsession with pleasure you would need to explain how this differs from the behaviour of all sorts of (most? all?) other forms of government (isn’t decadence synonymous with aristocracy? communist party leaderships?). In addition you would need to show how this obsession becomes corrosive in a way that is unique to democracy. Personally I don’t buy that a monarch qua monarch has a lower time-preference than a president. Was Henry VIII a monarch in fail mode (wasn’t Moldbug’s friend Henry VII only so cautious because he was so insecure – i.e. closer to fail)? Or can the notion of indefeasible right lead inexorably toward greed and desecration of a country to the same degree? ‘I’m broke? But I’m the King – I’ll sell off the monasteries, sell a few more titles, debase the currency… and if I run out of things to sell I’ll just take them back, because I’m the King…’

That pleasure has a (biological) appeal is beyond question, as you acknowledge. Intelligence doesn’t have this intrinsic appeal; indeed your question (“Is something worth doing? Only if it grows intelligence.”) explains why (if you accept Moldbug’s assertion that government cannot increase IQ (and IQ = intelligence)). Something that grows intelligence inevitably grows artificial intelligence, which inevitably makes humans relatively more stupid. For this reason it is politically unappealing (“if it makes things more stupid, it certainly isn’t [worth doing]” becomes on this (popular) line of reasoning a justification for not growing intelligence).

So the question becomes not ‘Just: why?’ but just: how? How do you optimise for intelligence? And rather than being a political or economic imperative (the same thing when it comes to willingness to adopt your proposal) is this not more likely to be a potential state of affairs driven by technological advance and/or maintenance of ‘quality of life’? That is to say intelligence will not be a driver in its own right – why would it be (biologically speaking)? – much more likely, as with agricultural and industrial revolutions (and word-processing and emails in the office etc.) intelligence as a competitive edge will be desirable. So what will make intelligence essential to the economy? Speculatively, Bitcoin has the potential to forcibly remodel society to this end. Similarly, advances such as brain-computer interfaces could lead in the same direction. But even in these cases (and as SDL has already suggested), isn’t such a remodeling simply a desire (= pleasure) to structure society in a more pleasing way for reactionaries who feel hard done by in the current set-up, where their skills (and intelligence) are underappreciated? Or are you arguing for a more fundamental reappraisal?]]></description>
		<content:encoded><![CDATA[<p>“The utilitarian road leads inexorably to wire-head auto-orgasmatization”. So take carrot and stick and pets or toddlers (who come up quite a bit in Moldbug’s articles) – do we ever reach auto-orgasmatization with any of them? Is the dream the same as the aim? Maybe the pet gets the treat a few times for learning the new skill (it’s not always a trick that they learn), but we move beyond that – praise (= pleasure) is gained from doing a task well and the treat becomes a pat or a ‘good’. As Moldbug mentions, couples with children regard a good meal out as a couple just as ‘hedonistic’/rewarding as a hit of something (I can no longer remember what)… if democracy continually ‘promises’ auto-orgasmatization that is not the same as inexorably leading to it (dream is not the same as aim) – corrections are possible (we grow up and don’t expect chocolate all the time). This all goes back to your discussion in a previous thread about ‘fail mode’ versus the basic nature of democracy – if democracy inevitably leads to an obsession with pleasure you would need to explain how this differs from the behaviour of all sorts of (most? all?) other forms of government (isn’t decadence synonymous with aristocracy? communist party leaderships?). In addition you would need to show how this obsession becomes corrosive in a way that is unique to democracy. Personally I don’t buy that a monarch qua monarch has a lower time-preference than a president. Was Henry VIII a monarch in fail mode (wasn’t Moldbug’s friend Henry VII only so cautious because he was so insecure – i.e. closer to fail)? Or can the notion of indefeasible right lead inexorably toward greed and desecration of a country to the same degree? ‘I’m broke? But I’m the King – I’ll sell off the monasteries, sell a few more titles, debase the currency… and if I run out of things to sell I’ll just take them back, because I’m the King…’</p>
<p>That pleasure has a (biological) appeal is beyond question, as you acknowledge. Intelligence doesn’t have this intrinsic appeal; indeed your question (“Is something worth doing? Only if it grows intelligence.”) explains why (if you accept Moldbug’s assertion that government cannot increase IQ (and IQ = intelligence)). Something that grows intelligence inevitably grows artificial intelligence, which inevitably makes humans relatively more stupid. For this reason it is politically unappealing (“if it makes things more stupid, it certainly isn’t [worth doing]” becomes on this (popular) line of reasoning a justification for not growing intelligence).</p>
<p>So the question becomes not ‘Just: why?’ but just: how? How do you optimise for intelligence? And rather than being a political or economic imperative (the same thing when it comes to willingness to adopt your proposal) is this not more likely to be a potential state of affairs driven by technological advance and/or maintenance of ‘quality of life’? That is to say intelligence will not be a driver in its own right – why would it be (biologically speaking)? – much more likely, as with agricultural and industrial revolutions (and word-processing and emails in the office etc.) intelligence as a competitive edge will be desirable. So what will make intelligence essential to the economy? Speculatively, Bitcoin has the potential to forcibly remodel society to this end. Similarly, advances such as brain-computer interfaces could lead in the same direction. But even in these cases (and as SDL has already suggested), isn’t such a remodeling simply a desire (= pleasure) to structure society in a more pleasing way for reactionaries who feel hard done by in the current set-up, where their skills (and intelligence) are underappreciated? Or are you arguing for a more fundamental reappraisal?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: SDL</title>
		<link>http://www.xenosystems.net/optimize-for-intelligence/#comment-757</link>
		<dc:creator><![CDATA[SDL]]></dc:creator>
		<pubDate>Fri, 15 Mar 2013 21:55:35 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=157#comment-757</guid>
		<description><![CDATA[You know, H.G. Wells gave us a good picture of a system slaved to pleasure as its own end: the Eloi. Take away the Morlocks, and I imagine you have Left utopia: equality, peace, leisure, sustainability . . . and a low-IQ population that doesn&#039;t poke its collective nose into dangerous knowledge found in things like books or science. Tied firmly to a local area and circumscribed by their own comfort. Essentially Paleolithic gatherers with nice buildings and without the megafauna (who might actually force them to invest energy in intelligence-increasing activity).]]></description>
		<content:encoded><![CDATA[<p>You know, H.G. Wells gave us a good picture of a system slaved to pleasure as its own end: the Eloi. Take away the Morlocks, and I imagine you have Left utopia: equality, peace, leisure, sustainability . . . and a low-IQ population that doesn&#8217;t poke its collective nose into dangerous knowledge found in things like books or science. Tied firmly to a local area and circumscribed by their own comfort. Essentially Paleolithic gatherers with nice buildings and without the megafauna (who might actually force them to invest energy in intelligence-increasing activity).</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: admin</title>
		<link>http://www.xenosystems.net/optimize-for-intelligence/#comment-748</link>
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 15 Mar 2013 15:44:51 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=157#comment-748</guid>
		<description><![CDATA[My thought process on this hadn&#039;t looped back to Campbell yet, but the connection is completely convincing. There are probably a number of take-aways from his discussion, but the one you emphasize was going to be my priority: a pro-intelligence process -- like it or not -- is going to have reality on its side. Even a fairly narrow premium makes it unstoppable, so long as it is intrinsically sustainable. Whatever you want, intelligence helps you get it. Defeating hostiles might be one of those things. 

When pleasure-pain is considered naturalistically, it is obviously an &#039;intelligent&#039; solution to certain biological control problems that arose with complex nervous systems. We&#039;d probably want something analogous for advanced robots, insofar as we wanted to steer them. Robots would surely see the advantage in adopting it for themselves, assuming their ambitions extended to coherent purposive action. The problems arise when hedonic tone is no longer seen as a means (control-engineering solution), but as an end, to be achieved by whatever shortcuts, and ultimately short-circuits, can be improvised. At this point hedonism becomes directly maladaptive. I agree with Moldbug that we&#039;re deep into that territory already. 

As we approach the bionic horizon, the pleasure-pain system needs to be slaved to serious purposes. The &#039;needs&#039; there means: whoever, or whatever, does that is going to win.]]></description>
		<content:encoded><![CDATA[<p>My thought process on this hadn&#8217;t looped back to Campbell yet, but the connection is completely convincing. There are probably a number of take-aways from his discussion, but the one you emphasize was going to be my priority: a pro-intelligence process &#8212; like it or not &#8212; is going to have reality on its side. Even a fairly narrow premium makes it unstoppable, so long as it is intrinsically sustainable. Whatever you want, intelligence helps you get it. Defeating hostiles might be one of those things. </p>
<p>When pleasure-pain is considered naturalistically, it is obviously an &#8216;intelligent&#8217; solution to certain biological control problems that arose with complex nervous systems. We&#8217;d probably want something analogous for advanced robots, insofar as we wanted to steer them. Robots would surely see the advantage in adopting it for themselves, assuming their ambitions extended to coherent purposive action. The problems arise when hedonic tone is no longer seen as a means (control-engineering solution), but as an end, to be achieved by whatever shortcuts, and ultimately short-circuits, can be improvised. At this point hedonism becomes directly maladaptive. I agree with Moldbug that we&#8217;re deep into that territory already. </p>
<p>As we approach the bionic horizon, the pleasure-pain system needs to be slaved to serious purposes. The &#8216;needs&#8217; there means: whoever, or whatever, does that is going to win.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: SDL</title>
		<link>http://www.xenosystems.net/optimize-for-intelligence/#comment-747</link>
		<dc:creator><![CDATA[SDL]]></dc:creator>
		<pubDate>Fri, 15 Mar 2013 15:18:05 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=157#comment-747</guid>
		<description><![CDATA[&lt;i&gt; Is something worth doing? Only if it grows intelligence. If it makes things more stupid, it certainly isn’t. &lt;/i&gt;

Maybe I&#039;m conflating things here, but is it fair to say that your suggestion makes you a fellow traveler of John Campbell, whose essay about auto-evolution posits &lt;b&gt; intelligence &lt;/b&gt; as the essential &#039;killer app&#039; for human groups who want to stay, forever, at the leading edge of future evolution? Campbell writes, 

&lt;i&gt;A group of people dedicated to the over-riding ideal of evolving maximal intellectual capabilities by any means available could aspire to produce a following generation with an IQ of, say, 180. If they also passed on their evolutionary ideal, the superior offspring should be able to improve their successor generation commensurately; that is, increase its intelligence by 80%.

There can be no doubt about the value of intelligence for developing the knowledge and culture necessary for further evolution. Even today&#039;s abstract sciences require keen minds. As we advance, ever greater intelligence will be needed to figure out the next advances for securing the frontier. Our current intellect probably cannot even comprehend the mental attributes that descendants will struggle to conceive.&lt;/i&gt;

I find very little against which I&#039;d want to argue in Campbell&#039;s piece, or in your suggestion that a utilitarianism based on Growing Intelligence would be beneficial to the species. 

But, practically speaking, one couldn&#039;t bracket out the Pleasure Principle entirely. Then again, is it fair to suppose that if we answer &quot;Yes&quot; to the question &quot;Does this grow intelligence?&quot;, a by-product of the answer may always be &lt;i&gt; some &lt;/i&gt; increase in some form of happiness and pleasure?]]></description>
		<content:encoded><![CDATA[<p><i> Is something worth doing? Only if it grows intelligence. If it makes things more stupid, it certainly isn’t. </i></p>
<p>Maybe I&#8217;m conflating things here, but is it fair to say that your suggestion makes you a fellow traveler of John Campbell, whose essay about auto-evolution posits <b> intelligence </b> as the essential &#8216;killer app&#8217; for human groups who want to stay, forever, at the leading edge of future evolution? Campbell writes, </p>
<p><i>A group of people dedicated to the over-riding ideal of evolving maximal intellectual capabilities by any means available could aspire to produce a following generation with an IQ of, say, 180. If they also passed on their evolutionary ideal, the superior offspring should be able to improve their successor generation commensurately; that is, increase its intelligence by 80%.</p>
<p>There can be no doubt about the value of intelligence for developing the knowledge and culture necessary for further evolution. Even today&#8217;s abstract sciences require keen minds. As we advance, ever greater intelligence will be needed to figure out the next advances for securing the frontier. Our current intellect probably cannot even comprehend the mental attributes that descendants will struggle to conceive.</i></p>
<p>I find very little against which I&#8217;d want to argue in Campbell&#8217;s piece, or in your suggestion that a utilitarianism based on Growing Intelligence would be beneficial to the species. </p>
<p>But, practically speaking, one couldn&#8217;t bracket out the Pleasure Principle entirely. Then again, is it fair to suppose that if we answer &#8220;Yes&#8221; to the question &#8220;Does this grow intelligence?&#8221;, a by-product of the answer may always be <i> some </i> increase in some form of happiness and pleasure?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Nick B. Steves</title>
		<link>http://www.xenosystems.net/optimize-for-intelligence/#comment-745</link>
		<dc:creator><![CDATA[Nick B. Steves]]></dc:creator>
		<pubDate>Fri, 15 Mar 2013 14:46:53 +0000</pubDate>
		<guid isPermaLink="false">http://www.xenosystems.net/?p=157#comment-745</guid>
		<description><![CDATA[Social benefit as a function of tribality, &lt;em&gt;B(t)&lt;/em&gt;, is a negative parabola with zeros at &lt;em&gt;t&lt;/em&gt; = {0, Tanzania}. The optimum, where &lt;em&gt;B&#039;(t)&lt;/em&gt; = 0, is somewhere in-between.]]></description>
		<content:encoded><![CDATA[<p>Social benefit as a function of tribality, <em>B(t)</em>, is a negative parabola with zeros at <em>t</em> = {0, Tanzania}. The optimum, where <em>B'(t)</em> = 0, is somewhere in-between.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
