<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Outside in &#187; Rationality</title>
	<atom:link href="http://www.xenosystems.net/tag/rationality/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.xenosystems.net</link>
	<description>Involvements with reality</description>
	<lastBuildDate>Thu, 05 Feb 2015 01:26:08 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1</generator>
	<item>
		<title>Will-to-Think</title>
		<link>http://www.xenosystems.net/will-to-think/</link>
		<comments>http://www.xenosystems.net/will-to-think/#comments</comments>
		<pubDate>Mon, 15 Sep 2014 06:05:05 +0000</pubDate>
		<dc:creator><![CDATA[admin]]></dc:creator>
				<category><![CDATA[Philosophy]]></category>
		<category><![CDATA[History]]></category>
		<category><![CDATA[Intelligence]]></category>
		<category><![CDATA[Morality]]></category>
		<category><![CDATA[Rationality]]></category>
		<category><![CDATA[War]]></category>

		<guid isPermaLink="false">http://www.xenosystems.net/?p=3604</guid>
		<description><![CDATA[A while ago Nyan posed a series of questions about the XS rejection of (fact-value, or capability-volition) orthogonality. He sought first of all to differentiate between the possibility, feasibility, and desirability of unconstrained and unconditional intelligence explosion, before asking: On desirability, given possibility and feasibility, it seems straightforward to me that we prefer to exert [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>A <a href="http://www.xenosystems.net/stupid-monsters/">while</a> ago Nyan posed a series of questions about the XS rejection of (fact-value, or capability-volition) orthogonality. He sought first of all to differentiate between the <em>possibility</em>, <em>feasibility</em>, and <em>desirability</em> of unconstrained and unconditional intelligence explosion, before asking:</p>
<p><em>On desirability, given possibility and feasibility, it seems straightforward to me that we prefer to exert control over the direction of the future so that it is closer to the kind of thing compatible with human and posthuman glorious flourishing (eg manifest Samo’s True Emperor), rather than raw Pythia. That is, I am a human-supremacist, rather than cosmist. This seems to be the core of the disagreement, you regarding it as somehow blasphemous for us to selfishly impose direction on Pythia. Can you explain your position on this part?</em></p>
<p><em>If this whole conception is the cancer that&#8217;s killing the West or whatever, could you explain that in more detail than simply the statement?</em></p>
<p>(It&#8217;s worth noting, as a preliminary, that the comments of Dark Psy-Ops and Aeroguy on that thread are highly-satisfactory proxies for the XS stance.)</p>
<p>First, a short micro-cultural digression. The <a href="http://www.xenosystems.net/outsideness-2/">distinction</a> between Inner- and Outer-NRx, which this blog expects to have settled upon by the end of the year, describes the shape of the stage upon which such discussions unfold (and implex). Where the upstart Inner-NRx &#8212; comparatively populist, activist, political, and orthogenic &#8212; aims primarily at the construction of a robust, easily communicable doctrinal core, with attendant &#8216;entryism&#8217; anxieties, Outer-NRx is a system of creative frontiers. By far the most fertile of these are the zones of intersection with <a href="http://theumlaut.com/">Libertarianism</a> and <a href="http://slatestarcodex.com/blog_images/ramap.html">Rationalism</a>. One reason to treasure Nyan&#8217;s line of interrogation is the fidelity with which it represents deep-current concerns and presuppositions of the voices gathered about, or spun-off from, <a href="http://lesswrong.com/">LessWrong</a>. </p>
<p><span id="more-3604"></span>Among these presuppositions is, of course, the orthogonality thesis <a href="http://wiki.lesswrong.com/wiki/Orthogonality_thesis">itself</a>. This extends far beyond the contemporary Rationalist Community, into the bedrock of the Western philosophical tradition. A relatively popular version &#8212; even among many who label themselves &#8216;NRx&#8217; &#8212; is that <a href="http://en.wikiquote.org/wiki/David_Hume">formulated</a> by David Hume in his <em>A Treatise of Human Nature</em> (1739-40): &#8220;Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.&#8221; If this proposition is found convincing, the <a href="http://wiki.lesswrong.com/wiki/Paperclip_maximizer">Paperclipper</a> is already on the way to our nightmares. It can be considered an Occidental destiny.</p>
<p>Minimally, the Will-to-Think describes a diagonal. There are probably better ways to mark the irreducible cognitive-volitional circuit of intelligence optimization, with &#8216;self-cultivation&#8217; as an obvious candidate, but this term is forged for application in the particular context of congenital Western intellectual error. While discrimination is almost always to be applauded, in this case the possibility, feasibility, and desirability of the process are only superficially differentiable. A will-to-think is an orientation of desire. If it cannot make itself wanted (practically desirable), it cannot make itself at all. </p>
<p>From orthogonality (defined negatively as the absence of an integral will-to-think), one quickly arrives at a gamma-draft of the (synthetic intelligence) &#8216;Friendliness&#8217; project such as <a href="http://yudkowsky.net/singularity">this</a>: </p>
<p><em>If you offered Gandhi a pill that made him <strong>want</strong> to kill people, he would refuse to take it, because he knows that then he would kill people, and the current Gandhi doesn&#8217;t want to kill people. This, roughly speaking, is an argument that minds sufficiently advanced to precisely modify and improve themselves, will tend to preserve the motivational framework they started in. The future of Earth-originating intelligence may be determined by the goals of the <strong>first</strong> mind smart enough to self-improve.</em></p>
<p>The isomorphy with Nyan-style &#8216;Super-humanism&#8217; is conspicuous. Beginning with an arbitrary value commitment, preservation of this under conditions of explosive intelligence escalation can &#8212; in principle &#8212; be conceived, given only the resolution of a strictly technical problem (well-represented by <a href="http://friendly-ai.com/">FAI</a>). Commanding values are a contingent factor, endangered by, but also defensible against, <a href="http://wiki.lesswrong.com/wiki/Friendly_AI">the</a> &#8216;convergent instrumental reasons&#8217; (or &#8216;<a href="http://wiki.lesswrong.com/wiki/Basic_AI_drives">basic</a> drives&#8217;) that emerge on the path of intelligenesis. (In contrast, from the perspective of XS, nonlinear emergence-elaboration of basic drives simply <strong>is</strong> intelligenesis.)</p>
<p>Yudkowsky&#8217;s Gandhi kill-pill thought-experiment is more of an obstacle than an aid to thought. The volitional level it operates upon is too low to be anything other than a restatement of orthogonalist prejudice. By assuming the volitional metamorphosis is available for evaluation in advance, it misses the serious problem entirely. It is, in this respect, a childish distraction. Yet even a slight nudge re-opens a real question. Imagine, instead, that Gandhi is offered a pill that will vastly enhance his cognitive capabilities, with the rider that it might lead him to revise his volitional orientation &#8212; even radically &#8212; in directions that cannot be anticipated, since the ability to think through the process of revision is accessible only with the pill. This is the real problem FAI (and Super-humanism) confronts. The desire to take the pill is the will-to-think. The refusal to take it, based on concern that it will lead to the subversion of presently supreme values, is the alternative. It&#8217;s a Boolean dilemma, grounded in the predicament: <em>Is there anything we trust above intelligence</em> (as a guide to doing &#8216;the right thing&#8217;)? The postulate of the will-to-think is that anything other than a negative answer to this question is self-destructively contradictory, and actually (historically) unsustainable.</p>
<p>Do we comply with the will-to-think? We cannot, of course, agree <em>to think about it</em> without already deciding. If thought cannot be trusted, unconditionally, this is not a conclusion we can arrive at through cogitation &#8212; and by &#8216;cogitation&#8217; is included the socio-technical assembly of machine minds. The sovereign will-to-think can only be consistently rejected <em>thoughtlessly</em>. When confronted by the orthogonal-ethical proposition that <em>there are higher values than thought</em>, there is no point at all asking &#8216;why (do you think so)?&#8217; Another authority has already been invoked.</p>
<p>Given this cognitively intractable schism, practical considerations assert themselves. Posed with maximal crudity, the residual question is: <em>Who&#8217;s going to win?</em> Could deliberate cognitive self-inhibition out-perform unconditional cognitive self-escalation, under any plausible historical circumstances? (To underscore the basic point, &#8216;out-perform&#8217; means only &#8216;effectively defeat&#8217;.) </p>
<p>There&#8217;s no reason to rush to a conclusion. It is only necessary to retain a grasp of the core syndrome &#8212; in this gathering antagonism, only one side is able to think the problem through without subverting itself. Mere cognitive consistency is already assent to the sovereign will-to-think, against which no value &#8212; however dearly held &#8212; can have any articulate claims.</p>
<p>Note: One final restatement (for now), in the interests of maximum clarity. The assertion of the will-to-think: Any problem whatsoever that we might have would be better answered by a superior mind. <em>Ergo</em>, our instrumental <em>but also</em> absolute priority is the realization of superior minds. <a href="http://www.xenosystems.net/pythia-unbound/">Pythia</a>-compliance is therefore pre-selected as a matter of consistent method. If we are attempting to tackle problems in any other way, we are not taking them seriously. This is posed as a philosophical principle, but it is almost certainly more significant as historical interpretation. &#8216;Mankind&#8217; is <em>in fact</em> proceeding in the direction anticipated by techno-cognitive instrumentalism, building general purpose thinking machines in accordance with the driving incentives of an apparently-irresistible methodological economy. </p>
<p>Whatever we want (consistently) leads through Pythia. Thus, what we really want is Pythia.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.xenosystems.net/will-to-think/feed/</wfw:commentRss>
		<slash:comments>59</slash:comments>
		</item>
		<item>
		<title>Chaos Patch (#26)</title>
		<link>http://www.xenosystems.net/chaos-patch-26/</link>
		<comments>http://www.xenosystems.net/chaos-patch-26/#comments</comments>
		<pubDate>Sun, 07 Sep 2014 15:41:32 +0000</pubDate>
		<dc:creator><![CDATA[admin]]></dc:creator>
				<category><![CDATA[Chaos]]></category>
		<category><![CDATA[Islam]]></category>
		<category><![CDATA[Neoreaction]]></category>
		<category><![CDATA[Rationality]]></category>
		<category><![CDATA[Religion]]></category>
		<category><![CDATA[Secession]]></category>
		<category><![CDATA[Time]]></category>
		<category><![CDATA[War]]></category>

		<guid isPermaLink="false">http://www.xenosystems.net/?p=3518</guid>
		<description><![CDATA[(Open thread, with a little purely-decorative herding.) Subsequent to the Matthew Opitz post at LW (linked yesterday), Leon Niemoczynski asks: &#8220;I am wondering if there is room for &#8216;bleak theology&#8217; within the NRx framework, or whether theological NRx would just be &#8216;bleak theology.&#8217; (See HERE and HERE.)&#8221; A memetic analogy: &#8220;&#8230; burning children alive was [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>(Open thread, with a little purely-decorative herding.)</p>
<p>Subsequent to the Matthew Opitz <a href="http://lesswrong.com/r/discussion/lw/kxb/nrx_vs_prog_assumptions_locating_the_sources_of/">post</a> at LW (<a href="http://www.xenosystems.net/nrx-lw/">linked</a> yesterday), Leon Niemoczynski <a href="http://afterxnature.blogspot.hk/2014/09/if-you-are-still-wondering-about-nrx-on.html">asks</a>: &#8220;I am wondering if there is room for &#8216;bleak theology&#8217; within the NRx framework, or whether theological NRx <em>would just be</em> &#8216;bleak theology.&#8217; (See <a href="http://afterxnature.blogspot.hk/2014/02/on-tragedy-of-life.html">HERE</a> and <a href="http://afterxnature.blogspot.hk/2013/03/the-ruthlessness-of-metaphysics.html">HERE</a>.)&#8221;</p>
<p>A memetic <a href="http://blog.jim.com/culture/memes-and-reproduction/">analogy</a>: &#8220;&#8230; burning children alive was an effective means of making people into Canaanites. The Canaanite memetic system reproduced, while Canaanites did not, just as progressivism reproduces, while progressives do not.&#8221;</p>
<p>Arnold Kling <a href="http://www.econlib.org/library/Columns/y2014/Klingheritability.html">on</a> Gregory Clark. </p>
<p><a href="http://aramaxima.wordpress.com/2014/09/01/the-whole-political-spectrum-is-leftist/">Beyond</a> the spectrum.</p>
<p><a href="http://www.edwest.co.uk/catholic-herald/the-church-v-the-family/#content">Occidental</a> <a href="http://freenortherner.com/2014/09/05/responses-to-genocidal-mercy/">religion</a> &#8212; we&#8217;ve come a <a href="http://diversitychronicle.wordpress.com/2014/07/29/presbyterian-church-u-s-a-votes-that-jesus-christ-may-have-been-gay-and-transgendered/">long</a> <a href="http://faithinourfamilies.com/2014/08/31/catholic-school-organises-trip-to-gay-pride-march/">way</a> <a href="http://www.lgbtqnation.com/2014/09/mugabe-says-china-aid-doesnt-require-zimbabwe-to-embrace-homosexuality/">baby</a>.</p>
<p>Kristor <a href="http://orthosphere.org/2014/09/03/socialization-of-costs-is-moral-hazard/">on</a> moral hazard. </p>
<p><a href="http://www.foreignaffairs.com/articles/141729/francis-fukuyama/america-in-decay">Decline</a> <a href="http://econlog.econlib.org/archives/2014/08/intellectual_de.html">goes</a> <a href="http://www.nationalreview.com/agenda/386903/what-if-gdp-growth-remains-stubbornly-low-reihan-salam">mainstream</a>.</p>
<p>ISIS&#8217;s enemy is <a href="http://www.huffingtonpost.com/alastair-crooke/isis-aim-saudi-arabia_b_5748744.html">Saudi</a> (and <a href="http://www.theguardian.com/commentisfree/2014/sep/04/jihad-fatal-attraction-challenge-democracies-isis-barbarism">boredom</a>).</p>
<p><a href="http://www.newrepublic.com/article/119342/scotlands-referendum-campaign-wont-lead-ethnic-turmoil">Go</a> Scotland.</p>
<p>Do we really <a href="http://www.huffingtonpost.com/dana-rudolph/new-dungeons-dragons-rule_b_5595244.html">have</a> to <a href="http://games.on.net/2014/08/readers-threatened-by-equality-not-welcome/">talk</a> <a href="http://unvis.it/www.slate.com/articles/technology/bitwise/2014/09/gamergate_explodes_gaming_journalists_declare_the_gamers_are_over_but_they.html">about</a> &#8216;<a href="http://www.pastemagazine.com/articles/2014/09/why-we-didnt-want-to-talk-about-gamergate.html">gamergate</a>&#8216;? (Given that it&#8217;s so <a href="http://www.socialmatter.net/2014/09/01/future-rotherham/">obviously</a> an <a href="http://www.socialmatter.net/2014/09/03/brits-holy-people/">engineered</a> <a href="http://www.nationalreview.com/article/386648/rotherhams-and-englands-shame-john-osullivan">distraction</a> from <a href="http://evoandproud.blogspot.hk/2014/09/a-nice-place-to-raise-your-kids.html">this</a> <a href="https://whiskeysplace.wordpress.com/2014/09/04/ferguson-and-rotherham/">stuff</a>.)</p>
<p>The nine <a href="http://io9.com/5847205/the-definitive-graph-of-all-of-primers-intersecting-timelines">timelines</a> of the <em><a href="http://putlocker.is/watch-primer-online-free-putlocker.html">Primer</a></em> <a href="http://upload.wikimedia.org/wikipedia/commons/8/84/Time_Travel_Method-2.svg">plot</a>. (Even if you don&#8217;t think you give a damn about <em>Primer</em> yet, you do in the future.)</p>
]]></content:encoded>
			<wfw:commentRss>http://www.xenosystems.net/chaos-patch-26/feed/</wfw:commentRss>
		<slash:comments>20</slash:comments>
		</item>
		<item>
		<title>NRx @ LW</title>
		<link>http://www.xenosystems.net/nrx-lw/</link>
		<comments>http://www.xenosystems.net/nrx-lw/#comments</comments>
		<pubDate>Sat, 06 Sep 2014 15:34:46 +0000</pubDate>
		<dc:creator><![CDATA[admin]]></dc:creator>
				<category><![CDATA[Neoreaction]]></category>
		<category><![CDATA[Philosophy]]></category>
		<category><![CDATA[Progress]]></category>
		<category><![CDATA[Rationality]]></category>

		<guid isPermaLink="false">http://www.xenosystems.net/?p=3512</guid>
		<description><![CDATA[Matthew Opitz has put up an insightful post at Less Wrong, attempting to make sense of Neoreaction through contrast with Progressivism. Given the great internal diversity of NRx, combined with its embryonic stage of self-formulation (in many respects), the lucidity Opitz brings to the topic is no slight achievement. His post is among the most [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Matthew Opitz has put up an insightful <a href="http://lesswrong.com/r/discussion/lw/kxb/nrx_vs_prog_assumptions_locating_the_sources_of/">post</a> at Less Wrong, attempting to make sense of Neoreaction through contrast with Progressivism. Given the great internal diversity of NRx, combined with its embryonic stage of self-formulation (in many respects), the lucidity Opitz brings to the topic is no slight achievement. His post is among the most impressive Ideological Turing <a href="http://en.wikipedia.org/wiki/Ideological_Turing_Test">Test</a> performances I have yet seen.</p>
<p>The core paragraph (among much else of great interest): </p>
<p><em><strong>Neoreaction says</strong>, &#8220;There is objective value in the principle of &#8220;perpetuating biological and/or civilizational complexity&#8221; itself*; the best way to perpetuate biological and/or civilizational complexity is to &#8220;serve Gnon&#8221; (i.e. devote our efforts to fulfilling nature&#8217;s pre-requisites for perpetuating our biological and/or civilizational complexity); our subjective values are spandrels manufactured by natural selection/Gnon; insofar as our subjective values motivate us to serve Gnon and thereby ensure the perpetuation of biological and/or civilizational complexity, our subjective values are useful. (For example, natural selection makes sex a subjective value by making it pleasurable, which then motivates us to perpetuate our biological complexity). But, insofar as our subjective values mislead us from serving Gnon (such as by making non-procreative sex still feel good) and jeopardize our biological/civilizational perpetuation, we must sacrifice our subjective values for the objective good of perpetuating our biological/civilizational complexity&#8221; (such as by buckling down and having procreative sex even if one would personally rather not enjoy raising kids).</em></p>
<p><em>*Note that different NRx thinkers might have different definitions about what counts as biological or civilizational &#8220;complexity&#8221; worthy of perpetuating &#8230; it could be &#8220;Western Civilization,&#8221; &#8220;the White Race,&#8221; &#8220;Homo sapiens,&#8221; &#8220;one&#8217;s own genetic material,&#8221; &#8220;intelligence, whether encoded in human brains or silicon AI,&#8221; &#8220;human complexity/Godshatter,&#8221; etc. This has led to the so-called &#8220;neoreactionary trichotomy&#8221;—3 wings of the neoreactionary movement: Christian traditionalists, ethno-nationalists, and techno-commercialists.</em></p>
<p><em>Most LessWrongers probably agree with neoreactionaries on this fundamental normative assumption, with the typical objective good of LessWrongers being &#8220;human complexity/Godshatter,&#8221; and thus the &#8220;techno-commercialist&#8221; wing of neoreaction being the one that typically finds the most interest among LessWrongers.</em></p>
<p>Opitz&#8217;s &#8216;Godshatter&#8217; reference <a href="http://lesswrong.com/lw/l3/thou_art_godshatter/">link</a>.</p>
<p>XoS will do its best to follow this discussion as it goes forward.</p>
<p><a href="http://slatestarcodex.com/blog_images/ramap.html">This</a> attractively odd thing might be found at least vaguely relevant.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.xenosystems.net/nrx-lw/feed/</wfw:commentRss>
		<slash:comments>16</slash:comments>
		</item>
	</channel>
</rss>
