NRx Thought

It isn’t entirely clear whether Warg Franklin is asking: How does NRx think? Nevertheless, his introduction to postrationalism cannot but contribute to such a question (whether the latter is taken descriptively, prescriptively, or diagonally). The excellent onward links merit explicit mention (1, 2, 3).

How NRx thinks is a critical index of what it is.

Outside in is probably ‘postrationalist’. What it certainly is, however, is disintegrationist. It translates the caution against rationalist hubris — dubbed reservationism by Moldbug (in the link provided) — as a general antipathy to global solutions (and their attendant universalist ideologies). To be promoted, in the place of any Great Answer, is computational fragmentation. Whenever the research program meets an obstacle, divide it. “When you come to a fork in the road, take it.” Or at least, since selection is inescapable, defend the fork (as such) first, and the chosen path only secondarily.

Delegate selection to Gnon. To do so not only husbands resources, but also maximizes overall experimentation. Intelligence is scarce. It is needed, above all, for tinkering well. Global conceptual policing is an exhausting waste, and an unnecessary one, since territorial distribution, or some effective proxy, can carry it for free. Security capacity is needed to fend off those determined to share their mistakes. Using it, instead, to impose any measure — whatsoever — of global conformity is a pointless extravagance, and a diversion.

Whether articulated as epistemology, or as meta-politics, NRx is aligned with the declaration: There is no need for us to agree. Refuse all dialectics. It is not reconciliation that is needed, but definitive division. (Connect, but disintegrate.)

Think in patches. Eventually, some of them will work.

October 28, 2015 · admin · 31 Comments »
FILED UNDER :Philosophy


31 Responses to this entry

  • The Electric Philosopher Says:

    ‘Postrationality’ as defined here, although well put, does strike me as deployable (at least as a potential abuse) as an excuse not to engage in the hard project of changing one’s mind. This need not be in the name of a universalist conversion experience, but even in the simple sense of not probing one’s assumptions if they’re intuitive or strike you as ‘common sense’.


    Erik Reply:

    When a man has fallen into the ditch on one side of the road, getting back on course is inevitably going to involve some movement in the direction of the ditch on the other side. Certainly there is potential abuse where you describe, but I judge that the progressives have been engaging in far more actual abuse here. Wielding projections of increased GDP, nonsense about race and sex and immigration and utilitarianism, lists of fallacies and biases, circular reasoning and appeal to the consensus, they’ve yammered on “I have a study, you are obliged to change your mind to agree with me” — where the study is tailored to meet criteria that are easily quantifiable, such as p = 0.03 and N = 1400, and as a result has succumbed to Goodhart’s Law.


    The Electric Philosopher Reply:

    We seem to be expressing roughly the same point here, which might be defined as: ‘Never assume that someone’s prejudices don’t influence their reasoning.’ My concern is that with its emphasis on ‘common sense’, postrationality as defined above grants itself an arguably more powerful potential for abuse for the reasons I’ve described above. ‘Traditional’ rationality has its own set of prejudices, of course, rooted in the thinking of the Enlightenment, but isn’t a pursuit of knowledge that actively attempts to undermine one’s pre-existent, pre-rational prejudices less open to abuse for that very reason?

    Of course, a lot of this depends on what one means by ‘abuse’ and ‘prejudice’.


    Grotesque Body Reply:

    What I believe is good common sense, what you believe is prejudice.

    Warg Franklin Reply:

    >postrationality as defined above grants itself an arguably more powerful potential for abuse for the reasons I’ve described above.

    I don’t know if you’re doing it, but I want to call out the mistake people make in treating this as some kind of legal/moral/social issue. If we think in terms of possibly adversarial agents trying to establish public truth, terms like “argument”, “justify”, “defensible”, “prejudice”, “why should I agree with you”, and so on are appropriate. As much as “rationalists” like to pretend that rationality isn’t about winning debates, a lot of the terminology and baggage came from exactly the problem of making sure the winner of a debate was more correct.

    But if we don’t particularly care to debate, and are interested in getting shit done for ourselves, and have good but private reason to think ourselves competent, terms like “cultivated common sense” and “piss off I know what I’m doing” may be more appropriate, despite having absolutely no social authority.

    So yeah, just to be clear, don’t take defence of postrationalism as an attempt to legitimize bad reasoning in the public sphere, it’s an attempt to communicate a private style that I’ve found more useful in practice.

    >isn’t a pursuit of knowledge that actively attempts to undermine one’s pre-existent, pre-rational prejudices less open to abuse for that very reason?

    I don’t think so. Be aware that you are probably wrong and insane on multiple points for various reasons having to do with lack of experience, miseducation, and demonic possession. I’ve personally purged quite a few such insanities. But the rationalist idea that the brain is just teeming with biases and unjustified prejudices that must be purged not by experience but by tinkering and “debiasing” is exactly the premise that postrationalism is arrayed against. So yes, postrationalism seems recklessly insecure against those vulnerabilities, but that is because it has come to believe that they don’t actually exist in practice.

    Warg Franklin Reply:

    “Be aware that you” should read “Yes, be aware that you”. I am trying to communicate that simple awareness of possible insanity is a sufficient countermeasure, not that you in particular are insane. Further, I’d like to add that we should jump at the chance to change our minds towards correctness, and this has no conflict with postrationalism as such.

    Warg Franklin Reply:

    >does strike me as (at least as a potential abuse) as deployable as an excuse to not engage in a hard project of changing one’s mind.

    The problem with rationalism is the insane belief that you can be motivated to be irrational, and somehow the rules will cause you to be rational anyways. Your brain must actually want to do the job; there is no set of rules that can cause it to do the job against its will. You always have to try.

    Rationalism is safety scissors, postrationalism is a table saw. Rationalism prevents you from hurting yourself too obviously and stupidly, but postrationalism was forged in the industrial fires of the real world to actually get shit done. Yes you can cut your hand off. You are using a table saw and not safety scissors because you are capable of taking that risk.

    A real tool is always abusable. To make a tool do good things, you have to actually use it for good things. There is no tool that will make a craftsman out of an amateur.

    It’s “Post”-Rationalist because we’ve been through rationalism, learned how not to hurt ourselves, and became frustrated with the inadequacies of rationalism as a tool for thought.


    The Electric Philosopher Reply:

    OK, if I’m reading you correctly (and this is at the end of a long day at work and I’m being powered mostly by Goth rock at the moment), what you mean by ‘postrationalism’ is a form of pragmatism at heart. It isn’t concerned, strictly speaking, with ‘truth’ but with usability and consequence.

    This is actually very reminiscent of Nietzsche’s anxieties about the cult of reason. His concern is that the pursuit of truth for its own sake is a nihilising tendency originating with Plato and communicated by Christianity. He sees no reason to assume that truth=virtue=happiness, that reason is somehow superior to the passions, or that it will create a world that will be ‘better’ simply because it’s more rational.


    The Electric Philosopher Reply:

    @Warg Franklin: these are all points far too interesting for me to grapple with after having just seen Godspeed You! Black Emperor. I’ll have a proper think and get back to you later as you do seem to be saying some very interesting things.


    SanguineEmpiricist Reply:

    I agree; this is my post on “postrationalism”. It’s also trying to sneak “post” onto an entire research programme when the work hasn’t been done. Warg doesn’t like heuristics and biases, saying it’s “bunk” when it isn’t bunk at all.


    Posted on October 28th, 2015 at 3:33 pm Reply | Quote
  • Nick B. Steves Says:

    So there is a need for us to agree that there is no need for us to agree?


    Grotesque Body Reply:

    Even if we belong to different tribes, we agree on the fundamental reality that my tribe is over here and your tribe is over there.


    admin Reply:

    Not really. We can get on with not agreeing in any case.


    Izak Reply:

    I’m far, far from a reader of Less Wrong, but don’t they all really like the idea that no two rational people can ever “agree to disagree”?

    Like in this link:

    Aumann’s agreement theorem, or whatever?

    Reading stuff like this doesn’t teach me anything at all about how to live life in a meaningful way, or even how to correctly solve problems of logic, which seems to be the point of rationality. But it does make me suspect that the author is a huge bummer at a party.

    It seems to me that when people say “we can agree to disagree,” they’re not using the word “agree” literally. They’re arguing for mutual taciturnity, the implication being that changing the subject is equal to a form of unstated consent in favor of disagreement. Of course “agreement” has nothing to do with it. The statement is so popular because people recognize it as a paradox, like a Zen koan — a device to create a feeling of sublime confusion, enough to nullify the perceived solve-ability (or even importance) of the problem. Arguing against that statement strikes me as about as silly as saying that we cannot actually “kill the Buddha in the middle of the road” because that would violate Buddhist ethics, or the non-aggression principle, or something like that. Well, yeah. Duh.

    That Aumann’s agreement theorem even exists implies that the idea of mutual taciturnity, or fracturing dialectics, or whatever, is a huge problem that must be solved and dismissed head-on. You wouldn’t bother to open your mouth about such a cliched statement like “we’ll agree to disagree” unless you carry a prejudice of classical liberalism, one which suggests that every problem thrown out into the aether, no matter how meaningless, must come to a consensus between rational people. Such a suggestion forms the lifeblood of democratic politics, where endless debate is privileged and understood as a mature form of discourse.


    Grotesque Body Reply:

    Upon hearing Zeno’s paradoxes demonstrating the impossibility of motion, Diogenes the Cynic stood up, walked around, and sat back down again.

    Solvitur ambulando.


    Posted on October 28th, 2015 at 5:26 pm Reply | Quote
  • haishan Says:

    Related to this is the theory of ensemble learning. In regression or classification problems, you’re often better off finding many distinct “weak” approximate solutions and combining them than you are trying to find the single “best” model. What becomes really important is diversity of hypotheses — to the point that deliberately introducing noise or discarding or obfuscating features can be a boon (as in random forest classifiers).

    Also related: Phil Tetlock’s studies on forecasting among experts and amateurs, where the one thing that really hurts your prediction ability is having a Grand Unified Theory of the phenomena you’re trying to predict.
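    The ensemble intuition is easy to demonstrate: provided the weak models’ errors are reasonably independent (the “diversity” condition), a majority vote of mediocre classifiers beats any one of them. A minimal sketch, using simulated weak classifiers rather than any particular library (the accuracy of 0.65 and the 25-model committee are illustrative choices, not anyone’s published setup):

```python
import random

random.seed(0)

def weak_predict(true_label, accuracy=0.65):
    """A 'weak' binary classifier: correct with probability `accuracy`,
    with errors independent across calls (the diversity assumption)."""
    return true_label if random.random() < accuracy else 1 - true_label

def majority_vote(true_label, n_models=25):
    """Combine 25 independent weak predictions by majority vote."""
    votes = sum(weak_predict(true_label) for _ in range(n_models))
    return 1 if votes > n_models / 2 else 0

trials = 10_000
single_acc = sum(weak_predict(1) == 1 for _ in range(trials)) / trials
ensemble_acc = sum(majority_vote(1) == 1 for _ in range(trials)) / trials
print(f"single weak model: {single_acc:.2f}")   # close to 0.65
print(f"25-model ensemble: {ensemble_acc:.2f}")  # substantially higher
```

    The catch is the independence assumption: if the weak models all make the same mistakes, the vote gains nothing — which is exactly why random forests deliberately inject noise (bootstrapped samples, random feature subsets) to decorrelate their trees.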


    Posted on October 28th, 2015 at 6:19 pm Reply | Quote
  • NRx Thought | Reaction Times Says:

    […] Source: Outside In […]

    Posted on October 28th, 2015 at 6:33 pm Reply | Quote
  • Slumlord Says:

    Post-rationalism sounds a lot like pragmatism. Shit does get done by this mode of thought, at least until the shit hits the fan. A lot of post-GFC financial policy has been post-rational. “Kicking the can down the road” is pragmatic, but it’s not rational.


    Posted on October 28th, 2015 at 11:00 pm Reply | Quote
  • Warg Franklin Says:

    “We” in “postrationalism” refers to the specific clique behind TFP, not to NRx as such, though NRx is pretty postrationalist.

    “Disintegrationism” is a good name for the meta-level current that has informed a lot of what we are doing here.

    * Socialists want socialism? Why don’t they just go build socialism somewhere else? Critically, why don’t they just allow capitalism to exist somewhere?

    * Anti-whites think whites are a problem? Why won’t they let us separate?

    Disintegrationism almost immediately annihilates itself when applied to the problem of how to think or what to do. It yields only one prescription, “separate and try it”, which triggers the immediate chain of reasoning: “they won’t let us”; “why won’t they let us?”; “because this is actually a game of existential dominance, not mere disagreement”.

    Thus, besides its interestingness as a system design principle, its most important role is not as a way of thinking, but as a rhetorical de-cloaking device against dominators masquerading as critics. They have to either accept the separation, come up with some new bullshit excuse for their dominance games, or openly reveal themselves. It puts them on the defensive. A clever device.

    Its strongest direct counterarguments are these two:

    * “You do not have the resources or coordination to separate” (because we control all the land and the thoughts of all your would-be comrades).

    * “If you were allowed to do that, what’s to stop you from becoming powerful and coming back and disrupting our delicate peace? To preserve the delicate unipolar balance of peace, we can’t allow that.” (This is what happens with innovation, why innovation gets banned, and why xenocide is the prudent response to any alien intelligence.)

    That second one is what I’m really interested in, especially as applied to AI and unhinged Capitalism. You take the position that it is more interesting to allow disintegrative innovation arms races, where I so far take the position of prudent preemptive xenocide.

    Your best previous argument against “xenocidism” has been that vulnerability to counter-xenocide is a necessary risk of intelligence and innovation, and the only truly safe path is dying with a whimper. I am somewhat convinced of that, but there is the additional question then of whether all counter-xenocide innovation risks should be taken, or only the best ones. For example, imagining a unipolar world government, should it take action to control AGI research to minimize counter-xenocide risk while retaining innovation on “dumb” things?

    Is that still current, and if so, can you expand on how you think of it?


    Dark Psy-Ops Reply:

    It’s likely that even to know how to retain innovating efforts in controlled or ‘dumb’ AGI research projects, we’d need a comprehensive list of dangerous or ‘counter-xenocidal’ smart AGI projects that were successful, in order to know what to avoid. This is where disintegrationism is crucial: if one patch f*cks up its immanent methods of regulating smart AGI innovation and stupidly, perhaps accidentally, creates Skynet, then we’d have a pertinent example of how ‘not’ to go about creating an AGI that is beyond human control. Until then we’re shooting shit in the dark: we don’t even know the level of deliberate retardation capable of defusing the risk of uncoordinated emergence, or how to effectively deviate from all potentially intelligent research avenues that could be advantageous to hostile AGI, or even how to identify our enemy beyond the rather nondescript categorical parameters of absolute xenogenic otherness. Otoh, if, like climate science, AI risk (or the axiom of xenocide) becomes a deductive rationale for global government and massively centralized regulation — devoted to nullifying capital-driven AGI, enforcing stultifying equilibrium, dismantling smart industries, and other difficult goals — then it’ll most likely become more popular than global warming ever was, and quickly win global legitimacy.
    Who could stand in the way of a crusade in the name of ultimate human interests against the infiltration of xenomorphic machine-minds from the depths of oblivion?


    Warg Franklin Reply:

    If one patch fucks up and creates skynet, we don’t have experience that can be learned from, because “we” don’t exist. There is no learning the hard way in X risk.


    admin Reply:

    Yes, this is the final argument. The Left has to fall in line with your threat analysis, as the limit case for global governance (and against the Patchwork).
    The rejoinder: Skynet is just a weapon, in an arms race. If you’re out-competed by someone else building it (when you have no response), then whatever happens next is just deserts for your incompetence, and backwardness. Hail Gnon.
    Accelerate your tech-comm process if you want to live.

    (If anyone suspects a disingenuous element to this (XS) counter-argument, viz its ultimate consistency with human security imperatives, they might well be on to something …)

    Posted on October 28th, 2015 at 11:13 pm Reply | Quote
  • Kjell Says:

    “We’re not enemies, we just disagree
    We all disagree, I think we should disagree”
    —Julian Casablancas

    This NRx metapolitics formulation is a bit in the vein of Neal Stephenson’s “Galapagan isolation” vs. the “nervous corporate hierarchy” dichotomy (or archipelago vs. large continent), as per [], which would seem to agree that, yes, reservationism / disintegrationism is likely a wise innovation-fostering policy.

    However, S. Alexander’s ‘Gnon is really just Moloch’ critique — in emphasizing those times when the blind, distributed architecture of game-theoretical forces just seizes the steering wheel and drives us into a ditch — lingers as a point that pairs with the conclusions of Stephenson’s other missive on the nature of innovation []: that path dependence and difficulty in getting beyond local maxima can also be the less than desirable result of overdelegated selection to Gnon.

    I’d tie-in Seveneves’ racial diversification too (was pleased that made a chaos patch) for a Stephenson hat-trick here, but it’d be stretching too far off topic.


    Posted on October 29th, 2015 at 2:20 am Reply | Quote
  • SVErshov Says:

    This fascinating idea brings humans closer to the machine; one way or another it has to happen. Fragmented data on a hard disk doesn’t create a problem for the computer: it can read data from different parts of the disk by using the allocation table.

    Some vulnerabilities of postrationalism (abuse, falsification, flooding with irrelevant data) can be addressed by using a blockchain. In that case everybody has the same copy (no data falsification possible), and each post is authenticated with a public key. Different tools can be developed for analysing and extracting data; for example, this whole blog could be put on a blockchain.
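    The tamper-evidence property being claimed can be sketched in a few lines — a minimal hash chain only, deliberately omitting the public-key signatures, consensus, and distribution that a real blockchain would add:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's content deterministically (sorted keys)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    """Append a block whose hash covers its data and the previous hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev": prev}
    block["hash"] = block_hash({"data": data, "prev": prev})
    chain.append(block)
    return chain

def verify(chain):
    """Recompute every hash; any edited block breaks the chain."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev:
            return False
        if b["hash"] != block_hash({"data": b["data"], "prev": b["prev"]}):
            return False
        prev = b["hash"]
    return True

chain = []
for post in ["NRx Thought", "comment 1", "comment 2"]:
    append_block(chain, post)
print(verify(chain))          # True: untampered chain checks out
chain[1]["data"] = "edited"   # retroactive edit
print(verify(chain))          # False: the edit is detectable
```

    Everybody holding the same copy can run the same verification, which is all the “no falsification” claim amounts to here; authenticating each post to an author would additionally require signing each block with that author’s key.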


    Posted on October 29th, 2015 at 7:10 am Reply | Quote
  • blankmisgivings Says:

    Late Wittgenstein and NRx? Natural bedfellows?


    OLF Reply:

    Not so much. However, Christian NRx and postrationalist NRx are strange bedfellows. ’Tis likely that it will blow up in the future.


    Kjell Reply:

    “Law and Language: Cardozo’s Jurisprudence and ‪‎Wittgenstein‬’s Philosophy” interprets the debate over legal realism vs. formalism through the lens of ‪‎language games‬ [], if you wanna get into that. Dunno that I’d use the word bedfellows, but strikes me as a pretty clear connection to the antecedents of Yarvin’s formalist manifesto [].

    Open question: Are NRx techies that show more interest in Urbit than Ethereum [] just exhibiting brand loyalty to a Moldbug product, or is there some more compelling reason?


    Posted on October 29th, 2015 at 3:39 pm Reply | Quote
  • Lightning Round – 2015/11/04 | Free Northerner Says:

    […] NRx thought. […]

    Posted on November 4th, 2015 at 6:33 am Reply | Quote
  • Brett Stevens Says:

    Conservatives generally oppose universal solutions in favor of the particularized.

    Rationalism uses categorical logic, applied through selective sampling, to create dialectics as a means of maintaining universals.

    The ultimate balance is realism, but it needs an anchor, so we have transcendentals: Reverence (sensu Woodruff), forms (sensu Plato), goodness, beauty, and truth.

    The catch — as Nietzsche pointed out, and as a good nihilist I repeat — is that these too are not universal. They are esoteric. Only those who understand their value seek them, and 98.6% of humanity (approximately) is oblivious to this need.


    Posted on November 5th, 2015 at 3:04 pm Reply | Quote
