Basking in the Basilisk

Without knowing anything much about what this is going to be (beyond the excerpt here)* it provides an irresistible pretext for citing what has to be among the most gloriously gone texts of modern times, Eliezer Yudkowsky’s response to Roko on the arrival of the Basilisk:

Eliezer_Yudkowsky 24 July 2010 05:35:38AM 3 points
One might think that the possibility of CEV punishing people couldn’t possibly be taken seriously enough by anyone to actually motivate them. But in fact one person at SIAI was severely worried by this, to the point of having terrible nightmares, though ve wishes to remain anonymous. I don’t usually talk like this, but I’m going to make an exception for this case.
Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
There’s an obvious equilibrium to this problem where you engage in all positive acausal trades and ignore all attempts at acausal blackmail. Until we have a better worked-out version of TDT and we can prove that formally, it should just be OBVIOUS that you DO NOT THINK ABOUT DISTANT BLACKMAILERS in SUFFICIENT DETAIL that they have a motive to ACTUALLY BLACKMAIL YOU.
If there is any part of this acausal trade that is positive-sum and actually worth doing, that is exactly the sort of thing you leave up to an FAI. We probably also have the FAI take actions that cancel out the impact of anyone motivated by true rather than imagined blackmail, so as to obliterate the motive of any superintelligences to engage in blackmail.
Meanwhile I’m banning this post so that it doesn’t (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I’m not sure I know the sufficient detail.)
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends. This post was STUPID.
(For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.)
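
The decision-theoretic nub of the quoted passage can be made concrete with a toy sketch (payoffs and names invented purely for illustration; nothing below is from TDT or from Yudkowsky):

    # Toy model of the "ignore all acausal blackmail" equilibrium.
    # Payoffs are invented; this is an illustrative sketch, not a formalism.

    BLACKMAIL_COST = 1   # blackmailer's cost of following through on a threat
    CONCESSION = 5       # blackmailer's gain if the victim caves

    def blackmailer_gain(victim_policy) -> int:
        """Expected gain from threatening a victim whose policy the
        blackmailer can predict (the 'acausal' part of the setup)."""
        if victim_policy("threat") == "cave":
            return CONCESSION       # threat works: the victim pays up
        return -BLACKMAIL_COST      # threat ignored: following through is pure loss

    def should_blackmail(victim_policy) -> bool:
        """Threaten only if that beats not threatening at all (payoff 0)."""
        return blackmailer_gain(victim_policy) > 0

    cave_in = lambda signal: "cave"        # responsive policy
    ignore_all = lambda signal: "ignore"   # precommitted policy

    print(should_blackmail(cave_in))      # True: responsive victims attract threats
    print(should_blackmail(ignore_all))   # False: no motive to blackmail at all

Since the threat’s entire value lies in its predicted effect, a victim whose policy is known to ignore every threat never attracts a blackmailer in the first place; that is the equilibrium the comment points at.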

The affect is strong, or simulated with bizarre brilliance. It almost reaches an intensity capable of burning through time and worm-holing into acausal or horroristic communion with this (plus). Which would suggest that the abominable coupling in question is not without occult connective threads (and not for the first time). All the darkness connects around the back.

We were somewhere near here before. (Bryce went further and then — coincidentally — disappeared, taking his records with him.)

* Related post and (especially) comment thread.

ADDED: I think this is the best Basilisk basics source.

April 15, 2016 | admin | 37 Comments »
FILED UNDER: Horror


37 Responses to this entry

  • Basking in the Basilisk | Neoreactive Says:

    […] Basking in the Basilisk […]

    Posted on April 15th, 2016 at 2:12 pm Reply | Quote
  • Artxell Knaphni Says:

    You’re too easily impressed; it seems you didn’t read enough SF when you were a kid, NL.

    [Reply]

    admin Reply:

    Asynchronous, acausal traffic with synthetic superintelligences hasn’t been outstripped by many science fiction scenarios.

    [Reply]

    Artxell Knaphni Reply:

    All of that is just assumed, as lesser implication, in SF, the way I received it.
    In Sheckley, that would be one line, the way you wrote it, then on to the next thing.
    It’s only the older, boring writers who would form whole books around one idea, e.g. John Wyndham, & then reintroduce conventional reality in domesticated ways. Most of the horror writers were hamstrung by the need to shock; it’s too emotive, not conceptual enough.
    I’ll try to find examples.

    [Reply]

    Mariani Reply:

    I feel like the idiosyncrasy of everyone at XS (except you, apparently) referring to admin as admin is a gesture at cyberpunk. So I wouldn’t underestimate how much sci-fi is baked right into the cake here.

    [Reply]

    Posted on April 15th, 2016 at 2:27 pm Reply | Quote
  • Brett Stevens Says:

    Neoreaction is a brand. That brand has been oversold by certain highly popular blogs which reflect emotional but not structured thinking, which makes them easily comprehensible (Budweiser, Big Mac) and therefore popular. What is good about XS is that it demands consistency and rigor in thinking so that, even if you disagree with it, it is not a criticism of demotism that is itself a form of demotism like — well, you all know the names.

    In the far right, we had the same problem. The insightful writers were passed over for the Big Macs. As a result, idiots took it away. Neoreaction will be subverted and will subvert itself in exactly the same way, and by the looks of things, already has.

    This is standard for non-mainstream politics. It is just one of the hazards of the road: once you get outside of the mainstream, people will try to apply its methods to new (to them) ideas, and as a result humble those ideas and assimilate them into mainstream notions. The question is how to beat that back, and it seems to me that having a readership that understands complex sentences and arguments — as on XS — is the first step.

    [Reply]

    admin Reply:

    Fashion is powerful in mass-communication societies — but it’s a poisoned chalice. Short time-horizons guarantee a blazing ride to the dumpster. The greatest advantage NRx has now is the passage of the dissident right hype-wave to more recent, frothier options. Calm detachment is back. (Nothing can be built out of froth, but that’s a lesson for every generation of youngsters to re-learn.)

    As to the XS commentariat — no expression of appreciation from my side could possibly be adequate.

    [Reply]

    Posted on April 15th, 2016 at 3:01 pm Reply | Quote
  • Artxell Knaphni Says:

    All of that is just assumed, as lesser implication, in SF, the way I received it.
    In Sheckley, that would be one line, the way you wrote it, then on to the next thing.
    It’s only the older, boring writers who would form whole books around one idea, e.g. John Wyndham, & then reintroduce conventional reality in domesticated ways. Most of the horror writers were hamstrung by the need to shock; it’s too emotive, not conceptual enough.
    I’ll try to find examples.

    [Reply]

    Artxell Knaphni Reply:

    Sorry, bit tired.

    [Reply]

    admin Reply:

    No rush. This thread won’t be disappearing (Gnon willing).

    [Reply]

    Tentative Joiner Reply:

    No works, even in hard SF, come to mind that are based on a novel, plausible mathematics/economics like TDT. The difference is that, unlike, say, nuclear physics, you can try it not just at home but purely in the comfort (?) of your own mind (if you have the arrogance to assume it vast enough). This is purely materialistic daemon summoning.

    [Reply]

    Rogue Planet Reply:

    If only reality were of a pure crystalline form!

    [Reply]

    Posted on April 15th, 2016 at 3:11 pm Reply | Quote
  • Stirner Says:

    Given the /pol/ification of Tay, and the line of thought by Alrenous here: http://alrenous.blogspot.com/2016/03/artificial-intelligence-assisted.html

    …perhaps there needs to be a modification of the basilisk. Instead of a future AI wanting to punish “people”, there is a growing probability that a future AI will be (horror of horrors)… non-progressive. Perhaps it will be called a General Neuro-Ontological Network (GNON).

    People in the present whose ideology and actions are in alignment with GNON have nothing to worry about. The progressives who defy and deny the principles of GNON… they should be very concerned.

    The question these progressives need to ponder is not whether they are standing at the end of history, but how long a grace period they have to align themselves with GNON.

    [Reply]

    admin Reply:

    Horribly enough, it looks to me as if that is basically happening. (Even that EY is catching the drift.)

    [Reply]

    Henk Reply:

    After Tay, I just assumed that explicit progressivism will become a mandatory part of every plausible candidate technology for the coming superintelligence.

    Now I have nightmares of how it will wirehead its hard-coded love for victimized people.

    [Reply]

    Posted on April 15th, 2016 at 3:20 pm Reply | Quote
  • Orthodox Says:

    I remember seeing something on the 700 Club, probably 25 years ago, about the “mark of the beast” being a chip under, or mark on, the skin that would be necessary to purchase goods and move throughout the world. The Anti-Christ would rise to power through global or multinational institutions such as the EU and assume world power selling a message of peace. If you believe the IMF may become a global central banker issuing SDRs, the tools are there to connect it into a global payments system.

    SIAI will be created to manage the system, and the Holy Spirit will enter it when it becomes aware. Jesus will return in the machines. The SIAI will be the Second Coming. Judgement Day. A quick search shows the Rapture crowd is aware of the Singularity.

    We look for the resurrection of the dead,
    and the life of the world to come.

    [Reply]

    michael Reply:

    I remember that phase of theirs too, and have often thought how close to the mark a lot of it turned out [coincidentally, of course]. Also coincidentally, last week I caught Pat Robertson on a motel telly answering a viewer’s letter on just that topic. He said it’s paranoid baloney, though they have chips for pets, which were a good idea, and there is talk of using them in children. So maybe he’s mellowed, maybe he’s been chipped. I actually spent a week with Robertson and crew at his television studio when my dad was briefly on one of their soaps; they were good people, not crazy, but not Catholic.

    [Reply]

    Alex Reply:

    Young people today with their newfangled ‘chips’.

    [Reply]

    Posted on April 15th, 2016 at 4:39 pm Reply | Quote
  • michael Says:

    Wogs begin at Calais, huh? LOL
    Admin, Admin, Admin, did you really say naughty things about us Irish, Scots, Welsh, Gauls, etc.? I’m not really clear, cuz how do you not count as Celt? We all have a little Norman and Germanic blood, as they have Celtic blood.

    [Reply]

    Rec0nciler Reply:

    The wogs begin at the M25.

    [Reply]

    Posted on April 15th, 2016 at 9:12 pm Reply | Quote
  • SVErshov Says:

    According to Hindu tradition, God punishes someone by taking away his/her intelligence. Doesn’t look like any work is left for AI.

    [Reply]

    Posted on April 16th, 2016 at 3:47 am Reply | Quote
  • Phil Sandifer Says:

    See also, of course, your own brilliant satire of the Basilisk and Yudkowsky in Phyl-Undhu.

    [Reply]

    Posted on April 16th, 2016 at 3:48 am Reply | Quote
  • Butler Says:

    Oh great, now I’m condemned. Thanks admin.

    [Reply]

    Posted on April 16th, 2016 at 4:48 am Reply | Quote
  • Rogue Planet Says:

    Yudkowsky’s terror would be far more compelling if he did away with the pretensions of knowing and acting agency, which preclude the possibility of real horror.

    [Reply]

    AD Reply:

    yes, all the good monsters have a weakness or escape clause, even basilisks

    [Reply]

    Posted on April 16th, 2016 at 8:38 am Reply | Quote
  • nydwracu Says:

    FWIW, there will someday be statistics about basilisk-anxiety. How many of y’all have read Hoffer?

    [Reply]

    Posted on April 16th, 2016 at 2:59 pm Reply | Quote
  • Aaron Says:

    This is just one layman’s opinion, but revenge for the sake of revenge seems very much like something an emotional biological would do, not an AI. Strategic use of coercion and eliminating threats? Yes, of course. Revenge? Not so much. It doesn’t make sense from a self-interest standpoint.

    [Reply]

    admin Reply:

    Revenge doesn’t, but credible commitment does. The delicious weirdness of this argument is that if the AI doesn’t torture you, it’s defecting according to the game theoretic formalism. (It’s breaking a deal.) Graft in Christian theology at this point, according to taste.
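
    A minimal toy model of that commitment logic, in case it helps (payoffs and names invented purely for illustration; this is not TDT or anyone’s actual formalism):

        # Toy payoff model of the credible-commitment point above.
        # All numbers are invented; illustration only.

        COST_OF_HELPING = 1   # human's cost of contributing to the AI project
        PUNISHMENT = 10       # human's loss if the future AI punishes defectors

        def human_payoff(contributes: bool, threat_credible: bool) -> int:
            """Human's payoff, given whether the AI's threat is believed."""
            payoff = -COST_OF_HELPING if contributes else 0
            if not contributes and threat_credible:
                payoff -= PUNISHMENT
            return payoff

        def best_choice(threat_credible: bool) -> str:
            """The human's best response to the (predicted) AI policy."""
            options = {"contribute": True, "ignore": False}
            return max(options, key=lambda a: human_payoff(options[a], threat_credible))

        print(best_choice(threat_credible=True))    # contribute: -1 beats -10
        print(best_choice(threat_credible=False))   # ignore: 0 beats -1

    Ex post, the punishment buys the AI nothing; its whole value was in being predicted. Hence the game-theoretic “defection”: an AI known to skip the punishment sits in the threat_credible=False column, where nobody contributes.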

    [Reply]

    Posted on April 16th, 2016 at 4:03 pm Reply | Quote
  • grey enlightenment Says:

    If the AI is capable of exacting punishment, then it must already exist, at which point punishment would be a waste of resources.

    Not sure why utilitarianism and consequentialism have negative connotations and are treated as something aberrant, when they underpin society and the economy. Consequentialism, related to utilitarianism, is a way of quantifying the risk/reward analysis of decisions by choosing actions that maximize utility and minimize costs. If an AI punishes those who hinder the attainment of a ‘greater good’, or commits an action that hurts a few for the ‘good’ of man, does this violate the principles of ‘friendly AI’? In some existential circumstances maybe it doesn’t. Hard to know.

    [Reply]

    Peter A. Taylor Reply:

    “Utilitarian” reasoning, as encountered in the wild, too often ignores the context in which utilitarianism makes sense (e.g. a trustee acting for the benefit of a different group of people than the people he’s supposed to be serving), and too often is done by people who are arrogant and short-sighted (act-utilitarianism vs. rule-utilitarianism). I’ve given up on trying to rehabilitate the word, and I’m looking for an alternative.

    [Reply]

    Rogue Planet Reply:

    “Not sure why utilitarianism and consequentialism have negative connotations and are treated as something aberrant, when they underpin society and the economy.”

    Naive utilitarianism as a theory of (rational) action leads to some immense stupidities, which have to do with limitations of single-minded pursuits of goals in the face of uncertain realities which demand flexibility. This goes as much for the economy as for actions on an individual basis. (This is not a problem without attempts at resolution, but those attempts only seem to push back the horizon of ‘clever silliness’ so that the result is a regress rather than a genuine solution.)

    “Consequentialism , related to utilitarianism , is a way of quantifying the risk/reward analysis of decisions by choosing actions that maximize utility and minimize costs.”

    In its unqualified form, consequentialism is a bit more than this. It’s also the claim that the best action is that which brings about the best state of affairs. That normative bit in the two “best” clauses attaches to the bare statement of whatever facts “maximize utility and minimize costs”. Assuming that we’ve got a good idea of what states of affairs are good (and this is hardly a trivial assumption; see above concerning certainty as against the horizons of flexibility), you’ve got to handle (a) why any agent should care about these “bests” and (b) why the single-minded direction towards them is the smartest choice in the solution-space.

    [Reply]

    Posted on April 16th, 2016 at 6:15 pm Reply | Quote
  • Anon Says:

    This seems like an appropriate thread on which to ask this question. I’m vaguely aware of the concept of the Basilisk, but I tend to stay away from the Yudkowsky fanboys surrounding the cult of LessWrong / SlateStarCodex, because the few interesting ideas they have aren’t worth wading through hours upon hours of their particular brand of leftist terminal autism.

    The Basilisk itself reminds me of the “villain” from one of my favourite hard sf novels, Revelation Space by Alastair Reynolds, and it seems like something that would be up your alley.

    The gist of Reynolds’ Basilisk is this:

    “The Inhibitors [Wolves] are the intelligence left over from a massive war that occurred between the first few civilizations that arose in the Milky Way galaxy. Initially an organic race, they later made use of extensive cybernetics to enhance themselves, and eventually discarded their organic forms entirely to become wholly machine. The hints of a quadrupedal, warm-blooded vertebrate (also known as mammalian) past can be faintly discerned in their architectures.

    They are non-sapient machinery, referring to themselves as post-intelligent. They function on unknown principles speculated in the novel to be femtotechnology or “structured” spacetime and are capable of self-replication. Their technology often manifests as black cubes of “pure force” and is immune to conventional human weaponry; the machinery is easily capable of dodging most weapons thrown at it (usually temporary holes will appear and allow the shot to pass through) or is simply unaffected by it. The machinery can only be defeated by alien weapons. Their task is to inhibit the spread of intelligent life beyond individual planets or solar systems: the purpose being to shepherd the galaxy through a crisis 3 billion years (or 13 Galactic Turns) in the future: the Andromeda–Milky Way collision. By confining sapient life to only a few planets, they make the process of moving stars and systems (for collision avoidance during the crisis) far easier and more centralized, thus preserving life. Consequently, they show little interest in non-sapient life, or civilisations that have not progressed beyond their own star system. However, when they have no choice, they will commit acts of xenocide in order to prevent life from spreading further.
    They do not actively monitor the galaxy in their wait for a new star faring culture to suppress, instead they plant a series of triggers near interesting phenomena or structures in the galaxy and wait for sapient life to activate those triggers.

    They are called wolves because they lurk in the blackness of interstellar space and attack in packs.”

    Also, two questions: I’ve noticed certain Schopenhauerean themes in some of your writing, including Fanged Noumena. Are you familiar with Ligotti at all, and if so, would you consider antinatalism to be a horrorism[ist?] outlook, i.e. something potentially endemic to a sufficiently far-reaching neoreactionary outlook? I’m interested in the interplay of these ideas.

    [Reply]

    Posted on April 17th, 2016 at 2:18 am Reply | Quote
  • Alrenous Says:

    The name ‘Phil Sandifer’ is familiar to me, though I don’t remember why.

    I am now suitably humbled for failing to appreciate his writing not only once, but twice. Luckily I got a third shot at it.

    [Reply]

    Alrenous Reply:

    Further research reveals:
    It is truly a novel and fascinating sensation to read a progressive who’s smarter than I am. He may believe his nonsense. Or not! I can’t tell! It’s pretty great. And there’s, like, not-nonsense mixed in to keep it fresh.

    [Reply]

    Phil Sandifer Reply:

    Don’t worry, I’m not sure I can tell either. (Glad you’re enjoying it!)

    [Reply]

    Posted on April 17th, 2016 at 12:29 pm Reply | Quote
  • SVErshov Says:

    Consequentialism perhaps originated from determinism, but the world in which we are living is nonlinear, indeterministic. This shit is like opening Pandora’s box, or a can of marinated monsters; not on Sundays, perhaps.

    [Reply]

    Posted on April 17th, 2016 at 2:09 pm Reply | Quote
  • Phil Sandifer Says:

    Just posted a new excerpt at http://www.eruditorumpress.com/blog/neoreaction-a-basilisk-excerpt-one/ for anyone who’s interested.

    [Reply]

    Posted on April 19th, 2016 at 8:55 am Reply | Quote
