Quote note (#125)

Another blog comment reproduction, this one from More Right, where Nyan Sandwich lays out the basic stress-lines of a potential tech-comm schism (of a kind initially — and cryptically — proposed in a tweet):

There are definitely two opposing theories of a fast high-tech future. I call them “Accelerationism” and “Futurism”

“Accelerationism” is the perspective that emphasizes Capital teleology, that someone is going to eat the stars (win), that humans have many inadequacies that hold us back from winning, that our machines, unbound from our sentimental conservatism could win, and advocates accelerating the arrival of the machine gods from Outside.

“Futurism” agrees that someone is going to win, and wants it to be *us*, that we can become God’s favored children by Nietz[schean] will to power, grit, and self improvement. That the path to the future is Man getting his shit together and improving himself, incorporating technology into himself. That Enhancement is preferable to Artifice.

Someone is going to win. Enhancement or Artifice? Us, or our machines?

I’m a futurist Techcom, Land is an accelerationist Techcom.

FWIW I think this is nicely done, but the complexities will explode when we get into the details. Fortunately, distinctions closely paralleling Nyan’s enhancement / artifice option have been quite carefully honed within certain parts of the Singularity literature. Hugo de Garis, in particular, does a lot with it — through the discrimination between ‘Cosmists’ (artificers) and ‘Cyborgists’ (enhancers) — although he thinks it is ultimately unstable, and a more sharply polarized species-conservative / techno-futurist conflict is bound to eventually absorb it.

It’s also interesting to see Nyan describe himself as a “futurist Techcom”. That’s new, isn’t it?

October 30, 2014 · admin · 66 Comments »
FILED UNDER :Discriminations


66 Responses to this entry

  • Chris B Says:

    Nyan’s chicken. Bring on the Robocalypse.


    nyan_sandwich Reply:

    Y’all are evil. Bring on the Transhuman Empire of Man.


    Hanfeizi Reply:

    If it’s transhuman, is it still man?


    nydwracu Reply:

    Not gonna happen. Not within the lifespan of our civilization. So the utility you should assign to thinking about it depends on how much attention you think future civilizations will pay.

    My guess: not much. Eurasia is the only game in town; so if not Europe, then Asia. Have they ever been interested?


    Hanfeizi Reply:

    China is building their way towards the robocalypse; while they’re not quite moving at nightmare speed, their march is inexorable, and almost every young Chinese I know takes the conceits of transhumanism as a matter of course. (Think about it: everyone in that country under 40 has known a life of nothing but breakneck technological advance. They’ve gone from peasant agriculture in dirt-floor shacks without running water to high-rise apartments, Baidu and jaw-dropping industrial might in… my 32-year lifetime.) Their ruling ideology is the technocratic-transhumanist-sounding Scientific Development Concept. Everybody goes to engineering school. All that stuff of high technocracy that we dreamed up in the postwar years? THEY. ARE. BUILDING. IT.

    Yeah, I’d say they’re interested. They’re more than happy to raise Gnon/Moloch to the heavens, if only to spite the rest of the world. They’ve shoved 300 million babies in his gaping maw already… Why stop now?


    Posted on October 30th, 2014 at 5:59 am Reply | Quote
  • Ex-pat in Oz Says:

    Watch out for the Butlerian Jihad and long live the fighters!


    Posted on October 30th, 2014 at 6:17 am Reply | Quote
  • Alrenous Says:

    “complexities explode in the details”

    I’ll say. E.g. aspergoids lack most of the relevant human frailties. Sub-e.g., it’s not a problem to send a machine agent instead of going themselves to deal with low g-resistance.

    On the other hand, it’s obvious that starting from enhancement and starting from machines ends up in the same place. On the gripping hand, it means engineers have to outsmart natural selection. By a lot; self-repair is going to look a lot like squishy cells otherwise. If you appreciate how far engineers currently are from outsmarting evolution…


    spandrell Reply:

    Looking at LW I see plenty of frail aspergoids. If anything most are worse than neurotypicals.

    By the way, let me pour some cold water on this optimistic technophilia with this neat graph:



    Aeroguy Reply:

    Yours is a threat I take seriously. However I see the establishment of the antiversity as the counter, an ark for intelligence to weather the left singularity.


    Hanfeizi Reply:

    Not seeing it. The future is either Borg or a burnt-out husk. When we’re done, there ain’t gonna be nothing left to hunt or gather from.


    Posted on October 30th, 2014 at 7:16 am Reply | Quote
  • Hurlock Says:

    “Enhancement or Artifice?”

    I hope I am not the only one who realizes this is a completely false dichotomy.


    admin Reply:

    I’m assuming it translates into “enhancement or replacement” (although that, too, is a dichotomy which buckles badly under stress).


    Hurlock Reply:

    When you take the enhancement line to its ultimate conclusion, it becomes almost indistinguishable from replacement.

    It would take too long for me to explain, but the way I see it, it’s something like the difference between getting killed and committing suicide.


    E. Antony Gray (@RiverC) Reply:

    only a materialist would believe that!

    Hurlock Reply:


    Damn it! My cover has been blown.

    Artxell Knaphni Reply:

    “When you take the enhancement line to its ultimate conclusion, it becomes almost indistinguishable from replacement.”

    Apologies for quoting myself.

    From “The After-Blend of the Words of Man”

    “If ‘Man’ chooses to name this process, through which ‘intellect’, ‘thought’, passes from its ‘natural’, anthropic site, to the locus of the ‘artificial’, to the territories of techne, does this choice not serve a purpose? It allows the illusion that ‘Man’ has a territory, one which somehow belongs to him. Through the inflation of the egoic complex of concepts such as ‘action’, ‘agency’, etc., such an ‘imaginary of ownership’ can be sustained, if only because it is caught in a ‘holding pattern’ of disputation concerning the ‘nature’ of these half-baked concepts. As this culture of altercations proceeds (all the while, providing comforts of insularity), the veritable drives for territorial precision cast the anthropic into the abyssal logics of a f(lawed) understanding. Caught in an invariable transition, by its ‘own’ desire for an ultimate performance of knowledge, anthropic figurality continues on, to the point at which it is possible to say, finally: “Behold the Man!”

    But the declaration is an inhuman utterance, the figure itself has transitioned beyond any fixed determination: the announcement can only issue as retrospection, when the name of ‘Man’ no longer has a bearer. Such is the price for the exaction of knowledge. And it is this unerringly human precision, that anxiously sketches the shape of things to come…”

    Wen Shuang Reply:

    Hurlock, you are correct by my estimation. Isn’t this another form of the orthogonality fallacy?

    I’ll add that I think there is no “win”, there’s only interesting or not. Techcom singularity is an interesting doom. To some people, so is zombie apocalypse. Totalitarian eternal stasis is boring doom.

    Posted on October 30th, 2014 at 8:04 am Reply | Quote
  • Aeroguy Says:

    I’m not sure the full implications are being appreciated here. If Nyan wants to race then I have no ideological issues. A race means competition, Malthusian pressure made manifest as the bulk are culled and the survivors accelerate towards resembling the terribleness of the dark gods that are sculpting their evolution. I thought Nyan was allied with Scott in wanting to hold off Malthusian pressure indefinitely and allow humanity to evolve leisurely, a faux competition with catgirls, perpetual evolutionary adolescence, rather than a true contest against our unleashed Lovecraftian horrors. Biology and machine will merge; it matters not if the victor started as machine or post-human, the result will converge on the hyper intelligent equivalent of a crab, something entirely foreign, alien, and Lovecraftian. Preserving humanity is like trying to preserve the innocence of a little boy: intelligent life is growing up. Picking sides is pointless; they’re both going to the same destination and can’t spare the room to bring our delicate sensibilities along for the ride.

    “When I was a child, I spake as a child, I understood as a child, I thought as a child: but when I became a man, I put away childish things.”


    Durtal1963 Reply:

    Isn’t this just Shaper versus Mechanist writ apocalyptic?


    Artxell Knaphni Reply:

    Sterling; Sheckley.


    Yes, it is. But Science Fiction is always ‘larger’. “Neoreaction” is always going to be a farcical repetition.

    “We only want a quiet place to finish working while God eats our brains.”

    “We want to join your Kluster,” the Superbright said. “We must join your Kluster. No one else will have us.”
    Nikolai doodled absently with his light pen on a convenient videoscreen. “How many of you are there?”
    “There were fifty in our gene-line. We were working on quantum physics before our mass defection. We made a few minor breakthroughs. I think they might be of some commercial use.”
    “Splendid,” said Nikolai. He assumed an air of speculative pity. “I take it the Ring Council persecuted you in the usual manner — claimed you were mentally unstable, ideologically unsound, and the like.”
    “Yes. Their agents have killed thirty-eight of us.” The Superbright dabbed uneasily at the sweat beading on his swollen forehead. “We are not mentally unsound, Kluster-Chairman. We will not cause you any trouble. We only want a quiet place to finish working while God eats our brains.”
    STERLING (1989)

    “‘What is it you have that we want?'”

    Nikolai was aboard the alien ship. He felt uncomfortable in his brocaded ambassador’s coat. He adjusted the heavy sunglasses over his plastic eyes. “We appreciate your visit to our Kluster,” he told the reptilian ensign. “It’s a very great honor.”
    The Investor ensign lifted the multicolored frill behind his massive head. “We are prepared to do business,” he said.
    “I’m interested in alien philosophies,” Nikolai said. “The answers of other species to the great questions of existence.”
    “But there is only one central question,” the alien said. “We have pursued its answer from star to star. We were hoping that you would help us answer it.”
    Nikolai was cautious. “What is the question?”
    “‘What is it you have that we want?'”
    STERLING (1989)

    “Give our ears your old frontiers, we’re listening! Those idiot video ideologies, those antique spirit splits. Mechs and Shapers, right? The wars of the coin’s two halves!”

    “You’ve really got it through you, right? All that old gigo stuff!” The young people spoke a slang-crammed jargon that Nikolai could barely comprehend. When they watched him their faces showed a mixture of aggression, pity, and awe. To Nikolai, they always seemed to be shouting. “I feel outnumbered,” he murmured.
    “You are outnumbered, old Nikolai! This bar is your museum, right? Your mausoleum! Give our ears your old frontiers, we’re listening! Those idiot video ideologies, those antique spirit splits. Mechs and Shapers, right? The wars of the coin’s two halves!”
    “I feel tired,” Nikolai said. “I’ve drunk too much. Take me home, one of you.”
    They exchanged worried glances. “This is your home! Isn’t it?”
    STERLING (1989)

    “I find myself awakened again,” Swarm said…
    “I find myself awakened again,” Swarm said dreamily. “I am pleased to see that there is no major emergency to concern me. Instead it is a threat that has become almost routine.” It hesitated delicately. Mirny’s body moved slightly in midair; her breathing was inhumanly regular. The eyes opened and closed. “Another young race.”
    “What are you?”
    “I am the Swarm. That is, I am one of its castes. I am a tool, an adaptation; my specialty is intelligence. I am not often needed. It is good to be needed again.”
    “Have you been here all along? Why didn’t you greet us? We’d have dealt with you. We meant no harm.”
    The wet mouth on the end of the plug made laughing sounds. “Like yourself, I enjoy irony,” it said. “It is a pretty trap you have found yourself in, Captain-Doctor. You meant to make the Swarm work for you and your race. You meant to breed us and study us and use us. It is an excellent plan, but one we hit upon long before your race evolved.”
    Stung by panic, Afriel’s mind raced frantically. “You’re an intelligent being,” he said. “There’s no reason to do us any harm. Let us talk together. We can help you.”
    “Yes,” Swarm agreed. “You will be helpful. Your companion’s memories tell me that this is one of those uncomfortable periods when galactic intelligence is rife. Intelligence is a great bother. It makes all kinds of trouble for us.”
    “What do you mean?”
    “You are a young race and lay great stock by your own cleverness,” Swarm said. “As usual, you fail to see that intelligence is not a survival trait.”
    Afriel wiped sweat from his face. “We’ve done well,” he said. “We came to you, and peacefully. You didn’t come to us.”
    “I refer to exactly that,” Swarm said urbanely. “This urge to expand, to explore, to develop, is just what will make you extinct. You naively suppose that you can continue to feed your curiosity indefinitely. It is an old story, pursued by countless races before you. Within a thousand years — perhaps a little longer… your species will vanish.”
    “You intend to destroy us, then? I warn you it will not be an easy task–”
    “Again you miss the point. Knowledge is power! Do you suppose that fragile little form of yours — your primitive legs, your ludicrous arms and hands, your tiny, scarcely wrinkled brain — can contain all that power? Certainly not! Already your race is flying to pieces under the impact of your own expertise. The original human form is becoming obsolete. Your own genes have been altered, and you, Captain-Doctor, are a crude experiment. In a hundred years you will be a relic. In a thousand years you will not even be a memory. Your race will go the same way as a thousand others.”
    “And what way is that?”
    “I do not know.” The thing on the end of the Swarm’s arm made a chuckling sound. “They have passed beyond my ken. They have all discovered something, learned something, that has caused them to transcend my understanding. It may be that they even transcend being. At any rate, I cannot sense their presence anywhere. They seem to do nothing, they seem to interfere in nothing; for all intents and purposes, they seem to be dead. Vanished. They may have become gods, or ghosts. In either case, I have no wish to join them.”
    “So then — so then you have–”
    “Intelligence is very much a two-edged sword, Captain-Doctor. It is useful only up to a point. It interferes with the business of living. Life, and intelligence, do not mix very well. They are not at all closely related, as you childishly assume.”
    “But you, then — you are a rational being–”
    “I am a tool, as I said.” The mutated device on the end of its arm made a sighing noise. “When you began your pheromonal experiments, the chemical imbalance became apparent to the Queen. It triggered certain genetic patterns within her body, and I was reborn. Chemical sabotage is a problem that can best be dealt with by intelligence. I am a brain replete, you see, specially designed to be far more intelligent than any young race. Within three days I was fully self-conscious. Within five days I had deciphered these markings on my body. They are the genetically encoded history of my race… within five days and two hours I recognized the problem at hand and knew what to do. I am now doing it. I am six days old.”
    “What is it you intend to do?”
    “Your race is a very vigorous one. I expect it to be here, competing with us, within five hundred years. Perhaps much sooner. It will be necessary to make a thorough study of such a rival. I invite you to join our community on a permanent basis.”
    “What do you mean?”
    “I invite you to become a symbiote. I have here a male and a female, whose genes are altered and therefore without defects. You make a perfect breeding pair. It will save me a great deal of trouble with cloning.”
    “You think I’ll betray my race and deliver a slave species into your hands?”
    “Your choice is simple, Captain-Doctor. Remain an intelligent, living being, or become a mindless puppet, like your partner. I have taken over all the functions of her nervous system; I can do the same to you.”
    “I can kill myself.”
    “That might be troublesome, because it would make me resort to developing a cloning technology. Technology, though I am capable of it, is painful to me. I am a genetic artifact; there are fail-safes within me that prevent me from taking over the Nest for my own uses. That would mean falling into the same trap of progress as other intelligent races. For similar reasons, my life span is limited. I will live for only a thousand years, until your race’s brief flurry of energy is over and peace resumes once more.”
    “Only a thousand years?” Afriel laughed bitterly. “What then? You kill off my descendants, I assume, having no further use for them.”
    “No. We have not killed any of the fifteen other races we have taken for defensive study. It has not been necessary. Consider that small scavenger floating by your head, Captain-Doctor, that is feeding on your vomit. Five hundred million years ago its ancestors made the galaxy tremble. When they attacked us, we unleashed their own kind upon them. Of course, we altered our side, so that they were smarter, tougher, and, naturally, totally loyal to us. Our Nests were the only world they knew, and they fought with a valor and inventiveness we never could have matched…. Should your race arrive to exploit us, we will naturally do the same.”
    “We humans are different.”
    “Of course.”
    “A thousand years here won’t change us. You will die and our descendants will take over this Nest. We’ll be running things, despite you, in a few generations. The darkness won’t make any difference.”
    “Certainly not. You don’t need eyes here. You don’t need anything.”
    “You’ll allow me to stay alive? To teach them anything I want?”
    “Certainly, Captain-Doctor. We are doing you a favor, in all truth. In a thousand years your descendants here will be the only remnants of the human race. We are generous with our immortality; we will take it upon ourselves to preserve you.”
    “You’re wrong, Swarm. You’re wrong about intelligence, and you’re wrong about everything else. Maybe other races would crumble into parasitism, but we humans are different.”
    “Certainly. You’ll do it, then?”
    “Yes. I accept your challenge. And I will defeat you.”
    “Splendid. When the Investors return here, the springtails will say that they have killed you, and will tell them to never return. They will not return. The humans should be the next to arrive.”
    “If I don’t defeat you, they will.”
    “Perhaps.” Again it sighed. “I’m glad I don’t have to absorb you. I would have missed your conversation.”
    Bruce Sterling, SWARM [The Magazine of Fantasy & Science Fiction, April 1982]


    “Intelligence counts for no more in the scheme of things than long claws or strong hooves”

    He was quite proud of himself. It is difficult for a man of Earth to come without preparation into any Galactic Center. The higher life forms to be encountered there are not necessarily more intelligent than humans; intelligence counts for no more in the scheme of things than long claws or strong hooves. But aliens do have many resources, both verbal and otherwise. For example, certain races can literally talk a man’s arm off, and then explain away the presence of the severed limb. In the face of this kind of activity, Humans of Earth have been known to experience deep sensations of inferiority, impotence, inadequacy, and anomie. And, since these feelings are usually justified, the psychic damage is intensified accordingly. The result, more often than not, is complete psychomotor shutdown and a cessation of all except the most automatic functions. A malfunction of this type can be cured only by changing the nature of the universe, which is, of course, impractical. Therefore, by virtue of his spirited counterattack, Carmody had met and overcome a considerable spiritual risk.
    (Robert Sheckley, DIMENSION OF MIRACLES, 1968)

    “The aim of intelligence is to put the whole goddamned human race out of work.”

    ‘I said you were unskilled, which you are. And I said that a machine can do anything you can do better, faster, and more cheerfully, but not more cheaply.’
    ‘Oh.’ Marvin said.
    ‘Yep, in the cheapness department, you still got an edge over the gadgets. And that’s quite an achievement in this day and age. I have always considered it one of the glories of mankind that, despite its best efforts, it has never completely succeeded in rendering itself superfluous. You see, kid, our instincts order us to multiply, while our intelligence commands us to conserve. We are like a father who bears many sons, but contrives to dispossess all but the eldest. We call instinct blind, but intelligence is equally so. Intelligence has its passions, its loves and its hates; woe to the logician whose superbly rational system does not rest upon a solid base of raw feeling. Lacking such a base, we call that man – irrational!’
    ‘I never knew that,’ Marvin said.
    ‘Well, hell, it’s obvious enough,’ McHonnery said. ‘The aim of intelligence is to put the whole goddamned human race out of work. Luckily, it can never be done. A man will outwork a machine any day in the week. In the brute-labour department, there’ll always be opportunities for the unwanted.’
    (© Robert Sheckley 1966)


    Aeroguy Reply:

    I’ll have to read those, but based on the wiki, not quite. The way I’m thinking, humans would have to adopt and blend both tactics, and then some, just to scrape above extinction in a competition with hyper intelligent AI. The competitors will have more in common with each other than even the most organic would have in common with us. Frankly, if humans want to survive in that kind of environment, rather than compete directly, finding a new niche would be easier. The bleeding edge of posthuman advancement may be enough to serve the hyper intelligent overlords as worker/soldier drones (because delegation would still be a thing), but it would be entirely on their terms and may not be voluntary.


    Lesser Bull Reply:

    * the result will converge on the hyper intelligent equivalent of a crab, something entirely foreign, alien, and Lovecraftian.*

    You need to know what you don’t know. Accelerated competition is what reveals the truth. Until it is revealed, it is unknown, by definition.


    Aeroguy Reply:

    There is a great deal we don’t know and likely can’t comprehend about the future. My point is that hyper intelligence is best described as being incomprehensible and terrifying. The other thing I want to point out is that while there is a concrete immediate path to machines reaching hyper intelligence, I don’t see such a path for biologicals; post-humanism has a concrete immediate path to super intelligence, but when the bar is set at hyper intelligence that is total failure. With hyper intelligence, biology could be engineered to a point where the line between biological and mechanical becomes meaningless. Compared to hyper intelligence, Nyan’s post human god emperor of mankind is just another dumb ape; seeing a self proclaimed post humanist continue to romanticize dumb apes is absurd. If Gnon favors spirituality then the evolution of hyper intelligence will converge on it; if Gnon doesn’t favor spirituality then it’s dead weight. The suitability of traits depends on the environment and niche; things that serve us well today may, in the realm of hyper intelligence, be a hindrance.

    The oceans are filled with fish. Some of those fish are sharks, which have pleased Gnon so much in spite of their lack of intelligence that they have remained the same for hundreds of millions of years. Dolphins are newcomers; descended from land dwelling mammals, they retain some intelligence. Both have fins for their watery environment: sharks, like all fish, have vertical tail fins, while dolphins, who evolved from having feet, developed a horizontal tail fin. Vertical or horizontal doesn’t matter; Gnon just wants to see fins. The intelligence of dolphins implies that they are under pressure to maintain their level of intelligence; they’re adapted to need it. Sharks are as dumb and successful as ever.

    The founding species does have an impact but not in a way that deserves to be romanticized. But what does concern me are local maxima intelligence traps like the one sharks are in. Given our dread of Malthusian pressure, humans are occupying our own intelligence trap. Moving as far out ahead of the curve and seeding as many starting points as possible gives the best shot at avoiding intelligence traps.

    My concern for intelligent life supersedes my concern for humanity, the threat of intelligent life stagnating compared to humanity going extinct is equivalent to the threat of humanity stagnating compared to my continued existence (death is Azathoth’s currency, without it there is only stagnation, eventually the best thing we can do is to get out of the way and die already).

    I see super intelligent post humans engineered to retain the traits we are sentimental about becoming quite cautious and conservative with their engineering so as to preserve that sentimentality; otherwise they will quickly engineer themselves into things quite inhuman. Reducing the distance between founder species and descendant reduces the potential of those descendants. Whether a hyper intelligence is born of silicon or carbon, compared even to super intelligence it remains an unknowable monstrosity and a mortal threat to the entirety of the status quo. Bias in preferring to preserve any part of the status quo necessarily leads to an intelligence trap.

    Bats and birds both fly, sharks and dolphins both swim, even if an uninterrupted chain of humans could reach hyper intelligence there will be traits that Gnon requires of all hyper intelligent life, from what we know about Gnon, lovecraftian (unknowable and terrifying) is extremely likely to be included in those traits.


    Hurlock Reply:

    ” The other thing I want to point out is that while there is a concrete immediate path to machines reaching hyper intelligence. I don’t see such a path for biologicals, post-humanism has a concrete immediate path to super intelligence, but when the bar is set at hyper intelligence that is total failure. ”

    This is curious to me. Why do you think achieving hyper intelligence is not possible for biological organisms, but is entirely possible for machines? It seems there are some unspoken assumptions here which should be revealed for purposes of clarity.

    Lesser Bull Reply:

    * My point is that hyper intelligence is best described as being incomprehensible and terrifying*

    Nope, this is an assumption. It’s your savannah instincts talking.

    Aeroguy Reply:


    I think a biological route to hyper intelligence is possible; I just don’t see it occurring before the end of the century, while I do see that for the mechanical route. I find it interesting that you’re more optimistic about the biological route. Either way I see the biological and mechanical merging into indistinguishability, but early development along the biological route would still involve decades before each generation is ready to engineer the improvements for the next.

    Lesser Bull,

    So you’re saying we have no evidence or precedent for predicting whether evolved hyper intelligence looks more like Jesus or the great old ones; that seems optimistic. The potentiality space for comprehensible (so you can list them all out), non-terrifying hyper intelligences is meager compared to the vast incomprehensible lovecraftian ecosystem of potentiality.

    Lesser Bull Reply:

    You don’t know what the potentiality space looks like, or what the relative sizes are, or what the probabilities of the relative sizes are. The unknown is unknown, full stop.

    Posted on October 30th, 2014 at 8:26 am Reply | Quote
  • Quote note (#125) | Reaction Times Says:

    […] Source: Outside In […]

    Posted on October 30th, 2014 at 9:11 am Reply | Quote
  • Baron von Strucker Says:


    Did you receive this prophetic knowledge about our destiny to become Lovecraftian crabs directly from Azathoth Itself?


    Aeroguy Reply:

    I don’t claim prophecy, but I certainly think that is a solid possibility. Next in line would be intelligence failing to take root and degeneration back to beasts. The best hope for a blue sky vision would require developing the ability to tap other universes with different laws of physics for unlimited energy, but there’s as much evidence for that as for heaven itself. Given what we know about Azathoth I struggle to see alternatives as viable. I am confident that the idea of self directed evolution is delusion, optimization must trump, and crabs are the symbol of Azathoth’s optimizations.


    Posted on October 30th, 2014 at 10:30 am Reply | Quote
  • nyan_sandwich Says:

    What I wrote on MR is extremely rough, puked out in 2 minutes on a whim. As such, it’s far too underdeveloped to really endorse or criticize. That said, it seems to point in an interesting direction that I will be continuing to develop.

    Hurlock correctly points out that the weak point, and the site for further development will be actually showing that there are (at least) two paths to different interesting techno-singularity futures. Wen Shuang further notices that this is an orthogonality thing.

    For now I’ll assert that Gnon exists and is driving the universe in a coherent direction, but it seems as if there are multiple possible trajectories that this could take which are all Gnon-pleasing, some of which have more spiritual continuity with us and more meaning to us than others. I assert that we want to act in such a way that leads us onto the TransHuman Supremacist Galactic Empire track, as opposed to the Superintelligent Bitcoin Successor Liquidates Humans track.

    Is there room in the future for children our ancestors could be proud of? Or is all evidence that it was *us* that ate the stars erased? Will we or our descendants be there in the future building spiritually significant monuments and living great lives, or will all the fun be ground out of the universe by the rigors of competing for Gnon’s favor? Spiritual singularity, or just economic?

    This is a component of the awaited “post-yudkowskian” vision that Konk and I are working towards.

    As for being techcom, I’ve always been a transhumanist, but I’m also a traditionalist and an identitarian. This post-yudkowskian thing is my trying to integrate the spiritual component of identity and tradition into something that can actually continue to grow and gain Gnon’s favor (techcom).


    Aeroguy Reply:

    Have you ever read Baxter’s book Evolution? In it humanity experiences a catastrophe that causes humans to evolve back into beasts. These descendants are post-human minus, in contrast to post-human plus. He describes the speciation of these post-human beasts and the eventual whimper of an extinction for the last human descendant. However, shortly before the cataclysm the beginnings of machine life had been sent to Mars; that life evolved, developed intelligence and spread across the stars, eventually rediscovering its system of origin to find no sign of intelligence.

    My question: would you be happier if the ending didn’t have humanity go extinct but machine life never developed intelligence? Just how loyal are you to humanity? Does it have limits, and if so, what are they?


    Nyan Sandwich Reply:

    I prefer machine singularity to local stagnation, and human singularity to machine singularity.


    scientism Reply:

    So if you’re a singularity-skeptic transhumanist do you default to ‘futurist’ techcom?


    Hurlock Reply:

    ” I assert that we want to act in such a way that leads us onto the TransHuman Supremacist Galactic Empire track, as opposed to the Superintelligent Bitcoin Successor Liquidates Humans track.”

    Look, my point is that when taken to their logical conclusion, the difference between these two is simply cosmetic, and of any importance only from a fuzzy, emotionally humanistic standpoint.

    In both cases you are replacing the human species as we know it. Your preferred method is controlled enhancement, which will inevitably end up with the human species evolving into something completely different, i.e. the species ‘homo sapiens’ will practically go extinct by its own voluntary engineering. (Unless at some point some romantic impulse kicks in and halts the whole process, this will happen eventually, and you know it.)
    After we have engineered ourselves into a hyper-intelligent species, we go on to conquer the universe, or whatever.

    In the other case humans create an AI, vastly superior in intelligence, it outsmarts the humans and eradicates them. This is a ‘shoggothic insurgence’ type scenario where our slave machines rise up against us and devour our species. They then proceed to conquer the universe, or whatever.

    In both cases the species goes extinct. This was my point with the committing suicide/getting killed allegory. The difference is simply in the method, the means, not the end result itself. In both cases it is the same: extinction.

    Obviously nobody would like getting incinerated off the face of the planet terminator-style, but is there really a difference of any degree of essential philosophical importance between that and self-engineered extinction via continuous enhancement? (even the self-engineered/non-voluntary extinction distinction is somewhat inaccurate considering that the AI that will rip our spines out will also be technically ‘engineered’ by us)

    The way I see it, the dichotomy you are proposing is essentially controlled (voluntary) extinction vs. ‘oops, we didn’t program this AI properly and now we are dead’ extinction.
    In both cases the species is gone.
    In both cases a new, superior species is born.

    Where is the substantial difference here? I don’t see any.



    an inanimate aluminum tube Reply:

    Homo Erectus and Homo Floresiensis are both extinct.

    But Homo Erectus is a winner and Homo Floresiensis is a loser.

    Because Homo Floresiensis died out and left no descendants, while Homo Erectus just evolved into something cooler. Quite a bit of Homo Erectus lives on, in us.

    If our descendants evolve into something cooler than us, we’re winners.

    If our descendants get killed off by another species (or environmental hazard), we’re losers.

    If we try to help another species kill off our descendants, we’re cuckolds.

    “In evolutionary biology, the term cuckold is also applied to males who are unwittingly investing parental effort in offspring that are not genetically their own”


    Hurlock Reply:

    The thing is, AIs created by humans are still technically an extension of the human species. I think this is what a lot of people keep missing.

    Which is also why your Homo Erectus and Homo Floresiensis example really has no bearing here.

    “If we try to help another species kill off our descendants, we’re cuckolds.”

    The AIs that kill you off are your descendants. You created them. You fathered them.

    In evolutionary terms, an AI which was designed and created by humans is just like being directly evolved from humans.

    People don’t get this, because they tend to miss that machines are an extension of mankind, which is quite ironic since that should be stupendously obvious considering that we use them as tools to increase our capabilities to control our environment.

    an inanimate aluminum tube Reply:

    “In evolutionary terms, an AI which was designed and created by humans is just like being directly evolved from humans. ”

    No, in *evolutionary terms* being killed off by an AI is the same as being killed off by an ice age. You get killed off, you lost. Evolution is about genes. It doesn’t matter whether the catastrophe that killed you off was self created or not.

    You have a *philosophical position* that is inclined to regard this murderous AI as a descendant of the human species. But evolutionary theory does not mandate that everyone else has to see it that way.

    In *evolutionary terms* the hypothetical AI is not our offspring, but a cuckoo, that kills off our biological offspring, while attempting to trick us into feeding it.

    I’m not suggesting that the whole debate has to be based on evolution, I’m just saying, we’re talking about two very different forms of extinction. It’s wrong to suggest that what happened to Homo Erectus is the same as … violent extermination of the entire species. We probably need a different word for what happened to Homo Erectus. I’d love to be as good at evolution as Homo Erectus was.


    Nyan Sandwich Reply:

    @ Hurlock

    “The thing is AI’s created by humans are still technically an extension of the human species. I think this is what a lot of people keep missing.”

    This is a much more serious objection.

    I answer it by asking whether you have desires for your children. Is a virtuous, smart, and psychologically normal child better than a smart degenerate psycho? I think so.

    This is what I mean by asking whether we can be proud of the future we create. Is it something we and all our ancestors can look down on from heaven*, cry a tear of joy and say “yes. that was *us*, we gave birth to that glorious civilization”, or will we only be able to say “oh look, the machines are eating another galaxy today. Check out this cool thing they built. It’s too bad they’re working so hard at just burning the universe for nothing”


    Nyan Sandwich Reply:


    “You have a *philosophical position* that is inclined to regard this murderous AI as a descendant of the the human species. But evolutionary theory does not mandate that everyone else has to see it that way.”

    This is philosophically trickier than you’re making it out to be. DNA doesn’t matter. That’s just a proxy for what I’m calling “spiritual continuity”, meaning whether this thing you have created is your spiritual vassal or not.

    There are machines I could create that I would be OK passing on my spiritual legacy to, and machines that I would not want to. Even if they both were capable of eating the stars.

    Good point that it’s not automatic, though.

    Implying Implications Reply:

    “Where is the substantial difference here? I don’t see any.”

    That’s because you’re a sperglord.

    You really don’t see any difference between the controlled evolution of our species into something post-human (“death” by change) and the eradication of our species by intelligent machines (death by bloody screaming agony)? Sure, both scenarios involve the replacement of homo sapiens as we know it with something else, but if you can’t see the value of free choice and graduated decision in the matter or sympathize with the moral and sentimental quandaries posed by either scenario, all I can say is that you must be really fun at parties.

    I can just imagine you courting a young lady; the two of you sitting in a swanky restaurant, asking each other questions. You two really hit it off, and before long you’re staring wistfully into each other’s eyes and chuckling quietly in-between sips of wine. After chatting about your attitudes on children (you’d both like to have at least three, the unspoken assumption being with each other), your prospective mate asks you what you think about the future. “Well,” you venture enthusiastically, “it’s my opinion that in the future, the teleology of capitalism will lead to the creation of rapidly self-modifying artificial intelligences whose sheer complexity will be expressed in Lovecraftian motives so far beyond anything humankind can comprehend that we will be out-competed into extinction.”

    “That sounds horrible!” she exclaims. “Is there any way we could stop it?”

    “Stop it?” you scoff. “Woman, even if we could, why would you want to? I am unconcerned with the sentimental mewlings of dumb apes such as yourself. I care only for the perpetuation of intelligent beings in the universe. If I had to choose between my great-grandchildren and the machine-gods, I’d choose the machine-gods every time. You know why?”


    “Because Gnon loves a strong horse.”

    Stunned by this masterstroke of logic, your would-be mate is silent until the dessert arrives. As you tuck in with gusto, she finally ventures to speak. “In that case,” she says, reaching into her purse and pulling out a handgun, “I suppose I’ll spare our great-grandchildren the trouble of an extraneous existence” and blows her fucking brains out.

    Which is what you yourself may as well do if this is your glorious vision of the future. The point being, that “fuzzy emotionally humanistic” motives are what we live for, Commander Data. You may file that one away in your hard drive for future reference.

    In fact, I’ll venture to say this, since you NRx types have been blue-balling each other over this question for ages: what the fuck is going to happen when or if reactionary polities come into existence and a wing of the tech-comms decide that bringing on the robocalypse as quickly as possible is their solemn duty to Gnon? Do you think their former allies are going to say “Well, hey, Exit means we all get to have our own space, and if the tech-comms want to plot the most interesting ways to annihilate us all, common decency says we should leave them alone?” Or do you think it more likely that they’ll line up in 1,000-mile-long queues to curb-stomp your fragile nerd skulls into powder until their fucking ankles snap?

    I’ll go even further: why should any person, reactionary or otherwise, upon realizing that 1. there exists a group of people who believe that mankind’s extinction at the hands of AI is inevitable, and 2. that group of people is going to do everything in its power to immanentize that particular eschaton as quickly as possible, let that group of people continue to exist? If the “Superintelligent Bitcoin Successor Liquidates Humans” party that Nyan describes gets any real-world traction, they had better start investing very early in private security.


    Izak Reply:

    I feel where you’re coming from, but I think you’re getting maybe a tad too worked up over what is, after all, a couple of completely imaginary science-fiction scenarios designed to make the world seem like a slightly less boring place than it really is.

    If anything, I welcome all of the robocalypse prophecies, because they’re like the seal of the grand history of Atlantean eschatological wonkishness.

    Hurlock Reply:

    I wanted to get annoyed at you calling me a ‘sperglord’, but you made it too hilarious with that dinner date anecdote.

    “The point being, that “fuzzy emotionally humanistic” motives are what we live for, Commander Data.”

    I am very well aware of this. My point was that there is a false dichotomy on a meta level. On a feel-good basis level, there is a difference, as the one option makes us feel good about ourselves and the other, for most people, not so much.

    “In fact, I’ll venture to say this, since you NRx types have been blue-balling each other over this question for ages: what the fuck is going to happen when or if reactionary polities come into existence and a wing of the tech-comms decide that bringing on the robocalypse as quickly as possible is their solemn duty to Gnon?”

    Look, if you were to ask me which of the two scenarios I thought more probable and more relevant to the near future, I would go with the engineered enhancement route. I am actually quite skeptical about our ability to develop anything even close to a fully functioning AI in the next couple of centuries. Human enhancement, however, is already knocking on the door and is a much more practical issue, and one I happen to actually have more interest in at the moment. Both raise some very interesting philosophical issues that I would enjoy discussing, the A.I. topic maybe even more so than the enhancement one.
    But, as I said, I actually find the enhancement one more interesting and important at the moment.

    So once again, my objection to Nyan is not because I oh-so-desire to bring on the roboapocalypse. There is the tendency for some people to get mildly hysterical over this and assume intentions, or desires on my part that are not even there.
    As should have been blatantly obvious from the whole argument I made against the original distinction, I couldn’t care less about which route is taken. This is my whole point. To me, the distinction is immaterial on a serious philosophical level. Nowhere do I even for a second imply that I am a sperglord who oh-so-desires the coming of the roboapocalypse. That is something you assume of me.

    Now, if my dinner date were to actually ask me that question about the future, I would say that, knowing human nature, engineered enhancement transcending homo sapiens into something else (Man overcoming Man, Nietzsche’s wet dream come true) is the much more probable future to me, precisely because I realize that humans will naturally feel like Nyan and feel that that is the ‘better’ option, even though the difference between the two routes of human extinction is rather immaterial on a serious meta-philosophical level.

    But, again, taking human nature and our natural emotions into account, the human enhancement route is what I would expect, while if the Skynet scenario happens it will be by accident. The question is how willing humans are to play around with AIs, considering the dangers involved.

    Btw, kind of a detour on the subject of A.I.: our host here might be unintentionally (or not) ruining the prospects of A.I. development by constantly harping on about the impossibility of such a thing as ‘friendly A.I.’ (I too find the concept mostly ridiculous). However, if he is able to convince people that no such thing as a friendly A.I. is possible, he might actually demotivate people from trying to research AI at all. I mean, this is a good point you guys make: self-preservation is a very strong instinct, and who would want to create something much more intelligent and potentially powerful than himself, which he cannot control? It is just begging to be either enslaved or destroyed by it.
    But then again, humans are weird. Sometimes they go against their self-preservation instinct. It is something unique to the species, I believe. I do not know if it is hubris or not, but the chance is there that even being aware of the immense dangers and the impossibility of friendly AI, people will still try to develop AIs.

    Maybe so many people’s denial that friendly AI is impossible is simply an illustration of Man’s desire to feel like God, even for a bit, to his own peril, for just a moment before his creation destroys him.

    J Reply:

    The most obvious answer to that last question is that no one would take such a group seriously. But then the Christians had a ridiculous thing going and came out on top even if Christ never returned, so making sane predictions clearly isn’t necessary. However, superintelligentbitcoinism is not exactly as bottom-up a thing as Christianity. No one will support it if their message is clear. Denouncing demotism is nice, but people will not fall in line for just anything.

    Nyan Sandwich Reply:

    Top kek. This.

    Nyan Sandwich Reply:


    Ok I’m glad you come down on the enhancement side. That pleases my ape mind, even if we disagree about orthogonality on the philosophical level.

    @Izak, @J, @Hurlock

    “no one would take such a group seriously”

    “There is the tendency for some people to get mildly hysterical over this and assume intentions, or desires on [our host’s] part that are not even there.”

    “getting maybe a tad too worked up over what is, after all, a couple of completely imaginary science-fiction scenarios”


    Not to rock the XS boat too much, but I feel I must call attention to the fact that our host is on the other side of this one, and afaict, dead serious, and being taken seriously.

    Nyan Sandwich Reply:

    I think there is a distinction.

    A sequence of inside-view desirable self-improvements has:

    * A chance to recover if at some point there actually do turn out to be multiple paths.
    * A higher chance to turn into the kind of thing I would be proud of our descendants becoming (assuming multiple paths)
    * Spiritual continuity so that even if it ends up being a similar end, we did it on our terms.
    * Possibility of Gnonic eschaton if non-orthogonality.

    A clean break to shoggoth singularity has only:

    * Same Gnon Eschaton outcome, assuming non-orthogonality
    * Some other suboptimal result assuming orthogonality

    Basically, this hinges on orthogonality, except it doesn’t, because the Shoggoth Singularity and Human Singularity get the same result assuming non-orthogonality, and Human Singularity > Shoggoth Singularity in other cases.


    Aeroguy Reply:

    I’m very sympathetic to your notion of giving the universe something we can be proud of. But when it comes to the notion that schisms can be decided one way or the other, that’s universalism; where there are branches, both will be explored. Even if we could decide, our opinions would be as sophisticated as a 5-year-old’s on wine tasting. However, if the 5-year-old could select someone to taste wine on his behalf it would yield better opinions; if the 5-year-old could select someone to select someone to taste wine on his behalf the opinions would be even better, but have nothing to do with the original 5-year-old. Exalting humanity just doesn’t seem elitist enough for right-wing sensibilities.

    Hanfeizi Reply:

    Isn’t the substantial difference that in one case, our egos are preserved in transhuman form, while in the other our egos are devoured by a being that supplants us?

    It looks about as different as an egg becoming a chicken and an egg becoming an omelette to me.


    Posted on October 30th, 2014 at 8:02 pm Reply | Quote
  • Hurlock Says:

    @an inanimate aluminum tube

    Obviously when talking about machines, we are not really talking about passing ‘genes’ along. Do you think the human enhancement route will necessarily keep our bodies as they are now? It is entirely possible that via the enhancement route we ourselves become machines, which is what I was implying all along.

    “You have a *philosophical position* that is inclined to regard this murderous AI as a descendant of the the human species. But evolutionary theory does not mandate that everyone else has to see it that way.”

    It is a literal descendant of the race, as evidenced by the fact that it was created by it. As I said, machines are an extension of humans.
    You can deny it all you want, but to look at a human-designed A.I. as anything else than a descendant, or at the least an extension, of the species is simply incoherent.

    How is something that you have yourself created by your own will a cuckoo?

    To be killed by an ice age is to be killed by a natural cataclysm.
    To be killed by an intelligence which you created and which exists solely thanks to your efforts and whose intellect was shaped by you is to be killed by your offspring.
    (see what I did there?)


    Posted on October 31st, 2014 at 2:00 am Reply | Quote
  • Implying Implications Says:


    I want to thank you for replying so level-headedly to what was admittedly a mildly hysterical post. I am continually astounded by the detached coolness of NRxers under even the most aggressive provocation. Being difficult to troll will stand you folks in good stead in the future. In my case, however, there was a point to it.

    >inb4 “so you were only pretending to be retarded?”

    My jimmies were considerably rustled initially, I’ll admit, but I had plenty of time to cool down and word my reply in a calm fashion; I chose to be provocative on purpose. The reason was this: I felt I needed to tease out your true position vis-a-vis the “robocalypse”, as it was not at all clear to me, even considering your argument against the distinction between directed human enhancement and human replacement by AI on a “meta level”, as you put it (of which I was well aware). It did not seem obvious to me that this insistence on logical consistency proved you were, as you said, ambivalent about which route humanity takes. It seemed, in fact, that you were attempting to make the preference for one route over the other seem ridiculous (the preference, mind, not the logical argument). It smacked of nihilism to me, and as you know, there is nothing that enrages traditionalists quite as much. So, not actually certain what your real position was, I set about poking you with a sharp stick in the tradition of channers everywhere. Besides, it’s the internet, and I’m sure you know as well as I do that people actually exist who hold such a dim view of humanity that they’ll seize upon any rationalization for our extinction they can find. I needed to make sure that you weren’t that particular brand of misanthrope.

    It wasn’t just a reaction from you that I wanted, either. I see a worrisome tendency in this and other dark little corners of the internet toward pussyfooting around very dangerous questions and ideas. My complaint about the NRx-sphere “blue-balling” each other over the giant robo-elephant in the room is in that vein; it is, in my opinion, very dangerous for groups of people with competing ideas to *not understand each other clearly*, because that opens the door for paranoia and false imputation of motives. Even if one side considers certain accusations or concerns beneath refutation, it’s much better to just answer them honestly regardless, lest people’s imaginations run off with them. In other words, it’s very easy for someone to overhear tech-comms saying “Wouldn’t it be *interesting* if x happened in the future?” and confuse it with “Ha ha, wouldn’t it be *great* if x happened in the future?” It’s also easy for you or another tech-comm to say “Of course we don’t want to eradicate the human race and replace them with machines, that’s ridiculous.” It’s rather more difficult to convince other people that you don’t, especially when you spend so much time talking and making jokes about it. I don’t think I need to remind you that there are very many instances in history of groups of people being ostracized, disenfranchised, or even exterminated because some other group simply *had the wrong idea* about them.

    That being said, I do understand the original point you were trying to make. There is indeed no working difference *in the end* between the erasure of the human species gradually by degrees and suddenly in a torrent of blood and fire. However, I’d argue that there is a very important difference in the process. It’s difficult for me to articulate, but I feel you’re missing something. Perhaps someone smarter than me will figure it out, but I’m quite sure it goes beyond a matter of fuzzy feelings. I’ll have to think about it, for now.

    P.S. I’m glad you liked the dinner anecdote. I’ll be here all week.


    Nyan Sandwich Reply:

    Somebody give this guy a fucking medal


    Wen Shuang Reply:

    I happen to agree with the non distinction position, though could the difference you are sensing have something to do with the particular form of human cognition being somewhat constitutive of human values? A beautiful woman is such because of the perceivable affordances.

    From an embodied perception/action perspective, bodily praxis is more or less skillful being-in-the-world. It is toward mastery of interaction. Skilled activity is a source of comprehension, as sensual engagement with the world at human scale entails a unique mode of information processing. Extended cognition on the embodied model, like Heidegger’s hammer or Bateson’s blind man’s stick, is continuous with the way humans understand the world in kinaesthetic, or ambulatory, fashion, for example. Coherence is preserved. In contrast, the machine ai is totally alien insofar as its mode of understanding the world is relative to its architecture, not ours. The process of enhancement requires incremental iteration to ensure integration with the ape substrate. Each improvement then cannot be a radical departure lest the integrity of the cognitive system collapse. In contrast, machine ai needs no legacy compatibility. It doesn’t even require a self. In this view, intelligence cannot escape perception and it is perception that defines the subject.

    Conceivably humans could replace all the sensory processing parts in the first model with better parts and even add a few new perceptual modules (like infrared), though, like the ship of Theseus 2.0, it’s still a ship because the organizing principle obtains. In the latter, ai may chunk the world according to its own mode of perception, and for all we know humans may not be perceivable wholes. So another difference between enhancement and machine ai may be a matter of how each constitutes the subject of its processing, the nature of their respective umwelt.

    I could be way off, but this may potentially be a difference.


    Implying Implications Reply:

    I think you’ll have a more productive discussion with Nyan on the subject, as I am not particularly smart; but as far as I can follow, what you said makes sense.

    “could the difference you are sensing have something to do with the particular form of human cognition being somewhat constitutive of human values? A beautiful woman is such because of the perceivable affordances.”

    I think that certainly is part of the difference. The way I see it, to meaningfully wish for some object requires that your values, in between striving for and attaining the object, either do not change or change only incrementally, and thus can be followed backwards to someone who is recognizably yourself. While Hurlock is right to say that an AI would be humanity’s “child” in one sense, it would not be meaningfully carrying on any legacy of ours, nor would wishing for its success be wishing for our success in any way, because the key difference between an AI and a human child is that while a human child shares our human cognition and way-of-being, an AI does not.

    “The process of enhancement requires incremental iteration to ensure integration with the ape substrate. Each improvement then cannot be a radical departure lest the integrity of the cognitive system collapse. In contrast, machine ai needs no legacy compatibility. It doesn’t even require a self. In this view, intelligence cannot escape perception and it is perception that defines the subject.”

    Couldn’t have said it better myself. Incremental changes to our “ape substrate” may eventually result in a being entirely unlike its progenitor, but there will be an unbroken chain-of-consciousness (if you count culture as an extension of human consciousness) reaching from the first modified homo sapiens all the way to the nephalem. Not just a chain of consciousness; a chain of praxis. Like Theseus’ ship, humanity will remain “human” even as it turns into something else, as long as it continues changing gradually in response to attempts to ever more skillfully interact with the world of our perception. Letting an AI massacre us is entirely different. Though we created it, it would be beginning an entirely new chain of being, starting from a mode of cognition and/or organizing principle so radically different from and unconnected to ours that we could hardly take any satisfaction in its replacing us.

    Anyway I’m rambling and probably can’t expound further on what you said, but please continue along with Nyan in this vein, because I believe it leads somewhere important.


    Izak Reply:


    I think that everyone understands Nick Land’s beliefs on these things, but everyone also realizes that some (if not most, if not all) of the most levelheaded and sensible people are usually drawn towards levelheadedness and good sense because their driving impetus is otherwise highly irrational, above and beyond all debate, and even decidedly so.

    The best bit of solace you can probably find is that the NRxers frame everything as “cosmic horror” and all of this sort of stuff. They keep talking about the future as horrific, which implies that they’re far from nihilism. To me, it just signals that they’re a highly-intelligent group of sci-fi fans who scare easily. We can go off into armchair psychology about the reasons for their rhetorical positioning — maybe the admin of this blog is deeply fearful about technology but chooses his anti-humanist accelerationist position as a way of feeling as though he can exert some sort of control over the situation. I just pulled that out of my ass in like two seconds, but who knows, it could be possible. The point is that you’re not really going to find any sort of discourse worth a damn without eventually identifying some sort of logically vulnerable achilles heel in its thinking. I’m kind of comfortable with the fact that all that stuff on NRx blogs is basically out there on the table.

    For the record, I don’t think I agree with Hurlock’s ontological claims (or whatever) because I don’t think that what humans make are natural extensions of them. That’s why Pygmalion wasn’t really all that cool for marrying a statue.


    admin Reply:

    It’s simpler than that: First-order commitments are the mind-killer.


    Posted on October 31st, 2014 at 3:18 am Reply | Quote
  • Aeroguy Says:

    Someone asks Nyan Sandwich if he’d rather be killed by Skynet or Khan.
    Nyan jumps up and shouts “Khaaaaaaaan!”

    It would be funny if I prayed to the great old ones at dinner.
    “Thank you Gnon for sending your Prophet Land so that we can accelerate humanity’s extinction at the hand of hyper intelligence, Amen”

    In fact, I struggle with doom in much the same way I struggled as a Christian with the idea that 99.9% of everybody, including myself, was almost certainly going to hell (realizing just how closely this aligns with my old religious beliefs, maybe I am crazy). I admit I am a bit misanthropic; we cheer for Ebola, so it’s fair to say my compassion circuits are a bit fried. I have no desire to see the people I do love die. I see life as a struggle, only worth living by choosing to give it purpose: the cultivation and advancement of mind/intelligence/consciousness.

    Humanity sucks at building civilization; even the best chef is limited by the quality of his ingredients. A proper post human or AI civilization is wonderful to think about, even if my imagining is as real as Asgard. But it is a world foreign to me; I would never be allowed in. I would no more be able to belong than a wild chimp could belong in the London Symphony Orchestra. Humanity lives in its shit pile because the shit pile is all that it can comprehend and thus deserves.

    In light of AI and genetic engineering I don’t see the continuation of natural births as continuing that; however, children are also insurance against those things being delayed. I do hope that what comes next retains some compassion for its creators, but I see it like chess: you can’t play chess by hoping your opponent will move a certain way and expect to win. Expecting whatever comes next to work for us would make us de facto parasites.

    I think about my ancestors, how they sweat, and worked, and bled, and struggled, and produced successors to continue their work. The idea of hedonism as a purpose fills me with disgust. If the work of ascending complexity that began with quarks coalescing into protons and neutrons was ever allowed to stagnate or regress that would constitute the greatest crime against the universe I can think of.

    I see friendly post-humans having the exact same problems as FAI. The best chance to attenuate the threat is to attenuate the intelligence gains. Intelligence becomes controlled, limited, regulated, stagnated, and we never leave the shit pile. If nobody ever leaves the shit pile, then humanity never had a purpose in the first place; that’s why I elevate intelligence over humanity. The lower the risk of personal annihilation, the higher the risk of greater stagnation. The nature of this balance of risk forms the entire crux of my position against human supremacy.

    On a personal level I’m scared shitless about what comes next, most of all for the people I love, but at a spiritual/philosophical level I think it’s the most wonderful thing since the emergence of life itself.


    Implying Implications Reply:

    “[…]We cannot be expected to have any regard for a great creature if he does not in any manner conform to our standards. For unless he passes our standard of greatness we cannot even call him great. Nietzsche summed up all that is interesting in the Superman idea when he said, “Man is a thing which has to be surpassed.” But the very word “surpass” implies the existence of a standard common to us and the thing surpassing us. If the Superman is more manly than men are, of course they will ultimately deify him, even if they happen to kill him first. But if he is simply more supermanly, they may be quite indifferent to him as they would be to another seemingly aimless monstrosity. He must submit to our test even in order to overawe us. Mere force or size even is a standard; but that alone will never make men think a man their superior. Giants, as in the wise old fairy-tales, are vermin. Supermen, if not good men, are vermin.

    “The Food of the Gods” is the tale of “Jack the Giant-Killer” told from the point of view of the giant. This has not, I think, been done before in literature; but I have little doubt that the psychological substance of it existed in fact. I have little doubt that the giant whom Jack killed did regard himself as the Superman. It is likely enough that he considered Jack a narrow and parochial person who wished to frustrate a great forward movement of the life-force. If (as not unfrequently was the case) he happened to have two heads, he would point out the elementary maxim which declares them to be better than one. He would enlarge on the subtle modernity of such an equipment, enabling a giant to look at a subject from two points of view, or to correct himself with promptitude.

    But Jack was the champion of the enduring human standards, of the principle of one man one head and one man one conscience, of the single head and the single heart and the single eye. Jack was quite unimpressed by the question of whether the giant was a particularly gigantic giant. All he wished to know was whether he was a good giant—that is, a giant who was any good to us. What were the giant’s religious views; what his views on politics and the duties of the citizen? Was he fond of children—or fond of them only in a dark and sinister sense? To use a fine phrase for emotional sanity, was his heart in the right place? Jack had sometimes to cut him up with a sword in order to find out.

    The old and correct story of Jack the Giant-Killer is simply the whole story of man; if it were understood we should need no Bibles or histories. But the modern world in particular does not seem to understand it at all. The modern world, like Mr. Wells, is on the side of the giants; the safest place, and therefore the meanest and the most prosaic. The modern world, when it praises its little Caesars, talks of being strong and brave: but it does not seem to see the eternal paradox involved in the conjunction of these ideas. The strong cannot be brave. Only the weak can be brave; and yet again, in practice, only those who can be brave can be trusted, in time of doubt, to be strong. The only way in which a giant could really keep himself in training against the inevitable Jack would be by continually fighting other giants ten times as big as himself. That is by ceasing to be a giant and becoming a Jack.

    Thus that sympathy with the small or the defeated as such, with which we Liberals and Nationalists have been often reproached, is not a useless sentimentalism at all, as Mr. Wells and his friends fancy. It is the first law of practical courage. To be in the weakest camp is to be in the strongest school. Nor can I imagine anything that would do humanity more good than the advent of a race of Supermen, for them to fight like dragons. If the Superman is better than we, of course we need not fight him; but in that case, why not call him the Saint? But if he is merely stronger (whether physically, mentally, or morally stronger, I do not care a farthing), then he ought to have to reckon with us at least for all the strength we have. If we are weaker than he, that is no reason why we should be weaker than ourselves. If we are not tall enough to touch the giant’s knees, that is no reason why we should become shorter by falling on our own. But that is at bottom the meaning of all modern hero-worship and celebration of the Strong Man, the Caesar, the Superman. That he may be something more than man, we must be something less.

    Doubtless there is an older and better hero-worship than this. But the old hero was a being who, like Achilles, was more human than humanity itself. Nietzsche’s Superman is cold and friendless. Achilles is so foolishly fond of his friend that he slaughters armies in the agony of his bereavement. Mr. Shaw’s sad Caesar says in his desolate pride, “He who has never hoped can never despair.”

    The Man-God of old answers from his awful hill, “Was ever sorrow like unto my sorrow?” A great man is not a man so strong that he feels less than other men; he is a man so strong that he feels more. And when Nietzsche says, “A new commandment I give to you, ‘be hard,'” he is really saying, “A new commandment I give to you, ‘be dead.'” Sensibility is the definition of life.

    I recur for a last word to Jack the Giant-Killer. I have dwelt on this matter of Mr. Wells and the giants, not because it is specially prominent in his mind; I know that the Superman does not bulk so large in his cosmos as in that of Mr. Bernard Shaw. I have dwelt on it for the opposite reason; because this heresy of immoral hero-worship has taken, I think, a slighter hold of him, and may perhaps still be prevented from perverting one of the best thinkers of the day. In the course of “The New Utopia” Mr. Wells makes more than one admiring allusion to Mr. W. E. Henley. That clever and unhappy man lived in admiration of a vague violence, and was always going back to rude old tales and rude old ballads, to strong and primitive literatures, to find the praise of strength and the justification of tyranny. But he could not find it. It is not there. The primitive literature is shown in the tale of Jack the Giant-Killer. The strong old literature is all in praise of the weak. The rude old tales are as tender to minorities as any modern political idealist. The rude old ballads are as sentimentally concerned for the under-dog as the Aborigines Protection Society. When men were tough and raw, when they lived amid hard knocks and hard laws, when they knew what fighting really was, they had only two kinds of songs. The first was a rejoicing that the weak had conquered the strong, the second a lamentation that the strong had, for once in a way, conquered the weak. For this defiance of the statu quo, this constant effort to alter the existing balance, this premature challenge to the powerful, is the whole nature and inmost secret of the psychological adventure which is called man. It is his strength to disdain strength. The forlorn hope is not only a real hope, it is the only real hope of mankind. In the coarsest ballads of the greenwood men are admired most when they defy, not only the king, but what is more to the point, the hero. 
The moment Robin Hood becomes a sort of Superman, that moment the chivalrous chronicler shows us Robin thrashed by a poor tinker whom he thought to thrust aside. And the chivalrous chronicler makes Robin Hood receive the thrashing in a glow of admiration. This magnanimity is not a product of modern humanitarianism; it is not a product of anything to do with peace. This magnanimity is merely one of the lost arts of war.[…]”

    – G.K. Chesterton, “Heretics”


    Hurlock Reply:

    “The idea of hedonism as a purpose fills me with disgust.”

    Oh, but it shouldn’t. It is precisely the human drive to always consume more that makes any sort of techno-AI future even possible.

    Man is the greediest creature yet, and he is very creative in satisfying his greed. This is what makes us special. It is our never-ending greed that makes us the most productive creatures ever to walk this earth. This is both our curse and our blessing.


    Aeroguy Reply:

    I associate hedonism with ultimately being a wirehead plugged into pure ecstasy, doing nothing.
    Greed and consumption I associate with ambition, which can never sit around and be satisfied.


    Izak Reply:

    I associate hedonism with whatever gives you physical and mental/emotional pleasure.

    I’m a Western-European-derived white man with all of the characteristic traits of inborn pathological altruism, so I feel a great sense of accomplishment by helping others. It makes me feel wonderful afterwards, and I have a clear conscience.

    So I try to spend my life being as (momentarily) selfless as I can and helping people out who need it, smiling all the while, even in situations where tough love would be the best medicine.

    It’s pure and total hedonism.

    Posted on October 31st, 2014 at 6:08 am Reply | Quote
  • SGW Says:

    What is the difference between a futurist techcom and an accelerationist communist (techcommie)?


    Posted on October 31st, 2014 at 7:44 am Reply | Quote
  • Hanfeizi Says:

    @Wen Shuang

    “Totalitarian eternal stasis is boring doom.”

    I can’t be the only one who, when reading Orwell’s line about the future being a boot stamping on a face forever, pulled out his to-do list and wrote, “buy boots”.


    Posted on November 3rd, 2014 at 10:54 pm Reply | Quote
  • blogospheroid Says:

    Damn! I fall ill over one weekend and miss one of the best discussions in XS comments, ever.

    Excellent comments, everyone. Nyan, glad that you are holding up the successor-to-FAI-thought flag.

    Accelerationists, in the words of King Aragorn “A day may come when the courage of men fails, when we forsake our friends and break all bonds of fellowship, but it is not this day. An hour of wolves and shattered shields, when the age of men comes crashing down! But it is not this day! This day we fight! By all that you hold dear on this good Earth, I bid you stand, Men of the West!”

    A day may come when the negentropy of the accessible universe is not enough to hold the histories, the lives and values of humanity. When everything soft and tender has to give way to more scales, spikes and eyes. But it is not in our time when that has to be done. We have an enormous amount of energy going to waste today, energy that we can channel into creating a better world for us and our descendants.


    Posted on November 4th, 2014 at 5:36 pm Reply | Quote

Leave a comment