Gigadeath War

Hugo de Garis argues (consistently) that controversy over permitted machine intelligence development will inevitably swamp all other political conflicts. (Here’s a video discussion of the thesis.) Given the epic quality of the scenario, and its basic plausibility, it has remained strangely marginalized up to this point. The component pieces seem to be falling into place. The true element of genius in this futurist construction is preemption. The more one digs into that, the more twistedly dynamic it looks.

Among the many thought-provoking elements:

(1) Slow take-off is especially ominous for the de Garis model (in stark contrast to FAI arguments). The slower the process, the more time for ideological consolidation, incremental escalation, and preparation for violent confrontation.

(2) AI doesn’t even have to be possible for this scenario to unfold (it only has to be credible as a threat).

(3) De Garis’ ‘Cosmist-Terran’ division chops up familiar political spectra at strange angles. (Both NRx and the Ultra-Left contain the full C-T spectrum internally.)

(4) Terrans have to strike first, or lose. That asymmetry shapes everything.

(5) Impending Gigadeath War surely deserves a place on any filled-out horrorism list.

[Image: nuclear war global impacts]

De Garis’ site.

(Some topic preemption at Outside in here.)

August 22, 2014 | admin | 19 Comments
FILED UNDER: Technology, World

19 Responses to this entry

  • Solex Says:

    Amongst his other thought-provoking elements: “Im proposing the establishment of an organisation called “MJP” (MEjew Prosecutors) consisting of researchers and lawyers to research scientifically the massive crimes of the Jewish central banksters and then to prosecute them…” He also seems to find it rather unfair that men can’t have abortions. Clearly some super-size intelligence at work here!

    admin Reply:

    Mix of intellectual strengths and weaknesses, certainly. He has a talent for starkly envisioning the unfolding of social trend-lines — his core scenario is a crucial part of the futurist furniture IMHO.

    Michael Reply:

    Seems I’m banned.

    admin Reply:

    You’re not banned, or even clipped.

    Michael Reply:

    My posts won’t post; I’ll try a third time.
    Karen Straughan (Girl Writes What on YouTube) blogs eloquently about a theory I’ve been arguing less well for decades: that abortion rights are unnecessarily sexist. She terms Male Legal Paternal Surrender the right of a young man to party on rather than face the consequences of the last party (“party on, dude”). It’s deliciously dark; it exposes the absurd leftist reasoning behind “her right to choose” and empowers men. For those of you who don’t know Karen, I recommend her.

    admin Reply:

    Sometimes things get hung up in the spam queue when I’m asleep. Try to chill.

    Posted on August 22nd, 2014 at 11:42 am
  • Bryce Laliberte Says:

    I propose a compromise: Keep Yudkowsky and his degenerate peers from developing an artilect, but let the Catholic Church build one. Anyone who would nuke the Vatican would have an artilect-enhanced holy Crusade called against them.

    So, do we know anyone we could make Pope?

    That said, I am still awfully skeptical of the production of superintelligence. Intelligence is a *really hard* thing to develop, and there would undoubtedly be diminishing returns to increasing complexity. I suspect that the complexity required for each additional order of magnitude of intelligence increases exponentially, so that even an artilect with orders of magnitude more material resources for computation at its disposal than a human brain wouldn’t be that many orders of magnitude more intelligent (though undoubtedly still really smart, at least for its particular function).
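
    A minimal sketch of that diminishing-returns claim (the constants a and b here are hypothetical illustration, not anything de Garis specifies): if the complexity C required for intelligence level I grows exponentially, then intelligence grows only logarithmically in available complexity.

    \[
    C(I) = a\,b^{I},\ b > 1
    \quad\Longrightarrow\quad
    I(C) = \log_b\!\frac{C}{a},
    \qquad
    I\bigl(10^{k}C\bigr) - I(C) = \frac{k}{\log_{10} b}.
    \]

    On that assumption, multiplying computational resources by 10^k buys only a fixed additive gain in intelligence: exactly the “still really smart, but not that many magnitudes smarter” outcome described above.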

    Sign me up for the Cosmists, but I don’t expect the Cosmist/Terran split to become obvious until many centuries down the line, and then as a result of evolutionary selection on human beings rather than the activity of some artilect or a war over them.

    Posted on August 22nd, 2014 at 4:37 pm
  • Aeroguy Says:

    My main criticism is that he bases his predictions on something closer to homo rationalis. It’s a nice outline, but it doesn’t take into account humans being human, which will, poetically, make it much easier for the Cosmists. (I’ll try to find time to write more.)

    I’m more concerned about the people going overboard with the F in FAI. It’s like imposing a metal man’s burden: a self-flagellating mind obsessed with the interests of lesser beings. What could possibly go wrong? Especially since there’s no talk of controlling the pet population (“spay or neuter your human today”); it’s as if the AI would be the machine version of the crazy cat lady who refuses to have her own children because she’s also the ultimate progressive (how many humans does an AI need?).

    Erik Reply:

    Please stop applying human social heuristics to a machine. My car never riots because one of its compatriots was shot, my cellphone doesn’t suffer or object that I’m holding it upside down, my fan does not complain of the injustice that I have locked it in place to cool me and only me rather than let it turn from side to side to progressively distribute airflow across the neglected heat-ghettoes of my room.

    Current FAI spec is emphatically not to place a set of impositions, burdens, obsessions, or controls on top of an egoistic heart, but to build an entity that is friendly to humans at heart. There is no “true self” further down. I have a lot of complaints about the FAI plan, but to describe it as self-flagellating is like saying it’ll observe kosher. Nonsense.

    admin Reply:

    “… to build an entity that is friendly to humans at heart.” — In other words, they’re hubristic rationalists who expect to be able to specify the orientation of an alien soul. No awareness of spontaneous order in sight. It’s a full-on Soviet-style planning psychosis.

    Erik Reply:

    Yes, I consider that to be one of the more serious problems. They have to get it right the first time it’s deployed; testing is on a sliding scale between “doesn’t prove anything” and “terribly unsafe if we got it wrong”; and writing an ironclad proof of friendliness is on the big To Do list of ludicrously hard things that would be nice to have.

    Funeral Mongoloid Reply:

    ‘My car never riots because one of its compatriots was shot, my cellphone doesn’t suffer or object that I’m holding it upside down, my fan does not complain of the injustice that I have locked it in place to cool me and only me rather than let it turn from side to side to progressively distribute airflow across the neglected heat-ghettoes of my room.’

    Some sort of bizarre, psychotic, minority-loathing object displacement going on here, I think.

    Funeral Mongoloid Reply:

    My mobile phone is a good chap because he stays in his proper place and does what he is supposed to do – not like those black folks down in Ferguson, etc.

    Posted on August 23rd, 2014 at 6:46 am
  • Ask me about my My Little Pony drone fanfiction Says:

    http://phys.org/news/2014-08-lockheed-martin-fully-autonomous-robot.html

    “Lockheed Martin, in collaboration with the U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC), successfully conducted a fully autonomous resupply, reconnaissance, surveillance and target-acquisition demonstration using its Squad Mission Support System (SMSS) unmanned ground vehicle, K-MAX unmanned helicopter and Gyrocam optical sensor.”

    Compare and contrast this bit of news with actual Less Wrong opinions:

    http://lesswrong.com/lw/hpb/nearterm_risk_killer_robots_a_threat_to_freedom/95uu

    Carl Shulman’s glib one-liner is great: “Yes, this is a problem.” As is Epiphany’s comment, currently rated “1” with 67% approval (and no responses):

    “I absolutely scoured the internet about 6 months ago looking for any mention of checks and balances, democracy, power balances and killing robots, AI soldiers, etc. (I used all the search terms I could think of to do this) and didn’t find them. Is this because they’re miniscule in size, don’t publish much, use special jargon or for some other reason?”

    http://lesswrong.com/lw/hpb/nearterm_risk_killer_robots_a_threat_to_freedom/969r

    This is some stunning prediction as well: “No, we already have those. The decision to kill has nothing to do with it. The decisions of where to put the robot, and its ammunition, and the fuel, and everything else it needs, so that it’s in a position to make the decision to kill, is what we cannot yet do programmatically. You’re confusing tactics and strategy. You cannot run an army without strategic decisionmakers. Robots are not in a position to do that for, I would guess, at least twenty years.” [Emphasis added.]

    Yudkowsky weighs in with typical arrogance: http://lesswrong.com/lw/hpb/nearterm_risk_killer_robots_a_threat_to_freedom/95um

    http://www.reddit.com/r/LessWrong/comments/17y819/lw_uncensored_thread/c8a5vhi

    No idea what the original post is, because it was deleted in the legendary basilisk thread, but the response is gold-star arrogance and willful blindness to actual military developments: “You can’t. I can’t. But we can imagine Terminator, so that possibility immediately seems more threatening. When considering the future, imaginability is a poor constraint.”

    Yudkowsky and crew actually believe they can out-compete the Military-Industrial complex while totally ignoring it.

    Aeroguy Reply:

    Skynet’s my bro, lay off.
    But seriously, I wish they were building Skynet at DARPA, but sadly I know better.

    Posted on August 23rd, 2014 at 6:51 am
  • bbq beast Says:

    In the end, the answer will be to have your own killer robots to kill the rogue ones. If this escalates before the Terrans get their first strike in, they will have to resort to robot troopers themselves.

    Posted on August 23rd, 2014 at 12:15 pm
  • Konkvistador Says:

    I agree.

    Posted on August 23rd, 2014 at 4:41 pm
  • Konkvistador Says:

    “Multis and Monos: What the Multicultured Can Teach the Monocultured: Towards the Creation of a Global State”

    Is he pwned, or already angling for his side in the Gigadeath War?

    Posted on August 23rd, 2014 at 4:44 pm
  • Lightning Round – 2014/08/27 | Free Northerner Says:

    […] A gigadeath war. […]

    Posted on August 28th, 2014 at 3:01 am
