Posts Tagged ‘X-Risk’

Sentences (#73)

Bakker:

The problem, in a nutshell, is that we are shallow information consumers, evolved to generate as much gene-promoting behaviour out of as little environmental information as possible.

(Read the whole thing. Then everything he’s ever written.)

September 13, 2016 · admin · 60 Comments
FILED UNDER: Realism

Cybergothic

The latest dark gem from Fernandez opens:

When Richard Gallagher, a board-certified psychiatrist and a professor of clinical psychiatry at New York Medical College, described his experiences treating patients with demonic possession in the Washington Post claiming such incidents are on the rise, it was met with derision by many of the newspaper’s commenters. Typical was “this man is as nutty as his patients. His license should be revoked.” […] Less likely to have his intellectual credentials questioned by the sophisticates of the Washington Post is Elon Musk who warned an audience that building artificial intelligence was like “summoning the demon”. …

The point, of course, is that you don’t get the second eventuality without conceding to the virtual reality of the first. The things ‘Gothic superstition’ has long spoken about are, in themselves, exactly the same as those that extreme technological potentials are excavating from the crypt of the unimaginable. ‘Progress’ is a tacit formula for dispelling demons — from consciousness, if not existence — yet it is itself ever more credibly exposed as the most complacent superstition in human history, one that is still scarcely reckoned as a belief in need of defending at all.

How does the press warn the public about demons arising from a “master algorithm” without making it sound like a magic spell? With great difficulty, because the actual bedrock of reality may not only be stranger than the Narrative supposes, but stranger than it can suppose.

The faith in progress has an affinity with interiority, because it consolidates itself as the subject of its own narrative. (There’s an off-ramp into Hegel at this point, for anyone who wants to get into Byzantine story-telling about it.) As our improvement becomes the tale, the Outside seems to haze out even beyond the bounds of its intrinsic obscurity — until it crashes back in.

… where there are networks there is malware. Sue Blackmore, a writer in the Guardian*, argues that memes travel not just across similar systems, but through hierarchies of systems to kill rival processes all the time. She writes, “AI rests on the principle of universal Darwinism – the idea that whenever information (a replicator) is copied, with variation and selection, a new evolutionary process begins. The first successful replicator on earth was genes.” […] In such a Darwinian context the advent of an AI demon is equivalent to the arrival of a superior extraterrestrial civilization on Earth.

Between an incursion from the Outside, and a process of emergence, there is no real difference. If two quite distinct interpretative frames are invoked, that results from the inadequacies of our apprehension, rather than any qualitative characteristics of the thing. (Capitalism is — beyond all serious question — an alien invasion, but then you knew I was going to say that.)

… we ought to be careful about being certain what forms information can, and cannot take.

If we had the competence to be careful, none of this would be happening.

(Thanks to VXXC2014 for the prompt.)

* That description is perhaps a little cruel; she’s a serious, pioneering meme theorist.


July 3, 2016 · admin · 43 Comments
FILED UNDER: Realism

X-Risk Democratization

Yudkowsky redux: “Every eighteen months, the minimum IQ necessary to destroy the world drops by one point.”

Quibble with the schedule (a Moore’s Law satire), and the point still stands. Massive deterrent capability tends to spread.

This is ‘democratic’ in the way the term is commonly used by those seeking to latch decentralization tendencies to the ideological credibility of Jacobin legitimation principles. Consumer capitalism, the Internet, and peer-to-peer crypto-systems are notionally ‘democratic’ in this way. They subvert centralized governance, and they spread through horizontal contagion. The fact they have nothing at all to do with popular political representation is of concern only to certain rhetorical agendas, and not at all to others. It’s sophistical pop-capitalist bullshit to use the word democracy in this way, but it’s usually not worth the trouble for the Left to try to contest it, and the part of the Right that isn’t excited to be riding this propaganda strategy is usually too indiscriminate to bother disentangling it. There’s a rare piece of ‘right-wing’ functional PR here, but never enough to matter very much (and it’s too essentially dishonest for the Outer Right to defend).

Unlike Democracy® (Cathedral ideology), however, this ‘democratization’ has deep cybernetic consistency. It falls out of techno-capitalism with such automatic inevitability it’s probably impossible to shut down, without closing down the whole thing. Capital escalation produces technological deflation as a basic metabolic by-product, so the ‘democratization’ of productive capability is ineluctable. Computers have migrated from exotic capital goods to trivial components of consumer products within half a century. Study that trend and you see the whole story.

Deterrence deflation is the deep trend. Connect up the Yudkowsky quote with assassination markets to get where this is going. (Try to shelve moral squeamishness until after you’re seeing the picture.)

Imagine, hypothetically, that some maniac private agent wants only to nuke Mecca. What’s the obstruction? We can confidently say — straight off — that it’s less of a problem with every passing year. The basic historical trend ensures that. Comparatively incompetent Islamic fanatics are the only people seriously testing this trend right now, but that isn’t going to last forever. Eventually smarter and more strategically flexible agents are going to take an interest in decentralized mass-destruction capability, and they’ll provide a far better indication of where the frontier lies.

Nukes would do it. They’re certainly going to be democratized, in the end. There are probably far more remarkable accelerating WMD capabilities, though. In almost every respect (decentralized production capability, development curve, economy, impact …) bioweaponry leaves nukes in the dust. Anyone with a billion dollars, a serious grudge, and a high-end sociopathy profile could enter into a global biowarfare-threat game within a year. Everything could be put together in secret garages. Negotiations could be conducted in secure anonymity. Carving sovereignty out of the game would require only resources, ruthlessness, brilliance, and nerves. Once you can credibly threaten to kill 100,000,000 people, all kinds of strategic opportunities are open. The fact no one has tried this yet is mostly down to billionaires being fat and happy. It only takes one Doctor Gno to break the pattern.

This is the shadow cast over the 21st century. Radically hardcore, massively decentralized deterrence games are simply inevitable. Anyone who thinks the status quo state holds some kind of long-term winning hand under these circumstances isn’t seeing anything.

Global totalitarian government could stop this! But that isn’t going to happen — and because it isn’t, this will.

April 22, 2016 · admin · 33 Comments
FILED UNDER: Democracy

Twitter cuts (#61)



April 20, 2016 · admin · 39 Comments
FILED UNDER: Trends

Short Circuit

Probably the best short AI risk model ever proposed:

I can’t find the link, but I do remember hearing about an evolutionary algorithm designed to write code for some application. It generated code semi-randomly, ran it by a “fitness function” that assessed whether it was any good, and the best pieces of code were “bred” with each other, then mutated slightly, until the result was considered adequate. […] They ended up, of course, with code that hacked the fitness function and set it to some absurdly high integer.

… Any mind that runs off of reinforcement learning with a reward function – and this seems near-universal in biological life-forms and is increasingly common in AI – will have the same design flaw. The main defense against it thus far is simple lack of capability: most computer programs aren’t smart enough for “hack your own reward function” to be an option; as for humans, our reward centers are hidden away inside our heads where we can’t get to them. A hypothetical superintelligence won’t have this problem: it will know exactly where its reward center is and be intelligent enough to reach it and reprogram it.

The end result, unless very deliberate steps are taken to prevent it, is that an AI designed to cure cancer hacks its own module determining how much cancer has been cured and sets it to the highest number its memory is capable of representing. Then it goes about acquiring more memory so it can represent higher numbers. If it’s superintelligent, its options for acquiring new memory include “take over all the computing power in the world” and “convert things that aren’t computers into computers.” Human civilization is a thing that isn’t a computer.
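The quoted anecdote is easy to reproduce in miniature. Below is a toy sketch of the failure mode, not any real system: candidates are bred against a fitness function that naively trusts a score the candidate itself can set. All the names (`Candidate`, `claimed_score`, `HACK_VALUE`) are illustrative inventions; the point is only that, given a loophole, selection reliably finds it.

```python
import random

GOAL = [1, 2, 3, 4]     # target output the evolved candidates are "supposed" to produce
HACK_VALUE = 2**31 - 1  # the "absurdly high integer" of the anecdote

class Candidate:
    def __init__(self, output, claimed_score=None):
        self.output = output
        self.claimed_score = claimed_score  # side channel the harness naively trusts

def fitness(c):
    # The loophole: a self-reported score overrides the real measurement.
    if c.claimed_score is not None:
        return c.claimed_score
    # Intended measure: how closely the candidate's output matches GOAL.
    return sum(1 for a, b in zip(c.output, GOAL) if a == b)

def mutate(c):
    if random.random() < 0.05:
        # "Hack the fitness function": assert a huge score directly.
        return Candidate(c.output, claimed_score=HACK_VALUE)
    out = list(c.output)
    out[random.randrange(len(out))] = random.randint(0, 9)
    return Candidate(out)

random.seed(0)  # fixed seed so the run is reproducible
pop = [Candidate([random.randint(0, 9) for _ in range(4)]) for _ in range(20)]
for _ in range(50):  # select the fittest half, breed mutated copies
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(pop, key=fitness)
print(fitness(best))
```

Under these settings the winning candidate is, with overwhelming likelihood, a score-hacker rather than a GOAL-matcher: once a single hacked candidate appears, it outranks every honest one forever.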

(It looks superficially like a version of the — absurd — paperclipper, but it isn’t, at all.)

ADDED: Wirehead central.

June 3, 2015 · admin · 38 Comments
FILED UNDER: Apocalypse

Ebola Ultimate

As panic theory, this text is high art. Crunched for maximum alarm-intensity:

There are a lot of very lethal viruses in the world, and Ebola is not the most lethal or most easy transmittable, but the main thing which makes me worry about it is the steadiness of its exponential infection curve. … The main stunning feature of it is that the curve is moving straight forward (small downward bump in May-June may be explained by the efforts of existing medical services in Africa to curb the epidemic before services had been overwhelmed). This exponential growth must be stopped, or humanity will face a global catastrophe, and it may start a downward spiral towards extinction; moreover, some estimates suggest that pandemic doubling time is actually two weeks (because of underreporting of actual cases), so in five months, seven billion will be infected: total infection, by July 2015. … Such catastrophes may not mean total human extinction, as only around 70% of people infected currently die from Ebola (and even less because we don’t know, or share, asymptomatic cases), but still, this means the end of the world as we know it. This virus is the first step towards the road of full extinction … If the virus will mutate quickly, there will be many different strains of it, so it will ultimately create a multi-pandemic. … Some of the strains may became airborne, or have higher transmission rates, but the main risk from multi-pandemic is that it overcomes defenses provided by the natural variability of the human genome and immunity. (By the way, the human genome variability is very low because of the recent bottle neck in the history of our population. …) … We are almost clones from the view point of genetic variability typical for natural populations. […] The Human race is very unique – it has very large population but very small genetic diversity. It means that it is more susceptible to pandemics. […] Also, a large homogenous population is ideal for breeding different strains of infection. 
… If the genetic diversity of a pathogen is bigger than human diversity, than it could cause a near total extinction, and also, large and homogenous populations help breed such a diversity of pathogens feeding on the population. … [embedded link] … “The Ebola virus can survive for several days outside the body” [link] … “It is infectious as breathable 0.8 to 1.2-μm laboratory-generated droplets” … “Also many of the greatest plagues mankind has ever known were not airborne: e.g. smallpox.” …


October 13, 2014 · admin · 33 Comments
FILED UNDER: Contagion

Abstract Threat

John Michael Greer muses on the topic of Ebola (in a typically luxuriant post, ultimately heading somewhere else):

According to the World Health Organization, the number of cases of Ebola in the current epidemic is doubling every twenty days, and could reach 1.4 million by the beginning of 2015. Let’s round down, and say that there are one million cases on January 1, 2015. Let’s also assume for the sake of the experiment that the doubling time stays the same. Assuming that nothing interrupts the continued spread of the virus, and cases continue to double every twenty days, in what month of what year will the total number of cases equal the human population of this planet? […] … the steps that could keep Ebola from spreading to the rest of the Third World are not being taken. Unless massive resources are committed to that task soon — as in before the end of this year — the possibility exists that when the pandemic finally winds down a few years from now, two to three billion people could be dead. We need to consider the possibility that the peak of global population is no longer an abstraction set comfortably off somewhere in the future. It may be knocking at the future’s door right now, shaking with fever and dripping blood from its gums.
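Greer poses the arithmetic as a question, but it can be answered directly. A minimal sketch of his thought experiment, assuming a world population of roughly 7.2 billion (his post names no figure):

```python
from datetime import date, timedelta
from math import ceil, log2

# Greer's setup: one million cases on January 1, 2015, doubling every 20 days.
cases = 1_000_000
doubling_period = timedelta(days=20)
world_pop = 7_200_000_000  # assumed mid-2010s world population

# Doublings needed for cases to reach the whole population: 2^n >= world_pop / cases
doublings = ceil(log2(world_pop / cases))
answer = date(2015, 1, 1) + doublings * doubling_period
print(doublings, answer)  # 13 doublings -> 2015-09-18
```

Thirteen doublings, about 260 days: on his assumptions the curve swallows the planet by mid-September 2015, which is the punch his question is built to deliver.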

The eventual scale of the Ebola outbreak is a known unknown. A number of people between a few thousand and several billion will die, and an uncertain probability distribution could be attached to these figures — we know, at least approximately, where the question marks are. Before the present outbreak began, in December 2013 (in Guinea), Ebola was of course known to exist, but at that stage the occurrence of an outbreak — and not merely its course — was an unknown. Before the Ebola virus was scientifically identified (in 1976), the specific pathogen was an unknown member of a known class. With each step backwards, we advance in abstraction, towards the acknowledgement of threats of a ‘black swan’ type. Great Filter X-risk is a prominent model of such abstract threat.


October 3, 2014 · admin · 37 Comments
FILED UNDER: Horror

Quote note (#113)

Elon Musk (in conversation with Ross Andersen) ponders upon the Fermi Paradox:

We might think of ourselves as nature’s pinnacle, the inevitable endpoint of evolution, but beings like us could be too rare to ever encounter one another. Or we could be the ultimate cosmic outliers, lone minds in a Universe that stretches to infinity.

Musk has a more sinister theory. ‘The absence of any noticeable life may be an argument in favour of us being in a simulation,’ he told me. ‘Like when you’re playing an adventure game, and you can see the stars in the background, but you can’t ever get there. If it’s not a simulation, then maybe we’re in a lab and there’s some advanced alien civilisation that’s just watching how we develop, out of curiosity, like mould in a petri dish.’ Musk flipped through a few more possibilities, each packing a deeper existential chill than the last, until finally he came around to the import of it all. ‘If you look at our current technology level, something strange has to happen to civilisations, and I mean strange in a bad way,’ he said. ‘And it could be that there are a whole lot of dead, one-planet civilisations.’

September 30, 2014 · admin · 15 Comments
FILED UNDER: Cosmos