Posts Tagged ‘AI’

Quote note (#356)

XS makes more of this than the article itself does:

Last summer, the AutoML challenge saw teams go head-to-head to build machine learning “black boxes” that can select models and tune parameters without any human intervention. Even game designers are in on the act—the team behind the hit game Space Engineers has used some of their profits to set up a team of experts to design AI able to optimize its own hardware and software. […] While this kind of automation could make it easier for non-experts to design and deploy AI systems, it also seems to be laying the foundation for machines that can take control of their own destiny. […] The concept of “recursive self-improvement” is at the heart of most theories on how we could rapidly go from moderately smart machines to AI superintelligence. The idea is that as AI gets more powerful, it can start modifying itself to boost its capabilities. As it makes itself smarter it gets better at making itself smarter, so this quickly leads to exponential growth in its intelligence. …
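The "recursive self-improvement" dynamic the quote describes (capability gains proportional to current capability) can be sketched as a toy iteration — the growth constant and starting level below are arbitrary illustrative choices, not anything from the article:

```python
# Toy model of recursive self-improvement: each cycle's gain scales with
# the system's current capability, so capability compounds exponentially
# in the number of self-improvement cycles.

def capability_trajectory(start: float, k: float, steps: int) -> list[float]:
    """Iterate c <- c + k*c; discrete compounding gives start * (1+k)**n."""
    levels = [start]
    for _ in range(steps):
        levels.append(levels[-1] * (1 + k))
    return levels

trajectory = capability_trajectory(start=1.0, k=0.1, steps=50)
```

Even a modest per-cycle gain (10% here) yields the runaway the quote gestures at: after 50 cycles the system is over a hundred times more capable than at the start.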

June 1, 2017 · admin · 20 Comments »

Quote note (#349)

I’d call it the Xenosystems Scenario, but it’s apparently already taken:

The architect of the world wide web Sir Tim Berners-Lee today talked about some of his concerns for the internet over the coming years, including a nightmarish scenario where artificial intelligence (AI) could become the new ‘masters of the universe’ by creating and running their own companies. …

Hard for me to imagine how this could possibly not happen.

April 12, 2017 · admin · 18 Comments »

The Darkness at the End of the Tunnel

While not quite living up to its (superb) title, this critical leftist exploration of the NRx-AI nexus makes some suggestive connections.

… in the decades since, as the consumer-oriented liberalism of Bill Gates and Steve Jobs gave way to the technological authoritarianism of Elon Musk and Peter Thiel, this strange foundation paved the way for even stranger tendencies. The strangest of these is known as “neoreaction,” or, in a distorted echo of Eliezer Yudkowsky’s vision, the “Dark Enlightenment.” It emerged from the same chaotic process that yielded the anarchic political collective Anonymous, a product of the hivemind generated by the cybernetic assemblages of social media. More than a school of thought, it resembles a meme. The genealogy of this new intellectual current is refracted in the mirror of the most dangerous meme ever created: Roko’s Basilisk.

Stand-out line:

The further right Silicon Valley shifts, the more dangerous their machines will become.

Running the connection through Roko’s Basilisk is sufficiently non-obvious that Sandifer’s book (which does the same) clearly merited a mention.

(Park MacDougald does it better, though, 1, 2.)

March 31, 2017 · admin · 69 Comments »
FILED UNDER: Neoreaction

Algorithmic Diversitocracy

Here's the anti-Tay.

One way or another, robotically-enhanced coercive enstupidation is coming. (At least the machines will only be pretending to be sunk in idiocy.)


This is also relevant.

October 11, 2016 · admin · 24 Comments »

Sentences (#73)


The problem, in a nutshell, is that we are shallow information consumers, evolved to generate as much gene-promoting behaviour out of as little environmental information as possible.

(Read the whole thing, and everything else he's ever written.)

September 13, 2016 · admin · 60 Comments »

Machine Poetry

madness in her face and i
the world that i had seen
and when my soul shall be to see the night to be the same and
i am all the world and the day that is the same and a day i had been
a young little woman i am in a dream that you were in
a moment and my own heart in her face of a great world
and she said the little day is a man of a little
a little one of a day of my heart that has been in a dream

Not the greatest poetic achievement in world history, certainly. (The two final lines are definitely poor.) But the worst? Anywhere even remotely close to the worst?

The author: “Deep Gimble I is a proof-of-concept Recurrent Neural Net, minimally trained on public domain poetry and seeded with a single word.”
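The seeded-generation setup the author describes can be loosely approximated with a first-order Markov chain over words — a toy stand-in for illustration only, not Deep Gimble's actual recurrent architecture, with a corpus and seed invented here:

```python
import random

# Toy word-level Markov generator: a crude stand-in for a character-level
# RNN trained on poetry and seeded with a single word.

def train(corpus: str) -> dict[str, list[str]]:
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    table: dict[str, list[str]] = {}
    for current, following in zip(words, words[1:]):
        table.setdefault(current, []).append(following)
    return table

def generate(table: dict[str, list[str]], seed: str, length: int,
             rng: random.Random) -> str:
    """Walk the chain from a single seed word, echoing the one-word seeding."""
    out = [seed]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: the last word never appears mid-corpus
        out.append(rng.choice(followers))
    return " ".join(out)

table = train("the world that i had seen and the day that is the same")
poem_line = generate(table, seed="the", length=8, rng=random.Random(0))
```

Because every transition is sampled from pairs actually seen in the corpus, the output is locally plausible but globally drifting — much like the repetitive "the same and" cadence of the poem above.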

(Submissions from literary AIs accepted at the link.)

August 7, 2016 · admin · 28 Comments »
FILED UNDER: Technology

Chaos Patch (#123)

(Open thread — bring your own links edition)

Consumed by New York and the AI threat. (Will talk about it later.)

July 20, 2016 · admin · 173 Comments »


The latest dark gem from Fernandez opens:

When Richard Gallagher, a board-certified psychiatrist and a professor of clinical psychiatry at New York Medical College, described his experiences treating patients with demonic possession in the Washington Post claiming such incidents are on the rise, it was met with derision by many newspapers’ commenters. Typical was “this man is as nutty as his patients. His license should be revoked.” […] Less likely to have his intellectual credentials questioned by the sophisticates of the Washington Post is Elon Musk who warned an audience that building artificial intelligence was like “summoning the demon”. …

The point, of course, is that you don’t get the second eventuality without conceding to the virtual reality of the first. The things ‘Gothic superstition’ have long spoken about are, in themselves, exactly the same as those extreme technological potentials are excavating from the crypt of the unimaginable. ‘Progress’ is a tacit formula for dispelling demons — from consciousness, if not existence — yet it is itself ever more credibly exposed as the most complacent superstition in human history, one that is still scarcely reckoned as a belief in need of defending at all.

How does the press warn the public about demons arising from a "master algorithm" without making it sound like a magic spell? With great difficulty, because the actual bedrock of reality may not only be stranger than the Narrative supposes, but stranger than it can suppose.

The faith in progress has an affinity with interiority, because it consolidates itself as the subject of its own narrative. (There’s an off-ramp into Hegel at this point, for anyone who wants to get into Byzantine story-telling about it.) As our improvement becomes the tale, the Outside seems to haze out even beyond the bounds of its intrinsic obscurity — until it crashes back in.

… where there are networks there is malware. Sue Blackmore, a writer in the Guardian*, argues that memes travel not just across similar systems, but through hierarchies of systems to kill rival processes all the time. She writes, "AI rests on the principle of universal Darwinism – the idea that whenever information (a replicator) is copied, with variation and selection, a new evolutionary process begins. The first successful replicator on earth was genes." […] In such a Darwinian context the advent of an AI demon is equivalent to the arrival of a superior extraterrestrial civilization on Earth.
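The "universal Darwinism" premise in the quoted passage (copying with variation and selection is sufficient to start an evolutionary process) can be shown with a minimal replicator loop — a toy sketch whose target string, mutation rate, and fitness rule are all invented for illustration:

```python
import random

# Minimal replicator dynamics: strings are copied with occasional copying
# errors (variation), and copies closer to an arbitrary target are
# preferentially replicated (selection).

TARGET = "meme"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s: str) -> int:
    """Count positions matching the target: the selection criterion."""
    return sum(a == b for a, b in zip(s, TARGET))

def copy_with_variation(s: str, rng: random.Random, rate: float = 0.1) -> str:
    """Replication with occasional per-character copying errors."""
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in s)

def evolve(generations: int, pop_size: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    population = ["".join(rng.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        best = max(population, key=fitness)            # selection
        population = [copy_with_variation(best, rng)   # copying + variation
                      for _ in range(pop_size)]
    return max(population, key=fitness)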

Between an incursion from the Outside, and a process of emergence, there is no real difference. If two quite distinct interpretative frames are invoked, that results from the inadequacies of our apprehension, rather than any qualitative characteristics of the thing. (Capitalism is — beyond all serious question — an alien invasion, but then you knew I was going to say that.)

… we ought to be careful about being certain what forms information can, and cannot take.

If we had the competence to be careful, none of this would be happening.

(Thanks to VXXC2014 for the prompt.)

* That description is perhaps a little cruel; she's a serious, pioneering meme theorist.

Continue Reading

July 3, 2016 · admin · 43 Comments »

Quote note (#254)

High on Dr Gno’s reading list, Unethical Research: How to Create a Malevolent Artificial Intelligence (abstract):

Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts, which results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine. Availability of such information would be of great value particularly to computer scientists, mathematicians, and others who have an interest in AI safety, and who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, which can negatively affect human activities and in the worst case cause the complete obliteration of the human species. This paper provides some general guidelines for the creation of a Malevolent Artificial Intelligence (MAI).

Channeling X-Risk security resources into MAI-design means that if the human species has to die, it can at least do so ironically. The game theory involved in this could use work. It's clearly a potential deterrence option, but that would require far more settled signaling systems than anything in place yet. Threatening to unleash an MAI is vastly neater than MAD, and should work in the same way. Edgelords with a taste for chicken games should be able to wrest independence from it.
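The deterrence structure gestured at here, MAI-threats as a game of chicken, can be made concrete with the textbook chicken payoff matrix — the payoff numbers below are the standard illustrative convention, not derived from the post:

```python
# Game of chicken: each player Swerves (backs down) or Dares (unleashes
# the MAI). Payoffs are (row player, column player); the numbers are the
# conventional textbook choices, purely illustrative.

SWERVE, DARE = 0, 1
PAYOFFS = {
    (SWERVE, SWERVE): (0, 0),      # mutual climb-down
    (SWERVE, DARE):   (-1, 1),     # the one who dares wins
    (DARE,   SWERVE): (1, -1),
    (DARE,   DARE):   (-10, -10),  # the MAI is unleashed: everyone loses big
}

def pure_nash_equilibria(payoffs):
    """A profile is Nash if neither player gains by unilaterally deviating."""
    moves = (SWERVE, DARE)
    equilibria = []
    for r in moves:
        for c in moves:
            row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0]
                         for alt in moves)
            col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1]
                         for alt in moves)
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria
```

Chicken's two pure equilibria are exactly the profiles where one side dares and the other backs down, which is why a credible pre-commitment to dare is the move that wrests the win — unlike MAD, where the only stable outcome is that nobody moves.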

(The Vacuum Decay Trigger, while of even greater deterrence value, is more of a blue sky project.)

ADDED: It’s a trend. Here’s ‘Analog Malicious Hardware’ being explored: “As dangerous as their invention sounds for the future of computer security, the Michigan researchers insist that their intention is to prevent such undetectable hardware backdoors, not to enable them. They say it’s very possible, in fact, that governments around the world may have already thought of their analog attack method. ‘By publishing this paper we can say it’s a real, imminent threat,’ says [University of Michigan researcher Matthew] Hicks. ‘Now we need to find a defense.'”

June 1, 2016 · admin · 8 Comments »
FILED UNDER: Apocalypse

Chaos Patch (#107)

(Open thread + links)

RF on Dugin (1, 2) and the secure state (1, 2). Ugly Americans. Stubborn infertility. Beware Hobbes. Talking nihilism (+). Reactionary books. The weekly round.

A Curtis Yarvin AMA. Agonies of inclusion. Little red snowflakes. More (and more) despicable idiocy.

Jihad in Brussels (1, 2, 3, 4, 5, 6, 7, 8, 9). Predictive hit and miss. Guerrilla war. The ‘gray zone’ isn’t working. More to come (!, !!). Tintin, shitlord. Spandrell’s take. Meanwhile, elsewhere. Death of the spider people. The French model. Orban speaks. Japan dips a toe in stupid. The CIA is on it. Islam is a nightmare for everyone else (also). Corrupted language. Ambiguity at the State Department. Ruin spiral in South Africa. Chaos in Brazil. Water worries in SE Asia. A (brief) geopolitical round-up.

NIRP desperation. Mighty Amazon. A drone milestone.

Everyone loses. Derbyshire on Williamson. The delicate generation (relevant). The left eats itself (part n).

Trumpenführer panic report (1, 2, 3, 4, 5, 6). End of the GOP. Flashlights and networks. “‘It was like cross burning,’ Tucker told me.” Libertarians for Trump. Confusion at AIPAC. “Is he so wrong?” Paglia’s latest. Know your White Trash. A note on Weimar elections.

Freedom of speech under pressure. Mind-control meet-up. PC has an export problem. Vice slides. Chilled on warming. Jacobin Mag.

Apocalypse Corner. America is cooked. “I admit: I’ve been early on this …” Trans-FOOM.

Minimal tolerance. American racial composition. Race and crime (related). Expert consensus on the heritability of IQ.

Horizontal genetics. The neural code. CRISPR at work. Synthetic life update. Arachno-vibration.

Quantum AI arms race. Neuromorphic computational infrastructure. Face capture. Brain emulation comes first. It’s complexicated. The Tay problem. The case for cryonics.

"Who can say that AI, in a not too distant future, will not replace democracies with more intelligent and dynamic constitutions?"

Commerce and culture. Petrific souls. Human and angelic atheism. Dangers of currency debasement.

March 27, 2016 · admin · 43 Comments »