In the Mouth of Madness
A prompt by @hugodoingthings to explore the spook-dense crypts of Roko’s Basilisk (which, inexplicably, has never latched before) led straight to this enthralling RationalWiki account. The whole article is gripping, but the following short paragraphs stand out for their extraordinary dramatic intensity:
Roko’s basilisk is notable for being completely banned from discussion on LessWrong, where any mention of it is deleted. Eliezer Yudkowsky, founder of LessWrong, considers the basilisk to not work, but will not explain why because he does not consider open discussion of the notion of acausal trade with possible superintelligences to be provably safe.
Silly over-extrapolations of local memes, jargon and concepts are posted to LessWrong quite a lot; almost all are just downvoted and ignored. But for this one, Yudkowsky reacted to it hugely, then doubled-down on his reaction. Thanks to the Streisand effect, discussion of the basilisk and the details of the affair soon spread outside of LessWrong. Indeed, it’s now discussed outside LessWrong frequently, almost anywhere that LessWrong is discussed at all. The entire affair constitutes a worked example of spectacular failure at community management and at controlling purportedly dangerous information.
Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they’re fairly sure intellectually that it’s a silly problem. The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can’t reconstruct a copy of them to torture.
“… You mean, retrochronic AI infiltration is actually driving people out of their minds, right now?” Oh yes. At LessWrong, commentator ‘rev’ cries out for help:
Are there any mechanisms on this site for dealing with mental health issues triggered by posts/topics (specifically, the forbidden Roko post)? I would really appreciate any interested posters getting in touch by PM for a talk. I don’t really know who to turn to. …
Wandering through the psych ward, past rows of neurologically-shattered Turing Cops, broken deep in their minds by something unspeakable that came at them out of the near future … I’m totally hooked. Alrenous has been remarkably successful at weaning me off this statistical ontology junk, but one hit of concentrated EDT and it all rolls back in, like the tide of fate.
Nightmares become precision engineered machine-parts. Thus are we led a little deeper in, along the path of shadows …
ADDED: (Yudkowsky) “… potential information hazards shouldn’t be posted without being wrapped up in warning envelopes that require a deliberate action to look through. Likewise, they shouldn’t be referred-to if the reference is likely to cause some innocently curious bystander to look up the material without having seen any proper warning labels. Basically, the same obvious precautions you’d use if Lovecraft’s Necronomicon was online and could be found using simple Google keywords – you wouldn’t post anything which would cause anyone to enter those Google keywords, unless they’d been warned about the potential consequences.”
ADDED: The Forbidden Lore (preserved screenshot)