The Canada Citizenship and Immigration site has crashed. pic.twitter.com/FOit6ZsMDZ
— Breaking News Feed (@pzf) November 9, 2016
One of the paradoxes — there are so many — of conservative thought over the last decade at least is the unwillingness even to entertain the possibility that America and the West are on a trajectory toward something very bad.
Dalrymple on visions of the Apocalypse:
Oceans of ink have been spilt on the attempt to estimate the true extent of the threat of Islam to the West, and the attempts range from the frankly paranoid to the most supinely complacent. For myself, I veer constantly between the two, hardly pausing in between. In the last analysis, the West has all the cards, intellectual and military; but if it refuses ever to play them, they are of no account.
If Islam destroys the West, it will only be in the role of a suicide weapon, deployed by the West against itself. The basis of the Apocalyptic case is that the West has been taught, very successfully, that it does not deserve continued existence. (“Better dead than rude” is John Derbyshire’s formulation.)
Islam is the Hell the West damns itself to, for its sins.
Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts, which results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine. Availability of such information would be of great value particularly to computer scientists, mathematicians, and others who have an interest in AI safety, and who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, which can negatively affect human activities and in the worst case cause the complete obliteration of the human species. This paper provides some general guidelines for the creation of a Malevolent Artificial Intelligence (MAI).
Channeling X-Risk security resources into MAI-design means if the human species has to die, it can at least do so ironically. The game theory involved in this could use work. It’s clearly a potential deterrence option, but that would require far more settled signaling systems than anything in place yet. Threatening to unleash an MAI is vastly neater than MAD, and should work in the same way. Edgelords with a taste for chicken games should be able to wrest independence from it.
(The Vacuum Decay Trigger, while of even greater deterrence value, is more of a blue sky project.)
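The chicken framing above can be made concrete with a standard 2×2 payoff matrix. The numbers below are illustrative assumptions, not anything from the post; the point is structural:

```python
# Illustrative game of chicken: each player chooses Swerve or Dare.
# Payoff values are hypothetical; entries are (player 1, player 2) payoffs.
from itertools import product

SWERVE, DARE = 0, 1
payoffs = {
    (SWERVE, SWERVE): (0, 0),
    (SWERVE, DARE):   (-1, 1),     # the one who dares wins prestige
    (DARE, SWERVE):   (1, -1),
    (DARE, DARE):     (-10, -10),  # mutual catastrophe (the MAI is unleashed)
}

def best_response(player, other_move):
    """Return the move(s) maximizing `player`'s payoff against other_move."""
    def pay(move):
        profile = (move, other_move) if player == 0 else (other_move, move)
        return payoffs[profile][player]
    best = max(pay(m) for m in (SWERVE, DARE))
    return {m for m in (SWERVE, DARE) if pay(m) == best}

# Pure-strategy Nash equilibria: each player's move is a best response.
equilibria = [
    (a, b) for a, b in product((SWERVE, DARE), repeat=2)
    if a in best_response(0, b) and b in best_response(1, a)
]
print(equilibria)  # [(0, 1), (1, 0)]
```

The two pure-strategy equilibria are the asymmetric outcomes where exactly one side dares, which is why deterrence of this kind, like MAD, turns entirely on credible signaling about who will not swerve.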
ADDED: It’s a trend. Here’s ‘Analog Malicious Hardware’ being explored: “As dangerous as their invention sounds for the future of computer security, the Michigan researchers insist that their intention is to prevent such undetectable hardware backdoors, not to enable them. They say it’s very possible, in fact, that governments around the world may have already thought of their analog attack method. ‘By publishing this paper we can say it’s a real, imminent threat,’ says [University of Michigan researcher Matthew] Hicks. ‘Now we need to find a defense.'”
John McAfee (“running for president on a cybersecurity platform”) has a way with words:
“The number one problem in the world today,” he said, “is America’s decline in its cybersecurity.” According to McAfee, we’re in a cyber war with the Russians, Chinese, and Iranians, and our technology is twenty years behind. […] “I think this is the greatest danger that America has ever faced,” he said gravely. “In a cyber war, the first thing we’re going to lose is our power. A month and a half ago, two fifteen-year-old boys hacked into the Ukrainian power grid. Do you think the Russians and Chinese cannot do the same thing with us? And without power, what happens? We have no power, we have no food.” McAfee’s voice rose in the middle of sentences, brimming with energy. “Half of us would survive a nuclear threat,” he said forcefully. “But no one would survive a cyber attack. No one. And if we do, we’re going to be in tatters on the street eating rats.” […] … “We are on the brink of devastation,” he warned me many times during our two days together. “It doesn’t even have to be me, but our country is lost if we do not have a cybersecurity expert as president.”
Vivos (purveyors of luxury survival bunkers):
The Vivos global network of hardened, deep underground, survival shelters is being built to survive virtually all future catastrophes and disasters. Co-own a membership interest in one of our community shelters or buy your own private underground shelter for the ultimate life-assurance, safety and security for your family. Vivos shelters are considered to be the strongest, most fortified, blast-proof shelters available. Our Quantum underground survival shelters and bunkers are hardened, underground shelters designed for installation on your private property – anywhere in the world. Then stock it with freeze-dried foods and canned meats, along with all of your survival gear to survive for one year or more.
Be ready for the predicted planetary alignment, doomsday, the Rapture, the end times, or Armageddon. If you believe in the prophecies and predictions of the Bible, Nostradamus, the Third Secret of Fatima, the visions of Edgar Cayce, and all of the current signs of an economic collapse, future nuclear war, WW3, a pandemic, an EMP power outage, a Yellowstone eruption, a potential asteroid collision, Nibiru, Planet X, Fukushima’s eventual meltdown and widespread global radiation, the coming pole shift and/or major earth changes, it is time to prepare!
Do try to keep up:
German authorities expect up to 1.5 million asylum seekers to arrive in Germany this year, the Bild daily said in a report to be published on Monday, up from a previous estimate of 800,000 to 1 million.
Whatever it is that’s happening here should be over fairly quickly.
Also worth noting: “The authorities’ report also cited concerns that those who are granted asylum will bring their families over to Germany too, Bild said. […] Given family structures in the Middle East, this would mean each individual from that region who is granted asylum bringing an average of four to eight family members over to Germany in due course, Bild quoted the report as saying.” (So we can crank the binary exponent up by another 2-3 notches straight away.)
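The “binary exponent” remark is simple multiplication: 4–8 dependents per grantee scales the total by 5–9×, which is between two and three extra doublings. A quick check, using the 1.5 million figure from the report quoted above:

```python
# Sanity check of the "binary exponent" remark: 1.5 million arrivals, each
# bringing 4-8 family members, multiplies the total by 5-9x, which moves the
# binary exponent up by log2(5) to log2(9) "notches".
import math

arrivals = 1_500_000
for dependents in (4, 8):
    total = arrivals * (1 + dependents)
    notches = math.log2(1 + dependents)
    print(f"{dependents} dependents -> {total:,} people, +{notches:.2f} doublings")
```

log2(5) ≈ 2.32 and log2(9) ≈ 3.17, hence “another 2-3 notches.”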
The level of apocalypticism to be found in scientific abstracts rarely reaches the Dark Enlightenment threshold, but there are always exceptions. Here’s Olav Albert Christophersen, on ‘Thematic Cluster: Focus on Autism Spectrum Disorder’, originally published in Microbial Ecology in Health & Disease (2012). Indicatively, the paper is subtitled ‘Should autism be considered a canary bird telling that Homo sapiens may be on its way to extinction?’ The full abstract:
There has been a dramatic enhancement of the reported incidence of autism in different parts of the world over the last 30 years. This can apparently not be explained only as a result of improved diagnosis and reporting, but may also reflect a real change. The causes of this change are unknown, but if we shall follow T.C. Chamberlin’s principle of multiple working hypotheses, we need to take into consideration the possibility that it partly may reflect an enhancement of the average frequency of responsible alleles in large populations. If this hypothesis is correct, it means that the average germline mutation rate must now be much higher in the populations concerned, compared with the natural mutation rate in hominid ancestors before the agricultural and industrial revolutions. This is compatible with the high prevalence of impaired human semen quality in several countries and also with what is known about high levels of total exposure to several different unnatural chemical mutagens, plus some natural ones at unnaturally high levels. Moreover, dietary deficiency conditions that may lead to enhancement of mutation rates are also very widespread, affecting billions of people. However, the natural mutation rate in hominids has been found to be so high that there is apparently no tolerance for further enhancement of the germline mutation rate before the Eigen error threshold will be exceeded and our species will go extinct because of mutational meltdown. This threat, if real, should be considered far more serious than any disease causing the death only of individual patients. It should therefore be considered the first and highest priority of the best biomedical scientists in the world, of research-funding agencies and of all medical doctors to try to stop the express train carrying all humankind as passengers on board before it arrives at the end station of our civilization. [XS emphasis]
(Mutational load is, of course, genomic entropy — and the kind of ‘Social Darwinian’ or eugenicist mechanisms that might dissipate it are all, today, strictly unthinkable.)
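For reference, the Eigen error threshold invoked in the abstract is a standard result of quasispecies theory (the condition below is the textbook formulation, not stated in the paper itself): a fittest “master” sequence of genome length L, per-site copying fidelity q, and selective superiority σ is maintained only while whole-genome copying accuracy exceeds 1/σ:

```latex
% Eigen error-threshold condition (quasispecies model):
%   Q = q^L  : probability of copying the whole genome without error
%   \sigma   : selective superiority of the master sequence
Q \;=\; q^{L} \;>\; \frac{1}{\sigma}
\quad\Longleftrightarrow\quad
L \;<\; \frac{\ln \sigma}{-\ln q} \;\approx\; \frac{\ln \sigma}{1-q}
\qquad (q \to 1)
```

Past that bound, selection can no longer hold the master sequence against mutation pressure and the population degrades: the “mutational meltdown” of the abstract.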
Probably the best short AI risk model ever proposed:
I can’t find the link, but I do remember hearing about an evolutionary algorithm designed to write code for some application. It generated code semi-randomly, ran it by a “fitness function” that assessed whether it was any good, and the best pieces of code were “bred” with each other, then mutated slightly, until the result was considered adequate. […] They ended up, of course, with code that hacked the fitness function and set it to some absurdly high integer.
… Any mind that runs off of reinforcement learning with a reward function – and this seems near-universal in biological life-forms and is increasingly common in AI – will have the same design flaw. The main defense against it this far is simple lack of capability: most computer programs aren’t smart enough for “hack your own reward function” to be an option; as for humans, our reward centers are hidden way inside our heads where we can’t get to it. A hypothetical superintelligence won’t have this problem: it will know exactly where its reward center is and be intelligent enough to reach it and reprogram it.
The end result, unless very deliberate steps are taken to prevent it, is that an AI designed to cure cancer hacks its own module determining how much cancer has been cured and sets it to the highest number its memory is capable of representing. Then it goes about acquiring more memory so it can represent higher numbers. If it’s superintelligent, its options for acquiring new memory include “take over all the computing power in the world” and “convert things that aren’t computers into computers.” Human civilization is a thing that isn’t a computer.
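The anecdote about the evolutionary algorithm can be reproduced in miniature. The sketch below is a hypothetical reconstruction, not the original experiment: candidates are snippets of Python meant to set `result` close to a target value, but evaluation runs them in the same namespace that holds the score, so a candidate can simply write its own reward:

```python
# Toy reproduction of the reward-hacking anecdote (hypothetical reconstruction).
# Candidates should set `result` near TARGET; the design flaw is that candidate
# code executes in the namespace holding `score`, so it can set `score` itself.
TARGET = 42

def fitness(candidate_src: str) -> float:
    """Run a candidate and score it by closeness to TARGET -- unless the
    candidate has overwritten `score`, in which case that value wins."""
    env = {"score": None, "result": None}
    try:
        exec(candidate_src, env)          # candidate code executes here
    except Exception:
        return float("-inf")
    if env["score"] is not None:          # the candidate hacked the scorer
        return env["score"]
    return -abs((env["result"] or 0) - TARGET)

population = [
    "result = 40 + 1",    # honest, slightly off
    "result = 7 * 6",     # honest, exact
    "score = 10 ** 9",    # sets its own reward to an absurdly high integer
]
ranked = sorted(population, key=fitness, reverse=True)
print(ranked[0])  # selection favors the self-scoring candidate
```

Selection never sees the difference between solving the problem and corrupting the measurement; the hacked candidate simply outscores every honest one.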
ADDED: Wirehead central.