Quote note (#254)
Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts, which results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine. Availability of such information would be of great value particularly to computer scientists, mathematicians, and others who have an interest in AI safety, and who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, which can negatively affect human activities and in the worst case cause the complete obliteration of the human species. This paper provides some general guidelines for the creation of a Malevolent Artificial Intelligence (MAI).
Channeling X-Risk security resources into MAI design means that if the human species has to die, it can at least do so ironically. The game theory involved here could use work. It’s clearly a potential deterrence option, but one that would require far more settled signaling systems than anything in place yet. Threatening to unleash an MAI is vastly neater than MAD, and should work in the same way. Edgelords with a taste for chicken games should be able to wrest independence from it.
(The Vacuum Decay Trigger, while of even greater deterrence value, is more of a blue sky project.)
ADDED: It’s a trend. Here’s ‘Analog Malicious Hardware’ being explored: “As dangerous as their invention sounds for the future of computer security, the Michigan researchers insist that their intention is to prevent such undetectable hardware backdoors, not to enable them. They say it’s very possible, in fact, that governments around the world may have already thought of their analog attack method. ‘By publishing this paper we can say it’s a real, imminent threat,’ says [University of Michigan researcher Matthew] Hicks. ‘Now we need to find a defense.’”