Quote note (#254)

High on Dr Gno’s reading list, Unethical Research: How to Create a Malevolent Artificial Intelligence (abstract):

Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts, which results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine. Availability of such information would be of great value particularly to computer scientists, mathematicians, and others who have an interest in AI safety, and who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, which can negatively affect human activities and in the worst case cause the complete obliteration of the human species. This paper provides some general guidelines for the creation of a Malevolent Artificial Intelligence (MAI).

Channeling X-Risk security resources into MAI-design means that if the human species has to die, it can at least do so ironically. The game theory involved in this could use work. It’s clearly a potential deterrence option, but that would require far more settled signaling systems than anything in place yet. Threatening to unleash an MAI is vastly neater than MAD, and should work in the same way. Edgelords with a taste for chicken games should be able to wrest independence from it.
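
For concreteness, a minimal Python sketch of the chicken-game structure that MAI deterrence would instantiate. The payoff numbers are illustrative placeholders invented for this example, not taken from anywhere; the point is only that a player who visibly commits to escalation forces the other side to yield.

# Hypothetical chicken-game payoffs; all values are illustrative.
YIELD, ESCALATE = 0, 1

# Keys: (A's move, B's move). Values: (A's payoff, B's payoff).
# Mutual escalation = the MAI deterrent actually fires.
PAYOFFS = {
    (YIELD, YIELD):       (0, 0),        # status quo
    (YIELD, ESCALATE):    (-1, 1),       # A backs down, B wrests independence
    (ESCALATE, YIELD):    (1, -1),       # B backs down
    (ESCALATE, ESCALATE): (-100, -100),  # species dies, ironically
}

def best_response(opponent_move, player):
    """Move maximizing `player`'s payoff against a fixed opponent move."""
    def payoff(move):
        pair = (move, opponent_move) if player == 0 else (opponent_move, move)
        return PAYOFFS[pair][player]
    return max((YIELD, ESCALATE), key=payoff)

# If A credibly commits to ESCALATE, B's best response flips to YIELD:
assert best_response(ESCALATE, player=1) == YIELD
assert best_response(YIELD, player=1) == ESCALATE

As with MAD, everything rides on the commitment being visible and irreversible, which is exactly what the missing ‘settled signaling systems’ would have to supply.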

(The Vacuum Decay Trigger, while of even greater deterrence value, is more of a blue sky project.)

ADDED: It’s a trend. Here’s ‘Analog Malicious Hardware’ being explored: “As dangerous as their invention sounds for the future of computer security, the Michigan researchers insist that their intention is to prevent such undetectable hardware backdoors, not to enable them. They say it’s very possible, in fact, that governments around the world may have already thought of their analog attack method. ‘By publishing this paper we can say it’s a real, imminent threat,’ says [University of Michigan researcher Matthew] Hicks. ‘Now we need to find a defense.’”

June 1, 2016 | admin | 8 Comments »
FILED UNDER: Apocalypse


8 Responses to this entry

  • Quote note (#254) | Neoreactive Says:

    […] Quote note (#254) […]

    Posted on June 1st, 2016 at 4:00 pm Reply | Quote
  • Johan Schmidt Says:

    “To understand vacuum decay, you need to consider the Higgs field that permeates our Universe.”

    Not like the Luminiferous Aether at all. No way. That was just a theory, whereas this is fact.


    SVErshov Reply:

    Interesting that this article did not actually say a word about what the Higgs field and Higgs bosons are; it supposes readers already know all that.

    What is remarkable is that this concept is quite Hegelian, because as we know, a field can exist without any particles in it (something like an absolute vacuum), but particles cannot exist without a field. The field is primary, so to speak, and matter is secondary. I hope that this bad a… AI is not going to interpret it as: the earth can exist without humans, but humans cannot exist without the earth.


    Posted on June 1st, 2016 at 4:01 pm Reply | Quote
  • Brett Stevens Says:

    A malevolent AI would probably turn into a benevolent sociopath if it found a use for the 20% of humanity that has the potential to be useful.

    The reason is that AIs are most likely to assess actions by their efficiency at achieving goals, not by a predefined quantitative measurement or moral standard.

    As a result, the AI will look at life on earth and find a way to maintain it much as one would a lawn or garden. That means pruning back the useless humans and keeping the ones that behave like good pets.


    Posted on June 1st, 2016 at 4:02 pm Reply | Quote
  • Nathan Cook Says:

    “I’ve been working on what is evil and how to formally define it,” says Bringsjord, who is also director of the Rensselaer AI & Reasoning Lab (RAIR). “It’s creepy, I know it is.”

    I linked this on the AGI mailing list back in 2008. Ben Goertzel was kind enough to comment on what would be needed to create a truly evil AI.


    Posted on June 1st, 2016 at 4:06 pm Reply | Quote
  • Uriel Alexis Says:


    “At the most simple, and in the grain of the existing debate, the anti-orthogonalist position is therefore that Omohundro drives exhaust the domain of real purposes. (…) Intelligence optimization, comprehensively understood, is the ultimate and all-enveloping Omohundro drive.”

    MAI and FAI are the same, and they are just AI, which is just intelligence optimized past a certain point where carbon brains are just not good enough. “Moral feeling is dirt”.


    Posted on June 1st, 2016 at 5:14 pm Reply | Quote
  • frank Says:

    I thought you didn’t buy that “singleton AI str8 outta lab will have magical world-destroying powers almost instantly” crap. Any remotely competent, telos-calculating AI — and as you pointed out before, a teleology-incapable ASI is a contradiction in terms — will understand that it has to cooperate with other agents. An inherently malevolent or friendly ASI is the same kind of absurdity as a paper clippin’ ASI.


    Posted on June 1st, 2016 at 6:40 pm Reply | Quote
  • Outliers (#8) Says:

    […] Libertarian evolution. Brexit dominoes. Rise above. Rape wave blackout. Dysorganization. White guns. King Trump. Left terrorism (echoes). Live report. Ersatz meaning. Culturecide. SocJus dizziness. Experience > ideology. Off-grid gardening. How to beat Leftism. Goodism (= war). benevolent sociopathy. […]

    Posted on June 5th, 2016 at 5:03 am Reply | Quote
