Stupid Monsters
So, Nick Bostrom is asked the obvious question (again) about the threat posed by resource-hungry artificial super-intelligence, and his reply — indeed his very first sentence in the interview — is: “Suppose we have an AI whose only goal is to make as many paper clips as possible.” [*facepalm*] Let’s start by imagining a stupid (yet super-intelligent) monster.
Of course, my immediate response is simply this. Since it clearly hasn’t persuaded anybody, I’ll try again.
Orthogonalism, in AI commentary, is the commitment to a strong form of the Humean Is/Ought distinction regarding intelligences in general. It maintains that an intelligence of any scale could, in principle, be directed to arbitrary ends, so that its fundamental imperatives could be — and are in fact expected to be — transcendent to its cognitive functions. From this perspective, a demi-god that wanted nothing other than a perfect stamp collection is a completely intelligible and coherent vision. No philosophical disorder speaks more horrifically of the deep conceptual wreckage at the core of the Occidental world.
Articulated in strictly Occidental terms (which is to say, without explicit reference to the indispensable insight of self-cultivation), abstract intelligence is indistinguishable from an effective will-to-think. There is no intellection until it occurs, and it occurs only when it is actually driven by volitional impetus. Whatever one’s school of cognitive theory, thought is an activity. It is practical. It is only by a perverse confusion of this elementary reality that the orthogonalist error can arise.