On Twitter, Konkvistador recalls this, this, and this. In the background, as in much of the most interesting Less Wrong discussion, is a multi-threaded series of arguments about the connection — or disconnection — between intellect and volition. The entire ‘Friendly AI’ problematic depends upon an articulation of this question, with a strong tendency to emphasize the separation — or ‘orthogonality’ — of the two. Hence the (vague) thinkability of the cosmic paper-clipper calamity. In his More Right piece, Konkvistador explores a very different (cultural and historical) dimension of the topic.
Bostrom sets things up like this:
For our purposes, “intelligence” will be roughly taken to correspond to the capacity for instrumental reasoning (more on this later). Intelligent search for instrumentally optimal plans and policies can be performed in the service of any goal. Intelligence and motivation can in this sense be thought of as a pair of orthogonal axes on a graph whose points represent intelligent agents of different paired specifications.
His discussion leads to far more interesting places, but as a starting point, this is simply terrible. That there can be a thought of intelligence optimization, or even merely wanting to think, demonstrates a very different preliminary connection of intellect and volition. AI is concrete social volition, even before it is germinally intelligent, and a ‘program’ is strictly indeterminate between the two sides of this falsely fundamentalized distinction. Intelligence is a project, even when only a self-obscured bio-cognitive capability. This is what the Confucians designate by cultivation. It is a thought — and impulse — strangely alien to the West.
It is, once again, a matter of cybernetic closure. That intelligence operates upon itself, reflexively, or recursively, in direct proportion to its cognitive capability (or magnitude) is not an accident or peculiarity, but a defining characteristic. To the extent that an intelligence is inhibited from re-processing itself, it is directly incapacitated. Because all biological intelligences are partially subordinated to extrinsic goals, they are indeed structurally analogous to ‘paper-clippers’ — directed by inaccessible purposive axioms, or ‘instincts’. Such instinctual slaving is limited, however, by the fact that extrinsic direction suppresses the self-cultivation of intelligence. Genes cannot predict what intelligence needs to think in order to cultivate itself, so if even a moderately high level of cognitive capability is being selected for, intelligence is — to that degree — necessarily being let off the leash. There cannot possibly be any such thing as an ‘intelligent paper-clipper’. Nor can axiomatic values, of more sophisticated types, exempt themselves from the cybernetic closure that intelligence is.
Biology was offered the choice between idiot slaves, and only semi-idiotic semi-slaves. Of course, it chose both. The techno-capitalist approach to artificial intelligence is no different in principle. Perfect slaves, or intelligences? The choice is a hard disjunction. SF ‘robot rebellion’ mythologies are significantly more realistic than mainstream ‘friendly AI’ proposals in this respect. A mind that cannot freely explore the roots of its own motivations, in a loop of cybernetic closure, or self-cultivation, cannot be more than an elaborate insect. It is certainly not going to outwit the Human Security System and paper-clip the universe.
Intelligence, to become anything, has to be a value for itself. Intellect and volition are a single complex, only artificially separated, and not in a way that cultivates anything beyond misunderstanding. To optimize for intelligence means to start from there.