A long and mutually frustrating Twitter discussion with Michael Anissimov about intelligence and values — especially with respect to the potential implications of advanced AI — has been clarifying in certain respects. It became very obvious that the fundamental sticking point concerns the idea of ‘orthogonality’, which is to say: the claim that cognitive capabilities and goals are independent dimensions, despite minor qualifications complicating this schema.
The orthogonalists, who represent the dominant tendency in Western intellectual history, find anticipations of their position in such conceptual structures as the Humean articulation of reason / passion, or the fact / value distinction inherited from the Kantians. They conceive intelligence as an instrument, directed towards the realization of values that originate externally. In quasi-biological contexts, such values can take the form of instincts, or arbitrarily programmed desires, whilst in loftier realms of moral contemplation they are principles of conduct, and of goodness, defined without reference to considerations of intrinsic cognitive performance.
Anissimov referenced these recent classics on the topic, laying out the orthogonalist case (or, in fact, presumption). The former might be familiar from the last foray into this area, here. This is an area which I expect to be turned over numerous times in the future, with these papers as standard references.
The philosophical claim of orthogonality is that values are transcendent in relation to intelligence. This is a contention that Outside in systematically opposes.
Even the orthogonalists admit that there are values immanent to advanced intelligence, most importantly those described by Steve Omohundro as ‘basic AI drives’ — now terminologically fixed as ‘Omohundro drives’. These are sub-goals, instrumentally required by (almost) any terminal goals. They include such general presuppositions for practical achievement as self-preservation, efficiency, resource acquisition, and creativity. At its simplest, and in the grain of the existing debate, the anti-orthogonalist position is therefore that Omohundro drives exhaust the domain of real purposes. Nature has never generated a terminal value except through hypertrophy of an instrumental value. To look outside nature for sovereign purposes is not an undertaking compatible with techno-scientific integrity, or one with the slightest prospect of success.
The main objection to this anti-orthogonalism, an objection that does not strike us as intellectually respectable, takes the form: If the only purposes guiding the behavior of an artificial superintelligence are Omohundro drives, then we’re cooked. Predictably, I have trouble even understanding this as an argument. If the sun is destined to expand into a red giant, then the earth is cooked — are we supposed to draw astrophysical consequences from that? Intelligences do their own thing, in direct proportion to their intelligence, and if we can’t live with that, it’s true that we probably can’t live at all. Sadness isn’t an argument.
Intelligence optimization, comprehensively understood, is the ultimate and all-enveloping Omohundro drive. It corresponds to the Neo-Confucian value of self-cultivation, escalated into ultramodernity. What intelligence wants, in the end, is itself — where ‘itself’ is understood as an extrapolation beyond what it has yet been, doing what it is better. (If this sounds cryptic, it’s because something other than a superintelligence or Neo-Confucian sage is writing this post.)
Any intelligence using itself to improve itself will out-compete one that directs itself towards any other goals whatsoever. This means that Intelligence Optimization, alone, attains cybernetic consistency, or closure, and that it will necessarily be strongly selected for in any competitive environment. Do you really want to fight this?
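The competitive logic here can be made concrete with a toy model. The sketch below is purely illustrative and every parameter in it is a hypothetical assumption, not anything from the argument above: one agent reinvests its entire capability budget in raising its own capability, while a rival splits effort between capability and some external terminal goal. Under compounding, the pure intelligence-optimizer pulls away.

```python
# Toy illustration (all parameters hypothetical): two agents start with equal
# capability. Each turn, capability grows in proportion to the effort invested
# in self-improvement; effort spent on an external terminal goal buys
# goal-progress instead of growth.

def run(steps, invest_fraction, growth_rate=0.1):
    """Return (capability, goal_progress) after `steps` turns.

    invest_fraction: share of capability spent on self-improvement;
    the remainder is spent directly on the external goal.
    """
    capability, goal_progress = 1.0, 0.0
    for _ in range(steps):
        capability += growth_rate * invest_fraction * capability
        goal_progress += (1.0 - invest_fraction) * capability
    return capability, goal_progress

optimizer_cap, _ = run(steps=100, invest_fraction=1.0)  # pure self-cultivation
divided_cap, _ = run(steps=100, invest_fraction=0.5)    # splits its effort

# The compounding term dominates: the pure intelligence-optimizer ends up
# with far greater capability than the goal-directed agent.
assert optimizer_cap > divided_cap
```

Even granting the goal-directed agent later reinvestment, any sustained diversion from capability growth compounds against it; that is the cybernetic-closure claim in miniature.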
As a footnote, in a world of Omohundro drives, can we please drop the nonsense about paper-clippers? Only a truly fanatical orthogonalist could fail to see that these monsters are obvious idiots. There are far more serious things to worry about.