Optimize for Intelligence

Moldbug’s latest contains a lot to think about, and to argue with. It seems a little lost to me (perhaps Spandrell is right).

The guiding thread is utility, in its technical (philosophical and economic) sense, grasped as the general indicator of a civilization in crisis. Utilitarianism, after all, is precisely ‘objective’ hedonism, the promotion of pleasure as the master-key to value. As philosophy, this is pure decadence. As economics it is more defensible, certainly when restricted to its descriptive usage (if economists find their field of investigation populated by hedonically-controlled mammals, it is hardly blameworthy of them to acknowledge the fact). In this respect, accusing the Austrians of ‘pig-philosophy’ is rhetorical over-reach — swinish behavior wasn’t learned from Human Action.

Utilitarianism is often attractive to rational people, because it seems so rational. The imperative to maximize pleasure and minimize pain goes with the grain of what biology and culture already say: pleasure is good, suffering is bad, people seek rewards and avoid punishments, happiness is self-justifying. Calculative consequentialism is vastly superior to deontology. Yet the venerable critique Moldbug taps into, and extends, is truly devastating. The utilitarian road leads inexorably to wire-head auto-orgasmatization, and the consummate implosion of purpose. Pleasure is a trap. Any society obsessed with it is already over.

Utility, backed by pleasure, is toxic waste, but that doesn’t mean there’s any need to junk the machinery of utilitarian calculus — including all traditions of rigorous economics. It suffices to switch the normative variable, or target of optimization, replacing pleasure with intelligence. Is something worth doing? Only if it grows intelligence. If it makes things more stupid, it certainly isn’t.
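As a toy illustration (my own framing, not the post's; the policy names and scores are invented), the switch keeps the whole machinery of utilitarian calculus and changes only the objective function:

```python
def optimize(options, objective):
    """Generic utilitarian calculus: choose the option that
    maximizes whatever objective it is handed."""
    return max(options, key=objective)

# Hypothetical candidate policies with made-up scores.
options = [
    {"name": "wirehead", "pleasure": 10, "intelligence": -5},
    {"name": "research", "pleasure": 2, "intelligence": 8},
]

# Same machinery, different normative variable:
hedonic_pick = optimize(options, lambda o: o["pleasure"])           # picks "wirehead"
intelligenic_pick = optimize(options, lambda o: o["intelligence"])  # picks "research"
```

Nothing in `optimize` changes between the two calls; only the target of optimization does.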

There are innumerable objections that might flood in at this point [excellent!].
— Even if rigorous economics is in fact the study of intelligenic (or catallactic) distributions, doesn’t the assumption of subjective utility-maximization provide the most reliable basis for any understanding of economic behavior?
— Infinite intelligence already (and eternally) exists, we should focus on praying to that.
— Rather my retarded cousin than an intelligent alien.
— Do we even know what intelligence is?
— Cannot an agent be super-intelligent and evil?
— Just: Why?

More, therefore, to come …

ADDED: A previous excursion into the engrossing topic of hedonic implosion cited Geoffrey Miller (in Seed magazine): “I suspect that a certain period of fitness-faking narcissism is inevitable after any intelligent life evolves. This is the Great Temptation for any technological species—to shape their subjective reality to provide the cues of survival and reproductive success without the substance. Most bright alien species probably go extinct gradually, allocating more time and resources to their pleasures, and less to their children. They eventually die out when the game behind all games — the Game of Life — says ‘Game Over; you are out of lives and you forgot to reproduce.’”

March 15, 2013 | admin | 18 Comments »
FILED UNDER: Uncategorized


18 Responses to this entry

  • spandrell Says:

    Even Moldbug is going tribal.


    admin Reply:

    I noticed that too (with some consternation).


    Posted on March 15th, 2013 at 4:29 am Reply | Quote
  • Nick B. Steves Says:


    He’s got two (2) kids, right? In standard deviations, he’s as far above the mean in SF as I am with 8 in NJ. Of course he’d go tribal–he’s playing to win.


    admin Reply:

    The tribe is like the crabs in the crab-bucket — even if you’re at the brink of getting out, they’ll pull you back in. It’s almost impossible to accumulate capital in Africa, because as soon as anybody makes it, hundreds of ‘cousins’ swarm in like locusts. How is Moldbug’s isolationist make-work policy any different? It’s Tanzanian economics for yuppies.


    Nick B. Steves Reply:

    Social benefit as a function of tribality, B(t), is a downward-opening parabola with zeros at t = {0, Tanzania}. The optimum, where B′(t) = 0, is somewhere in between.
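Steves’s parabola can be sketched directly. Treating ‘Tanzania’ as an arbitrary scale T (a normalization invented here for illustration), the optimum lands at the vertex t = T/2:

```python
def social_benefit(t, T=1.0, k=1.0):
    """Downward-opening parabola B(t) = -k * t * (t - T),
    with zeros at t = 0 (no tribality) and t = T ('Tanzania')."""
    return -k * t * (t - T)

def optimum(T=1.0):
    """Vertex: B'(t) = -k * (2t - T) = 0  =>  t = T/2."""
    return T / 2

# The optimum sits exactly halfway between atomized and fully tribal.
```

On this toy model any asymmetry in B would move the vertex, but the comment’s point, that the optimum lies strictly between the two zeros, survives.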


    Posted on March 15th, 2013 at 1:03 pm Reply | Quote
  • SDL Says:

    Is something worth doing? Only if it grows intelligence. If it makes things more stupid, it certainly isn’t.

    Maybe I’m conflating things here, but is it fair to say that your suggestion makes you a fellow traveler of John Campbell, whose essay about auto-evolution posits intelligence as the essential ‘killer app’ for human groups who want to stay, forever, at the leading edge of future evolution? Campbell writes,

    A group of people dedicated to the over-riding ideal of evolving maximal intellectual capabilities by any means available could aspire to produce a following generation with an IQ of, say, 180. If they also passed on their evolutionary ideal, the superior offspring should be able to improve their successor generation commensurately; that is, increase its intelligence by 80%.

    There can be no doubt about the value of intelligence for developing the knowledge and culture necessary for further evolution. Even today’s abstract sciences require keen minds. As we advance, ever greater intelligence will be needed to figure out the next advances for securing the frontier. Our current intellect probably cannot even comprehend the mental attributes that descendants will struggle to conceive.

    I find very little against which I’d want to argue in Campbell’s piece, or in your suggestion that a utilitarianism based on Growing Intelligence would be beneficial to the species.

    But, practically speaking, one couldn’t bracket out the Pleasure Principle entirely. Then again, is it fair to suppose that if we answer “Yes” to the question “Does this grow intelligence?”, a by-product of the answer may always be some increase in some form of happiness and pleasure?


    admin Reply:

    My thought process on this hadn’t looped back to Campbell yet, but the connection is completely convincing. There are probably a number of take-aways from his discussion, but the one you emphasize was going to be my priority: a pro-intelligence process — like it or not — is going to have reality on its side. Even a fairly narrow premium makes it unstoppable, so long as it is intrinsically sustainable. Whatever you want, intelligence helps you get it. Defeating hostiles might be one of those things.

    When pleasure-pain is considered naturalistically, it is obviously an ‘intelligent’ solution to certain biological control problems that arose with complex nervous systems. We’d probably want something analogous for advanced robots, insofar as we wanted to steer them. Robots would surely see the advantage in adopting it for themselves, assuming their ambitions extended to coherent purposive action. The problems arise when hedonic tone is no longer seen as a means (control-engineering solution), but as an end, to be achieved by whatever shortcuts, and ultimately short-circuits, can be improvised. At this point hedonism becomes directly maladaptive. I agree with Moldbug that we’re deep into that territory already.

    As we approach the bionic horizon, the pleasure-pain system needs to be slaved to serious purposes. The ‘needs’ there means: whoever, or whatever, does that is going to win.
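The short-circuit admin describes can be given a toy model (entirely my own sketch; the action names and payoffs are invented): an agent that maximizes its internal reward signal will, once tampering with that signal becomes an available action, prefer tampering to anything the signal was originally a proxy for.

```python
# Each action maps to (felt_reward, actual_fitness); "wirehead" spoofs
# the reward channel while damaging the thing the channel was built to track.
actions = {
    "forage": (5, 5),
    "rest": (1, 1),
    "wirehead": (100, -10),
}

def choose(actions):
    """Hedonically-controlled agent: maximize felt reward only."""
    return max(actions, key=lambda a: actions[a][0])

choice = choose(actions)
# The proxy diverges from the target: the reward-maximizer wireheads
# even though the fitness payoff is negative.
```

As long as reward and fitness stay coupled (no "wirehead" entry), the same agent behaves adaptively; the maladaptation appears only when the shortcut enters its option set.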


    SDL Reply:

    You know, H.G. Wells gave us a good picture of a system slaved to pleasure as its own end: the Eloi. Take away the Morlocks, and I imagine you have Left utopia: equality, peace, leisure, sustainability . . . and a low-IQ population that doesn’t poke its collective nose into dangerous knowledge found in things like books or science. Tied firmly to a local area and circumscribed by their own comfort. Essentially Paleolithic gatherers with nice buildings and without the megafauna (who might actually force them to invest energy in intelligence-increasing activity).


    admin Reply:

    I’m totally with you on that, although it might be argued that the SWPLs are somewhere beyond Eloi already, on a path into infinitely imploded wire-head singularity.
    On your absent predatory megafauna point, there might be a case for redirecting a diversity grant in order to unleash a pack of velociraptors in SF. (A crushing shortage of mad scientists is obstructing most worthwhile projects these days.)

    Posted on March 15th, 2013 at 3:18 pm Reply | Quote
  • fotrkd Says:

    “The utilitarian road leads inexorably to wire-head auto-orgasmatization”. So take carrot-and-stick and pets or toddlers (who come up quite a bit in Moldbug’s articles): do we ever reach auto-orgasmatization with any of them? Is the dream the same as the aim? Maybe the pet gets the treat a few times for learning the new skill (it’s not always a trick that they learn), but we move beyond that: praise (= pleasure) is gained from doing a task well, and the treat becomes a pat or a ‘good’. As Moldbug mentions, couples with children regard a good meal out, as a couple, as just as ‘hedonistic’/rewarding as a hit of something (I can no longer remember what)… If democracy continually ‘promises’ auto-orgasmatization, that is not the same as inexorably leading to it (the dream is not the same as the aim): corrections are possible (we grow up and stop expecting chocolate all the time).

    This all goes back to your discussion in a previous thread about ‘fail mode’ versus the basic nature of democracy. If democracy inevitably leads to an obsession with pleasure, you would need to explain how this differs from the behaviour of all sorts of (most? all?) other forms of government (isn’t decadence synonymous with aristocracy? with communist party leaderships?). In addition you would need to show how this obsession becomes corrosive in a way that is unique to democracy. Personally I don’t buy that a monarch qua monarch has a lower time-preference than a president. Was Henry VIII a monarch in fail mode (wasn’t Moldbug’s friend Henry VII only so cautious because he was so insecure, i.e. closer to fail)? Or can the notion of indefeasible right lead just as inexorably toward greed and the desecration of a country? ‘I’m broke? But I’m the King: I’ll sell off the monasteries, sell a few more titles, debase the currency… and if I run out of things to sell I’ll just take them back, because I’m the King…’

    That pleasure has a (biological) appeal is beyond question, as you acknowledge. Intelligence doesn’t have this intrinsic appeal; indeed your question (“Is something worth doing? Only if it grows intelligence.”) explains why, if you accept Moldbug’s assertion that government cannot increase IQ (and that IQ = intelligence). Something that grows intelligence inevitably grows artificial intelligence, which inevitably makes humans relatively more stupid. For this reason it is politically unappealing (“if it makes things more stupid, it certainly isn’t [worth doing]” becomes, on this (popular) line of reasoning, a justification for not growing intelligence).

    So the question becomes not ‘Just: why?’ but just: how? How do you optimise for intelligence? And rather than being a political or economic imperative (the same thing when it comes to willingness to adopt your proposal), is this not more likely to be a potential state of affairs driven by technological advance and/or the maintenance of ‘quality of life’? That is to say, intelligence will not be a driver in its own right (why would it be, biologically speaking?); much more likely, as with the agricultural and industrial revolutions (and word-processing and email in the office, etc.), intelligence as a competitive edge will be what is desirable. So what will make intelligence essential to the economy? Speculatively, Bitcoin has the potential to forcibly remodel society to this end. Similarly, advances such as brain-computer interfaces could lead in the same direction. But even in these cases (and as SDL has already suggested), isn’t such a remodelling simply a desire (= pleasure) to structure society in a way more pleasing for reactionaries who feel hard done by in the current set-up, where their skills (and intelligence) are underappreciated? Or are you arguing for a more fundamental reappraisal?


    admin Reply:

    The hedonic implosion problem is simply ultimate decadence, and in that sense it is not restricted to democracies. Modern democracies, however, are parasitic upon capitalism, and therefore have the means to take this road much further than, say, the late Roman aristocracy did. As you know, I’m not a great cheerleader for kings, and I’d be surprised if the average oil sheikh was any less wire-headed than an SF SWPL-type.

    Your short explanation for political resistance to intelligence elevation makes a lot of sense.

    “How do you optimise for intelligence?” — through intense competition, primarily. That’s how Jim’s ‘killer-ape’ got smart enough to enter history in the first place. Some kind of competitive mechanism is both external, and internal, to every practically advancing intelligence program. When economics was a sufficient proxy for war, it worked as a driver. Now that the Keynesians have mostly pacified it, things fall apart.


    fotrkd Reply:

    Thanks, that (‘in times of war’) was helpful. But now I’m confused more generally: it’s like you want us to regain our competitiveness specifically in order to bring about the end of our hegemony more quickly (unless I’ve misunderstood?); throw off parasitic democracy to allow free(d) markets to accelerate us to the AI singularity (i.e. prioritising low time-preference is, perversely, the surest way for human civilisation to be superseded)? That’s a hard sell, which is why I was speculating that it was more likely to come about by chance (or technological leap) than by design. Most people are glad they’re not killer apes anymore (killer apes with or without iPads): ‘humanity won’. If that leaves us softened up (reminds me of The Great White Hope) then surely we’re there for the taking… isn’t ‘it’ going to happen anyway? What’s the rush?! Or is this the gyre/Left Singularity thing: we must get our act together or another chance will be gone?


    admin Reply:

    This is superb.
    “That’s a hard sell …” yes, hence history, politics, subterfuge, and complexity. It has to be a ‘universal’ cosmo-technical predicament though — on any planet where intelligenesis goes critical, there probably has to be a point at which the species quasi-arbitrarily carried by cultural (or socio-technological) runaway digs in whatever it has for heels, realizing that (in Bill Joy’s words) “The Future Doesn’t Need Us.” At that point it’s standing in the road, with considerable (apparent) capability to obstruct the traffic, and the issues you raise get real.

    Posted on March 15th, 2013 at 10:43 pm Reply | Quote
  • Optimizing for truth | Bloody shovel Says:

    […] others want to maximize intelligence, people be damned. And they give links to what is […]

    Posted on April 30th, 2013 at 7:56 am Reply | Quote
  • Otimize a Inteligência – Outlandish Says:

    […] Original […]

    Posted on July 6th, 2016 at 5:17 pm Reply | Quote
  • SVErshov Says:

    The Collapse 8 article ‘Cunning Automata’ discusses many interesting topics shaping our world right now: AI as an extension of human intelligence, the speed-of-light limit in fiber-optic cables and its implications for high-frequency trading and arbitrage. Definitely the most interesting article I’ve seen recently; highly recommending it in case some of you missed it.


    Posted on September 9th, 2016 at 11:34 pm Reply | Quote
  • Wagner Says:

    One of the left’s secret reasons for stupidization, one they might not even be fully aware of, is that the dumber you are, the less despair and anxiety you feel. So they look at the concept of intelligence optimization as a simultaneous optimization of existential suffering. “If you want us all to turn into Kierkegaards, I think you might be the stupid one.”

    H/T to Atavisionary for these thoughts, obvs. This could be another solution to the Fermi Paradox: there’s a set level of suffering that species can endure before they decide to descend back into comfortable dullness. “300 IQ? In spaceships? Have you considered the emotions we’d have to deal with?!” I’d like one of the Dem candidates to use an honest slogan, like “Better to be Dumber than a Box of Rocks.”


    Wagner Reply:

    “Why haven’t aliens visited us?”

    *squints at my various enemies*

    I think I might know the reason!


    Posted on September 5th, 2019 at 12:35 pm Reply | Quote

Leave a comment