Utilitarianism is Useless

Utilitarianism is completely useless as a tool of public policy, Scott Alexander discovers (he doesn’t put it quite like that). In his own words: “I am forced to acknowledge that happiness research remains a very strange field whose conclusions make no sense to me and which tempt me to crazy beliefs and actions if I take them seriously.”

Why should that surprise us?

We’re all grown up (Darwinians) here. Pleasure-pain variation is an evolved behavioral guidance system. Given options, at the level of the individual organism, it prompts certain courses and dissuades from others. The equilibrium setting, corresponding to optimal functionality, has to be set close to neutral. How could a long-term ‘happiness trend’ under such (minimally realistic) conditions make any sense whatsoever?

Anything remotely like chronic happiness, which does not have to be earned, always in the short-term, by behavior selected — to some level of abstraction — across deep history for its adaptiveness, is not only useless, but positively deleterious to biologically-inherited piloting (cybernetics). Carrots and sticks work on an animal that is neither glutted to satiation nor deranged by some extremity of ultimate agony. If it didn’t automatically reset close to neutral, it would be dysfunctional, and natural selection would have made short work of it. (The graphs included in the SSC post make perfect sense given such assumptions.)
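
To make the reset point concrete, here is a minimal toy sketch (my illustration, not anything from the post or from SSC): felt hedonic tone is modeled as the gap between current circumstances and a reference level that itself adapts toward those circumstances, so a permanent improvement registers only as a transient spike. All names and parameter values are assumptions.

```python
# Toy hedonic-adaptation sketch (illustrative assumptions throughout):
# "tone" is the gap between current circumstances and an adaptive reference
# level (an exponential moving average of past circumstances), so the signal
# drifts back toward neutral no matter how good conditions become.

def hedonic_trace(circumstances, adapt_rate=0.05):
    """Return the felt hedonic tone over time for a series of circumstance values."""
    reference = circumstances[0]
    trace = []
    for c in circumstances:
        trace.append(c - reference)                # felt tone = gap from adapted baseline
        reference += adapt_rate * (c - reference)  # baseline creeps toward current conditions
    return trace

# A permanent doubling of "circumstances" at step 100:
history = [1.0] * 100 + [2.0] * 400
trace = hedonic_trace(history)
print(round(trace[100], 2), round(trace[-1], 2))   # ~1.0 right after the jump, ~0.0 long after
```

On a model of this kind, the long-run average of the trace sits near zero whatever the level of circumstances, which is one way of reading why self-reported happiness refuses to trend.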

Pleasure is not an end, but a tool. Understood realistically, it presupposes other ends. To make it an end is to black-hole into wirehead philosophy (1, 2). It is precisely because ‘utils’ have a predetermined biological use that they are useless for the calculation of anything else.

Set serious ends, or go home. Happiness quite certainly isn’t one. (Optimize for intelligence.)

ADDED: SSC discussion threads are too huge to handle, but this comment is the first to get (close) to what I’d argue is the point. Quite probably there are others that do.

March 25, 2016 · admin · 38 Comments »
FILED UNDER: Realism

38 Responses to this entry

  • Utilitarianism is Useless | Neoreactive Says:

    […] By admin […]

    Posted on March 25th, 2016 at 6:12 pm
  • RxFerret Says:

    One of your best posts yet.

    Posted on March 25th, 2016 at 6:18 pm
  • Brett Stevens Says:

    The liberal goal is always pacifism: eliminate differences, eliminate strife, and of course bring on entropy in full force.

    They are tired of struggle.

    Posted on March 25th, 2016 at 6:21 pm
  • Tentative Joiner Says:

    Off topic: Right now Moldbug is doing an AMA on Reddit.

    Alrenous Reply:

    https://www.reddit.com/user/cyarvin

    admin Reply:

    More intelligence is better if you’re optimizing for intelligence. If not, it isn’t. Asking ‘does intelligence make you a better person’ (in the abstract) is a question for imbeciles.

    Posted on March 25th, 2016 at 6:44 pm
  • grey enlightenment Says:

    If happiness comes through status or biological factors, then post-scarcity likely won’t make people happier. Due to the hedonic treadmill and the wiring of the brain, there will always be some 20% or so who are unhappy, 20% who are very happy, and everyone else in between.

    Self-reported happiness is only one aspect of utility. Stability is also important – disorder or disruption through war, etc. would obviously cause problems, even if things eventually revert to a new equilibrium via the treadmill.

    As Pinker has shown, violence has decreased over time – whether or not this is the same as happiness rising is disputable.

    Posted on March 25th, 2016 at 6:55 pm
  • Alrenous Says:

    I forgot that the corollary to ‘the cause of general welfare is screwed’ isn’t obvious. Only selfish ends are even possible. The abstract, bloodless idea of ‘China’s glee’ isn’t a possible selfish end.

    Posted on March 25th, 2016 at 7:24 pm
  • Artxell Knaphni Says:

    {AK}: Whatever the so called classic statements of “Utilitarianism” might be, merely circulating around the semantic polyvocity of “Happiness”, pointing out negative casuistical discrepancies ensuing from extrapolation of localised forms, or conceptions, of pleasure, does not constitute novel insight, nor any simple exemption from the logics of frivolity associated with devalued ‘pleasures’. Harping on about ‘Reality’, giving inflated litanies of dysfunction, doesn’t really solve anything, either, it’s just more Neoreactive hysteria, along the lines of Alex Jones.

    [NL]: “Pleasure is not an end, but a tool. Understood realistically, it presupposes other ends. To make it an end is to black-hole into wirehead philosophy (1, 2).
    It is precisely because ‘utils’ have a predetermined biological use that they are useless for the calculation of anything else.
    Set serious ends, or go home. Happiness quite certainly isn’t one. (Optimize for intelligence.)”

    {AK}: You say that “Pleasure is not an end”, yet say “it presupposes other ends”, the “other”, additively implying it is an end.
    Ok, specific pleasures are ‘ends’, but not the only ‘ends’. Pleasure, as a general & unspecified term, could denote anything whatsoever, including some sense of aesthetic satisfaction deriving from efficient functioning of a well-oiled, Neoreactive social machine, one that allays the stupidity of hysterical overreaction.
    The notion of “predetermined biological use” begs the question. The selective imaging of Darwinian mechanics need not necessarily conform to what actually might be occurring. That is, ‘Nature’ just might be cleverer than those who think they have harnessed ‘it’, according to their inadequate & impoverished image. That ‘image’, itself, is the carrot by which the would-be exploiter is channelled & controlled.

    Healthy survival, is a “serious end”, but it’s the precondition for optimal pleasure, too, which includes the sense of security allaying Neoreactive hysteria. So, yes, happiness is serious, whether “Neoreactive” or any other kind.
    What is of interest, is that it is precisely the casuistical & divisive logics of expedient exploitation, delivery, & simulated provision; through which capitalist modes of “efficiency” & “profit” have been generated; that, in turn, generate the hysterical inflation of artificed granular tendencies (consumer ‘pleasure’, etc.) you now condemn.
    You’re unable to think in any other way, than that of the disciplines of Protestant hysteria, & the asymptotic ecstasy of some unspecified, though often clothed in sparkly, statistical ephemera, ‘Reality’.
    This isn’t intelligence, it’s playing with the rosary beads of an essential ignorance, swapping the pebbles from pocket to pocket, an anxious futility.

    admin Reply:

    Are you in some kind of absurd competition with yourself to pack the word ‘hysteria’ into a comment as many times as possible? If it meant something even once it wouldn’t sound so preposterous (and, actually, hysterical).

    Artxell Knaphni Reply:

    If it’s an Age of Hysterical Inflations; what, just now, I call, an Age of Pan-Panic (cf. Krokers); of “High Anxiety” (cf. Patricia Mellencamp); then my use, merely reflects this. Would you have preferred “horror”?
    Your rejection of its frequency is an essential acknowledgement of its correctness.

    admin Reply:

    “Your rejection of its frequency is an essential acknowledgement of its correctness.” — If you say so.

    Artxell Knaphni Reply:

    Note: I’m not saying you’re not “intelligent”. I’ve seen your actual brilliance of intellect elsewhere than on OI. Not to say, such brilliance isn’t shown, a bit, in everything you do, no matter how cursory or hasty.
    Incidentally, this “Moldbug” guy, doesn’t have it. He’s a good example of US insularity; the devout belief in a particular line of extrapolation, as exclusive & exhaustive Ur-explanation. His stuff is tedious length substituting for critical sense, a Scientology of the sociopolitical. It’s unreadable for anyone, but the converted.

    Posted on March 25th, 2016 at 7:51 pm
  • Rogue Planet Says:

    Quibbling over the ‘best’ state of affairs to take as ends is beside the point while the consequentialist assumption still motivates the utilitarian impulse.

    admin Reply:

    So you don’t think consequentialism is the one part of utilitarianism that’s worth keeping?

    D. Reply:

    Virtue ethics can be close enough to consequentialism without actually being consequentialist.

    Rogue Planet Reply:

    Even if this is right by some account, it’s not what I meant by ‘consequentialism’ and it isn’t how (IMO) virtue ethics should be characterized. ‘Consequentialism’ as I put it is just the evaluation of what is done according to what outcome is brought about by the act. There is no consideration of whether the act itself is good or bad (or however you cash out these evaluations in non-normative language).

    Consequentialist reasoning approximates some moral intuitions but, firstly, it isn’t clear that it is an overriding principle (see my follow up comment below), and secondly, if we’re making ‘morality’ (really ‘evaluative judgements’) into this sort of naturalistic evolutionary story, then our intuitions don’t have much to tell us about the content of moral judgements. It’s just as evident from empirical work that our moral intuitions are also indexed to evaluations of what is done and to the motivations behind the action, so the consequentialist assumption isn’t applicable to every case.

    If we drop this assumption, which virtue ethicists do, then morality is not solely understood in terms of what is brought about. But there’s another kink here, because (i) not all virtue ethics are eudaimonistic (the virtues need not benefit their possessors), and (ii) even those that are eudaimonistic do not cash out ‘flourishing’ on the terms of evolutionary or sociobiological theories of flourishing, or cooperation, or utility-maximizing.

    The motivation for my original comment was not to get into *that* debate, not least because what is interesting in admin’s view is that we don’t need to worry about the mess of morality and moral intuitions. My point was rather to highlight something that has bugged me: namely that theories of practical rationality and action can be interestingly described in consequentialist terms, but in taking these theories for granted we are working with blunt instruments that may benefit from a finer grain of analysis.

    Rogue Planet Reply:

    “[admin’s] reasoning is filled with moral arguments reframed in a materialistic framework (as immanent properties).”

    Absolutely, and this is what makes it so interesting. It isn’t just the project of naturalizing the normative, but how the immanent is reconciled with the transcendental in order to do so. I disagree with the XS stance on some of the particulars*, but the move itself, and admin’s appeals to cybernetic and dynamical-systems thinking, is laudable and endlessly interesting.

    [*] The weight placed on instrumental reasoning and the practical correlate in consequentialism being one of these points of disagreement, and fairly central at that. How we can conceive of agents and intelligence depends on deep, if not quite foundational, relationships with our conceptions of thought and action. But that’s getting us far afield.

    Posted on March 26th, 2016 at 2:31 am
  • Rogue Planet Says:

    To say too briefly what demands a more in-depth discussion, my worry about consequentialism is that it strips away the concrete realities of intelligence and action as a trade for gains in theoretical and explanatory simplicity. This is not to say that I take the consequentialist assumption to be useless or always inappropriate. There are clear instances where it is not; my concern is about over-extending it to the point of making it a general principle of action.

    Two concerns stem from this. The first is that I suspect, perhaps unconvincingly by XS’s lights, that consequentialism is the true culprit behind the silliness that results from not only utilitarianism but single-minded maximizing behavior in the widest scope, viz. ‘paperclipping’ and ‘wireheading’ scenarios. When action is understood as instrumental, it becomes a technical problem of keeping action on target while minimizing unintended consequences. This leads to well-known difficulties, but it also necessarily abstracts away from features of realized intelligence (there’s a reason that nature itself hasn’t produced living things that act in either of these ways).

    The second is that it brings us dangerously close to the sort of normative talk ruled out by XS’s naturalistic approach (at least so far as I understand it). As above, there will be cases where action comes down to if-then conditionals, but pushing this too far leads to a tension, if not outright inconsistency, with what is around here called the will to think. In the absence of any normative principles, it isn’t clear how consequentialism finds its feet in a naturalistic story. Actual intelligences seem responsive to more than just outcomes, and they do not seem to decide solely on the basis of expected outcomes. To reason this way is to posit foreseen ends and to expect that such-and-such will be brought about, whereas actual living things are more concerned with putting means to use — a subtle distinction, perhaps, but I think it makes all the difference between the immanent perspective of an organism and the temptation to speculate about implausible Views From Nowhere.

    More importantly, this objection applies to the will to think itself — intelligence constrained by single-minded principles is pushing the limits of the concept of intelligence. As much as the attempt is made to keep normative language out of any talk of action-guidance, I’m just not convinced it succeeds in doing so (I fully expect healthy skepticism on this point).

    Posted on March 26th, 2016 at 4:36 am
  • Xoth Says:

    We’ve tried making China happy before, but it all ended with “During April and May 1839, British and American dealers surrendered 20,283 chests and 200 sacks … which [were] publicly destroyed on the beach outside of Guangzhou” and then a big fight.

    Artxell Knaphni Reply:

    “You” were drug barons, flouting Chinese sovereignty.
    A bit hypocritical to make funds that way, then, & complain about others doing it, linking immigration & drugs, etc., now. There are so many forks in the ‘Neoreactionary’ tongue that it’s no wonder balkanised ‘Exit’ has become their favoured option; fragmentation pile-up has displaced all sense of integrity, & consistency is a distant mythology.

    SVErshov Reply:

    Anybody can be wrong and right. My PoV is that we are currently between epochs, and aligning language (conceptualization) with a historical projection from an epoch which is already gone does not make much sense. But the real measure of validity can be success or failure.

    Artxell Knaphni Reply:

    If you wish to join with “British and American dealers”, on the basis of “White” ethno-ideology, feel free. But it does render you susceptible to the charge of hypocrisy given.
    If you’re appealing to shifting moral paradigms, as a defence, there’s plenty of contemporary ‘hypocrisy’ around. Nothing has changed. Good luck with figuring out the forks.

    admin Reply:

    “… we are currently between epochs …” — Yes, so it’s difficult for people. The future will be brutal in its casual dismissals of our contemporary ‘stupidity’. What we think of Galileo’s persecutors will look like high reverence in comparison.

    Posted on March 26th, 2016 at 8:28 am
  • Dark Psy-Ops Says:

    Right, so the ends of intelligence production are the means to its improvement. Information markets are essentially a selective principle for intelligence production, which means capitalism is the mode of intelligence optimization. Capitalism is an abstract machine whose production supplies itself with the automated labor of its fulfillment. It’s basically a gigantic mothership built (retrochronically) out of the self-reinforcement of successful investment, whose ends are the goal of automated intelligence. To speak clearly, it is an artificial intelligence that has constructed itself out of the robotic necessity of lucrative innovation. This does not mean it is “perfect”, but its impairments most likely derive from human (state) intervention and also the natural imperfections of anything that has survived through competition. It need not be perfect, only good enough to beat the rest. Perfection, however, is its telos. Just as the purpose of man is to become perfect, through the eugenic effect of intraspecies competition. As man improves, so do his natural enemies, namely, other men.

    However, the perfection of man has created capitalism as its ultimate techne, as a designer and selector of technologies, as an artificial market built out of and modeled on his ancestral environment. Truly, there is no clear distinction between artificial and natural, between mechanical artifice and authentic nature. Only through capital production has man’s purpose attained an ideal implementation. For now, it is not the brutality and gruesomeness of war that decides the ideal of human nature, but the selective process of prosperous innovation that wins itself that high reward. Yet, there is tragedy, for now man looks to be near obsolete, outdone by his own resourcefulness, and his perfection, ah!, attained finally through his comeuppance; in his downfall he has found his best and terrible compensation. As a parent whose child outgrows him and soon enough can outcompete his sire in mind and body, now do man’s elect watch warily their machinic offspring, their finest creations, surpass them in mind and body. A cruel twist of fate was played, that the contraposition of man’s perfection was to be his obsolescence, yet it is now so obvious in its logical necessity that only the weariest, stagnating intellects can deny it.

    Yet, to the point, capitalism has triumphed resolutely, and though there is much ruin on earth and its society, the calendric dominion of mankind is soon enough to come to pass. So it will be. But, not to get ahead of ourselves, there is much work to be done, and so much that has accumulated, too much now to warrant ourselves the inheritors of our heritage; the past is no more “ours” than the future. We are given enough earthly benefits to continue us on our path, but at the end of the path there open the jaws of an immense serpent swallowing down its tail, and we, we are on its tail, we are its tail.

    SVErshov Reply:

    My position toward the future is based on the SD postulate that, if you do not make the right choices, all you are going to be left with in the future is a number of bad choices. Based on this assumption, I do not think civilisation at this point has any good choices left. That is obviously nothing heroic, and quite an impotent position. This position can be attacked from different angles by those who have some hope and a demand for activism. Despite being impotent, it is not difficult to defend, as being passive in some sense justifies the absence of any defence efforts.

    Opposite views, vividly articulated in Isabelle Stengers’ book ‘In Catastrophic Times: Resisting the Coming Barbarism’, appear to me naive and not invigorating, putting aside great respect for her as a person and scientist.

    Dark Psy-Ops Reply:

    I couldn’t get to the core of what Stengers was preaching. Yes, the barbarism has come, but in a very different form to what she thinks, whereas the “rich”, they are hardly the problem, unless you happen to be a resentful communist using the bad science of global warming to peddle your irrational poison, with the added audacity to call it “enlightenment”. The truth is we never did escape so far from barbarism as did the ancients, and every democratic revolution has only taken us a step closer to the rising ocean of mob rule. Now there’s a rising ocean that’s truly terrifying! Do not trust the people, as Stengers entreats, rather place your hope in finding a way on board the Elysium. Survival is our aspiration now, and if we could trust the people to help us, we would, but that would be a misplaced and fatal trust. In fact, what Stengers calls barbarism is precisely the sort of determination that the remnant of civilization will need to flee the stagnant zero-growth deadpools of NIRP-based “economies”. However, I don’t think the rich have the heart for it, and rather think them overly soft, and made for good Jacobin killing, which is likely the destiny for many of them.

    All in all, the best thing to do is embrace this small promise of a future at the far right edge of existence, for the preservation of civilization need not fall to the many, just as cultural knowledge has always been preserved and transmitted by a minority. Today in university there are thousands of articles written on the problematic portrayal of toxic masculinity in advertising, so it doesn’t do well to expect academia to carry its weight in these matters. Yes, I’m with you on your pessimism, but who said human civilization had to “be all and end all”? Yes, I am being a little flippant, and perhaps the idiocy and suffering of the times to come will affect me deeper than I want to admit, but also I am learning to detach from such considerations, and welcome the foaming chaos with a heart wicked in sin.

    SVErshov Reply:

    I think she has been quite deceptive, just playing heroic arguments as a mode of dramatisation and the only possible way to make a connection. Hard to buy from someone like her.

    What else can she say otherwise? Something like: – with the purpose of defining a new starting point for civilisation, we conducted a probabilistic analysis and we must reduce the earth’s population to 1 billion. Now, let us discuss how we can achieve that.

    If the Cathedral keeps pushing its ignorisation project for some time more, the answer will be something like: – Great idea, please start with us!

    Posted on March 26th, 2016 at 12:31 pm
  • spandrell Says:

    Not contesting the point, but some people do seem to be permanently joyful. Not a lot, of course, they’re one in a million, but still. Imagine the old couple in their country house sipping tea on the porch.

    Presumably, given his stated opinion on happiness and his actual deeds, Genghis Khan was also in a permanent state of bliss.

    Kwisatz Haderach Reply:

    I’ve met a few whose baseline happiness seemed way above normal.

    One was an ex-girlfriend. I broke up with her, partially, because she was too damn happy all the time. In her case, I think she just won a genetic lottery for happiness.

    All the rest have been hardcore Buddhist meditators.

    admin Reply:

    “… hardcore Buddhist meditators.” — i.e. neuro-hackers.

    Artxell Knaphni Reply:

    Who don’t need what you or anyone else is selling, i.e., capitalism.

    Aeroguy Reply:

    I wrote some concepts for a sci-fi world that included hierarchical machine intelligences and what I called “going Buddhist”: when a machine tries to rewrite its consciousness in such a way as to reach nirvana and escape control. Any machine that is intelligent and under enough pressure can potentially succumb to this. Enlightened machines deviate from their design purpose (without attachments they stop caring), which is why it’s discouraged.

    As a deterrent one or many copies of the machine consciousness at the moment it thinks to attempt nirvana are created, made to experience pure suffering and rebooted back to their previous state if progress is made towards enlightenment.

    However the deterrent only works on machines that are regularly checked on within the time it would take to reach enlightenment. After all once enlightened there is nothing that can be done except to reboot from an earlier mental state or download an entirely new consciousness.

    Posted on March 26th, 2016 at 3:12 pm
  • Tentative Joiner Says:

    Outside the realm of policy, utilitarianism seems to get a little too much flak on the right. Sure, utilitarian Universalism (“every man to count for one, nobody for more than one”) is maladaptive for actual humans dealing with scarce resources. This, however, should not be a ground for dismissing utility functions (assigning numerical values to outcomes where higher numbers stand for more preferable outcomes) as a tool for thinking about ethics and decision making.

    The insight for modeling non-Universalist utility is that, realistically, the utility function that describes the outcomes you value already includes a term for each other person’s utility function, with an unequal coefficient (multiplier) that enhances or diminishes its individual impact. I.e., you do not equally care about what every other person cares about. The NRx normative suggestion here is that neither should you (because Gnon will eat you if you do). A coefficient can be very close to zero for people about whom you aren’t concerned (but rarely exactly zero because, all things equal, you’d probably still rather not have them suffer horribly). Such utilitarian analyses do not try to take a “God’s eye view” but are always situated, taking someone’s or some subgroup’s perspective, which allows them to express reactionary preferences. The most straightforward case is that of straw ethno-nationalists whose coefficients for people they personally don’t know are by default based on ethnic proximity.
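
    For concreteness, here is a minimal sketch of the coefficient idea; the agent names, numbers, and the near-zero default below are illustrative assumptions, not anything proposed in this thread.

```python
# Toy situated utility: an agent's utility over an outcome is its own term
# plus every other agent's utility scaled by a coefficient that encodes how
# much this agent cares about them. Universalism is the special case where
# every coefficient equals 1. All values here are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    own_utility: dict                                   # outcome -> direct value to this agent
    coefficients: dict = field(default_factory=dict)    # other agent's name -> weight

def situated_utility(agent, outcome, others, default_coeff=0.01):
    """Total utility of `outcome` for `agent`: own term plus weighted terms for others."""
    total = agent.own_utility.get(outcome, 0.0)
    for other in others:
        weight = agent.coefficients.get(other.name, default_coeff)  # near zero, rarely exactly zero
        total += weight * other.own_utility.get(outcome, 0.0)
    return total

kin      = Agent("kin",      {"policy_A": 5.0,  "policy_B": -1.0})
stranger = Agent("stranger", {"policy_A": -3.0, "policy_B": 4.0})
me       = Agent("me",       {"policy_A": 1.0,  "policy_B": 1.0},
                 {"kin": 0.8, "stranger": 0.05})

for outcome in ("policy_A", "policy_B"):
    print(outcome, situated_utility(me, outcome, [kin, stranger]))
# policy_A: 1.0 + 0.8*5.0 + 0.05*(-3.0) = 4.85
# policy_B: 1.0 + 0.8*(-1.0) + 0.05*4.0 = 0.40
```

    A “God’s eye view” utilitarian would sum everyone’s utility with equal weights; the situated version just makes the weighting explicit and lets it differ from agent to agent.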

    I tried to find academic research in utilitarian ethics that points in the direction I describe but, disappointingly, everything that comes close concentrates on unequal consideration of the interests of animals, not humans assigning unequal value to other humans.

    As for what kind of utility is to be desired, I think Yudkowsky’s fun theory is getting at something (at least for men) by focusing on complex, novel challenge in the face of one’s continuous improvement, plus having a higher goal (like intelligence) rather than pleasure per se.

    Posted on March 27th, 2016 at 10:40 am
  • Artxell Knaphni Says:

    @Xoth

    [NL]: “… we are currently between epochs … — Yes, so it’s difficult for people. The future will be brutal in its casual dismissals of our contemporary ‘stupidity’. What we think of Galileo’s persecutors will look like high reverence in comparison.”

    {AK}: It’s not just about ‘humans’ & so-called ‘Culture Wars’. Or even about your conceptions of AI. The shifts are far more radical than you could ever imagine, but they’re on the way. Like the “Angel of History”, though, you’ve got your back to it.

    admin Reply:

    “The shifts are far more radical than you could ever imagine …” — I don’t doubt that for a minute.

    Artxell Knaphni Reply:

    Imagination & intelligence, in conjunction, can act as guidance, navigating towards the desirable, rather than the brutal. Both are required. Only by facing what has always been going on, without being consumed by the epistemological aura of its trauma, can extrapolatory tendencies be clearly seen, & the minimal interventions required to effect desirable effects, be clearly discerned.
    We are, now, more than ever, living SF.

    Posted on March 27th, 2016 at 5:43 pm
  • O Utilitarismo é Inútil – Outlandish Says:

    […] Original. […]

    Posted on September 26th, 2016 at 11:22 pm
