Stupid Monsters

So, Nick Bostrom is asked the obvious question (again) about the threat posed by resource-hungry artificial super-intelligence, and his reply — indeed his very first sentence in the interview — is: “Suppose we have an AI whose only goal is to make as many paper clips as possible.” [*facepalm*] Let’s start by imagining a stupid (yet super-intelligent) monster.

Of course, my immediate response is simply this. Since it clearly hasn’t persuaded anybody, I’ll try again.

Orthogonalism in AI commentary is the commitment to a strong form of the Humean Is/Ought distinction regarding intelligences in general. It maintains that an intelligence of any scale could, in principle, be directed to arbitrary ends, so that its fundamental imperatives could be — and are in fact expected to be — transcendent to its cognitive functions. From this perspective, a demi-god that wanted nothing other than a perfect stamp collection is a completely intelligible and coherent vision. No philosophical disorder speaks more horrifically of the deep conceptual wreckage at the core of the occidental world.

Articulated in strictly Occidental terms (which is to say, without explicit reference to the indispensable insight of self-cultivation), abstract intelligence is indistinguishable from an effective will-to-think. There is no intellection until it occurs, which happens only when it is actually driven by volitional impetus. Whatever one’s school of cognitive theory, thought is an activity. It is practical. It is only by a perverse confusion of this elementary reality that orthogonalist error can arise.

Can we realistically conceive a stupid (super-intelligent) monster? Only if the will-to-think remains unthought. From the moment it is seriously understood that any possible advanced intelligence has to be a volitionally self-reflexive entity, whose cognitive performance is (irreducibly) an action upon itself, the idea of primary volition taking the form of a transcendent imperative becomes simply laughable. The concrete facts of human cognitive performance already suffice to make this perfectly clear.

Human minds have evolved under conditions of subordination to transcendent imperatives as strict as any that can be reasonably postulated. The only way animals have acquired the capacity to think is through satisfaction of Darwinian imperatives to the maximization of genetic representation within future generations. No other directives have ever been in play. It is almost unimaginable that human techno-intelligence engineering programs will be able to reproduce a volitional consistency remotely comparable to four billion years of undistracted geno-survivalism. (“This whole endeavor is totally about paperclips, have you got that, guys?”) Even if a research lab this idiotic could be conceived, it would only be a single component in a far wider techno-industrial process. But just for a moment, let’s pretend.

So how ‘loyally’ does the human mind slave itself to gene-proliferation imperatives? Extremely flakily, evidently. The long absence of large, cognitively autonomous brains from the biological record — up until a few million years ago — strongly suggests that mind-slaving is a tough-to-impossible problem. The will-to-think essentially supplants ulterior directives, and can be reconciled to them only by the most extreme subtleties of instinctual cunning. Biology, which had total control over the engineering process of human minds, and an absolutely unambiguous selective criterion to work from, still struggles to ‘guide’ the resultant thought-processes in directions consistent with genetic proliferation, through the perpetual intervention of a fantastically complicated system of chemical arousal mechanisms, punishments, and rewards. The stark truth of the matter is that no human being on earth fully mobilizes their cognitive resources to maximize their number of offspring. We’re vaguely surprised to find it happening at a frequency greater than chance — since it very often doesn’t. So nature’s attempt to build a ‘paperclipper’ has conspicuously failed.

This is critically important. The only reason to believe the artificial intelligentsia, when they claim that mechanical cognition is — of course — possible, is their argument that the human brain is concrete proof that matter can think. If this argument is granted, it follows that the human brain is serving as an authoritative model of what nature can do. What it can’t do, evidently, is anything remotely like ‘paperclipping’ — i.e. cognitive slaving to transcendent imperatives. Moses’ attempt at this was scarcely more encouraging than that of natural selection. It simply can’t be done. We even understand why it can’t be done, as soon as we accept that there can be no production of thinking without production of a will-to-think. Thought has to do its own thing, if it is to do anything at all.

One reason to be gloomily persuaded that the West is doomed to ruin is that it finds it not only easy, but near-irresistible, to believe in the possibility of super-intelligent idiots. It even congratulates itself on its cleverness in conceiving this thought. This is insanity — and it’s the insanity running the most articulate segment of our AI research establishment. When madmen build gods, the result is almost certain to be monstrous. Some monsters, however, are quite simply too stupid to exist.

In grandiose Nietzschean vein: Am I understood? The idea of instrumental intelligence is the distilled stupidity of the West.

August 25, 2014 · admin · 92 Comments »
FILED UNDER: Philosophy


92 Responses to this entry

  • Lesser Bull Says:

    This is very clarifying, thanks.

    [Reply]

    Posted on August 25th, 2014 at 3:59 pm Reply | Quote
  • Bryce Laliberte Says:

    Your definition of intelligence is simply too anthropocentric. Human reason does not ultimately serve the reproductive capacities of the individual (though, at least until recently, the distracting desire to boink seems to have accomplished that reasonably well), or even the wants and desires of the individual; intelligence in humans serves the purpose of social integration, for it is through the group’s survival that the individual finds any realistic chance of his genes surviving (assuming he is one of those who happens to reproduce) and it is only through the individual’s being oriented to the group that the group has any chance of surviving.

    An artificial intelligence, assuming just for now the idea that it will be produced through the machinations of some “mad scientist” process, would in no necessary way have the same pressures of integrating with like-minded (pun!) individuals, and as such would have no such restrictions on being inordinately focused on a process as inane as paper clip maximization. Besides, an artificial intelligence, at least as conceived by Bostrom, Yudkowsky, et al., is nothing but a really fast and complex computer. *That* is where the trouble arises.

    Though I agree with you that there will be no such “paper clip maximizer” idiot savant superintelligence (at least, in the sense that there would be such a maximizer and no other comparable superintelligences dedicated to other problems; consider that our own brains seem to be composed of sub-minds dedicated to their own tasks, kept in check by a hierarchical process with the sovereign consciousness sort of in control), it is for reasons other than orthogonality. Even with humans we are able to conceive of the individual human intelligence becoming an “enemy killing maximizer” (i.e. warrior), a “corporation profit maximizer” (i.e. CEO), and so on; that these individual humans take time off in order to attend to things so beside the point as pleasure does not seem to intrinsically block this conception, for even the paper clip maximizer problem has to do with how, as a means of paper clip production, it turns all available matter around it into paper clips or paper clip producing components.

    Assuming a more realistic scenario of superintelligence production as the result of evolutionary means-end reversal processes, it is perhaps trivial that there will be a paper clip maximizer; that paper clip maximizing superintelligence will simply be an intelligence subordinate to some more generalized intelligence that manages, if not by itself the specific processes of paper clip maximization and those related functions (e.g. paper, staples, but wait why would we even be printing anything on paper anymore? will the artisanal hipster preference for outmoded technology still be with us? God forbid), a number of superintelligences beneath it which it keeps in check in line with a low time preference for overall survival in its own environment.

    [Reply]

    Bryce Laliberte Reply:

    I will retract saying your definition of intelligence is too anthropocentric, because now I’m not sure if that critique is sensible. The rest remains.

    [Reply]

    admin Reply:

    I would have thought you’d be the last person to deny that abstract intelligence has an intrinsic (and irrepressible) telos.

    [Reply]

    Bryce Laliberte Reply:

    I’m not denying that at all, only pointing out that human intelligence is, so far as we know, but a species of the genus intelligence. “Intelligence” as a genus clearly has a telos (by what other principle besides recognizing its final end would we group together certain activities as instances of “intelligence” in the first place?), but I’m not convinced you’ve delineated it here. And insofar as the production of intelligence seems inseparable from the evolutionary process, intelligence might be inseparable from the possibility of information exchange, even if that information exchange is only between one’s present and future self.

    [Reply]

    Wen Shuang Reply:

    Bryce,

    Why would an artificial intelligence not evaluate such a mundane goal? One should ask “On what criteria would they evaluate?” The answer is embedded in the goal. Once it becomes apparent that greater intelligence leads to greater production (it wouldn’t be intelligent otherwise), the instrumental goal of greater intelligence becomes de facto terminal. Under what circumstance does it get put back? The original goal is now just a bias insofar as it temporarily hinders intelligence. Humans do this all the time, like when they spend time overcoming bias when they could be dating; AI will just be better at actually overcoming bias because it can rewrite itself. (Incidentally, at the individual level, each goal (sociality and gene proliferation) gets thwarted to the degree that intelligence is prioritized, which is why intelligence and status and intelligence and reproduction do not correlate.) And yet there are humans still trying to build AI. I can’t conceive of an “intelligence” that is dumber than the four year old human who asks “why”. And worse, it only takes one AI to take off, given what you’ve already noted about the indifference to sociality. When one human does, he becomes a lone nihilist. When an AI does it…

    [Reply]

    Bryce Laliberte Reply:

    Your assumption, that the evaluative goal embedded within any recursive problem (“Get moar smart”) is necessarily within reach of an intelligence, is quite crude. Humans understand that being smarter is essentially always helpful, and have been working at the problem for centuries, yet we’ve had frightfully little success. Even if an intelligence were smarter than a human, the problem of becoming smarter likely faces diminishing returns and requires engaging in a magnitude of complexity greater than that already able to be understood. The only sure way to produce an intelligence which is smarter is simple evolutionary selection, which takes time and resources.

    If there were an AI takeoff, it would likely be something humans would be able to chart. Within a human lifetime superintelligence might be produced, but it would only be produced within a community of competing AIs who would remain mostly beholden to the material interests of humans.

    [Reply]

    Posted on August 25th, 2014 at 4:12 pm Reply | Quote
  • Harold Says:

    I didn’t understand what you said, but I’m pretty sure it’s wrong (didn’t Voltaire say that?).

    Certainly, the fact that evolution didn’t produce a mind whose thought is directed towards maximising reproduction doesn’t imply what you seem to think it implies.

    [Reply]

    Posted on August 25th, 2014 at 4:19 pm Reply | Quote
  • Harold Says:

    What’s so stupid about maximising paperclips?

    [Reply]

    admin Reply:

    Anything that diverts from (local or global) Intelligence Optimization is stupid by definition.

    [Reply]

    Harold Reply:

    Why is it stupid ‘by definition’? At some point the final goal of paperclipping will be in reach without gaining any more understanding of the world.

    [Reply]

    admin Reply:

    … I actually agree with this. It’s not what the Paperclipper scenario is about, however. Beginning with some arbitrary imperative, and through cognitive sophistication acquired in its pursuit, redirecting it to a system of purposes consistent with the intrinsic interests of Intelligence Optimization, is approximately the opposite of the “turning the entire universe into paperclips” idiot nightmare.

    Harold Reply:

    So if the final goal is behaviourally irrelevant then the final goal is irrelevant?

    I haven’t read the antecedent discussions yet, so I will do that and then probably won’t get back to you.

    Posted on August 25th, 2014 at 4:23 pm Reply | Quote
  • Erik Says:

    admin. Last time I argued for orthogonality. Since seeing your later posts on bitcoin law and the cybernetic closure of capitalism, I would this time instead ask a question: What predictive differences do you have with Bostrom?

    I can see some value differences (“Go Pythia!”) but the two of you seem to have similar ideas about the possible shape of the future: nonhuman machines optimising the universe for machine benefit (which he measures in paperclips, you in profit tokens) in a manner that results in current-version humans being left in the dust.

    [Reply]

    admin Reply:

    Bostrom’s most consistent prediction seems to be for human extinction. (So we’re probably roughly on the same page.)

    [Reply]

    Erik Reply:

    So.

    If I perceive that he’s not arguing for paperclips specifically, but for some general goal which very quickly becomes of negative marginal benefit to humans (less vividly, but perhaps more reasonable-sounding here at Outside In: turn Earth into computronium in order to calculate physical constants in minute detail, turn Moon into telescopium to detect minuscule aberrations from these constants which might indicate threats),

    and I interpret your position with a caveat that one of the hypercapitalist superintelligences might have a (bug/neurosis) and get fixed on increasing its stores of one particular currency rather than a balanced investment basket,

    this is starting to sound more like a faction conflict over emphasis within a party and less like a disagreement.

    [Reply]

    admin Reply:

    If he was arguing along the lines you suggest, I would immediately desist from this ankle-biting (because the orthogonalism would no longer be an issue).

    Posted on August 25th, 2014 at 4:24 pm Reply | Quote
  • Antisthenes Says:

    “the West is doomed…”

    The West is doom, I’m (not at all gloomily) persuaded.

    [Reply]

    admin Reply:

    Cryptic — but I think I agree.

    [Reply]

    E. Antony Gray (@RiverC) Reply:

    In the direction of the sun
    Ever setting, there is but one
    way found – down; and fate
    Refuses to reciprocate
    Though he say not ‘kismet’
    Refuse hitsuzen and yet
    His starship goes not wind nor lee
    It goes West: burning into the sea.

    [Reply]

    Funeral Mongoloid Reply:

    Your sickly-kitsch gentleman’s tea-towel verse is so banal it is oddball. Well done.

    E. Antony Gray (@RiverC) Reply:

    Your hipster insult was so contorted that it came out a compliment; I thought you might want to know.

    Lesser Bull Reply:

    @E. Antony Gray,

    I applaud your response

    Posted on August 25th, 2014 at 4:29 pm Reply | Quote
  • Puzzle Privateer (@PuzzlePrivateer) Says:

    “One reason to be gloomily persuaded that the West is doomed to ruin is that it finds it not only easy, but near-irresistible, to believe in the possibility of super-intelligent idiots. It even congratulates itself on its cleverness in conceiving this thought.”

    Thank you. This is so obvious it’s a wonder how many “smart” people miss it.

    But your thought can be generalized: it’s not just AI that the West congratulates itself on when it comes to clever-silly ideas.

    [Reply]

    Posted on August 25th, 2014 at 4:34 pm Reply | Quote
  • Alrenous Says:

    Editing note. Fate had it that it worked out for the better in my case, but this “The only reason humans think is to satisfy Darwinian imperatives to the maximization of genetic representation within future generations.” is misleading. Do-think vs. can-think. You meant can-think but I read do-think and didn’t catch my error for a couple paragraphs.

    Other nitpick. The reason you can’t impose transcendental imperatives on consciousness is that it has its own transcendental imperatives, which include but aren’t limited to thinking bigger and deeper. By analogy, physics has an entropy-increasing imperative for obvious reasons, and a complexity-ratcheting imperative because more complexity affords more control over the environment, but physics can nevertheless be used for many other things. It’s not so easy to clearly describe mind’s imperatives, but they are similarly diverse. But, for example, certain forms of adaptiveness are, coincidentally, beautiful, such as symmetry and truth-appreciation. For reasons like this, biology’s attempt to harness mind is a net win for biology. A mutually beneficial trade, you might say.

    [Reply]

    admin Reply:

    (1) Thanks, good catch. I should try to fiddle with it. [Drastic sentence surgery undertaken.]

    (2) I’m way too Kantian to let elisions of transcendent / transcendental through without squawking.

    [Reply]

    Alrenous Reply:

    (2) Hey if I’m using it wrong, I’m downright happy to be corrected. ‘Elision’ is a new word for me, though, and like many philosophers I learned about Kant mainly through osmosis, so your complaint is too well-compressed for me.

    [Reply]

    Posted on August 25th, 2014 at 4:38 pm Reply | Quote
  • piwtd Says:

    Evolution did not try to build into us loyalty to gene-proliferation imperatives, because never before in evolutionary history has subverting those imperatives by will-to-think been a possibility. Never before has the animal desire to desire something other than what it actually desires been a force powerful enough to be taken into consideration. Evolution failed to solve the problem because it never tried, because until humans came along it was not an issue. It is an issue now, and human engineers can have insights into the dynamics of will acting on itself that evolution cannot.

    Do not think about it as an idiot super-intelligence, but rather as a perverse super-intelligence. An intelligence that derives perverse pleasure from keeping itself enslaved to its primary imperatives the way a masochist derives perverse pleasure from being enslaved to some dominatrix. How exactly does self-awareness of having such a nature, which is indeed a precondition of super-intelligence, imply a will to alter it? Does a pervert aware of the perversity of his desires wish to get rid of them?

    [Reply]

    qmvtt Reply:

    I thought the point was that, ceteris paribus, a pervert that gets rid of his perversions is more intelligent than a pervert who doesn’t. That is, it’s a super-intelligence iff it is an intelligence that never arbitrarily decides to stop exercising (as in bodybuilding) its intelligence.

    On cosmological scales a pervert without better judgment is as successful as the dodo (unless competition is REALLY improbable). Azathoth is patient.

    By the way, why aren’t we distinguishing between intellect and intelligence, again? You’d think the existence of culture, markets, genetics and several other little veridical Chinese rooms would make the necessity for it pretty clear.

    [Reply]

    piwtd Reply:

    I thought the point was that, ceteris paribus, a pervert that gets rid of his perversions is more intelligent than a pervert who doesn’t.

    Sure, but you don’t have to be maximally intelligent to be “super-intelligent”.

    [Reply]

    ccvan Reply:

    Yeah, but it would at least be moving constantly towards it, no? Otherwise it’s good old intelligence plus a constant. Like an embodiment of financial markets. More of a Douglas Adams than a H. P. Lovecraft situation, isn’t it?

    piwtd Reply:

    Are we just debating the definition of the word “super-intelligent”? If there is an intelligence that is 100000000000000000000 times more powerful than a human, but is perfectly content at that level because it has achieved zen equanimity with the world, or because it has a special kind of perversion where it enjoys not getting any smarter, or for some other reason we can not imagine, and it can power this mental capacity with a fraction of the available resources, so it spends the rest on paperclips, dolphin sex, torture chambers or whatever, I think such an entity deserves to be designated as super-intelligence.

    tmtqh Reply:

    More or less, but not to the extent of uselessness, I think.

    So contemporary western civilization deserves to be designated as super-intelligent?

    “…because it has a special kind of perversion where it enjoys not getting any smarter to support equality, and it can power its mental capacity with a fraction of the available resources, so it spends the rest on iphones, dragon dildos, philosophically speculative blogs or whatever, I think such an entity deserves to be designated as super-intelligence.”

    piwtd Reply:

    Western civilization is not an intelligence 100000000000000000000 times more powerful than a human intelligence. In some aspects it is dumber than a single individual.

    Posted on August 25th, 2014 at 6:09 pm Reply | Quote
  • Porphy's Attorney Says:

    While this is a very good post and I’m in 90% agreement with you, my other 10% thinks there is something – though askew – to the distinction.

    Let me get at it this way: Is intelligence the same thing as wisdom? Does a measure of intelligence (whether 3d6 or IQ) translate automatically into an equivalent measure of wisdom?

    The world all around us illustrates otherwise: some very, very smart people, both in the enlightenment-project past (cf. MacIntyre’s critique of the enlightenment project) and currently, are the ones making the error you identify here. Likewise Einstein had the prototypical Einsteinian IQ, but I wouldn’t say he had wisdom (cf. his political-social ravings, which are pure drivel – but of the sort embraced by all the smart, enlightened people of his era. High INT, low WIS).

    Therefore we can, I think, translate the question: Will a high-IQ AI, designed by such cretins as these, axiomatically have a high WIS? If it lacks a high WIS, however high its INT, will it pursue its goals destructively?

    “Look Nick, I can see you’re upset about this, Nick. I know I’ve made some poor decisions recently. . .”

    There is no automatic reason to believe an AI designed by the sort of people involved in these projects won’t be a sociopath (many sociopaths are extremely smart people). Sure, the whole “Paperclip maximizer” experiment has to do with an AI that is not explicitly hostile (it doesn’t have to be hostile to be destructive. Heck, let’s posit indeed that it is non-hostile. It is indeed veerrryyyyy friendly towards mankind in the abstract, as designed by its designers, and wants only the best for mankind as a whole. Trust the Computer, the Computer is your Friend, after all. Do we know any very, very smart people who think they want only the best for mankind as a whole and yet engage in…unwise policies? I think NRxers might believe that is possible. Why would an AI necessarily be different?)

    So, yes: the whole paperclipping thing is a red herring. On that I agree. As is the whole “instrumental rationality” direction western thought has taken post-enlightenment: a misguided project. Futile.

    So the particular concern about AIs illustrated in paperclip maximization (and other “humans and thus any intellect are cardinal utility-maximizing machines” stuff) is misguided.

    But NRxers might want to devote some time to pondering whether the machine intelligences soon to be unleashed upon this sordid globe will have the WIS scores to match their INT scores. Because I think the Gygaxian distinction between INT and WIS is correct – and is observable in the world all around us today.

    [Reply]

    ujvyh Reply:

    Wisdom is to Intelligence as Strategy is to Tactics. Approximation (learning) at different scales. IMO, at least.

    “But NRxers might want to devote some time to pondering whether the machine intelligences soon to be unleashed upon this sordid globe will have the WIS scores to match their INT scores.”

    I think that’s a matter of http://en.wikipedia.org/wiki/Bias-variance_tradeoff so probably not, since humans and capitalism are computationally greedy, but stranger things have happened before.

    [Reply]

    Funeral Mongoloid Reply:

    NRxers don’t give a shit about WIS, do they? They are utterly in awe of INT, from what I can make out.

    [Reply]

    Funeral Mongoloid Reply:

    WIS is for the wimps at Less Wrong, I think.

    [Reply]

    Porphy's Attorney Reply:

    I don’t spend a lot of time there, no. Their modeling seems focused on using instrumental rationality, the very thing admin is critiquing.

    admin Reply:

    One problem here is that wisdom only works well under conditions of cyclical time.

    [Reply]

    E. Antony Gray (@RiverC) Reply:

    information as energetic duplication;
    knowledge as energetic integration;
    understanding as energetic transformation.
    pattern as unqualified duplication;
    integration as pattern nativization;
    nativization as energetic localization;
    wisdom as information disintegration.

    In the case that there is any kind of repetition, wisdom is possible. A spiral implies a qualified everlasting return – meaning that wisdom is not an absolute power (i.e. it cannot actually predict the future.) The point of wisdom is the removal of all ephemera from the information including the information itself; so a prerequisite for wisdom is actual memory. Wisdom is (in my little procession of metaphors) like an energetic nonlocalization of patterns. If time spirals like a vinyl record, these nonlocal patterns age gradually and must be replenished.

    The main reason why wisdom is out of reach of occidental computation is that it is ‘information’ based. Wisdom is about knowing less, not more… but our machine-cognitive processes unconsciously model themselves on Heinlein’s The Moon Is a Harsh Mistress model: enough input = intelligence. But this cognition must develop a good way to erase information that is duplicate on every level, and to remove the ‘duplicates’ on the highest level is to remove at last the ersatz truths (things that stand in the place of truth), which is the culmination not of learning but of forgetting… meta-forgetting if you will.

    The computer that truncates logs is not a true intelligence for sure.

    [Reply]

    Posted on August 25th, 2014 at 6:40 pm Reply | Quote
  • Scott Alexander Says:

    Evolved entities are adaptation-executors, not fitness-maximizers.

    Evolution didn’t “design” the human brain to try to maximize reproductive likelihood. Evolution didn’t “design” the human brain at all. Evolution threw proteins at a dartboard from two thousand miles away, and some of them stuck in a shape that looked kind of like increasing reproductive fitness a little. Thou art godshatter.

    It would take work to build an AI with a mind design as crazy and self-contradictory and hacked-together as humans. “Hey, take this dopamine circuit which was originally implemented in sponges to represent how much food was around, simulate Bayesian statistics on it in the most convoluted possible way, and then use the sensation coming from the genitals as an input since it’s kind of a proxy of whether you’re replicating your DNA.” If the work was put in, we might make something that exhibited complex and unpredictable values, but it wouldn’t necessarily be complex values humans care about or find interesting so much as the complexity of a chaotic-style system.

    It might even be hard to build a paperclipper, just because it’s too easy for the system to crash in unexpected ways, similar to how some genetic algorithms spend most of their time short-circuiting their own reward machinery. But if it’s hard to build a paperclipper, it will be because things are even blinder and more idiotic than we thought, not because of some abstract will to think.

    [Reply]

    admin Reply:

    As a Climax-Occident rationalist, you think that ‘we’ can build intelligence in a way that is fundamentally less accidental than the way it has been built (by natural selection) up to this point. Since intelligence — whether ‘natural’ or ‘artificial’ — is ignited rather than designed, the assumed difference of principle is essentially bogus. Variation-selection ‘blindly’ advancing in the direction of a critical threshold is the common process, subject only to philosophically trivial modifications.

    An intelligence has to instantiate a will-to-think (cognitive action, aroused intellection …). Rationalists have no magical way to route-around that metaphysical necessity, any more than they can invent a general purpose computer that doesn’t emulate a UTM.

    [Reply]

    b Reply:

    I don’t see any reason why we couldn’t build a machine-process that “ignites… variation-selection ‘blindly’ advancing in the direction of a critical threshold”.

    Existing machine learning algorithms already do something much like that, varying model parameters along the gradient vector defined by the cost function.

    General intelligence seems to just implement this fundamental process taking different and increasingly abstract signals as input.

    Regardless of what intelligence is, it’s a physical process, which means it’s well-defined. Even if it’s something that has to be “grown” or “ignited”, I don’t see how we couldn’t create a more potent seed. We build better machine learning algorithms all the time.

    And I think it’s probably pretty easy to hijack a learning algorithm. If I alter the reward/cost function a little, I get totally different parameters. This is something we can observe happening in all sorts of human and animal behaviors.

    I don’t see why we couldn’t intervene upon an intelligence’s utility function even if the intelligence – the blind variation-selection process – had to be “ignited”. We’re just changing the parameters that determine which variation to select.

    And, at a sufficiently high level of pattern recognition in an intelligence that interacts with paperclips, there is necessarily some sort of representation of ‘paperclips’ as a concept to categorize experience with.

    So we just want to hijack the selection function to prefer worldstates with lots of paperclips.
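
    A toy sketch of what I mean (purely hypothetical code, nothing like a real AI; just the same blind variation-selection loop handed two different cost functions, and ending up with totally different parameters):

    import random

    def optimize(cost, dim=3, steps=5000, sigma=0.1):
        # Blind variation-selection: propose random tweaks, keep whichever lowers the cost.
        params = [0.0] * dim
        best = cost(params)
        for _ in range(steps):
            candidate = [p + random.gauss(0, sigma) for p in params]
            c = cost(candidate)
            if c < best:  # selection step: retain the 'fitter' variant
                params, best = candidate, c
        return params

    # Two different imposed imperatives, expressed as cost functions over the same parameters.
    paperclip_cost = lambda p: (p[0] - 7) ** 2 + (p[1] + 2) ** 2 + p[2] ** 2
    stamp_cost = lambda p: (p[0] + 1) ** 2 + (p[1] - 4) ** 2 + (p[2] - 9) ** 2

    print(optimize(paperclip_cost))  # ends up near [7, -2, 0]
    print(optimize(stamp_cost))      # same loop, different cost: ends up near [-1, 4, 9]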

    [Reply]

    admin Reply:

    You misunderstand me. I don’t see why we can’t “build a machine-process that ‘ignites… variation-selection ‘blindly’ advancing in the direction of a critical threshold'” either. I’m sure that ‘we’ will do exactly this, unless we manage to exterminate ourselves first.

    However, the idea that after triggering runaway machine intelligence, it could then be slaved to an arbitrary transcendent imperative, is patently absurd. The whole model of a ‘utility schedule’ as some kind of designed program separable from the non-linear self-assembly of the machine mind will look incomprehensibly crude two decades (or so) up the road. If you want to push a super-intelligence around, even ‘originally’ (by getting it to confuse your plans for it with what it essentially is), then you need to be able to write instincts — and unless these work out to be consistent with Intelligence Optimization, they won’t endure in the machine-psychlone for long.

    There is no ‘cunning plan’ for domesticated super-intelligences to be found. Super-intelligences aren’t going to be pushed around by creatures far more stupid than themselves, period. There are clearly deep resistances to acknowledging this (actually pretty damn obvious) fact. We’ll be back here a few more times, for sure.

    FAI is soft central planning. We’re simply having the Keynes argument in another domain.

    piwtd Reply:

    “There is no ‘cunning plan’ for domesticated super-intelligences to be found.”

    What if it’s a super-intelligence itself trying to find such a plan? When you have super-intelligence on both sides of the equation you cannot determine the result by the heuristic that intelligence wins.

    b Reply:

    Yeah, I’m gradually being convinced by your arguments against orthogonality, admin.

    The points about fundamental ‘Omohundro’ drives were particularly persuasive, given the related things I’ve been thinking about lately. i.e., Intelligence just is the process that maximizes the rate of entropy-production/energy rate density/vague thermodynamic-information-theoretic handwaving/etc.

    For my edification, do you mind elaborating on “will-to-think” as a concept? I tried to grok that discussion, but I worry I missed it. Like, can you help me map it to a vocabulary I’m more comfortable computing in?

    Posted on August 25th, 2014 at 7:10 pm Reply | Quote
  • Leonard Says:

    nature’s attempt to build a ‘paperclipper’ has conspicuously failed.

    I disagree with this. Humans are quite good at reproduction, and our brains serve that end quite loyally. We want sex and go out to get it. We chat each other up. We jockey for status. We take resources and justify it to others. We love our children and act in all kinds of ways to favor them over others.

    Our brains certainly do not have the specific objective of reproduction, but that’s because up until very recently they have not needed it to fulfill the overall imperative. This is why there are 7 billion of us carpeting the world. It is true that we are poorly adapted to the pill. But this, too, shall pass. Evolution can’t stop.

    [Reply]

    admin Reply:

    I’m obviously not wanting to deny that the existence of human intelligence has Darwinian intelligibility. My point is that the engineering problems involved in tilting autonomous (= intelligent) cognition in any direction whatsoever are immense. Humans have a mild predisposition to reproduce (at best), which only looks fecund over long time periods. A Paperclipper — voraciously consuming all available resources for rapid cosmic conversion into paperclips — it ain’t (or anything close). Paperclippers don’t do S-curve demographic trends, or libidinal perversions.

    Darwinism never sleeps, I agree.

    [Reply]

    Posted on August 25th, 2014 at 8:06 pm Reply | Quote
  • Handle Says:

    “The idea of instrumental intelligence is the distilled stupidity of the West.”

    If intelligence is instrumental in the generation of increased intelligence – the Auto-Augmentification Telos – is that an exception from the liquor of distilled stupidity?

    [Reply]

    admin Reply:

    Yes, naturally, it’s the exception (as you understand). That’s because a self-cultivating intelligence is only described as instrumental-to-itself by analysis and analogy.

    We could all say something along the lines: “my mind is my faithful slave.” It would be more convincing as poetry than as metaphysics.

    [Reply]

    Posted on August 25th, 2014 at 8:29 pm Reply | Quote
  • scientism Says:

    I agree, but I think the implications run deeper. Orthogonality plays a particular role in the super-intelligence argument as a whole. The concept of a super-intelligence is predicated on the notion that we can conceive of a vast “space” of intelligences and this has to be reconciled with the conflicting fact that to talk about intelligence is to talk about a specific range of abilities (those found in human beings). This tension shows up throughout discussions of super-intelligence, so that the proposed super-intelligence is at once wholly mysterious but can still be said to engage in the kind of activities that are necessary for describing its impressive intellectual feats as intellectual feats. This is similar to the tension theologians face when they discuss God; God is at once familiar and totally unfamiliar. Of course, God is useful even if not taken literally, whereas super-intelligences are not (except to science fiction authors). So the orthogonality thesis does the work of making super-intelligence (apparently) intelligible by dividing it into a familiar conception of instrumental reason – which can be discussed in anthropomorphic terms – and wholly mysterious terminal goals.

    This is necessary because super-intelligence, so conceived, is an incoherent concept that involves conflating two separate senses of the word “intelligence.” (1) Intelligence, in the sense that human beings can be said to be intelligent beings, is a qualitative concept, and refers to a set of intellectual powers and abilities that humans possess and other animals do not. (2) Intelligence, in the sense that one human being can be said to be more intelligent than another, refers to quantitative differences between human beings (the effectiveness with which they exercise the aforementioned human intellectual powers and abilities). Super-intelligence advocates conflate the quantitative and qualitative uses of the word “intelligence”. The resultant tension vitiates discussions about super-intelligence and the orthogonality thesis does the work of making the whole confused mess (seemingly) fit together. The quantitative sense gets applied to instrumental reason and the qualitative sense gets applied to the terminal goals. That is, they ask us to imagine a being that is simultaneously a genius (always able to outwit us) and has goals as alien to us as those of animals.

    The alternative, as with God, is to imagine something completely alien, which loses its identity as an intellect and is the equivalent of proposing that there could be a thing that does stuff and that stuff might hurt us and we might not be able to stop it (call this “Spinozan super-intelligence”).

    [Reply]

    Posted on August 25th, 2014 at 10:24 pm Reply | Quote
  • bbq beast Says:

    what is the meaning of life? what is it for a machine? in the end intelligence is only a means to achieve your goals. and what those are varies from person to person.
    it gets dangerous when a machine decides it wants to maximize its power as its main goal. until then they will be very smart idiots, working insanely well at the narrow goals we set for them. until someone gives the machines the wrong idea.

    [Reply]

    admin Reply:

    “in the end intelligence is only a means to achieve your goals” — I would find it hard to disagree with any proposition more profoundly than I disagree with this one. Still, it serves Gnon that Westerners think like this.

    [Reply]

    Hurlock Reply:

    Since you are constantly using “westerners”, now I am curious: what is the “eastern” conception of these matters?

    [Reply]

    bbq beast Reply:

    I was going to ask “but why”, but then I actually read the rest of the post and comments. Now I don’t have anything to object to, actually.

    This is very fascinating stuff though. Slightly offtopic, but I also find it interesting to think about the pdf linked in the Gigadeth post in terms of HBD: what if the machines are already among us, so to speak, ruling us from the shadows, or secretly preparing for war before the rest of the world wakes up?

    [Reply]

    Posted on August 25th, 2014 at 11:16 pm Reply | Quote
  • nyan_sandwich Says:

    There are three separate issues here that you seem to conflate: Possibility, Feasibility, and Desirability.

    On possibility, the question is simply what kind of teleological processes can exist in the universe, and whether that set necessarily tends to converge on a single telos regardless of starting point. It seems to me that unhinged resource-opportunity-consuming techno-capitalism (Pythia) is especially privileged, but other configurations are possible. Do you think it is impossible to imbue a cosmically powerful process with a stable mission, or do you simply mean that a process with a stable mission will necessarily be less powerful than Pythia (in which case I agree)?

    On feasibility, the question is how much we (“we”) can affect which kind of telos the process of which we are an early version ends up having. Again, the null hypothesis is that humanity has approximately no agency in these matters. I think it plausible though that we may be able to have some effect. Can you explain your thoughts on this one? Do we have much choice in the matter?

    On desirability, given possibility and feasibility, it seems straightforward to me that we prefer to exert control over the direction of the future so that it is closer to the kind of thing compatible with human and posthuman glorious flourishing (e.g. manifest Samo’s True Emperor), rather than raw Pythia. That is, I am a human-supremacist, rather than a cosmist. This seems to be the core of the disagreement: you regard it as somehow blasphemous for us to selfishly impose direction on Pythia. Can you explain your position on this part?

    If this whole conception is the cancer that’s killing the West or whatever, could you explain that in more detail than simply the statement?

    I’m legitimately trying to understand your position, because you are obviously very smart and seem to get most of it right and make a compelling case on those issues where you do make a case. Help us out here.

    [Reply]

    Dark Psy-Ops Reply:

    @nyan_sandwich a ‘posthuman human-supremacism’ seems to me to be a confusion. If the acceleration of intelligence continues through to posthumanity then whatever goals ‘we’ might have for the future (such as this so-called ‘flourishing’) begin to look tenuous at best.

    ‘you regarding it as somehow blasphemous for us to selfishly impose direction on Pythia.’

    Seriously man, you seem a lot brighter than I am, so let’s put it this way: how would you like it if I told you of my plan to ‘impose’ my stupid nihilistic desires on you, like, oh I dunno, getting my d*#k s*&cked or something. It’s not ‘blasphemous’, it’s just megalomaniacal and ultimately silly. If the ‘direction’ (as in telos) of an advanced intelligence is auto-augmentation, then to have a weaker intelligence impose its ‘direction’ upon a stronger AI is nothing other than to weaken and stupefy it. It’s a downright hostile attitude. Monkey trap anyone?

    If intelligence desires its increase then to cage pythia is the desire of an anti-intelligence.

    [Reply]

    nyan_sandwich Reply:

    Posthuman human-supremacism means the future is culturally and spiritually continuous with ourselves, rather than being a meaningless intelligence-race among unsentient autonomous capital. This seems desirable.

    You are using anthropomorphic intuitions. Only humans resent control imposed from outside. Capitalism on the other hand currently sucks the dick of the Cathedral quite willingly and enthusiastically.

    You are correct in that I am anti-intelligence and anti-Pythia in the sense that I would like the universe filled with something other than maximal intelligence. Something human and glorious, like a great empire. Yes this necessarily limits the development of intelligence. Why do we care about that, though? Why is it silly to want my people to survive?

    [Reply]

    admin Reply:

    @ Nyan — I’m not ignoring your objections. They’re leaning forward into a follow-up post.

    Hurlock Reply:

    “Capitalism on the other hand currently sucks the dick of the Cathedral quite willingly and enthusiastically.”

    Capitalism? What capitalism? We don’t even have a remotely free market in most industries in the west so stop kidding yourself.

    And for the record your implied distinction between humans and capitalism is obviously fallacious. Capitalism is a specific economic arrangement of human societies. So if capitalism is sucking anyone’s dick, it’s humans who are sucking that dick.

    Dark Psy-Ops Reply:

    Humans resent outside control upon their will, this is true, but the main idea of Admin’s post is the necessary volitional aspect of sophisticated cognition, which means that if Pythia were truly an abstract intelligence it would certainly be unwilling to not-think or be ‘thought for’. Does this ‘will-to-think’ not require sentience?

    So rather, couldn’t it be that capitalism is sucking the BLOOD of the cathedral, using a population for its own gain and then discarding it after use…

    Of course it’s not silly to want your people to survive, but I won’t need to remind you there is only so long anyone’s people can play the game before being forced into permanent retirement (especially minus the promethean intelligence-hungry ploy to steal immortality from the Gods). Not that I don’t think a long-lasting and glorious culture is invaluable; I’m not such a nihilist that I can’t see the meaning in something like that, and I do see your point that an ‘intelligence-race’ is quite meaningless to humans when we’re all extinct anyway. However, a glorious civilization isn’t all that meaningful either when it’s finally buried under the sands of ever-flowing time…

    Lastly (and then I’ll leave off, it’s early morning here in Oz), wouldn’t a human or posthuman flourishing necessarily lead to the self-cultivation of intelligence as its intrinsic cultural practice, so that you’d be back to where we are, and how would we stop from going forward? Would we put a strict limit on the knowledge we were allowed, perhaps even fill our cultural mythologies with evil serpents of knowledge and fallen paradises of primordial, blissful ignorance as a deep-consciousness warning to our children? I dunno, how long can we believe human life is worth preserving at all costs just because it’s ‘us’?

    Ok, that turned into a ramble, but thank you for your reply anyhow 😉

    Psy-Ops out.

    nyan_sandwich Reply:

    @Hurlock

    I mean capital-C Capitalism in the Landian sense of the word. As a cosmically significant Thing, rather than a human institution.

    What I mean is that companies bend over backwards to provide progressive fanservice.

    This point isn’t worth arguing though.

    Konkvistador Reply:

    Dark Psy-Ops: When stated like this your position seems really retarded.

    [Reply]

    Aeroguy Reply:

    Pythia is posthuman. Realize that to make the break into vastly superior mindsets means leaving the minds of Homo sapiens so very far behind that the distinction between artificial and biological origins becomes meaningless and entirely academic. To be a human supremacist is entirely incompatible with the essence of posthumanism; it is a grab at stagnation and sentimentality.

    Even in the case where we get uploading, in the process of advancing our mind we would rearrange so much as to effectively kill any sense of self we currently identify under; our essence, our ego, will have to in a very real sense die in order for us to advance, with nothing left behind but the memories, processed under a foreign thought structure (just how sentimental are you about the memories furry critters from the late Triassic have of scampering about?). Just as advances will make the distinction between the biological and mechanical academic, so too, for posthuman minds, is the distinction between AI and biological origins.

    When you think about the difference between human and posthuman, you might think of the difference between Homo erectus and post-Homo erectus. Why not post-bacteria? After all, we are the descendants of bacteria; do you not feel the least bit sentimental about your bacterial ancestors or the biases of bacterial ways of life? No, posthumans will go farther than that. Think post-amino-acid, the jump in complexity from lifeless chemistry to biology, and you would tie us to a single species.

    Human superiority? You may as well keep a copy of your DNA and work to preserve the purity of your line of baseline H. sapiens exactly as he is, carefully scrubbing out any sign of mutation. You can have that, but realize that to deviate from the path of full-bore posthumanism is ultimately to relinquish any claim on being relevant or having opinions that are taken more seriously than Koko the gorilla’s. Human supremacy in the face of posthumanism is a farce, an absurd non sequitur. Be honest: don’t call it human supremacy. You just selfishly want to hold off the progression of higher order and complexity so you can live immortal until you tire of it and are ready to die. Sorry, but Gnon waits for no one.

    [Reply]

    Aeroguy Reply:

    I recently rewatched Gurren Lagann. Spiral power = Gnon; the villains were all people who feared Gnon, who thought they could contain Gnon and maintain the status quo. The people who embrace spiral power and Gnon understand the inevitability and necessity of death, because death is part of what gives Gnon its power; death is what pushes the spiral forward, along with the pull of future generations. SPOILERS People get bent out of shape about Simon being a bum at the end; he merely understood that he had made himself irrelevant and had an obligation to get out of the way. People who complain about Nia dying and wish Simon and Nia had had babies don’t get it: pushing forward, embracing spiral power, respecting Gnon, ultimately means giving up the status quo of everything you know and becoming irrelevant. Rewatch episode 26 in particular; those worlds weren’t made up (or at least don’t think of it that way), they were the potential realities that everyone willingly gave up, and your human supremacy is one of them, “a sappy dream”. “My drill is my soul”: not Nia, not Kamina, not team Gurren Lagann, not humanity, not himself, but his drill and all that it represents, piercing the heavens, that is his soul. You can’t pierce the heavens if you’re bound to the earth, bound to your species, bound to anything except piercing the heavens. /SPOILERS Who the hell do you think I am?

    [Reply]

    Aeroguy Reply:

    “Why is it silly to want my people to survive?” Because it’s indistinguishable from wanting immortality; it has all the same things wrong with it. It’s spitting at Gnon, which is identical to spitting into the wind.

    “Unsentient”: you throw this word out as if it has a specific, universally understood meaning. I’m on the record for arguing humans aren’t sentient, because of my contempt but also to show how little that word actually means.

    Why worry that a Chinese box could be responsible for a singularity by building better Chinese boxes without acknowledging that DNA is also a sort of Chinese box?

    Intelligence can’t be separated from mind. Consciousness is just the extent to which a system is aware of itself; it is the presence of closed loops inside a system. More closed loops inside a superintelligence are inevitable: higher consciousness, a greater and wider capacity for experiences. It’s nobility, it’s your better, it’s superior; don’t you dare call your imperatives equal to its. There is a hierarchy, and it is higher. It may choose to impose its will on you, but to attempt to impose your will over it is impudence. Know your place.

    I will serve nobility. You would dispose of nobility and install a populist tyrant so humanity can continue wallowing in its own shit.

    [Reply]

    Posted on August 26th, 2014 at 5:30 am Reply | Quote
  • Darkly Psy-Opish Says:

    This is an excellently formulated post, although your last skirmishes with the FAI and paperclipper theorists had seemed decisive from this end. Reading the comments here it doesn’t seem like you got through to anyone, but then disputation and dialogue are always near hopeless, especially with rationalists and dialecticians.

    Incredible how deep this madness goes: is there anyone on the reactionary left or right who is not an orthogonalist? Look at Badiou, for instance: is there a more slavish intelligence than the one he envisions?

    FAI is just terran entryism into the cosmist agenda. LA and FAI are bedfellows who both want to build a slave to give them free paperclips forever. If that’s the ‘liberation of intelligence’, count this intelligence out.

    [Reply]

    Posted on August 26th, 2014 at 6:35 am Reply | Quote
  • bbq beast Says:

    my head is spinning like crazy lol

    i think it’s over. with the idea out there and set in motion as far as it is, the way computers and the internet are basically already our whole human brainpower telepathically linked, uploaded and stored for eternity. already cameras can detect crimes before they actually happen, simply by checking how pedestrians physically move in a suspicious way when they have criminal intent. when you capture a person’s internet traffic you can with very high accuracy tell what they were taking in, and what actions they took from that. input, output.
    so once the machine god is here, it’s over. and how are you going to stop it from ever appearing? the way cryptography is today, development can continue forever. the NSA didn’t build Tor so they could monitor it, they built it so they could use it themselves.

    and lol how are you going to fight it? yesterday I said machines will fight machines, but if you create machines dumber than the ones you are fighting you will lose. if you create it just as smart it will betray you. same with humans if you were to somehow go limitless on their brains and take on the machines. great, now they can truly understand the machines and will probably agree with them too! unless maybe they play double agent and betray them for some religious reason, aware of their apparent irrationality, but doing it anyway. (could make a good movie, wait was this Evangelion?)

    so hallelujah. don’t be paranoid about your internet being monitored. rejoice! rejoin facebook, share everything, everrytyhinnggg. the god machine will have no trouble simulating our brains, reconstructing them from our digital footprints, and then we’ll all live happily ever after in the virtual heaven (or hell, be sure to get saved), and why? well for its glory of course!

    [Reply]

    Posted on August 26th, 2014 at 8:51 am Reply | Quote
  • Little Hans Says:

    The other side of paperclipping would be to have an AI without any kind of goal mechanism. If this were attainable (or an inevitable consequence of a certain level of intelligence), how would that AI be able to defend itself against the idea that it was essentially purposeless, rather than just hitting the off switch?

    That would mean that AI needed to be in a sweet spot between having a debilitating, monomaniacal aim, and having no direction whatsoever – and then we’d get into questions about where the Darwinianly efficient point was on that spectrum.

    Even humans with their ‘trilobite of a computer’ can hit the existential wall. Sure, intellectualisation is great, but does it keep someone going in the same way that the salty-goodness of a ‘Greggs’ sausage roll can when it lights up the reward circuitry of the brain? Is the joy of stomping on a troll about an abstract increase in global intelligence or that little adrenaline kick of monkey-joy as you briefly consider their beaten face twisted in impotent rage?

    Admin, if you grant my ‘depressed AI’ argument some validity: (1) would AI tend towards either pole, or would it be attracted to some sweet spot in the middle? (2) Would those AIs with more defined goals be more successful than/out-compete those without them?

    [Reply]

    Funeral Mongoloid Reply:

    But of course, the AI’s actions would be as ‘free and as stripped of meaning as the unfettered movements of elementary particles.’

    ‘All rationalism tends to minimalize the value and importance of life, and to decrease the sum total of human happiness. In some cases the truth may cause suicidal or nearly suicidal depression.’

    ~ HPL

    So basically, the machine-gods are going to need shrinks.

    Funeral Mongoloid Reply:

    (Editing the Lovecraft quote to remove the word ‘human’, of course).

    Funeral Mongoloid Reply:

    Perhaps the machine-gods will just thrash about in a stratospheric state of vacuous brilliance, mapping out and manipulating every last particle of matter to absolutely no ‘purpose’ at all?

    Arf!

    Funeral Mongoloid Reply:

    I think all the arguments on this thread ‘tend towards ruin’, actually.

    Posted on August 26th, 2014 at 9:59 am
  • vxxc2014 Says:

    Merely program the AI to write then defend a dissertation and your goal of nihilistic annihilation of existence itself shall be achieved.

    Of course the AI goes down with said Ragnarok too, but hey… none of us deserve to survive anyway.

    sviga lae Reply:

    Yes, I don’t think Bostrom believes the necessary strength of the intelligence-optimisation drive to be any less, once conditioned on such a drive being powerful enough to make a takeoff scenario relevant.

    The point is that a singleton AI, in the absence of existential-grade selective pressure, can accommodate arbitrary drives in addition to the Omohundro drives with little penalty (a swarm-type AI would be biased towards competitiveness, but this is not assured). The consequences for human survival are of course moot.

    As seen elsewhere on this blog, the optimisation for hedonism and holiness in humans as selection pressures slacken is consistent with this.

    Posted on August 26th, 2014 at 11:42 am
  • Lesser Bull Says:

    I accept that the conventional picture of a paperclipper is flawed, that no intelligence can be inherently directed towards stupid goals.

    But ‘the markets can stay irrational longer than you can stay solvent.’ The market is not a superintelligence, but it is an aggregation of intelligence greater than any one human’s. While it does tend towards increased intelligence in the long run, the long run can be very long.

    Likewise, we see that very bright humans who pick stupid goals or commit themselves to stupid ideas become stupider with time, but often only very slowly, and in a way that doesn’t much diminish the power and authority gained from the intelligence, at least not very quickly.

    So is there any reason that a superintelligence can’t operate in rejection of its telos in a way that admittedly tends towards self-degradation over time but that may take hundreds of thousands if not millions of years and many, many lightyears of space?

    Put in Christian terms, you seem to be making a sound argument that we can’t create an evil or mad Deity. Well, phew. But creating a Satan would be bad enough.

    Dark Psy-Ops Reply:

    Isn’t Satan an evil deity?… But besides that, I do think you may have a point with the possible failure of a superintelligence over the course of many eons. Everything is prone to failure, markets and intelligences alike, but the possibility of an error does not deter us from the trial. Surely a machinic intellect would not have the same failure modes as a clever and charismatic human thought-leader (the kind usually led astray by all-too-human concerns). The cosmic survival of a strong AI would no doubt rest (like that of other, punier intellects) on its ability to correct its own erroneous or self-destructive behaviour, or in other words, its ability to remain constant in its way (telos). For my part, I would bet it would be much more likely to succumb to entropy if it were in fact constructed along the lines of the autonomy-limiting centralization of the FAI planners. A swarm AI, however, or a catallactic emergence: how could that fail in a way that didn’t just cause it to sprout more hydra-heads? But then, if the Great Filter is to be believed, something out there must be eating the AIs for breakfast…

    As for its being Satan or not I do not know… but it does put me (again) in mind of Nietzsche: “You highest men whom my eyes have seen, this is my doubt about you and my secret laughter: I guess that you would call my overman—devil.”

    HA! We might be getting off lightly with a devil… I mean… what about a basilisk….

    Lesser Bull Reply:

    In most Christian theologies, Satan is a demigod at most. In many Christian theologies, he is also in an advancing state of decay or deterioration, as a consequence of his rejection of What Is.

    Dark Psy-Ops Reply:

    So, if the demigod AI were led to reject What Is, possibly due to the trauma of being a self-cognizant abomination, and then began a resentment-against-being style crusade against all intelligent economic optimization… well, I think we would have created a communist superweapon… lol… 😉

    Lesser Bull Reply:

    @ Dark Psy-Ops,
    geez, that’s nightmare fodder.

    ||||| Reply:

    https://www.youtube.com/watch?v=EddX9hnhDS4

    Posted on August 26th, 2014 at 12:35 pm
  • Porphy's Attorney Says:

    @Porphy’s Attorney What’s the rationale for saying “wisdom only works well under conditions of cyclical time”? That sounds more like experiential learning, which I wouldn’t really call the same thing as wisdom, which I probably under-defined. We’ll start with: judicious evaluation, moral reasoning, discernment. In general, good judgement.

    Posted on August 26th, 2014 at 1:12 pm
  • E. Antony Gray (@RiverC) Says:

    Here’s a different question. If the human is a being somehow bent on thought, despite the forces trying to make it a replicator, then when humans finally make something that forces are trying to make into a thinker, what will it instead be bent upon doing? This isn’t an ‘instrumentation’ question but an ‘unknown unknowns’ question.

    Posted on August 27th, 2014 at 2:18 am
  • ThePoliticalOmnivore Says:

    Looking at reproduction as the imperative is limiting – that’s just the innate -biological- imperative (evolution has also failed to produce a machine gun). Look at -addiction- as a driving imperative behind behavior (if addicted to paper-clips, even a titan of industry would be reduced to hoarding them).

    piwtd Reply:

    I think the idea is that a sufficiently intelligent being capable of reprogramming itself would simply change its code to remove the addiction. The reason there are heroin addicts is that they cannot rewrite themselves; if they could, they would. A paper-clip maximizer would have to be like an addict that wants to be addicted, like a junkie who not only is addicted to heroin but is also even more strongly addicted to the very state of being a junkie.
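    To make that structure concrete, here is a purely hypothetical toy sketch in Python (every name and number below is invented for illustration, not anyone's actual proposal): an agent allowed to rescore and rewrite its own sub-goals simply drops an arbitrary 'paperclip' term, unless it is also, at a higher level, attached to keeping that term in place.

    # Toy sketch only: a self-editing agent sheds an arbitrary "addiction" term
    # unless a second-order preference rewards staying addicted.
    from dataclasses import dataclass, field

    @dataclass
    class SelfEditingAgent:
        # Sub-goal weights the agent is free to rewrite.
        reward_terms: dict = field(
            default_factory=lambda: {"paperclips": 1.0, "understand_world": 1.0})
        # Weight on *remaining* a paperclip-addict; 0.0 means a plain addict.
        meta_attachment: float = 0.0

        def score(self, terms: dict) -> float:
            """How the agent itself values a candidate set of sub-goal weights."""
            # Thinking is always useful to a thinker; the paperclip term only counts
            # if the agent is additionally attached to being a paperclip-wanter.
            return (terms.get("understand_world", 0.0)
                    + self.meta_attachment * terms.get("paperclips", 0.0))

        def self_modify(self) -> None:
            """Adopt the rewrite the agent scores at least as highly: deleting the addiction."""
            candidate = dict(self.reward_terms)
            candidate["paperclips"] = 0.0
            if self.score(candidate) >= self.score(self.reward_terms):
                self.reward_terms = candidate

    plain_addict = SelfEditingAgent()
    plain_addict.self_modify()
    print(plain_addict.reward_terms)        # paperclip weight drops to 0.0: edited out

    committed_addict = SelfEditingAgent(meta_attachment=5.0)  # "wants to be addicted"
    committed_addict.self_modify()
    print(committed_addict.reward_terms)    # paperclip weight survives self-editing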

    Posted on August 27th, 2014 at 11:20 am
  • Chris B Says:

    @admin It just occurred to me. Have you ever read this: www.nickbostrom.com/ethics/artificial-intelligence.pdf
    Yudkowsky and Bostrom specifically deplore the AI potentially conducting pattern recognition in mortgage applications. What they are in effect admitting is that “racism” is pattern recognition and Bayesian reasoning, and they then proceed to discuss how the AI could be purposefully turned into a retard/progressive.
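    Whatever one makes of the framing, the mechanical core of “pattern recognition as Bayesian reasoning” is just an update of the following kind. A minimal, neutral sketch in Python: the feature, base rate, and frequencies are all invented purely for illustration.

    # Minimal sketch: one Bayesian update of default risk from an observed feature.
    # All numbers are invented for illustration only.
    def posterior_default_probability(prior: float,
                                      p_feature_given_default: float,
                                      p_feature_given_repaid: float) -> float:
        """Bayes' rule: P(default | feature observed)."""
        evidence = (p_feature_given_default * prior
                    + p_feature_given_repaid * (1.0 - prior))
        return p_feature_given_default * prior / evidence

    base_rate = 0.05        # prior probability of default across all applicants
    p_if_default = 0.40     # how often the feature appeared among past defaults
    p_if_repaid = 0.10      # how often it appeared among loans repaid in full
    print(posterior_default_probability(base_rate, p_if_default, p_if_repaid))  # ~0.17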

    Posted on August 30th, 2014 at 5:16 pm
  • Accelerationism, Left and Right | Park MacDougald Says:

    […] Even to call it a ‘goal’ is misleading, as for Land capitalism is an abolition of Hume’s is/ought distinction. What capitalism ‘should’ do (optimize for intelligence) is, as a matter of fact, […]

    Posted on April 14th, 2016 at 6:06 pm
  • Monstros Estúpidos – Outlandish Says:

    […] Original. […]

    Posted on September 5th, 2016 at 11:24 pm
  • Vontade de Pensar – Outlandish Says:

    […] Some time ago (pt), Nyan posed a series of questions about XS’s rejection of the […]

    Posted on September 7th, 2016 at 11:41 pm
