The Dark Forest

Volume two of Cixin Liu’s science fiction trilogy.

The universe had once been bright, too. For a short time after the big bang, all matter existed in the form of light, and only after the universe turned to burnt ash did heavy elements precipitate out of the darkness and form planets and life. Darkness was the mother of life and civilization.

The dark forest is the universe, but to get there — with insight — takes a path through Cosmic Sociology:

“See how the stars are points? The factors of chaos and randomness in the complex makeups of every civilized society in the universe get filtered out by distance, so those civilizations can act as reference points that are relatively easy to manipulate mathematically.”
“But there’s nothing concrete to study in your cosmic sociology, Dr. Ye. Surveys and experiments aren’t really possible.”
“That means your ultimate result will be purely theoretical. Like Euclid’s geometry, you’ll set up a few simple axioms at first, then derive an overall theoretic system using those axioms as a foundation.”
“It’s all fascinating, but what would the axioms of cosmic sociology be?”
“First: Survival is the primary need of civilization. Second: Civilization continuously grows and expands, but the total matter in the universe remains constant.”

“Those two axioms are solid enough from a sociological perspective … but you rattled them off so quickly, like you’d already worked them out,” Luo Ji said, a little surprised.
“I’ve been thinking about this for most of my life, but I’ve never spoken about it with anyone before. I don’t know why, really. … One more thing: To derive a basic picture of cosmic sociology from these two axioms, you need two other important concepts: chains of suspicion, and the technological explosion.”

The derivation from these axioms is the Exterminator. Resource conflicts between civilizations follow strictly from the two axioms. Game-theoretic tension is added by irreducible suspicion and the technological explosion.

“That’s the most important aspect of the chain of suspicion. It’s unrelated to the civilization’s own morality and social structure. … Regardless of whether civilizations are internally benevolent or malicious, when they enter the web formed by the chains of suspicion, they’re all identical.”

Which is to say, they are all threats to each other, intrinsically and irresolvably. Technological explosion means that any civilization represents a menace of inestimable potential, able to escalate massively within a span of mere centuries, and “On the scale of the universe, several hundred years is the snap of a finger.” An intolerable danger, then.
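
A minimal expected-value sketch of that strike logic (the payoffs and probabilities below are illustrative assumptions, not anything from the novel): once annihilation is catastrophic and the chain of suspicion keeps the chance of hostility above zero, even a modest chance of a technological explosion makes shooting first the better bet.

```python
# Toy model: a civilization detects another and must choose "strike" or "ignore".
# All values are illustrative assumptions, not Liu's numbers.

SURVIVE, ANNIHILATED, EXPOSURE_COST = 1.0, -100.0, -1.0

def expected_payoff(action, p_hostile, p_explosion):
    """Expected payoff toward a newly detected civilization.

    p_hostile:   chance the other side would strike you if able
                 (the chain of suspicion keeps this above zero).
    p_explosion: chance its technology explodes past yours before you can react.
    """
    if action == "strike":
        # The threat is removed, at the cost of revealing your own position.
        return SURVIVE + EXPOSURE_COST
    # "ignore": safe unless the other side is hostile AND out-develops you.
    p_doom = p_hostile * p_explosion
    return p_doom * ANNIHILATED + (1 - p_doom) * SURVIVE

for p_hostile in (0.05, 0.2, 0.5):
    for p_explosion in (0.1, 0.5):
        strike = expected_payoff("strike", p_hostile, p_explosion)
        ignore = expected_payoff("ignore", p_hostile, p_explosion)
        choice = "strike" if strike > ignore else "ignore"
        print(f"p_hostile={p_hostile:.2f}  p_explosion={p_explosion:.2f}  "
              f"strike={strike:+.1f}  ignore={ignore:+.1f}  ->  {choice}")
```

Only the very smallest combined risk keeps “ignore” ahead; everywhere else the catastrophic downside dominates, which is the entire force of the two axioms.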

“That’s … that’s really dark.”
“The real universe is just that black.” Luo Ji waved a hand, feeling the darkness as if stroking velvet. “The universe is a dark forest. Every civilization is an armed hunter stalking through the trees like a ghost, gently pushing aside branches that block the path and trying to tread without sound. Even breathing is done with care. The hunter has to be careful, because everywhere in the forest are stealthy hunters like him. If he finds other life — another hunter, an angel, or a demon, a delicate infant or a tottering old man, a fairy or a demigod — there’s only one thing he can do: open fire and eliminate them. In this forest, hell is other people. An eternal threat that any life that exposes its own existence will be swiftly wiped out. This is the picture of cosmic civilization. It’s the explanation for the Fermi Paradox.”

October 1, 2015 · admin · 34 Comments


34 Responses to this entry

  • michael Says:

    by that logic the most successful civilization’s greatest threat is itself


    michael Reply:

    Unless it figures out expansion is not necessarily good, perhaps even an Achilles heel. Maybe a civilization with the edge that’s thinking expand-or-die ought instead think: wipe out the competition while I have this edge, then find the sustainable sweet spot. What advantage is there to out-competing the rest of the world/universe only to then compete with myself and a finite universe? How about a world with, say, only Europe occupied and the rest a resource bank and natural playground? What loss is there to the people that are never born? None. To those that exist it’s a sustainable paradise. It could improve but not expand; sure, there’s no longer a feel of unlimited capitalist possibilities, but that’s an illusion we feel because of the capitalist gap between cultures and because we haven’t hit the natural resource limits yet, but somewhere out there is the same limit that a smaller civilization would have, just not as pleasantly. Even then “limit” is not quite the term; exploitation dividend is more like it. The socialists argue the zero sum, as does the scientist philosopher above. There’s plenty of universe and plenty of market, just not infinite, and infinity doesn’t actually exist. Stop living for theories, live for the pleasure of living. Somewhere out there is the limit of Starbucks franchises, but already behind Starbucks are better local alternatives; are they less or more capitalist? More: they are competing, Starbucks is rent seeking. Is there a minimal size a society needs to be to do optimum science or industry or banking? I bet it’s not that large.


    Aeroguy Reply:

    China practiced your strategy for a while; they were the middle kingdom and outside was only barbarians not worth being concerned about. Rome also eventually practiced it: the northern part of Europe was too cold to bother with, let the barbarians have it. Where there exists an outside, there exists the possibility of an expansionist power that could eventually consume your own.

    You are right about it wrapping back around where a universe under one civilization quickly fractures. What we are left with is this dark truth, “only the dead have seen the end of war”.


    michael Reply:

    My strategy is a bit more proactive and final: there’s no need for a wall because all competitors are eliminated or reduced to a theme-park-sized population. It might not solve the problem of social cohesion, but it certainly would have a better chance. The real point, though, is that you eventually arrive at this point anyway; you can plan to live comfortably below the Malthusian threshold or you can wing it and wait to be circumscribed. The less difference, the less conflict and wasted resources. Does it eventually fracture? Well, I have been saying that what HBD may ultimately be telling us is we have not evolved to a point of sustainable governance; monoculture may mitigate this, but I would guess the first order of CRISPR ought to be addressing this, since making ubermensch without manners is a bad idea.

    Exfernal Reply:

    The greatest threat is self-fragmentation.


    Aeroguy Reply:

    Self-fragmentation looks an awful lot like the branching in evolution: the emergence of life from replicating organic compounds, life feeding on itself, species infighting, civilizations infighting, forming ever more intricate order with the matter in the universe.

    There is enormous order (useful energy) in the universe, but it is diffuse. The second law of thermodynamics cries out for that order to be consumed and entropy produced in turn. Engines emerge; they feed on the order to produce a new, more complex localized order even as they accelerate the production of entropy, localized order in the service of entropy. As the localized order becomes grander and more complex it is able to access ever greater planes of diffuse order in the universe. Order feeds upon order, entropy accelerates as the concentration and expansion of localized order increases. The universe is being ordered until, in a final spasm of entropy production, the grandest localized order consumes itself and finally finishes the mission the 2nd law of thermodynamics set out for it; total entropy in the universe is maximized. Unless there exists a frontier outside the known universe, with order to be consumed, as long as more entropy can be produced, life will go on.


    Lucian of Samosata Reply:

    Cool story bro.

    Posted on October 1st, 2015 at 2:23 pm
  • The Dark Forest | Neoreactive Says:

    […] By admin […]

    Posted on October 1st, 2015 at 2:40 pm
  • Denswend Says:

    I did not read the book (though I am planning to, now that it has come to my attention), but there is something that I have to put out here. I (with a dose of uncertainty, mind you) disagree with the assertion that civilizations are the greatest threats to each other, but instead propose that civilizations are the greatest threats to themselves (or at least that the intercivilizational threat is of the same magnitude as the intracivilizational one).

    The most original (and therefore primitive) Malthusianism postulates that populations expand beyond their ability to sustain themselves, and therefore experience mass deaths of various kinds. However, since variability between populations (a most overlooked component of HBD, especially in WN circles) exists, not all segments of a population will die off at an equal rate. Thus, constant Malthusian catastrophes will inexorably alter populations in such a manner that certain traits will evolve.

    Now this is my personal speculation, and one which I too am skeptical of, but given enough alterations, populations will change in such a way that they are capable of producing and maintaining a system whose sole purpose is to delay the Malthusian catastrophes or negate them fully – civilization. Populations which are unaltered in such a way will never amount to anything – any Golden Age they experience will be short-lasting, with lower-intensity genius, and followed by intense fragmentation periods. Populations altered in such a way will necessarily birth systems designed to ward off M. catastrophes and subsequently remove the very same evolutionary pressures which made them capable of creating and maintaining complex systems – and that is the brighter scenario, one only possible when a population exists without any contact with other populations. The darker scenario is that the cooperation and altruism necessary for development (the hawk-dove game works best when all are doves) turn pathological when introduced to hawkish populations.

    Thus, bursts of development of “civilization” (I have no word for a group of people guided by evolutionary principles in such a manner that they prosper in an evolutionary sense, not the piggish social-signaling one) will be rare.

    Honestly, the only way complex life could work is if it evolved in such a way that it can instantly flip between hawk and dove mentality – dove for prosperity, hawk for protecting that prosperity from outer and inner hawks.
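
    A minimal sketch of the standard hawk-dove payoffs being invoked here (the values of V and C are illustrative assumptions, not anything from the book or the thread): when the cost of a fight exceeds the value of the prize, neither all-dove nor all-hawk is stable, and the evolutionarily stable mix contains V/C hawks, which is why a purely dove prosperity always gets invaded.

```python
# Standard hawk-dove game with illustrative values:
# V = value of the contested resource, C = cost of an escalated fight (C > V).
V, C = 2.0, 6.0

def payoff(strategy, opponent):
    """Payoff of one contest for `strategy` against `opponent`."""
    if strategy == "hawk" and opponent == "hawk":
        return (V - C) / 2      # escalate: win or get injured with equal chance
    if strategy == "hawk" and opponent == "dove":
        return V                # dove retreats, hawk takes everything
    if strategy == "dove" and opponent == "hawk":
        return 0.0              # retreat: nothing gained, nothing lost
    return V / 2                # two doves share via display

def fitness(strategy, p_hawk):
    """Expected payoff against a population with hawk frequency p_hawk."""
    return p_hawk * payoff(strategy, "hawk") + (1 - p_hawk) * payoff(strategy, "dove")

p_star = V / C  # mixed ESS: hawks and doves do equally well at this frequency
print(f"ESS hawk frequency: {p_star:.2f}")
for p in (0.0, p_star, 1.0):
    print(f"p_hawk={p:.2f}: hawk fitness={fitness('hawk', p):+.2f}, "
          f"dove fitness={fitness('dove', p):+.2f}")
```

    At p_hawk = 0 the hawk strictly outperforms the dove, so an all-dove population is invadable; at p_hawk = 1 the reverse holds; only the V/C mix is stable.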


    Dark Psy-Ops Reply:

    “Honestly, the only way complex life could work is if it evolved in such a way that it can instantly flip between hawk and dove mentality.”

    Brilliant. Dove civilization can maintain its prosperity by breeding dove-hawk hybrids to eliminate hawks on condition that the hawk-hunters are abundantly provided for by the doves, presumably by being offered juicy slabs of tender dove meat. This could benefit the doves by sacrificing only the most pigeon-like among them, thereby creating a eugenic effect. There’ll be a beautiful race of tasty doves guarded securely by their benevolent dove-hawk overlords… what’s not to like?


    Denswend Reply:

    I do not see alternatives. Too much hawk and cooperation is impossible – and therefore any system which ensures cooperation and punishes defection is impossible (or perverted in the form of “anarcho-tyranny”). Too much dove, and you get “cuckservatives”.

    What you propose is a differentiation (as in physical differentiation) between hawks and doves. I do not see that working. Eventually, one side will prosper. European doves used to be hawks – then the pressures of civilized cooperative society bred them into this.

    I proposed a hybrid, but it must be ubiquitous with no pressures for each side to prosper.

    Hell, this isn’t anything new. Ingroup/outgroup mentality is what makes this hawk-dove mentality possible. The trouble is, really, finding a sufficient criterion for what constitutes the “ingroup”.


    Posted on October 1st, 2015 at 5:43 pm
  • scientism Says:

    “Even breathing is done with care” is such an eloquent explanation of why we can’t see them.

    I wonder if we’re being careful enough.


    Erebus Reply:

    That’s one possible explanation. There are dozens of others. (Most plausible, to me, for a number of reasons, is that we live in an early-universe simulation.) In any case, there could be aliens a few light years away — and we’d have absolutely no hope of detecting them. SETI wastes time looking for radio transmissions, which are declining into total obsolescence even here on Earth, and new initiatives seeking to find anomalies in the IR spectra of distant star-systems are looking solely for Godlike, galaxy-spanning civilizations. If there are Earths there, we’d certainly not be able to see them. (It’s worth noting that the IR search has uncovered a few interesting systems for future review.)

    The Fermi Paradox and The Great Filter are, emphatically, not clever thought experiments. They’re great fodder for Sci-Fi books, but you’d have to have a seriously deficient imagination to take them seriously for even a moment, and they make a number of really wrongheaded assumptions.


    Dark Psy-Ops Reply:

    “The Fermi Paradox and The Great Filter are, emphatically, not clever thought experiments.”

    Isn’t MAD strategy an expression (perhaps not quite ‘clever’) of this thought experiment on the concrete level? We have a technological explosion (quite literally, in the form of WMDs) that in turn creates chains of suspicion between “superpowers”, and this results (historically) in an “overkill redundancy” arms race where destruction of either side is assured a thousand times over. And this is considered the only promising foundation for diplomatic relations. In this scenario disarmament will get you killed, but the reverse may well get everyone killed. It’s a classic prisoner’s dilemma. (Also proxy wars have their place here, where SPs will fight on alien terrain, using relatively harmless low-tech civs as pawns.)
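
    A minimal sketch of that prisoner’s dilemma, with made-up payoffs chosen only to show the ordering: whatever the other side does, “arm” pays better, so mutual armament is the only equilibrium even though mutual disarmament would leave both sides better off.

```python
# Illustrative payoffs for two superpowers choosing "disarm" or "arm".
# The numbers are assumptions that only encode the ordering of outcomes.
# Each entry is (row player's payoff, column player's payoff).
payoffs = {
    ("disarm", "disarm"): (3, 3),   # mutual restraint: best joint outcome
    ("disarm", "arm"):    (0, 5),   # disarming alone "will get you killed"
    ("arm",    "disarm"): (5, 0),   # arming alone gives a decisive edge
    ("arm",    "arm"):    (1, 1),   # the MAD arms race: costly but stable
}

def best_response(opponent_move):
    """Row player's best reply to a fixed opponent move."""
    return max(("disarm", "arm"), key=lambda m: payoffs[(m, opponent_move)][0])

for opp in ("disarm", "arm"):
    print(f"If the other side plays {opp!r}, the best response is {best_response(opp)!r}")
# "arm" is the best response either way, so (arm, arm) is the unique Nash
# equilibrium, despite (disarm, disarm) paying both sides more.
```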

    The Fermi Paradox could also be called the “Catch-22” of Universal Darwinism. In short, the extinction of advanced civilizations has been hard-coded into the vector of escalating technical proficiency. Eros and Thanatos are the same engine of annihilation. In life’s excess resides its destined demise, like a heart that dies young having been full of pity. Once one civ acquires the means to conquer or destroy another with only acceptable cost to itself, anything less is “sheer procrastination”, and in the Dark Forest thinking twice is a fatal sin. It’d be a safe bet to say that MAD strategy (as the inevitable game-theoretic scenario of competing tech SPs) will result in the tragic ruination of all actors every single time without exception.


    Dark Psy-Ops Reply:

    “this results (historically) in an “overkill redundancy””

    The Seven Kill Stele is instructive here.

    Erebus Reply:

    “Isn’t MAD strategy an expression (perhaps not quite ‘clever’) of this thought experiment on the concrete level?”

    No, because neither The Fermi Paradox nor The Great Filter state anything at all with respect to warfare. All they state is: “We can’t see anybody out there. Where is everybody?” The rest is simply conjecture, which tends to be utterly ridiculous. We can’t see any civilizations out there because space is a big place, and we have no way of resolving even relatively close details, to say nothing of distant ones. We can’t detect Earthlike civilizations at any distance. Anything we might be able to see from here would, by definition, be a Godlike society of almost unimaginable technological prowess — and it should not be taken for granted that even those massive-scale societies are observable with the current tools at our disposal.

    …And if you’re dealing with a truly advanced civilization, the rules no longer apply. “Civilization continuously grows and expands, but the total matter in the universe remains constant.” Even this should not be taken for granted; it is theoretically possible to convert energy to matter, and this could explain why we don’t see distant civilizations in the IR spectrum: They might be converting their waste-heat/energy to matter.

    The Fermi Paradox/Great Filter then asks: “Why hasn’t anybody contacted us?” Which happens to be the very definition of a stupid question.

    So the proper response to The Fermi Paradox is: This is not a true paradox. There is as yet insufficient data for a meaningful answer. I reckon we’ll need a few more centuries or millennia of technological development before we can attempt to answer it.

    Besides, how is “survival is the primary need of civilization” working out for the disgraceful Swedes? It looks to me like Western civilization is quite capable of committing suicide. That’s what NRx is for, isn’t it? I mean, the point of NRx is to see the world as it really is, beneath the veneer, and to discover and support stable societal forms.

    Anyway, The Dark Forest was an interesting book. I thought that it was inferior to the first book in the series, but I really liked that alien teardrop part. I’ll say no more. Wouldn’t want to spoil it for anybody.

    Aeroguy Reply:

    “it is theoretically possible to convert energy to matter” True
    “They might be converting their waste-heat/energy to matter” Empirically false: that would be a violation of the laws of thermodynamics. Though to be fair, if instead of making matter they could access pocket/alternate universes, then it could be possible to hide the entropy production there. If that tech were possible it would be very hopeful and could even contradict the finite matter/energy assumption.

    Erebus Reply:

    Fair enough. I’d only note that a SETI paper — “The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. I. Background and Justification” — explicitly suggests that advanced extraterrestrial civilizations may be “expelling their waste heat as neutrinos, efficiently using their energy supply to emit low-entropy radiation, employing energy-to-mass conversion on a massive scale, or [are] violating conservation of energy.”

    It also states: “A null detection would implicate either the violation by alien technology of physical laws we consider fundamental, or the non-existence of ETI’s with large energy supplies and mid-infrared waste heat in our search volume.”

    I guess my point is that we can’t really make any assumptions. We have no firm basis for supposing anything at all, at this stage — so that we’re even suggesting a paradox or a filter is hubris. What we consider to be inviolable laws could mean nothing at all to beings as far above us as we are above gnats. (Which is, explicitly, what these sorts of searches are looking for.)

    Dark Psy-Ops Reply:

    Sorry for the long comment (half is merely quotes)…

    “No, because neither The Fermi Paradox nor The Great Filter state anything at all with respect to warfare.”

    I guess I’m going on what was in this post directly.

    “The universe is a dark forest. Every civilization is an armed hunter stalking through the trees like a ghost … It’s the explanation for the Fermi Paradox.” (Liu)

    “Technological explosion means that any civilization represents a potential menace of inestimable potential, escalating massively within a span of mere centuries … An intolerable danger, then.” (Land)

    And finally, “To derive a basic picture of cosmic sociology from these two axioms, you need two other important concepts: chains of suspicion, and the technological explosion.” (Liu)

    I’d think these last two concepts evoke war as something of a social cosmological constant, and Liu suggests they condition the emergence of all life in the universe to an unavoidable doom. I was merely attempting to further Liu’s explanation, to see if it had any muscle.

    Your point against our competence at detecting alien civs is a strong one, and I don’t have much of a response, other than to change the question subtly from “where are they?” to “what are they doing?”. We might not need to know the potential tech of truly advanced civilizations to guess at the kind of pressures exerted on them (at least at first, before they manage to transcend physical laws). If we can manage to think (if you’ll forgive a joke) what could actually prosper in a black universe, given the constraints of Gnon, then we might well give up looking for ETI altogether, and definitely not try to attract attention, and instead focus all our energies on cryptographic black-out. It could be a strange irony that our universe is abundant with advanced civs that were driven into perfect concealment due to “seeing the world as it really is”. Gnon selects for cunning above all; what looks to us like a Great Filter (or Exterminator) may be the necessary, stringent mechanics to catalyze God-like SI.

    If nothing else, the Fermi Paradox/Great Filter horror duo provides us with an abstract space of concepts to preemptively explore anxieties hinted at by futurist threat analysis. We get the unsettling feeling that whatever survives “out there”, it won’t be “us”. We know Godlike complexity is possible (as of the universe itself) but we also know that life starts basic, and grows incrementally (and exponentially) more complex. We want to explore the leap from base-line tech competence to God-like complexity and ask, “is it possible?”… Though I agree evidence in support of any answer is laughably thin. We can’t see Godlike civilizations, but as you say, that is no refutation to their existence, but given what we know about the entangled web of war, technology, Darwinist propagation, and cosmic horror, can we realistically expect any civilizations to develop past the point of singularity tech without destroying the known universe in consequence? Advanced tech civ will need to overcome many previous (and branching) variations of itself to continue the upward march of complexity. An interstellar civ’s path to the stars will be painted red with the blood of its close relatives. Obsolescence is the flip side of optimization.

    This is all shamelessly wild speculation, and I’m deliberately talking from the perspective of a “contact pessimist”, but there could be an important discussion buried somewhere beneath all the silent horror….

    Here are two relevant quotes from WaitButWhy:

    “Possibility 4) There are scary predator civilizations out there, and most intelligent life knows better than to broadcast any outgoing signals and advertise their location. This is an unpleasant concept and would help explain the lack of any signals being received by the SETI satellites. It also means that we might be the super naive newbies who are being unbelievably stupid and risky by ever broadcasting outward signals. There’s a debate going on currently about whether we should engage in METI (Messaging to Extraterrestrial Intelligence—the reverse of SETI) or not, and most people say we should not. Stephen Hawking warns, “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.” Even Carl Sagan (a general believer that any civilization advanced enough for interstellar travel would be altruistic, not hostile) called the practice of METI “deeply unwise and immature,” and recommended that “the newest children in a strange and uncertain cosmos should listen quietly for a long time, patiently learning about the universe and comparing notes, before shouting into an unknown jungle that we do not understand.” Scary.

    Possibility 5) There’s only one instance of higher-intelligent life—a “superpredator” civilization (like humans are here on Earth)—who is far more advanced than everyone else and keeps it that way by exterminating any intelligent civilization once they get past a certain level. This would suck. The way it might work is that it’s an inefficient use of resources to exterminate all emerging intelligences, maybe because most die out on their own. But past a certain point, the super beings make their move—because to them, an emerging intelligent species becomes like a virus as it starts to grow and spread. This theory suggests that whoever was the first in the galaxy to reach intelligence won, and now no one else has a chance. This would explain the lack of activity out there because it would keep the number of super-intelligent civilizations to just one.”

    Of course, superpredators don’t arise in a vacuum, so there couldn’t so much be a “first to reach intelligence” as there would be a very slow, gradual climb upon the piling carcasses of yesteryear’s more primitive “firsts”. War was not invented by the devil in solitude, auto-militarization is triggered by an arms race. We might hope the affirmative forces of life will propagate faster than they exterminate, but in truth, the faster they propagate, the more certain is their extinction, like in the ancient stories of parricidal Gods.

    Erebus Reply:

    I would say that war is probably not a social cosmological constant, but the fact of the matter is that we can’t assign probabilities to anything at all. The only thing we can state with a reasonable amount of certainty is this: Any ETI that we are able to detect with the crude means available to us is very likely to be unimaginably advanced from a technological standpoint. These supercivilizations are likely able to manipulate energy and matter in ways which are entirely beyond our ken. Even in the very darkest forest, under Gnon’s bloodshot gaze, tigers don’t seek to destroy worms.

    WaitButWhy drew an interesting analogy: “when Pizarro made his way into Peru, did he stop for a while at an anthill to try to communicate? Was he magnanimous, trying to help the ants in the anthill? Did he become hostile and slow his original mission down in order to smash the anthill apart? Or was the anthill of complete and utter and eternal irrelevance to Pizarro? That might be our situation here.”

    I don’t think that speculation, or trying to formalize axioms, is useful at this stage. “The Technological Explosion” axiom cannot be taken for granted, and the “civilization always grows” axiom is difficult to take seriously, given that a human retreat into simulated worlds looks a lot more likely than the physical colonization of the galaxy, and given that population growth in advanced human societies is negative already. (What we really need is a techno-transcendentalist religion.)
    …We can’t even assume that life starts basic. The nearest race of ETIs could be a crystalline Boltzmann brain hivemind, for all we know. Or it could be an organically networked race of nematode-like creatures which can only communicate slowly, over century-long timelines. Would such creatures necessarily be interested in war? Would even the most bloodthirsty or cautious humans seek to destroy them upon first contact? I am not certain that we can answer those questions in the affirmative.

    In my opinion, there is only one thing we can learn from Fermi’s Paradox and the Great Filter hypothesis: There exists a group of people who don’t understand that we are really terrible at looking for life in outer space, and who are capable of extrapolating wildly from very poor and limited evidence.

    …Which is not to say that they’re wrong. For all I know, they could be right. But to use the words “paradox” and “filter” is plain hubris, and to pretend that these are super-serious and important thought-experiments is plain bullshit. (They may be important and serious a few centuries from now, optimistically, if we manage to improve the hell out of our tools and generate much more high-quality data.)

    Dark Psy-Ops Reply:

    An advanced ETI is going to be fully aware of the goings-on in its locale (intelligence collects information as a rule). A tiger doesn’t study worms; humans do. The ETI may well view a species like humans as worms, but I doubt it’ll be so blasé with regard to the machines, which pose a singular threat as agents capable of accelerating self-modification (and extreme destructiveness). Are you willing to bet against singularity tech being constructed on Earth in the next 500 yrs? Or 5000 yrs? Why would an alien predator take that chance? In fact, we can be certain the Great Filter is real because if it wasn’t, there’d be a super-civ out there, and we’d be dead by now, 100% guaranteed.

    The nematode would be harmless if we could be certain it would never achieve technological lift-off with any serious effectiveness, but then, as you say, our ‘probabilities’ in these matters are not trustworthy, so it’s definitely logical to exterminate the nematode before it invents a method to speed up communications and achieves accumulating technical competence.

    As for a crystalline Boltzmann Brain I’m pretty sure even that will need to grow and won’t just pop into existence fully formed…

    A tiger might not notice worms, but it’s sure going to be on the look-out for other freaking tigers, or tiger cubs, and when it spots them, it will wipe them out, and if it doesn’t, it’s a fool, and will be killed in turn. Welcome to the jungle.

    Dark Psy-Ops Reply:

    It’ll be difficult to kill an intriguing, ambiguous Ocean Sentience if we ever come across one…

    Dark Psy-Ops Reply:

    “I don’t think that speculation, or trying to formalize axioms, is useful at this stage. “The Technological Explosion” axiom cannot be taken for granted, and the “civilization always grows” axiom is difficult to take seriously, given that a human retreat into simulated worlds looks a lot more likely than the physical colonization of the galaxy.”

    A retreat into simulated worlds requires the axioms of technological explosion and growth (in computation power). Full upload into virtual reality will take enormous energy, and whatever simulations are ‘run’ will depend on whoever controls the nodes. Military and game industries will be the first adopters, but eventually we’ll be seeing entire environments built and maintained by programmers employed by billionaires to create DisneyLand 3000. There’ll probably be virtual estates of ghastly inequality. And imagine the politics. People won’t be able to afford Basically Just Like Heaven Sim and will still be living in old versions of This Is Pretty Much Heaven Sim. They’ll say, “how is it fair only a few can afford Actually Heaven Sim? And what’s more, they have the power to hack our more modest levels of virtual living standards with simulated apocalypse whenever they desire, and if it wasn’t for our alliance with Google’s International Community of Heaven Sim they’d have done it by now for sure…”

    And then consider #GG, the Cathedral will attempt to ban all White Male CisHeteroPatriarchy Hate Simulations. The horror never ends.

    Erebus Reply:

    We don’t know whether or not supercivilizations exist. We have not yet conducted a serious and exhaustive search for evidence of their existence, so we have no basis for the assumption that they don’t exist.

    If they exist in our corner of the universe, we can’t assume that they would have exterminated us by now. What if, for the sake of argument, they prefer to keep us in a zoo — or manipulate our perception of the universe, so that we think we’re alone? (The “zoo” and “planetarium” hypotheses are both clever solutions to stupid questions.) What if they know more about the nature of reality than we do, and for some cryptic reason decide that it would be better to keep us alive? What if they have weapons ready under the cloud cover of Venus, are observing our progress, and are simply waiting to see if we become a meaningful threat? What if we’re living in a simulation run by a race which has already colonized the universe?

    Furthermore, we don’t know whether Earthlike civilizations exist. In fact, we have no way of detecting such civilizations at any meaningful distance. Thus we can’t assume that they’re being exterminated or eliminated by an outside force. In fact, we can assume nothing. We don’t know whether extraterrestrial life is rare or downright common.

    Aside: Your argument would be stronger if the universe weren’t such a young place. It is hypothesized that galaxies like ours will still be around for 1-2 trillion years, if not longer than that. Our galaxy is 13 billion years old. This means that only 1% of its “lifespan” is spent. It is not unthinkable that the first supercivilizations arose only quite recently, in cosmic terms. Perhaps they simply haven’t had the time to colonize the universe in ways which would be obvious to us.

    …Or perhaps they’ve already colonized the length and breadth of it, and, in their unfathomable complexity, are somehow eluding us. The stars look like they’re burning off their energy to no useful end, the nanobots don’t seem to be incoming, and we haven’t yet been able to spot any supercivilizations, but we shouldn’t jump to any conclusions just yet. There’s a hell of a lot we don’t know, especially when it comes to creatures to whom our physical laws might mean nothing.

    As for axioms:
    1. We have very limited means for detecting ETIs. We can only hope to detect stable supercivilizations which are not breaking any of our physical laws, which are not travelling or expanding at lightspeed, and which have established themselves in our little corner of the galaxy.
    2. It follows that we don’t know whether or not ETIs are common.
    3. We don’t know whether ETIs are subject to our laws of physics. (Could be that they hacked this simulation a long time ago.)
    4. Given the above, we cannot state that there is a paradox or a filter. We don’t have the data. Not by half.
    5. We don’t know how ETIs would behave upon contact with humanity. They might exterminate us, turn us into computronium, observe and manipulate us, or even uplift us. It is impossible to say.
    6. The fact that we still exist as a civilization would tend to imply — if only very slightly — that we would not be destroyed upon first contact.
    7. We cannot know, at this stage, whether or not we are the first technological civilization in our galaxy.

    My point is: Conjecture without data is a waste of time. “The Great Filter” has no lessons to teach us besides “avoid existential risks” and “life might be very rare” — so it’s a pointless thought experiment. Fermi’s paradox is, strictly speaking, no paradox. It’ll become a paradox if we conduct an exhaustive, effective, and thorough search of our little light corner, and if no ETIs are detected. Until that time….

    Lucian of Samosata Reply:


    The Paradox Paradox: “Most things that are called paradoxes actually aren’t.”

    Posted on October 1st, 2015 at 6:08 pm
  • The Dark Forest | Reaction Times Says:

    […] Source: Outside In […]

    Posted on October 1st, 2015 at 6:13 pm
  • SanguineEmpiricist Says:

    I bought the first book you recommended, have to get home and read it, fuckkkk. Chapter one was a little slow.


    Posted on October 2nd, 2015 at 1:08 am
  • Alrenous Says:

    I instantly think it would make for a great sequel to Master of Orion 2.


    Exfernal Reply:

    That made me curious about the book, whether there is any similarity to the MoO2 universe. MoO3 had terrible mechanics, but the alien civ designs were interesting as well.


    Jefferson Reply:

    Wait…there was a plot to MoO2?


    Posted on October 2nd, 2015 at 1:25 am
  • outsider Says:

    Maybe quantum or singularity hypercomputers can give civilizations the illusion of unlimited resources, but some replicating robot is always likely to try to fill the universe with copies of itself.


    Lucian of Samosata Reply:

    “some replicating robot is always likely to try to fill the universe with copies of itself.”



    Posted on October 2nd, 2015 at 6:47 am
  • This Week in Reaction (2015/10/04) | The Reactivity Place Says:

    […] Over at Nick Land’s, the mainstream media engages impressive Mental Gymnastics to admit to some racial differences too obvious to ignore, while ignoring others too obvious to ignore. Also an illustrative analogue in The Dark Forest. […]

    Posted on October 6th, 2015 at 5:19 pm
  • quation Says:

    Basically what has been said is that we know almost nothing about ETI. But that doesn’t mean we shouldn’t be suspicious.


    Posted on April 21st, 2016 at 1:37 pm
