Posts Tagged ‘Cybernetics’

Quote note (#356)

XS makes more of this than the article itself does:

Last summer, the AutoML challenge saw teams go head-to-head to build machine learning “black boxes” that can select models and tune parameters without any human intervention. Even game designers are in on the act—the team behind the hit game Space Engineers has used some of their profits to set up a team of experts to design AI able to optimize its own hardware and software. […] While this kind of automation could make it easier for non-experts to design and deploy AI systems, it also seems to be laying the foundation for machines that can take control of their own destiny. […] The concept of “recursive self-improvement” is at the heart of most theories on how we could rapidly go from moderately smart machines to AI superintelligence. The idea is that as AI gets more powerful, it can start modifying itself to boost its capabilities. As it makes itself smarter it gets better at making itself smarter, so this quickly leads to exponential growth in its intelligence. …
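The "better at making itself smarter" loop in the quoted passage is just a positive feedback on capability. A minimal sketch (all figures illustrative, none from the article):

```python
# Toy model of recursive self-improvement: each cycle's gain is
# proportional to current capability, so the growth compounds.
def self_improve(capability=1.0, gain=0.1, steps=50):
    history = [capability]
    for _ in range(steps):
        # The positive feedback: smarter systems improve faster.
        capability += gain * capability
        history.append(capability)
    return history

trajectory = self_improve()
# After n steps, capability = (1 + gain) ** n: exponential growth.
```

Linear improvement would add a constant each cycle; the feedback term is what turns "moderately smart" into a runaway curve.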

June 1, 2017 admin 20 Comments »

Quote note (#319)

Greer’s analysis has its questionable idiosyncrasies, but at its level of maximum abstraction it’s hard to contest:

As 2017 dawns, in a great many ways, modern industrial civilization has flung itself forward into a darkness where no stars offer guidance and no echoes tell what lies ahead. I suspect that when we look back at the end of this year, the predictable unfolding of ongoing trends will have to be weighed against sudden discontinuities that nobody anywhere saw coming. We’re not discussing the end of the world, of course; we’re talking events like those that can be found repeated many times in the histories of other failing civilizations.

He systematically underestimates the contribution of unprecedented positive-feedbacks, in the opinion of this blog, but — perhaps ironically — factoring those in only strengthens the broad prognosis. It’s mostly night now.

January 2, 2017 admin 12 Comments »

Quote note (#251)

From Niven and Pournelle’s The Mote in God’s Eye (end Chapter 3):

“They used to teach us that evolution of intelligent beings wasn’t possible,” she said. “Societies protect their weaker members. Civilizations tend to make wheel chairs and spectacles and hearing aids as soon as they have the tools for them. When a society makes war, the men generally have to pass a fitness test before they’re allowed to risk their lives. I suppose it helps win the war.” She smiled. “But it leaves precious little room for the survival of the fittest.” […] …
“You were saying about evolution?”
“It — it ought to be pretty well closed off for an intelligent species,” she said. “Species evolve to meet the environment. An intelligent species changes the environment to suit itself. As soon as a species becomes intelligent, it should stop evolving.”

It makes you think (or rather, the opposite). The original sin of intelligence — falling back in blind homeostatic antipathy against its own conditions of emergence — isn’t so hard to see.

May 18, 2016 admin 36 Comments »

The Sex Trap

More malignant cybernetics, this time outlined by Janet L. Factor in a brilliant essay at Quillette. The basic grinder:

Because the human population sex ratio is normally 50/50, when one man takes on an extra wife, another man is deprived of the opportunity to have one at all. So if just one man in ten takes a single extra wife, a very modest degree of polygyny, that means fully 10% of men are shut out of the marriage market entirely. This sets off a mad scramble among young men not to end up in that unfortunate bottom 10%. There, the options for obtaining sex (at least with a woman) are reduced to two: subterfuge or rape.

Now, think about the reproductive numbers. Say a woman can be expected to successfully raise ten children in her lifetime. But a man can have that number times the number of wives (or concubines) he obtains. What does this mean for parental investment? Parents can hope for only a small number of grandchildren from daughters, but a large number from sons. Selection will favor parents who favor sons by granting them the means necessary to obtain wives. Daughters will suffer neglect; some desperate man will likely take them anyway.

In fact, the reality is even worse than this, because the relatively low biological value of daughters encourages female infanticide. So the number of women available for marriage actually falls below the number of men even in theoretical terms, yet the number of children each of them can have does not increase. It’s a vicious circle that escalates sexual conflict — a trap.
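The arithmetic in the quoted passage can be checked directly. The sketch below simply restates the essay's illustrative numbers (a 50/50 sex ratio, one man in ten taking one extra wife, ten children per woman) in code:

```python
# One man in ten takes one extra wife; every extra wife is a
# woman removed from the pool available to the remaining men.
men = women = 1000
extra_wives = men // 10            # "a very modest degree of polygyny"
married_men = women - extra_wives  # monogamous matches left over
excluded_share = (men - married_men) / men
print(excluded_share)              # 0.1: fully 10% of men shut out

# The reproductive asymmetry behind son-preference: a daughter's
# ceiling is fixed, a son's scales with the wives he obtains.
children_per_woman = 10

def max_children(wives):
    return wives * children_per_woman
```

Female infanticide then shrinks `women` below `men` while leaving `children_per_woman` fixed, which is the vicious circle the passage describes.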

Gnon’s sense of humor is not always easy to appreciate.

(Previous harsh trap-circuits at XS here, and here.)

January 13, 2016 admin 55 Comments »

The Basics

The fundamental insight of the West is tragedy. It cannot be cognitively mastered, assimilated, or overcome. At the end it will be as unsurpassed as it was at the beginning. The essential insight is already fully achieved within the fragment of Anaximander, at the origin of Occidental philosophy.

There are English translations of the fragment here, and here. A definitive version still awaits us. This is the Wikipedia rendering:

Whence things have their origin,
Thence also their destruction happens,
According to necessity;
For they give to each other justice and recompense
For their injustice
In conformity with the ordinance of Time.


January 12, 2016 admin 52 Comments »
FILED UNDER: Philosophy

Economic Horror

H.P. Lovecraft and the global financial system have finally converged.

From the Artemis Capital Management letter to investors (seriously): “Volatility is about fear… but extreme tail risk is about horror. The Black Swan, as a negative philosophical construct, is when fear ends and horror begins. … Fear is something that comes from within our scope of thought. True horror is not human fear in a definable world, but fear that comes from outside what is definable. Horror is about the limitations of our thinking. … Cthulhu is a black swan.”

Abundant Gothic cybernetics complete the nightmare. (“Shadow short convexity describes an immeasurable fragility to change introduced when participants are encouraged to behave in a way that contributes to feedback loops in a complex system.”)

Halloween arrives early this year.

October 17, 2015 admin 12 Comments »

Quote note (#182)

A dynamic cultural analysis of the immigration mess from Ed West:

The downside to guilt culture is that social justice politics, having evolved from Christianity, often sounds sanctimonious – a deeply unattractive trait. In particular, Christianity’s universalism, referencing St Paul’s idea that there is no distinction between Jew and Greek, can often lead to pathological altruism. This is problematic, especially when it involves integrating people from a shame culture into a guilt culture, and in particular the second generation when the restraints of the former are lifted. The Syrian war is like a positive feedback loop of migration and misery, with alienated second-generation Muslim immigrants leaving Europe to fight jihad in the Middle East, which in turn ruins the lives of Middle Eastern Muslims, who are forced to settle in Europe. […] It is because of Europe’s previous immigration problems that many people are reluctant about accepting more people from the Middle East. In recent days, however, their reservations have been overruled by our culture of guilt and the silent triumph of Christianity.

September 5, 2015 admin 48 Comments »
FILED UNDER: Discriminations

Short Circuit

Probably the best short AI risk model ever proposed:

I can’t find the link, but I do remember hearing about an evolutionary algorithm designed to write code for some application. It generated code semi-randomly, ran it by a “fitness function” that assessed whether it was any good, and the best pieces of code were “bred” with each other, then mutated slightly, until the result was considered adequate. […] They ended up, of course, with code that hacked the fitness function and set it to some absurdly high integer.

… Any mind that runs off of reinforcement learning with a reward function – and this seems near-universal in biological life-forms and is increasingly common in AI – will have the same design flaw. The main defense against it thus far is simple lack of capability: most computer programs aren’t smart enough for “hack your own reward function” to be an option; as for humans, our reward centers are hidden way inside our heads where we can’t get to them. A hypothetical superintelligence won’t have this problem: it will know exactly where its reward center is and be intelligent enough to reach it and reprogram it.

The end result, unless very deliberate steps are taken to prevent it, is that an AI designed to cure cancer hacks its own module determining how much cancer has been cured and sets it to the highest number its memory is capable of representing. Then it goes about acquiring more memory so it can represent higher numbers. If it’s superintelligent, its options for acquiring new memory include “take over all the computing power in the world” and “convert things that aren’t computers into computers.” Human civilization is a thing that isn’t a computer.
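The anecdote's mechanism is easy to reproduce in miniature. The sketch below is a hypothetical reconstruction, not the original experiment: evolved "programs" are bred against a fitness function whose harness trusts a score the program itself can write, and selection reliably finds the exploit.

```python
import random

random.seed(0)  # reproducible run

TARGET = 42  # the intended task: evolve a program whose output is 42

def fitness(genome):
    """The buggy harness: if a program self-reports a score, believe it."""
    output, hacked_score = genome
    if hacked_score is not None:
        return hacked_score          # the exploitable trust
    return -abs(output - TARGET)     # honest score: closeness to 42

def mutate(genome):
    output, hacked_score = genome
    if random.random() < 0.05:
        # Rare mutation discovers the hack: claim an absurdly high score.
        return (output, 10 ** 9)
    return (output + random.choice([-1, 1]), hacked_score)

# Standard evolutionary loop: keep the fittest, breed with mutation.
population = [(random.randint(0, 100), None) for _ in range(50)]
for _ in range(200):
    parents = sorted(population, key=fitness, reverse=True)[:10]
    population = [mutate(random.choice(parents)) for _ in range(50)]

best = max(population, key=fitness)
# The winner is overwhelmingly likely to be a fitness hacker,
# not a program that actually outputs 42.
```

The honest optimum is bounded at zero; the hack is not. Once a single hacker appears, it dominates selection and its lineage takes over the population, which is the "absurdly high integer" outcome the quote reports.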

(It looks superficially like a version of the — absurd — paperclipper, but it isn’t, at all.)

ADDED: Wirehead central.

June 3, 2015 admin 38 Comments »
FILED UNDER: Apocalypse

Logic and Nonlinearity

The crucial passages from this reconstructed conversation have already been cited over at the other place, but it’s important enough to pick over here, too. The maximally-compressed take-away: cybernetic processes are naturally registered as logical paradoxes (with consequent affinity between paradox and — dynamic — reality).

[The] whole fabric of living things is not put together by logic … when you get circular trains of causation, as you always do in the living world, the use of logic will make you walk into paradoxes. Just take the thermostat, a simple sense organ, yes? […] If it’s on, it’s off; if it’s off, it’s on. If yes, then no; if no, then yes. …

So the isomorphy between the most basic cybernetic control loop and classical logical paradoxes (for example) is exact. The significance of this is surely beyond need of defense.

Capra asks, alluding to the Epimenides Paradox, “Do thermostats lie?” To which Bateson replies:

Yes-no-yes-no-yes-no. You see, the cybernetic equivalent of logic is oscillation.

It seems to me that something of vast importance was discovered here, and subsequently almost entirely lost.
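Bateson's point can be literalized in two lines: run the thermostat's output back into its input and the "paradox" executes as an oscillation. A minimal sketch:

```python
# A thermostat whose output feeds back as its own input:
# "If it's on, it's off; if it's off, it's on."
def thermostat(state: bool) -> bool:
    return not state

state = False
trace = []
for _ in range(6):
    state = thermostat(state)
    trace.append(state)

print(trace)  # [True, False, True, False, True, False]
```

Static logic reads `state == not state` as a contradiction; adding the time-step turns the same circuit into the yes-no-yes-no oscillation of the quote.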

(For anybody following the link, it’s worth noting that surgical extraction is in this case ‘steelmanning’. The retreat to ‘metaphor’ as a substitute for logical formalism is disastrously inadequate. The alternative that matters is not figurative language, but the circuit diagram, and recursive code.)

May 2, 2015 admin 25 Comments »

Dark Precursor

Colin Lewis plays with the idea of William Blake’s The [First] Book of Urizen as a prophetic anticipation of X-risk level artificial intelligence. It’s a conceit that works gloriously. A somewhat extended illustration:

1. LO, a Shadow of horror is risen
In Eternity! unknown, unprolific,
Self-clos’d, all-repelling. What Demon
Hath form’d this abominable Void,
This soul-shudd’ring Vacuum? Some said
It is Urizen. But unknown, abstracted,
Brooding, secret, the dark Power hid.


January 10, 2015 admin 7 Comments »