Scrap note #5

Jim wonders whether AI is still progressing:

AI is a hard problem, and even if we had a healthy society, we might still be stuck. That buildings are not getting taller, and that fabs are not getting cheaper and are no longer making smaller and smaller devices, is social decay. That we are stuck on AI is more that it is high-hanging fruit.

Do we need a theory of consciousness to close the deal? (Alrenous has a long-standing commitment to this topic — see the comments.)

FWIW, Outside in is strongly emergentist on the question: doing AI and understanding AI might not be tightly — or even positively — related. (Catallaxy and AI are not finally distinguishable.) Of course, that makes the relevance of social decay even more critical.

January 30, 2014 · admin
FILED UNDER: Uncategorized

17 Responses to this entry

  • BLDR Says:

    Look over Robert Hecht-Nielsen’s “Confabulation Theory” — in particular the confabulation equation (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.9224&rep=rep1&type=pdf), which he posits is a major discovery that debunks the “Bayesian religion” by providing a scalable model of cognition in which the parallel processing elements perform functions similar to the brain’s thalamocortical modules. Among other things, he claims that this is the holy grail of artificial modeling of natural intelligence — that confabulation theory captures, in a scalable algorithm, the essence of learning, thought, and behavior. He is, in essence, claiming to have achieved strong AI (http://en.wikipedia.org/wiki/Strong_AI).
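
    For anyone who doesn’t want to wade through the paper: as I read it, the confabulation equation says that, given a set of assumed facts, you select the conclusion maximizing the product of the conditional probabilities of each fact given that conclusion (the “cogency”), with no prior over conclusions; that is exactly where it parts ways with Bayesian posterior maximization. A minimal Python sketch, with every symbol and probability invented purely for illustration:

        # Toy rendering of the confabulation equation: given assumed facts,
        # choose the conclusion maximizing the product of p(fact | conclusion).
        COND = {
            ("wet_streets", "rain"): 0.9,
            ("umbrellas",   "rain"): 0.8,
            ("wet_streets", "street_cleaning"): 0.7,
            ("umbrellas",   "street_cleaning"): 0.05,
        }
        CONCLUSIONS = ["rain", "street_cleaning"]
        PRIOR = {"rain": 0.3, "street_cleaning": 0.1}  # used only by the Bayesian contrast

        def cogency(facts, conclusion):
            """Product of p(fact | conclusion) -- Hecht-Nielsen's 'cogency'."""
            p = 1.0
            for fact in facts:
                p *= COND.get((fact, conclusion), 1e-6)
            return p

        def confabulate(facts, conclusions):
            """Confabulation: maximize cogency, ignoring any prior over conclusions."""
            return max(conclusions, key=lambda c: cogency(facts, c))

        def bayes_map(facts, conclusions):
            """Bayesian contrast: weight the same likelihood by a prior."""
            return max(conclusions, key=lambda c: cogency(facts, c) * PRIOR[c])

        facts = ["wet_streets", "umbrellas"]
        print(confabulate(facts, CONCLUSIONS))  # rain
        print(bayes_map(facts, CONCLUSIONS))    # rain here; the two diverge when priors are skewed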

    It is, of course, tempting to dismiss his extreme claims as some sort of mental aberration — perhaps resulting from his having hit the jackpot with the sale of his company for, by some accounts, between $3B and $4B to one of the most prominent credit rating agencies in the world.

    On the other hand, he did sell his company for between $3B and $4B to one of the most prominent credit rating agencies in the world.

    Moreover, if we give the first of Clarke’s Laws any credence (“When a distinguished but elderly scientist states that something is possible, he is almost certainly right”), RHN’s age and the fact that he is commenting on his own specialization should be given some weight.

    With this in mind, I would ask you to review the linked presentation, made by RHN at Sandia in 2006: http://www.cs.sandia.gov/CSRI/Workshops/2006/HPC_WPL_workshop/Presentations/08-Hecht-Nielsen-Neurocomputing.pdf — which I located at Sandia’s website, and which I recommend committing to memory lest it disappear down the memory hole. Note that he proposes an “Extraction System Organization” with a budget rising to $300B/year by 2015.

    In particular, I found this item interesting:

    Collectors and Analysts have no need to know how extraction system works (this knowledge should be highly restricted) – users need only know extraction system’s capabilities and how to use it.

    Posted on January 30th, 2014 at 3:26 am
  • spandrell Says:

    Reminds me of the quote: “Every time I fire a linguist, the performance of the speech recognizer goes up”.

    Yet natural language processing has been stale for years, and machine translation just doesn’t work properly. You can only get so far without proper understanding.

    admin Reply:

    I’m more persuaded by your first paragraph than your second.

    Contemplationist Reply:

    Indeed, the first contradicts the second.

    Lesser Bull Reply:

    Hence the word ‘yet’.

    spandrell Reply:

    The day you stop asking your kids to translate and consistently rely instead on Google translate, the first paragraph will be true. Alas…

    Posted on January 30th, 2014 at 4:31 am
  • nyan_sandwich Says:

    You are of course correct that AI is doable without understanding AI; after all, evolution did it, and it doesn’t understand anything. However, I don’t believe that the *singularity* is possible without understanding intelligence – the singularity is what happens when intelligence becomes reflective and turns to the project of improving intelligence. Of course, it doesn’t have to be *humans* who do the understanding.

    I’ve thought before, but somehow not written, that there are two routes to the singularity: Directly through Reflective Artificial Intelligence, and indirectly through Reflective Social Technology.

    Social technology is decaying because democracy and the decline of the church mean that no one is in a position to understand it and do well-founded social engineering. However, it is possible to imagine a society better positioned to do proper technical social engineering. It is further possible for this society to get good enough at engineering the institutions that do its social engineering to hit some kind of feedback dynamic. Obviously it wouldn’t hit full silicon speed without AI, so it would have to do that eventually, but a social-tech singularity would be a pretty cool intermediate stop, especially if Friendly (I think it would be).

    nyan_sandwich Reply:

    >especially if Friendly (I think it would be).

    Optimist fail. Most things are unFriendly. A civilization undergoing order-singularity could very well turn out badly for us.

    Posted on January 30th, 2014 at 4:51 am
  • Antisthenean Says:

    When the Singularity happens, how will we know? Surely an entity that transcends the intellect of its creators would be more than adept at concealing itself. Frankenstein’s monster was better at concealing its movements, and made better entrances, than its creator.

    I’m a Kurzweilian in terms of goals, but Hubert Dreyfus always seemed to offer the more sober and better-grounded analyses of the AI project(s).

    Posted on January 30th, 2014 at 7:09 am
  • Igitur Says:

    I understand catallaxy to mean simply “emergent economic order”. In this sense, it’s only “intelligent” insofar as there is some rationale for how this emergent order actually works for the best — and that is a project that came close to a conclusion with Debreu, Hahn, and Arrow (i.e., proofs of uniqueness in very general, nonmetric contexts) but fizzled out.

    Now, there are two kinds of Austrian-style protests to this. The first is that neoclassical optimality is not synonymous with catallaxy itself. Here I agree: emergent order means just that. Accepting it is a matter of engagement with reality — there is really only so much that “engineering thinking” can do to improve these outcomes (although institutional architecture, including actual architecture, urban planning, and so forth, just might). Maybe volatility can be controlled by better graph-theoretical models of counterparty-risk dependencies.
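
    To make that last suggestion concrete: the crudest graph-theoretical model of counterparty risk is a threshold default cascade on a directed exposure graph. The sketch below is a toy under invented assumptions (the banks, exposures, and capital buffers are all made up), not any of the serious models in the literature:

        # Toy default cascade: edge (creditor, debtor, amount) means the
        # creditor is owed `amount` by the debtor, so the debtor's default
        # hits the creditor's capital by that amount.
        exposures = [("A", "B", 40), ("B", "C", 60), ("C", "A", 10), ("A", "C", 30)]
        capital = {"A": 50, "B": 50, "C": 20}  # invented capital buffers

        def cascade(initially_failed):
            """Iterate to a fixed point: a bank fails once its losses from
            failed debtors reach its capital buffer."""
            failed = set(initially_failed)
            while True:
                losses = {bank: 0.0 for bank in capital}
                for creditor, debtor, amount in exposures:
                    if debtor in failed:
                        losses[creditor] += amount
                newly = {b for b in capital
                         if b not in failed and losses[b] >= capital[b]}
                if not newly:
                    return failed
                failed |= newly

        # C alone sinks B (60 >= 50); B's failure then adds 40 to A's 30, sinking A.
        print(sorted(cascade({"C"})))  # ['A', 'B', 'C']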

    On AI, I like Jim’s article, and I’d already sent it to some people on Facebook. But it’s not because I agree exactly; he’s grappling with the problem of consciousness, not with AI.

    I should start a blog already. I’m tracking too many threads over posts here.

    Posted on January 30th, 2014 at 12:18 pm
  • Bryce Laliberte Says:

    @Nyan_Sandwich

    Why does intelligence need to know how intelligence works in order to iteratively improve it? As you said, evolution managed AI without understanding anything, and we know how to model evolution.

    An AI could, by comparison, run simulations of competing cognitive models, testing for fitness against arbitrarily specified values.

    This supposes that iterative improvement of intelligence doesn’t face exponential difficulty, which would preclude the runaway intelligence augmentation that seems an essential part of the Singularity.
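
    A deliberately trivial rendering of that proposal, with a parameter vector standing in for a “cognitive model” and a target vector standing in for the arbitrarily specified values (every name and number here is a toy assumption):

        import random

        TARGET = [0.2, 0.9, 0.5, 0.1]  # the 'arbitrarily specified values'

        def fitness(model):
            """Higher is better: negative squared error against the target."""
            return -sum((m - t) ** 2 for m, t in zip(model, TARGET))

        def mutate(model, scale=0.05):
            """Blind variation: perturb parameters without 'understanding' them."""
            return [m + random.gauss(0, scale) for m in model]

        def evolve(generations=200, population=20):
            """Keep the fitter half, refill with mutants of survivors."""
            pool = [[random.random() for _ in TARGET] for _ in range(population)]
            for _ in range(generations):
                pool.sort(key=fitness, reverse=True)
                survivors = pool[: population // 2]
                pool = survivors + [mutate(random.choice(survivors)) for _ in survivors]
            return max(pool, key=fitness)

        print(fitness(evolve()))  # climbs toward 0 without ever modeling *why* anything works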

    nyan_sandwich Reply:

    >An AI could, by comparison, run simulations of competing cognitive models, testing for fitness against arbitrarily specified values.

    I have no solid technical argument, but my intuition is thus:

    As an engineer, the difference between brute force optimization and calculated leaps is *huge*. So first of all, I would expect the takeoff to be a few orders of magnitude faster once the thing understands intelligence.
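
    The gap is easy to dramatize with a throwaway comparison (all numbers invented): blind sampling versus gradient steps on the same toy objective, counting evaluations until each gets within tolerance of the minimum.

        import random

        def f(x):
            return (x - 3.0) ** 2  # toy objective; minimum at x = 3

        def brute_force(tol=1e-3, lo=-10.0, hi=10.0):
            """Blind sampling: cost blows up as the required precision tightens."""
            evals = 0
            while True:
                evals += 1
                if f(random.uniform(lo, hi)) < tol:
                    return evals

        def calculated(tol=1e-3, x=0.0, lr=0.3):
            """Gradient steps, i.e. exploiting the objective's structure: f'(x) = 2(x - 3)."""
            evals = 0
            while f(x) >= tol:
                x -= lr * 2.0 * (x - 3.0)
                evals += 1
            return evals

        print(brute_force())  # typically a few hundred evaluations
        print(calculated())   # about five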

    Second, intelligence understands stuff, that’s what it does, so the singularity is probably going to be reflective whether it has to or not (though this does not support the reflectivity -> singularity link I posited).

    Third, “understanding” means only that you are able to predict the subtle details that defy earlier observations. I think that there might be enough subtleties in intelligence and software design that brute forcing it will only get partial test coverage, and our hero will end up shooting itself in the foot.

    That said, I think a takeoff could happen and get way out of our reach even without really wise intelligence enhancement:

    I also don’t think it would actually take much intelligence enhancement to eclipse us very quickly. Even human-level intelligence is pretty good, and given a) infinite conscientiousness, b) scalable computing power, and c) *no coordination problems*, something could take over the world very quickly.

    Consider that the story of human history is a story of huge amounts of computational power loaded with very intelligent software going to waste because it’s not turned whole-hog to the project of civilization. Instead we have politics and the humanities and shit. An AI wouldn’t have to deal with that.

    >face exponential difficulty

    The evidence in evolution suggests that once there was proper selection pressure to develop intelligence, it happened very quickly with no sign of diminishing returns.

    Posted on January 30th, 2014 at 12:26 pm
  • Different T Says:

    What is the “singularity?”

    Is it the incoherent fantasy of a determinist? If not, what is it? The “hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence?”

    Will the first realization of such a “singularity” be that its existence has been determined? Will the second realization be that the first realization has been determined? Will the third realization be that realizing the first realization has been determined has itself been determined? And so on…

    What about the first person to look through the singularity’s “window”? Will he discover that his action (looking into the “window”) has been determined? Will his next discovery be that his first discovery has been determined? And so on… Will the person discover that the “window” is a “hall of mirrors”? Will he discover that this new discovery has been determined?

    Is the “singularity” a mental disease that is highly infectious to the high-IQ population? Does it cull and/or make “useful” the high-IQ population?

    Antisthenean Reply:

    I’m not a determinist, and the singularity makes perfect sense to me, so I don’t know what you’re rambling about.

    Different T Reply:

    Is the “singularity” the quest for ever more predictive and explanatory modeling? If not, what is it?

    If it is, and you’re not a determinist, how does it make “perfect sense” to you?

    Antisthenean Reply:

    Since you’re apparently incapable of doing your own research, here’s La Wik on the subject:

    “The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature.”

    So it actually has fuck-all to do with metaphysical determinism.

    Posted on February 1st, 2014 at 10:08 pm
  • Different T Says:

    That was the quoted definition from the original comment.

    Again, does that “hypothetical moment” regard AI utilizing ever more predictive and explanatory modeling?

    >So it actually has fuck-all to do with metaphysical determinism.

    Incorrect.

    Do you bow to Robo-God? Has it been determined?

    Posted on February 2nd, 2014 at 2:15 pm
