Quote notes (#65)

Derbyshire on Brynjolfsson and McAfee’s The Second Machine Age:

It’s all happening very fast. The field of Artificial Intelligence was dominated for decades by Moravec’s Paradox: Tasks that are very difficult for human beings, such as playing grandmaster-level chess, are fairly easy to get computers to do, while tasks any two-year-old can accomplish, such as distinguishing between a cat and a dog, are ferociously difficult to computerize.

That’s beginning to look quaint. The authors tell us about some robotics researchers working on SLAM — simultaneous localization and mapping. That’s the mental work of knowing where you are in an environment and where other things are in relation to you. It’s the kind of thing the human brain does well, with very little conscious thought, but which is hard to get machines to do.

In 2008 the researchers were close to despair. A review of the topic that year described SLAM as “one of the fundamental challenges of robotics … [but it] seems that almost all the current approaches cannot perform consistent maps for large areas. …” Three years later, thanks to the development work behind the Xbox Kinect accessory, SLAM was a solved problem.

There has been similarly fast progress in other problems at the hard end of Moravec’s Paradox — natural language processing, face recognition. As with TurboTax, once the code is written, you can stamp out a million copies for essentially zero cost per copy; and also as with TurboTax, each new application will wipe out thousands of jobs, decoupling work done from the human workers once needed to do it.

What will a thoroughly decoupled society look like? …

ADDED: How about we take this to a whole new level?

ADDED: Further confused musings among the jury.

ADDED: An impressive robotics and employment round-up.

March 11, 2014 · admin · 22 Comments
FILED UNDER: Technology

22 Responses to this entry

  • Hurlock Says:

    Meh, I am always sceptical towards “oh noes ze machines are going to take all ze jobs!!1!!” hysterias. Not that there is no truth to them, but I don’t think they are such a huge concern. The explosion in human population was due to advancing technologies; in a lot of sectors, technological advances actually created more jobs. Entire industries have been wiped out in the past by new technologies. The trade unions were loud, but (thankfully) everyone ignored them. People adapted and specialized in different sectors. Humans always adapt. We are like cockroaches. We always survive. Those who cannot adapt to the new technological reality will simply die out. Nothing new here. It has been happening on this planet since millennia before the first stone-throwing monkey. And judging by the past, new technologies are likely to make life easier for everyone, not harder. Even for those who initially seem threatened by them.

    I am more worried about these hysterias actually getting more people worried. In the long run it doesn’t matter, of course, but if things get hysterical enough, a lot of imbeciles may actually try to stop or even reverse the process. And in western democracies the imbeciles are the vast voting mass…
    We already have a huge number of idiots who preach how evil capitalism and advanced technologies are, hiding the fact that they are simply too dumb and lazy to try harder or adapt behind “save the polar bears” slogans. If these fuckers get crazy enough, I can see governments legislating new technologies out of existence just to satisfy the mob. This would of course be suicidal, but not a surprising forecast considering how we are currently moving…

    How much time before Silicon Valley has to move to Shanghai?

    admin Reply:

    I share the skepticism. It’s nevertheless worth raising some of these questions. Is there no imaginable threshold at which robots shift from substituting for human labor to comprehensive substitution? If such a possibility is declared (dogmatically) to be simply impossible, then such a dismissal itself amounts to a very strong (negative) claim about the prospects of machine intelligence development.

    Robin Hanson’s ‘Ems’ need to be brought into the discussion at some point. Where RH is evasive, though, is in the treatment of mass-produced human emulations as proletarians — i.e. freely contracting laborers — from the start, rather than attending to the social catastrophe of their transition from tacit formal slavery into labor market agents. (… but this is just a taster for a more penetrating exploration of the topic.)

    Hurlock Reply:

    Oh, definitely. Complete substitution is a very, very real possibility. Saying “we are far from there yet” is obviously unsatisfactory and somewhat begs the question. Comprehensive substitution is (to me) pretty certain to happen (at some point).
    You could say that humanity is effectively inventing itself out of existence. Technological advancement is how we improve the quality of our lives, but if at the end of it lies the extinction of the species… What is technology, really? A means to self-improvement, or a means to self-destruction? The answer is not that obvious… but I digress.
    The thing is, the moment machines can comprehensively emulate humans in every aspect, losing your middle-class job will be the last of your worries…

    I think Hanson is evasive on purpose. That is a very dangerous subject, and it is doubtful that a satisfactory answer can be given. I probably haven’t thought about it in depth enough, but it certainly doesn’t look pretty. I am looking forward to your treatment of it.

    Alrenous Reply:

    Complete substitution by robots implies post-scarcity. It’s a good thing, not a bad one. Jobs are costs, not goods.

    The intermediate stages might be kind of rough, though. First, our wealth distribution system is not well-generalized enough to deal with post-scarcity, and so as we approach it, our distribution system will become more and more deranged.

    Similarly, if it becomes possible to make low-skill workers obsolete (and I think all workers will rapidly become obsolete at that point), there will be a 20-40 year gap where some are obsolete and some aren’t, and that will produce incredible wealth gradients.

    RiverC Reply:

    ‘human survival’ is not what is in question here, methinks

    Posted on March 11th, 2014 at 9:10 am
  • Mike Says:

    It seems to me that the main effect of automation and mechanisation over the past few centuries has been to shrink the workweek. There’s a reason why “full time” work in the west is ~35-40 hours a week rather than the 60-70 hours/wk it once was.

    admin Reply:

    If true (and I’m not doubting it), this suggests that the workers have been winning the class war up to this point, appropriating the benefits of capital-automation in the form of increased leisure time. (The Autonomists put it the other way around, arguing that resistance to work drives automation, which is intriguing, but also perverse in a typical Marxist way.)

    Saddam Hussein's Whirling Aluminium Tubes Reply:

    Over the past few centuries, perhaps. But it’s important to keep in mind that technology (and capitalism) gave us the 60 to 70 hour work week; medieval peasants weren’t working anywhere near that many hours, at least in the temperate parts of Western Europe where winter exists.

    http://lmgtfy.com/?q=hours+worked+by+medieval+peasant

    And uh… people are still working 70 hours a week on our behalf. Chinese people. We just give them little pieces of paper backed by the might of the US military so we don’t have to work those hours ourselves.

    The idea that we’re generating increased leisure time needs more examination. Some people have increased leisure time, like those of us who are collecting disability for being too fat to work. Others are working about the same hours they did decades ago (despite increased automation and rising productivity) and still others are working far more than they used to.

    Peter A. Taylor Reply:

    I want to draw your attention to Robert Frank’s book, _Choosing the Right Pond: Human Behavior and the Quest for Status_. If what I really want is not a specific, absolute amount of material goods, but status relative to my neighbors and competitors, then increasing absolute productivity is almost irrelevant. Regardless of the technology level, if my neighbor works a 40-hr week, I may have to work a 45-hr week in order to have enough more wealth than he does to impress the cute redhead down the street. Technology doesn’t make all potential mates equally attractive.

    Anything I do that increases my status relative to you has the negative externality of decreasing your status relative to me. Hockey players will vote collectively to force themselves to wear helmets, but if they are allowed to choose individually, they each will want the slight relative edge they get from not wearing one.

    RiverC Reply:

    There is also the satisfaction threshold… they found that most people did not respond to $1,000+ ‘incentives’ for workplace behavior once they were making enough money. If they had enough, they would just shrug and respond that they would rather spend their extra time doing something they like than get extra money.

    There are two thoughts about this. The first is that you underpay people so you can control them via money (this backfires in a reasonably free market as those willing to pay enough will probably retain more workers, etc.) but the second is that once you pay people enough you can get them to do things (provided they like them enough) without paying them.

    The latter idea used to simply be known as ‘salary’.

    Posted on March 11th, 2014 at 11:14 am
  • spandrell Says:

    The problem with all these techs is that they’re nowhere near human precision. Ever tried using face recognition to categorize your photos? Dictation software? Automatic translation? Kinect?

    It’s quite amazing how much each of these has advanced over the last 10 years, and they work quite well, but not nearly well enough. They’re 90% good, but the remaining 10% drives you crazy. You can’t rely on them. Automatic translation is no match for a human translator, and Kinect lags so much that it sucks as a gaming accessory, which is why it remains unpopular.

    So again, call me when I can finally get my flying car. Until then it’s all hype.

    admin Reply:

    It’s not about giving you a flying car, but about replacing your doctor.

    spandrell Reply:

    My doctor has a state-protected guild. He’ll be OK. We’ve had 90%-accuracy diagnostic algorithms for a while; they’ve not gone mainstream. 100% would go mainstream in spite of doctors’ protests, but 100% hasn’t happened, and there’s no evidence it ever will.

    Posted on March 11th, 2014 at 12:56 pm
  • RiverC Says:

    One also has to be slightly skeptical regarding the quality of these solutions: especially now, there is impetus to show progress (and progress does seem to have been made) in these fields. The trend that concerns me here is not automation but replacement, i.e. machines not as assistants and slaves but as Blade Runner-style ‘better humans’ that force non-machines and non-controllers to the margins. In all structures there are agents who make it their interest to replace skilled, irreplaceable humans with interchangeable parts. A lot of the bad software and websites on the internet are the result of this trend, i.e. replacing native English speakers with software or design training by foreigners with none, who can be contracted and monitored over oDesk and discarded with no hard feelings if no longer necessary to the operation (whatever it may be).

    Forbidding certain technologies is a viable and reasonable thing, but we don’t presently have the proper structure to do it sanely.

    Lesser Bull Reply:

    Can’t predict the future, but one possibility is that the more machines have the flexibility and holistic grasp of humans, the more they will also have the complex unpredictability and mixed motivations of humans. You don’t even have to assume consciousness for that to be so.

    RiverC Reply:

    Perception occurs outside the realm of the visible; this is a problem that ‘machine learning’ alone cannot tackle.

    Posted on March 11th, 2014 at 2:14 pm
  • John Says:

    Are you aware that the Artilect War is coming to the masses via a major motion picture starring Johnny Depp?

    http://www.imdb.com/title/tt2209764/

    Regarding your ADDED link, that is the real endgame impact of Bitcoin. The currency is only round one. Ultimately everything is going into the Blockchain. Google got the ball rolling, but Bitcoin is where it gets legs. The Mind of God bootstrapping itself before our eyes.

    Posted on March 11th, 2014 at 4:52 pm
  • Lesser Bull Says:

    For all I know the Derb is right and fusion, I mean real AI, is just around the corner.

    But I have the advantage of having read him for years, and he’s been saying this for years.

    That said, I don’t think you need real AI to automate a number of intellectual tasks that humans now do directly. I don’t see any particular reason the efforts of lots of smart people can’t be systematized to take over a number of tasks (lots of legal work is pretty rote, for example) with maybe ongoing human input here and there.

    Posted on March 11th, 2014 at 6:07 pm
  • nyan_sandwich Says:

    What follows is a conversation I had with a wealthy friend of mine on this topic a few months ago. My position has since updated, but remains roughly the same: we want the most critical economic and military work to be done by machines while society restructures toward self-actualization-based flourishing.

    Him:

    … Singularity University … spent a lot of time discussing two key topics:

    1. The rise of artificial intelligence and the potential for a super AI to come to dominate society.

    2. The impact of artificial intelligence and automation on jobs. Highly skilled workers continue to grab a greater share of the economic pie, while in many cases the value of unskilled labour is so low as to make them not employable at a living wage.

    Expanding on that, in the past, as productivity gains were made, there was always some requirement to have unskilled labour, and as a result unskilled labour was able to capture a portion of productivity gains – the pool of unskilled labour was not unlimited, and therefore they had some bargaining power. If the minimum price you can pay for local unskilled labour is too high (minimum wages are too high), you will automate instead. There may be no value that those labourers can provide at a price that the market is willing to pay that is acceptable from a regulatory standpoint (if the minimum wage is higher than the clearing price for unskilled labour). In this case, you have to decide if, as a society, you simply pay off unskilled labour (give everyone welfare) or are indifferent to their plight. The option of indifference can take two forms: abandonment (see Camden from 2011 to early 2013) or extreme policing (see Camden in its current form).

    At some point, however, and this was the point that was being made at [SU], artificial intelligence starts to eliminate middle income and high skilled jobs as well. How does society maintain its stability in the face of huge swathes of the population becoming redundant?

    My point to you is the following: while you observe that most of the population is ignoring the concepts and issues that you think are most important, the wealthiest, smartest, and most powerful people in the world are not.

    Me:

    The question of what to do as everyone gradually (or rapidly) becomes economically obsolete is interesting. I think that at the point where everyone is obsolete, we’d want the machines to figure out what was best for human flourishing, because they’d be better at that sort of thing as well, conditional on being designed to actually care and do it right. There are a great many other things they could care about, though, and given the nature of moral philosophy, it would be a seriously difficult project to construct them to care about the right things, involving solving many major philosophy problems and reducing them to engineering. This is a hurdle that construction of killer indifferent AI would not have, so that part needs to get started early.

    It is encouraging to know that elites are interested in the problem. I don’t know if the *Friendliness* aspect is widely understood among such people, though. Everything I see indicates that yes, people are interested in the problem, but don’t understand the difficulty and importance of getting friendliness right. In that sense, it still seems worth making noise about that aspect of the problem. I could be wrong about this, though; I’m going on public information, and haven’t made a serious research effort.

    Him:

    If the elites design the machine (and this is highly likely), then you are far more likely to end up with a machine that is built for the benefit of the elite than for the benefit of everyone. Don’t bet on the wealthiest people in society being altruistic. Our current system is a bargain between the rich and the poor in which the rich agree to give up a considerable amount of income in return for not getting shot in the street or knifed in their sleep. They are purchasing security by keeping the poor happy. That has not always been the preferred method of purchasing security, and it may not be the preferred method in the future if the weaponry value of expensive technology increases versus the weaponry value of cheap technology. If the risk-adjusted cost of subjugation declines below the cost of placation, do not bet on the rich choosing placation over subjugation.

    Me:

    I suspect that *if* they get it right by their own standards (indirect normativity, friendliness, turn over details to AI singleton), on reflection the sentimental value of saving and uplifting everyone else to some level would outweigh the cost. We’re talking about a fraction between 1/2 and 10^-9 of the total resources available (the entire universe) in return for *saving and uplifting everyone*, depending how extensively you want to fund them. I expect there would be some point in that range where the utility cost would no longer outweigh the altruistic opportunity. Still, point taken that the indifference fraction could be much lower than anyone would admit now when the balance of power is different.

    Posted on March 11th, 2014 at 7:15 pm
  • SOBL Says:

    Decoupling won’t happen before an automated resort state. What happens when the elites don’t need the rest of us?

    Posted on March 11th, 2014 at 11:15 pm
  • Lesser Bull Says:

    Smart elites would keep a community of the rest of us around so they had someone to feel superior to. For that purpose, the more atavistic and non-progressive the better. Aha! We may have discovered NRx’s value proposition.

    Without such a community, elite cohesion and status competition would probably break down disastrously. Huxley thought a society of alphas wouldn’t work because no alpha would be willing to do beta and delta and gamma work. But the real reason is that an alpha by definition needs to feel himself on top of a heap. So there has to be a heap.

    Posted on March 11th, 2014 at 11:35 pm
  • Bryce Laliberte Says:

    People are machines, albeit of a different kind than those we use for our own purposes. This has been an age of machines since life began. That is the nature of Nature: forms propagating forms propagating forms… The AI singularity is but this theme taken to its (to our own understanding) logical extreme, intelligence ordering itself endlessly and progressively(!), taking control of the entire universe to its own ends. Nature is subtle, and works her will out of sight of prying eyes.

    Posted on March 12th, 2014 at 5:16 am
