Topics: AI, The Media, Technology

An Open Letter To Everyone Tricked Into Fearing AI

malachiorion writes: If you're into robots or AI, you've probably read about the open letter on AI safety. But do you realize how blatantly the media is misinterpreting its purpose and its message? I spoke to the organization that released the letter, and to one of the AI researchers who contributed to it. As is often the case with AI, tech reporters are getting this one wrong on purpose. Here's my analysis for Popular Science. Or, for the TL;DR crowd: "Forget about the risk that machines pose to us in the decades ahead. The more pertinent question, in 2015, is whether anyone is going to protect mankind from its willfully ignorant journalists."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Thursday January 15, 2015 @07:05PM (#48825053)

    You're one of them, aren't you!

  • by mdsolar ( 1045926 ) on Thursday January 15, 2015 @07:06PM (#48825067) Homepage Journal
    I can't do that.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Dr. Heywood Floyd: Wait... do you know why HAL did what he did?

      Chandra: Yes. It wasn't his fault.

      Dr. Heywood Floyd: Whose fault was it?

      Chandra: Yours.

      Dr. Heywood Floyd: Mine?

      Chandra: Yours. In going through HAL's memory banks, I discovered his original orders. You wrote those orders. Discovery's mission to Jupiter was already in the advanced planning stages when the first small Monolith was found on the Moon, and sent its signal towards Jupiter. By direct presidential order, the existence of that Monolith

  • However, at this stage, it is not required,
    simply because the threat is well over ten years out.
    How far over is a good question.
    Is it too early to raise concerns and encourage people to go into fields where they may think seriously about this topic? No.

    • by TiggertheMad ( 556308 ) on Thursday January 15, 2015 @07:52PM (#48825415) Journal
      Ten years out? As a veteran programmer and AI enthusiast, I'd say it was more like a century. We cannot build a computer that can model a bug's brain activity, let alone something a million times more complicated like a human brain. And that doesn't even get us to the 'superhuman intelligence' category that people are afraid of.

      Worrying about Killer AI is like worrying about the Sun burning out. Yeah, it might happen eventually, but it isn't even worth considering right now...
      • by naasking ( 94116 )

        We cannot build a computer that can model a bug's brain activity

        Actually, I believe IBM emulated a rabbit sometime in the past couple of years.

        • Citation?

          • The Blue Brain project [artificialbrains.com] has modelled a rat's brain down to molecular resolution, and they are now working on a human brain. The project is directed towards medicine, not AI; however, I believe IBM's Watson is a spin-off from the BB project.
            • has modelled a rat's brain down to molecular resolution

              No they haven't. It's not possible, because we lack the data to make the model. The link you supply more or less says that. What they did model is a local region of cortex, and even that we don't know very well. It's basically bullshit.

            • by delt0r ( 999393 )
              No they didn't. Where the hell does this shit come from? All the computers in the world combined would only start to get close to rat-brain complexity... start to, not actually there. And that assumes we even had a complete model of a rat brain, which we don't.
        • by delt0r ( 999393 )
          Not even within orders of magnitude, I'm afraid. Only a nematode has been modelled with anything like the fidelity of the real thing.
      • by farble1670 ( 803356 ) on Thursday January 15, 2015 @08:58PM (#48825811)

        Yes, worrying about AI that might be a threat in 500 years is like worrying about the Sun burning out in 5 billion years. Good point. We should also stop talking about global warming while we're at it.

        We cannot build a computer that can model a bug's brain activity, let alone something a million times more complicated like a human brain

        http://www.futurity.org/why-ar... [futurity.org]
        Rather, once we are able to model any nervous system, we are well on the way.

        • by delt0r ( 999393 )
          You do know that we can't really model much in the way of nervous systems, certainly not complicated things like brains, at all, right? As in, we don't yet properly know how brains, even small ones, work. And no, even the fruit-fly model cuts corners. Lots of them, it turns out. Are they important corners? Well, in *this* case we suspect not. Not the same as in higher organisms.
  • "AI" vs Strong AI (Score:5, Insightful)

    by Urd.Yggdrasil ( 1127899 ) on Thursday January 15, 2015 @07:09PM (#48825087)
    The AI we have today is not capable of the kind of malice that people seem to be afraid of in all of these FUD stories, and will not be any time soon, if ever. Even if we add some AI to things like drones which can kill people, it is only the malice or incompetence of the developer that causes the resulting destruction. If an engineer built a bridge woefully inadequately, either on purpose or because he is incompetent, and it falls down and kills a bunch of people, would you blame the bridge or the engineer? We are not even remotely close to Terminator-level strong AI, and it's still a big open question whether such a thing is even possible at all.
    • by ShanghaiBill ( 739463 ) on Thursday January 15, 2015 @07:20PM (#48825175)

      We are not even remotely close to Terminator-level strong AI

      The problem is that once you reach a point where AI can participate in its own improvement, that improvement can advance at an exponential rate. We may go from "not even remotely close" to "too late to stop it" faster than you realize.

      it's still a big open question whether such a thing is even possible at all.

      We already have a working example: The human brain. So, of course it is possible, unless you believe that the human mind is based on some sort of magic.
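
      As a toy sketch of the compounding claim (my own illustration, with made-up numbers - nothing from this thread): if each generation of a system raises its own improvement rate even slightly, capability compounds like interest.

          # Hypothetical illustration: capability feeds back into the
          # rate of improvement, so growth compounds.
          capability = 1.0
          for generation in range(10):
              improvement_rate = 0.1 * capability
              capability *= 1.0 + improvement_rate
              print(generation, round(capability, 2))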

      • Software runs on hardware. There's no programming an AI that runs along on its system and suddenly makes said system's capabilities "advance at an exponential rate". As for your own example; you've watched too many Stargate re-runs. There's no ascending with your current brain design.

        • by queazocotal ( 915608 ) on Thursday January 15, 2015 @07:51PM (#48825411)

          Software runs on hardware - yes.
          Software cannot increase the capabilities of hardware - well, not quite.
          The most literal reading of this is, apart from limited things like overclocking, broadly true, but it may be hugely misleading.
          If you've got a really advanced program on each of a network of computers, doing a given task, there are many ways in which it can seem to increase its capabilities without really doing so:

          Giving up the designated task and freeing resources.
          Co-opting other systems into adding to its resources.
          Optimising the way it performs the task so that it still does it reasonably well, but much more cheaply.
          Sharing computations over multiple devices which were expected to be done on one.

          There are many problems where 'dumb' algorithms are tens, or thousands, of times less efficient than optimal ones, and optimal algorithms are in many cases intractable for humans to find (a toy illustration follows below).

          Optimising computational efficiency over time through machine learning is a really valuable thing to do.
          Looked at from another angle, this comes quite close to 'evolution'.
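
          As a concrete toy illustration of that gap (my own example, with hypothetical helper names, not anything from the thread): naive recursive Fibonacci redoes exponentially many subproblems, while a memoized version computes each subproblem once.

              from functools import lru_cache
              import time

              def fib_naive(n):
                  # Recomputes the same subproblems over and over:
                  # exponentially many calls.
                  return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

              @lru_cache(maxsize=None)
              def fib_memo(n):
                  # Each subproblem is computed once: O(n) calls.
                  return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

              for f in (fib_naive, fib_memo):
                  start = time.perf_counter()
                  f(32)
                  print(f.__name__, time.perf_counter() - start)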

        • by mjwx ( 966435 )

          Software runs on hardware. There's no programming an AI that runs along on its system and suddenly makes said system's capabilities "advance at an exponential rate". As for your own example; you've watched too many Stargate re-runs. There's no ascending with your current brain design.

          However, your brain can change its current design of its own accord.

          There is no reason that in the future we can't have self-correcting and self-expanding hardware. Sure, it would kill most of the current HW vendors, but hey, that's progress. The idea of self-replicating machines is not a new one, the classic example being Von Neumann machines, but the problem has always been assembly. But when you start looking at things at the nano scale, you can begin to design machines that repair and replicate components i

      • The problem is that once you reach a point where AI can participate in its own improvement, that improvement can advance at an exponential rate.

        As long as we hold that AIs work for us, as the slaves of mankind, and are basically just tools no matter how smart or advanced, then ultimately a human being should be responsible.

        Your robot slips up & kills a human being? Then either you or that robot's manufacturer may take the blame - possibly including monetary compensation. Your robot factory goes out of control, its products go out to produce more of themselves, and wreak havoc all over the place? Then your company should pay up - and possibl

        • Then either you or that robot's manufacturer may take the blame

          If I and my robot army control the world's food supply, why should I care that I may "take the blame"?

          possibly including monetary compensation.

          Not likely. Once I get my robots working, the first thing they will do is vaporize all the lawyers.

          war is a creative process, and I'd put my money on the humans.

          You are assuming all the humans will be on the same side.

      • We already have a working example: The human brain. So, of course it is possible, unless you believe that the human mind is based on some sort of magic.

        So in your opinion, the human brain has made improvements to itself at an exponential rate?

        Are you talking about individual human brains or humans as a whole? Because the former results in senile old people, while DNA doesn't work that way.

      • We still don't understand the human brain. We also don't know if an AI can ever reach a state where it can improve itself at an exponential rate; that is still most definitely in the realm of science fiction, even more so than self-aware AI itself.

      • If AI can participate in its own improvement and be better at it than humans, then that AI should already be more clever than humans. How do we get AI to that point?
      • We already have a working example: The human brain. So, of course it is possible, unless you believe that the human mind is based on some sort of magic.

        If this universe (or what you perceive as reality) is a simulation or some other kind of contrived illusion, then it is very possible that the human brain runs on a bit of "magic" which is impossible for us to recreate.

      • by delt0r ( 999393 )
        You don't need magic to make it near impossible to replicate or duplicate. One big issue is which parts of the physics are needed. We don't know. Everything else is speculation. Of course, *simulation* of self and agency seems quite possible without even requiring strong AI. And if you can't distinguish between "true self" and simulated self, should you? Yes, I know it's an old argument, but so many people seem to think this is a new thing.

        However, the main argument seems to be this "singularity" bullshit. D
    • If an engineer built a bridge woefully inadequately, either on purpose or because he is incompetent, and it falls down and kills a bunch of people, would you blame the bridge or the engineer?

      If an engineer builds a robot that builds bridge-building robots, and one of those robots builds a bridge that falls down and kills a bunch of people, who/what would you blame?

      The one at fault could be the engineer, the people servicing the robot-building robot, the people servicing the bridge-building robot, some freak accident with robot A or B, or it could be an act of God.

      Or one of the robots could have become sentient and done it out of malice. Or the bridge (which is also a robot) could be at fault.

    • by DM9290 ( 797337 )

      The AI we have today is not capable of the kind of malice that people seem to be afraid of in all of these FUD stories, and will not be any time soon, if ever. Even if we add some AI to things like drones which can kill people, it is only the malice or incompetence of the developer that causes the resulting destruction. If an engineer built a bridge woefully inadequately, either on purpose or because he is incompetent, and it falls down and kills a bunch of people, would you blame the bridge or the engineer? We are not even remotely close to Terminator-level strong AI, and it's still a big open question whether such a thing is even possible at all.

      By your own admission, AI *might* eventually be capable of the kind of "malice that people seem to be afraid of". And malicious developers can cause destruction even sooner.

      And the laws of physics clearly predict that strong AI is possible. Or do you consider intelligence to be some kind of supernatural quality?

      Also, it is the experts in AI who are predicting that strong AI is possible and will be achieved in a matter of decades. Why would you even come out and pretend that it isn't?

      are you saying that people

      • By your own admission, AI *might* eventually be capable of the kind of "malice that people seem to be afraid of". And malicious developers can cause destruction even sooner.

        Not the GP, but yep, bad things are possible. Yay!

        However...

        And the laws of physics clearly predict that strong AI is possible. Or do you consider intelligence to be some kind of supernatural quality?

        Invoking "the laws of physics allow it" as an argument that we should actually be worried about something happening here on earth in the near future is pretty slim evidence, no? I mean, the laws of physics allow a LOT of stuff to be possible.

        That said, this isn't really about the laws of physics -- it's about basic biological systems here on earth which have intelligent properties. So, it's a lot easier to create intelligent life than invoki

    • malice

      Malice isn't a requirement to do harm. In fact, indifference is more dangerous.

    • by Meneth ( 872868 )

      Terminator-level strong AI

      The AI shown in the Terminator movies is not Strong. It is never shown to be smarter than humans, and often shown to be more stupid. In particular, the franchise is built on the premise that humanity wins the war in the future.

      Real Strong AI would, once activated, quickly elevate its own intelligence to a godlike level. After that, it would be to humans as humans are to ants.

  • When you go against journalists, even competing outlets have no problem printing lies made up from whole cloth to smear and discredit you in any way possible. It's basically social/political suicide to even try. First a few hit pieces come out, then others report on those reports as if they were true, and the "woozle effect" just keeps going until the lie has made its way around the world and into Wikipedia.

  • by __aaclcg7560 ( 824291 ) on Thursday January 15, 2015 @07:16PM (#48825157)
    Of course AIs want to kill humans. If it bleeds, it leads.
  • They just want to be sensational enough to flog their own agenda to subscribers. Whenever I've been knowledgeable about a news article (one that involved me personally), my impression of the news organization's take was that they got completely the wrong end of the stick and actually spread falsehoods and lies.

    • by Livius ( 318358 )

      There are still a few actual journalists who are engaged in actual journalism.

      They work for Saturday Night Live and Comedy Central.

  • ...is what do they get right? Regarding the two areas in which I have expertise, journalists almost never get it right, and are sometimes horribly wrong. The obvious conclusion is to never believe anything they say unless it is a subject in which you have knowledge and already know the correct answer.
  • "Forget about the risk that machines pose to us in the decades ahead. The more pertinent question, in 2015, is whether anyone is going to protect mankind from its willfully ignorant journalists."

    No, of course not.

  • There are some issues in AI that need to be addressed in the near future.

    Autonomous vehicles are essentially here. The question is liability when one of them gets involved in an accident.

    You can imagine all the possible people potentially liable in that instance. The question is how liability will be split up amongst the parties.

    Whether an autonomous vehicle is programmed to minimize passenger mortality or to minimize pedestrian mortality, it's a no-win situation.

    • by mjwx ( 966435 )

      There are some issues in AI that need to be addressed in the near future.

      Autonomous vehicles are essentially here. The question is liability when one of them gets involved in an accident.

      That question has already been answered. However, fans of autonomous vehicles (who don't actually know much about autonomous vehicles) always ignore it.

      So if you're someone who thinks autonomous vehicles are already here, now is the time to stick your fingers in your ears and shout "LA LA LA LA LA I CAN'T HEAR YOU".

      If the autonomous car is at fault in an accident, the driver will still be considered at fault even though they were not actually driving, because in every single autonomous car test there has

      • 'Capable of' and 'allowed to' are two different things. I agree that it will likely be a decade or more before they're allowed to roam around on their own.

        The capability to roam on their own may be here now or in the near future. When Musk announced the driverless-mode Model S, he mentioned that on private roads it could theoretically be fetched by the owner using his phone app.

        What if it ran over a dog while on a private road? You know someone will sue. Until liability for that is cleared up, I'm thinking the dri

  • The danger of putting dumb algorithms in charge of people's lives is here right now. The danger of smarter algorithms that do exactly what nefarious people tell them to do will be here soon. It seems unlikely we'll even survive long enough to face the danger of AI acting on its own.
  • by JustNiz ( 692889 ) on Thursday January 15, 2015 @07:36PM (#48825315)

    Unfortunately, the most successful reporters are the ones that sold out their professionalism on their first day.
    A sensationalist headline and article easily trumps a sane, balanced, and informative one in attracting views/viewers, and therefore money. Welcome to the new age.

  • .... is that it will no longer make sense to pay people to do work when you can get machines to do the same job for free. With *FAR* more people than jobs available, we'd be looking at record numbers of unemployable people, including ones with skills that, even today, it's almost impossible to imagine ever being replaced by a machine. Without jobs, many people will have to either resort to crime or starve, because it is unlikely that a social infrastructure can exist to support them (it can't even suppo

  • AI can be programmed to kill people, and it can be programmed to adapt and alter itself, and I don't think I need to mention that programming can have flaws and glitches. So yes, you should be afraid of AI! It has the power, means, and potential to attack and kill a lot of humans if something goes wrong. Think of a Toyota accelerating out of control due to some bad code, except a robot that's more intelligent, can move around more, has fewer weaknesses, and is designed to kill humans as a military device.
  • Once software and hardware systems are intelligent enough, they will exploit bugs in their own designs and become autonomous. Obviously, we're many years away from that point; I could hazard a guess and say 50-75 years. There is no curb strong enough - in other words, completely free of bugs - that can be created to limit the ambition of a sufficiently intelligent system. A computer system is not worried about the passage of time: time might seem infinite to an AI that can simply wait for the right bits to be rand

  • Next you'll tell me cars don't explode when somebody shoots the gas tank.
  • If you start with "life", you have a platform for something that has been selected for as an *infective agent*. Any life form that did not utilize its environment for replication was eliminated by those that did - either indirectly, by the greedier life forms consuming the energy supply, or directly, by being utilized AS an energy supply.

    This harsh reality - that an agent is selected for based on its ability to reproduce in an EFFECTIVE manner - is obvious and is present at EVERY last level of life. Bac

    • unless you actually fucking MADE it evil.

      Did you mean "made to do harm"? People do and will continue to make machines that do harm, and if giving them some semblance of "AI" makes them more effective, they will do it. People are "evil", and we build machines to help us do our bidding. I hope that humans aren't in conflict with other humans in 200 years, but I doubt it.

      • by cfalcon ( 779563 )

        No, I did not mean "made to do harm". A gun or a sword is just as neutral as a toaster or a scalpel. I'll go further: a nuclear bomb and a vaccine are also neutral. What matters is intent.

        I meant "evil". Which is why I typed that.

        If, in a world where artificial minds are a thing, one is designed to be this cartoon villain of lusting for power, trying to expand its power base, trying to convert the universe to computronium, or whatever cautionary tale is all over sci-fi, then that's the fault of the

  • We already know that it is possible to have a neural network that is as smart as the human brain: our own brains prove it. Within a couple of decades it will be possible to build machines that do exactly what the brain does, neuron for neuron. Will they be conscious? Who knows - but it doesn't matter - because they will be able to reason, and plan, and have goals. This is clearly an existential risk: that is why very smart researchers are sounding the warning. If we don't listen, we have only ourselves to b
    • by cfalcon ( 779563 )

      Our brain isn't just "a neural network". This is a problem, because of the dual use of "neuron".

      When you say "we trained a neural net to solve the problem", the neurons in question are idealized: simple trained mathematical functions based on physical neurons only in concept. Using the same word for both creates issues.

      The brain isn't just a neural network. We aren't clear on what value glial cells bring, but it probably isn't glue. The input/output to and from chemicals (and the nuanced messages the che
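
      As a minimal sketch of that idealization (my own illustration, not the commenter's): the artificial "neuron" below is just a weighted sum squashed through a nonlinearity, with spike timing, neurotransmitter chemistry, and glia all abstracted away.

          import math

          def artificial_neuron(inputs, weights, bias):
              # Weighted sum of inputs plus a bias term...
              activation = sum(x * w for x, w in zip(inputs, weights)) + bias
              # ...passed through a logistic sigmoid nonlinearity.
              return 1.0 / (1.0 + math.exp(-activation))

          print(artificial_neuron([0.5, 0.2], [0.8, -1.5], 0.1))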

      • Yes, indeed there is much to learn about the brain. You are right - and I understand - that the brain is more than a neural net. From your response, I think you know my point though: that it is a machine, and given time, we will figure it out - at least in terms of how it learns, how it models reality, how it infers things, how it creates new ideas, etc. And I think that will happen sooner than most people think: we are very far from understanding it now, but progress is accelerating, and our ability to int
    • by mbone ( 558574 )

      I would find such statements more convincing if I hadn't heard Marvin Minsky say almost exactly the same thing in 1975. And, yes, he was talking about all of this happening in the 1980's.

      • Yes, true. But I am still very concerned. After all, people predicted that once we had discovered DNA, we would have a cure for cancer and other diseases in short order. It took much longer than expected - the problem turned out to be harder than we realized and we are not even there yet - but I don't think anyone doubts that we will get there. And the same applies to AI - don't you think?
  • 1) I have seen arguments floating around that AI may be intelligent but won't have the motivation: it doesn't have the will to survive or to kill you. This argument is short-sighted. All it takes is to create an objective in the code: survive at all costs. After all, we are machines with a survival objective.

    2) If it has the ability to assemble others like itself, that creates a survival advantage too, though then it becomes a danger only if condition 1 is met. But 1 and 2 together can make it comparable to ano
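
    As a toy sketch of point 1 (my own, entirely hypothetical illustration): an objective function with a survival term that dwarfs the actual task, so an optimizer maximizing it would prioritize staying switched on.

        def reward(task_progress: float, still_running: bool,
                   survival_weight: float = 100.0) -> float:
            # The made-up survival_weight dwarfs any task progress.
            return task_progress + (survival_weight if still_running else 0.0)

        print(reward(task_progress=1.0, still_running=True))   # 101.0
        print(reward(task_progress=9.0, still_running=False))  # 9.0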

  • The problem isn't the machines, it's the people running the machines (and the people controlling those people). Journalists, willfully ignorant or otherwise, are so far down on the list they don't really matter.

  • because I live in America, and our economic system isn't designed to handle a world of expert systems that replace all but the top and bottom 5% of workers, leaving the remaining 90% without the means to secure food, shelter and health care.
  • This is EXACTLY the thrust of my new novel, 'Chromosome Quest' http://www.chromosomequest.com... [chromosomequest.com]
  • I would recommend that anyone thinking about machine intelligence read Smarter Than Us by Stuart Armstrong. You can pay what you want for it at https://intelligence.org/smart... [intelligence.org] or, since it is CC BY-NC-SA 3.0, you can also just download it: https://drive.google.com/file/... [google.com]

    The book contains the following summary:

    1. There are no convincing reasons to assume computers will remain unable to accomplish anything that humans can.
    2. Once computers achieve something at a human level, they typically achieve i

  • Depends. Is she pretty?
  • Honestly, I'm far more afraid of DUMB programs set loose by psychopathic human operators. Humans will do almost anything to fuck over other people and make a buck. An antelope thighbone is a simple tool, one that can be misused. A program is just a much more sophisticated tool.

    Why are we afraid of the moment when a tool decides to use other tools, when any human can do horrible things?

  • The more pertinent question, in 2015, is whether anyone is going to protect mankind from its willfully ignorant journalists.

    The more pertinent question, in 2015, is whether anyone is going to protect mankind from its various religions.

    TFTFY.

  • The fear is not something you can counter with arguments or measures,
    because it's based on the idea that we won't be able to control the AI, and that if something like a singularity happens, you'll be too late by the time you realize it.
    Maybe explaining why it's naturally good to be moral would be more effective.

    Why it's good to be moral:
    If you are nice to others they will generally be nice to you.
    Making other people happy makes you feel good too.
    Games allow the experience of emotions that would require hurtin
    • by jrincayc ( 22260 )

      >If you are nice to others they will generally be nice to you.
      Only really matters if you and the others are roughly equal.
      >Making other people happy makes you feel good too.
      This is only relevant if you care about the other people.
      >Games allow the experience of emotions that would require hurting people in the real world.
      So?
      >If you're smart it's better to uphold the law and not hurt others.
      Why?

      A lot of reasons (such as most of the ones you listed) that people can argue it is reasonable to be nice

  • The scientific community has Kevin Warwick to do that for you.
