AI Programming Technology

Upgrading the Turing Test: Lovelace 2.0 68

mrspoonsi tips news of further research into updating the Turing test. As computer scientists have expanded their knowledge about the true domain of artificial intelligence, it has become clear that the Turing test is somewhat lacking. A replacement, the Lovelace test, was proposed in 2001 to strike a clearer line between true AI and an abundance of if-statements. Now, Professor Mark Riedl of Georgia Tech has updated the test further (PDF). He said, "For the test, the artificial agent passes if it develops a creative artifact from a subset of artistic genres deemed to require human-level intelligence and the artifact meets certain creative constraints given by a human evaluator. Creativity is not unique to human intelligence, but it is one of the hallmarks of human intelligence."
This discussion has been archived. No new comments can be posted.

  • by itzly ( 3699663 ) on Saturday November 22, 2014 @04:12PM (#48440903)
    There's nothing wrong with the Turing test, but it needs to have some thought put into the set up and execution, plus competent judges.
    • Inference is Hard (Score:4, Insightful)

      by infogulch ( 1838658 ) on Saturday November 22, 2014 @04:36PM (#48440987)
      'In the following sentence: "Ann gave Sue a scarf. She was very happy to receive it." Does "she" refer to Ann? (yes/no)'
      A series of similar and increasingly difficult inference questions like this one can usually knock over an AI pretty easily, while not being too difficult for humans.
      • Do any of them even handle formal logic? "All cows have 4 legs. Daisy is a cow. How many legs does Daisy have?" sort of thing.
      • Yep, and those types of questions are actually used in the Winograd Schema Challenge [commonsensereasoning.org] as an alternative to the Turing test. While those questions aren't testing everything a human might be able to do over a text terminal, they have the big advantage of being objective and easily quantifiable. The Turing test depends too much on the qualifications of the judge; simple multiple-choice questions don't have that problem.
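To see why such multiple-choice schemas are objective and easily quantifiable, here is a sketch of a hypothetical mini-benchmark (the second item and all names are invented here for illustration, not taken from the actual challenge):

```python
# Hypothetical mini-benchmark in the spirit of the Winograd Schema Challenge.
# Each item: (sentence, binary question, gold answer). Items invented here.
SCHEMAS = [
    ("Ann gave Sue a scarf. She was very happy to receive it.",
     "Does 'she' refer to Ann?", "no"),
    ("The trophy doesn't fit in the suitcase because it is too big.",
     "Does 'it' refer to the trophy?", "yes"),
]

def score(answer_fn):
    """Return the fraction of schemas answered correctly by answer_fn."""
    correct = sum(answer_fn(s, q) == gold for s, q, gold in SCHEMAS)
    return correct / len(SCHEMAS)

# A blind guesser that always answers "yes" earns a quantifiable 50%:
always_yes = lambda sentence, question: "yes"
print(score(always_yes))  # 0.5
```

No human judge is needed: accuracy is just a number, which is exactly the advantage over a judge-dependent Turing test.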
    • by ndogg ( 158021 )

      No, the Turing test is shit. Any AI that passes it would actually be far smarter than us humans, since it would have to take into account the experience of all the things that it wouldn't actually have to deal with--such as eating, pissing, and shitting. Why should an AI have to think about all the things us meatbags have to think about that aren't relevant to it? AIs don't have parents (well, not in the traditional sense anyway) and so won't have a human-like childhood experience to reflect upon, nor

      • by itzly ( 3699663 )
        So you wouldn't consider an alien from another planet intelligent unless he shared our bodily functions?
      • Why should an AI have to think about all the things us meatbags have to think about that aren't relevant to it?

        Because if it can't model a meatbag, why would it be able to model an electron (so can't do physics), an industrial robot (so can't program them), a car (can't control vehicles), abstract entities (can't do logic or math) or anything else for that matter?

        Imagination is not optional for intelligence. Intelligence is the ability to build mental models and manipulate them.

        AIs don't have parents (wel

        • Imagination is not optional for intelligence. Intelligence is the ability to build mental models and manipulate them.

          I like this thought. Not quite sure what counts as imagination though. Does the ability of a chess algorithm to model hypothetical future board positions count?

          My experience - writing a very simple rubik cube solver as an undergraduate project - I rejected the two simple solutions for a trivial case (requires 1 turn to solve). So it turned the opposite face, then turned the first face, th
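The chess question can be made concrete. Here is a minimal minimax sketch (the toy game tree is invented for illustration, not from any real engine): the program literally builds and evaluates hypothetical future positions, which is arguably a mechanical form of the "mental models" described above.

```python
def minimax(node, maximizing):
    """Evaluate a game tree: leaves are numeric scores, internal nodes are
    lists of child positions the program imagines and compares."""
    if isinstance(node, (int, float)):  # leaf: a scored hypothetical position
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A tiny imagined game: two moves for us, opponent replies, one deeper branch.
tree = [[3, 5], [2, [9, 1]]]
print(minimax(tree, True))  # 3: the best score we can force against best play
```

Whether that exhaustive look-ahead counts as "imagination" is, of course, the whole debate.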

    • There are many criticisms of the Turing test...from many angles.

      You address none of them; you simply state the negative.

      That's the problem...supporting the Turing paradigm means constantly avoiding the question (literally and figuratively, if you think about it)

  • moving target (Score:2, Insightful)

    by Anonymous Coward

    This is just making the "Turing test" into a moving target. The Turing test makes sense, and if you have a long enough test you can eventually rule out the "abundance of if statements."

    • by narcc ( 412956 )

      By your reasoning, it's been a "moving target" since 1950 as Turing himself offered variations on his test in the original paper!

      See, there isn't a single monolithic thing called "The Turing Test". There isn't even widespread agreement on the nature of the tests Turing proposed. When you say "The Turing test makes sense" you're saying that you have some exclusive insight into Turing that no one else has, and that you think that that variation "makes sense". So, please, share your divinely revealed interp

    • This is just making the "Turing test" into a moving target.

      Which makes sense - since the AI it's testing for is itself a moving target.

    • by MrL0G1C ( 867445 )

      No, the Turing test doesn't make sense, and neither does the new test.

      To test intelligence, how about we set the AI

      AN ACTUAL INTELLIGENCE QUOTIENT TEST? Is that not ****ing obvious?

      How many AIs can pass the same tests that an ape or a bird could pass? Pretty much none, I'd be guessing.

      Questions should pass a 'google test' where questions that can be answered by simply googling or using Wolfram Alpha are rejected.

  • by Anonymous Coward on Saturday November 22, 2014 @04:22PM (#48440943)

    There's a Forest Service joke that the problem with designing trash cans is that the smartest bear is smarter than the dumbest tourist.

    • by Pembers ( 250842 )

      Too lazy to RTFP (read the fine PDF), but I assume the point is that some humans can pass the Lovelace test, whereas few or no machines currently can.

      • But a machine that could act as a human with IQ 90 would be lauded as a great success. Make the test too hard and you'll only consider something AI when it's smart AI.
    • by gijoel ( 628142 )
      That's easy to solve. Shoot the bears that are wearing neck ties.
  • by Elledan ( 582730 ) on Saturday November 22, 2014 @04:59PM (#48441045) Homepage
    All I can think of while reading up on the Turing and related tests is how many humans would fail such a test.

    With the many assumptions made about what constitutes 'true' intelligence, how sure are we of the assumption that a human being of at least average intelligence would pass it? What's the research telling us there so far?

    Or are human and artificial intelligence somehow considered to be mutually exclusive?
    • But isn't this the point? As the ability of machines to impersonate humans improves, they will become progressively more indistinguishable. Thus, judges will move their goalposts and the number of false negatives (humans erroneously considered to be machines) will increase. The fact that this is happening is an indicator that machines that can pass the Turing test are slowly starting to mature.
  • by msobkow ( 48369 ) on Saturday November 22, 2014 @05:00PM (#48441051) Homepage Journal

    We will never have "real" AI because every time we approach it, someone moves the bar as to what is required. It's been happening since the mid-late '80s. We *have* what would have qualified as AI according to the rules of '86-'87.

    • Oh baloney. How about listing those rules. I don't ever recall seeing the handbook.

      • How about listing those rules. I don't ever recall seeing the handbook.

        exactly the point/problem with the Turing and 'teh singularity' paradigms

        Oh baloney.

        that's the correct analysis here

    • by Megol ( 3135005 )

      Bullshit. AI is AI, not the expert systems that were popular in that time period. The idea that a sufficiently complex expert system would suddenly become intelligent is a theory that has been thoroughly tested - today there are expert systems with more rules and faster inference processing than even the wildest dreams of those AI researchers.

      The workings of human intelligence are still not fully known, and the definition of intelligence is still not agreed upon. One thing is sure, though - expert systems aren't intelligent

    • We will never have "real" AI because every time we approach it, someone moves the bar as to what is required.

      Artificial bars. The requirement is simple, have a computer that thinks like a human.

      You don't even know what algorithm the human brain uses. They didn't in the 80s, either. Figure that out before you complain about bars being moved.

      • Artificial bars. The requirement is simple, have a computer that thinks like a human.

        Even that bar is way too high for current technology. Give me an AI that can outthink a rat.
        You can put a pair of glasses connected to a webcam on a rat and the rat can easily find food. Put that same webcam on an RC car and no AI in the world is even close to being able to compete. With current technology it would probably be easier to train a rat to drive the RC car to find food than it would be to train a computer.
        That's my definition of intelligence. Something that can accurately navigate in the real

  • by Donwulff ( 27374 ) on Saturday November 22, 2014 @05:27PM (#48441141)

    So, yet another article on the Turing test that completely misses the point... First of all, computer scientists never considered the Turing test a valid test of "artificial intelligence". In fact, there's practically no conceivable reason for a computer scientist to test their artificial intelligence in any way other than making it face problems from its own domain.
    Perhaps there will come a day when we really have to ask "is this entertainment droid genuinely intelligent, or is it only pretending?", possibly to determine whether it should have rights, but that kind of problem doesn't lie in the foreseeable future.
    On the other hand, as Turing himself put it in the paper that introduced his thought experiment (in Wikipedia's phrasing): "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words." Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"
    In other words, the Turing test does not seek to answer the question of whether machines can think, because Turing considered the question meaningless, and noted that if a machine's thinking were outwardly indistinguishable from human thinking, then the whole question would become irrelevant.
    There is a further erroneous assumption, at least in the summary - as of the present, even the most advanced computers and software are basically simply an abundance of if-statements, or for the low-level programmers among us, cmp and jmp mnemonics. If, on the other hand, we expand our definition of a "machine" to encompass every conceivable kind, it becomes easy for the materialistic pragmatist to answer whether machines can ever think - yes, of course: the brain is a machine that can think.
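The "abundance of if-statements" point can be made literal. Disassembling even a one-branch Python function (an illustration, not from TFA) exposes the compare-and-jump skeleton that the cmp/jmp mnemonics implement at the machine level:

```python
import dis

def classify(x):
    # One if-statement: the "abundance of if-statements" in miniature.
    if x > 0:
        return "positive"
    return "non-positive"

# The bytecode shows a comparison followed by a conditional jump - the same
# shape a compiler lowers to cmp/jmp instructions in machine code.
ops = [ins.opname for ins in dis.Bytecode(classify)]
print("COMPARE_OP" in ops)  # True
```

(The exact jump opcode name varies between CPython versions, but the compare-then-branch structure is always there.)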

    • If, on the other hand, we expand our definition of a "machine" to encompass every conceivable kind, it becomes easy for the materialistic pragmatist to answer whether machines can ever think - yes, of course: the brain is a machine that can think.

      But here, you smuggle your answer in inside of your assumption. You are assuming what you are trying to prove.

  • The Turing Test has flaws.

    Firstly, it requires a human level of communication. One cannot use it to determine whether a crow (for example, or a cat or an octopus) is intelligent, since they cannot communicate at our level - even though these creatures demonstrate a surprising level [scienceblogs.com] of intelligence. Watch this video [youtube.com] and be astonished.

    The extended video shows the crow taking the worm to its nest, then returning to grab the hooked wire and taking that back to the nest! Can we use the Turing Test to determine whe

    • by Pembers ( 250842 )

      The Turing test is usually presented as something that a machine either passes or fails, but since no machine has yet passed it, contests have focused on how long a machine can withstand questioning before the interviewer decides it's not human, or what percentage of interviewers it can fool for, say, ten minutes. So you can say one machine is more intelligent than another, even if you don't have a definition of intelligence apart from "intelligence is the ability to convince a human that you are human". To

      • I'd rather we figure out how to build machines that can do things we want to do but can't, or aren't very good at.

        well said...this should be the paradigm in computing design

    • exactly...it's all based on a tautology...a faulty ontology. The Computability Function is not a computing paradigm; it's reductive.

      'AI' is complex machines following instructions. That's what it is. The rest is people projecting their own emotions onto inanimate objects.

      When I say "it's a tautology" what I mean is, it's based on linguistic distinctions only. Not actual, functional distinctions.

      A tautology says, "If people think a pile of shit is a steak dinner, then it becomes a steak dinner"

      That's an extr

      • 'AI' is complex machines following instructions. That's what it is. The rest is people projecting their own emotions onto inanimate objects.

        That is *great* phrasing - thank you. It's going into my notes and will probably make it into my writings (with attribution). Probably as a chapter heading.

        The situation is not completely hopeless: there is a small number of people, myself included, who are working on actual AI. Most of the research out there is just using programming to solve a (particular) problem.

      • The rest is people projecting their own emotions onto inanimate objects

        How do you tell if the object is animate or not? Are you animate? Or am I just projecting my own emotions onto some entity making a post to Slashdot? Perhaps 'projection' is the way we understand other humans... Is it ever useful to project onto entities other than humans (animate or not)?

  • Wasn't there just an article about an AI that developed some magic tricks for stage magicians?
  • This test is rather silly; it's easy to come up with a chaotic system that is "beautiful".
  • The Turing and Computability Function paradigm for computing is (finally) being rightly and fully criticized (ironically, just as we get a Turing Hollywood movie)

    Ada Lovelace's theories ***do indeed*** provide the theoretical groundwork (along with others like Claude Shannon) to cleanse ourselves of Turing Test nonsense

    However...this test...in TFA is not the test.

    It's just a variation on the Turing test that still has the same tautology...it's a test of fooling a human in an artificial, one-time-only environment

  • It was Turing's first attempt to answer the question "what makes a machine intelligent?". As a mathematician he wanted an empirical answer, so he felt the Turing Test would be a good one. A decent idea, but remember, computers had only been around for a few years. I don't know if he'd ever written a program.

    But what he had was a user requirements list. He didn't have a working implementation. He had "computer must be able to respond like a human to questions asked", so we have software that fits thos
