
Slashdot: News for Nerds


Was Turing Test Legitimately Beaten, Or Just Cleverly Tricked?

timothy posted about a month and a half ago | from the in-this-case-please-distinguish dept.

AI

beaker_72 (1845996) writes "On Sunday we saw a story that the Turing Test had finally been passed. The same story was picked up by most of the mainstream media and reported all over the place over the weekend and yesterday. However, today we see an article in TechDirt telling us that in fact the original press release was just a load of hype. So who's right? Have researchers at a well established university managed to beat this test for the first time, or should we believe TechDirt who have pointed out some aspects of the story which, if true, are pretty damning?" Kevin Warwick gives the bot a thumbs up, but the TechDirt piece takes heavy issue with Warwick himself on this front.


309 comments

but that's the problem with the turing test... (5, Interesting)

Anonymous Coward | about a month and a half ago | (#47204031)

It has nothing to do with actual artificial intelligence and everything to do with writing deceptive scripts. It's not just this incident, it's a problem with the goal of the Turing test itself. I always found the Turing test a kind of stupid exercise due to this.

Re:but that's the problem with the turing test... (5, Insightful)

flaming error (1041742) | about a month and a half ago | (#47204055)

They got 30% of the people to think they were texting with a child with limited language skills. I don't think that's what Alan Turing had in mind.

Re:but that's the problem with the turing test... (3, Interesting)

i kan reed (749298) | about a month and a half ago | (#47204175)

Sure it is.

They convinced a human that they were talking to an unimpressive human. That's definitely a step above "not human at all".

Re:but that's the problem with the turing test... (1)

Stormy Dragon (800799) | about a month and a half ago | (#47204223)

It's more that it's a human that is expected to behave irrationally, which gives the machine an easy out. If it ever gets to a point where it's not sure how to respond, just do something irrational to kick the conversation onto a different topic.
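A minimal sketch of the strategy this comment describes, in Python; the scripted replies and deflection lines are invented for illustration, not taken from any real bot:

```python
import random

# Sketch of the "when unsure, change the subject" tactic described above.
# All canned text here is invented for illustration.
SCRIPTED = {
    "hello": "Hi there! How are you?",
    "how are you": "I'm great, thanks for asking.",
}

DEFLECTIONS = [
    "Ha! That reminds me, do you like video games?",
    "Whatever. My guinea pig just walked across the keyboard.",
    "Boring question. Ask me something about my home town instead!",
]

def reply(message: str) -> str:
    key = message.lower().strip("?!. ")
    if key in SCRIPTED:
        return SCRIPTED[key]
    # No scripted answer: do something irrational to kick the
    # conversation onto a different topic, as the comment describes.
    return random.choice(DEFLECTIONS)
```

Any probing question a judge asks lands in the fallback branch, which is exactly how this tactic can fool some judges without demonstrating any understanding.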

Re:but that's the problem with the turing test... (4, Interesting)

Anonymous Coward | about a month and a half ago | (#47204271)

So according to you I could make a machine that simulates texting with a baby. Every now and then it would randomly pound out gibberish as if a baby was walking on the keyboard.

Re:but that's the problem with the turing test... (0)

Anonymous Coward | about a month and a half ago | (#47204285)

When 30% of the humans evaluating don't know how to ask intelligent questions to verify whether they are speaking with a computer or a human, it just goes to show that the Bell Curve is effectively accurate in scaling intelligence.

Re:but that's the problem with the turing test... (1)

shaitand (626655) | about a month and a half ago | (#47204609)

It does seem as if a legitimate Turing test would involve humans who didn't know they were potentially speaking with a computer. Even better would be if they thought it was a remote working colleague. Also for it to be legitimate, the machine would need to be able to fool them indefinitely (short of some failure to come to physical meetings or physically interact in some way).

An AI working accounts receivable might be a good option.

Re:but that's the problem with the turing test... (1)

amicusNYCL (1538833) | about a month and a half ago | (#47204377)

According to Wired [wired.com] , it sort of depends on which questions you decide to ask.

WIRED: Where are you from?
Goostman: A big Ukrainian city called Odessa on the shores of the Black Sea

WIRED: Oh, I’m from the Ukraine. Have you ever been there?
Goostman: ukraine? I’ve never there. But I do suspect that these crappy robots from the Great Robots Cabal will try to defeat this nice place too.

Re:but that's the problem with the turing test... (3, Funny)

Anonymous Coward | about a month and a half ago | (#47204481)

I have a program passing the Turing test simulating a catatonic human to a degree where more than 80% of all judges cannot tell the difference.

Once you stipulate side conditions like that, the idea falls apart.

Re:but that's the problem with the turing test... (2)

shaitand (626655) | about a month and a half ago | (#47204559)

I don't think a chat bot was what Turing had in mind in any case. A bot that was intelligent enough to be able to LEARN and SPEAK well enough that another human couldn't tell the difference between it and another human is the point.

Everything we see now is trying to win the letter of the turing test and ignoring the spirit. Turing's point was that if we can make it able to reason as well as we can we no longer have the right to deny it as intelligent life. Scripts that skip the reasoning and learning part and just try to con the judges are just attempts to cheat at the test.

It's akin to doing nothing but studying test dumps to pass an IT certification exam or memorizing the question bank to get an Amateur radio license. It being possible to cheat on Turing's test does make it a flawed test but it doesn't mean that Turing was wrong about what it would indicate if a machine passed the test WITHOUT cheating.

Re:but that's the problem with the turing test... (1)

drakaan (688386) | about a month and a half ago | (#47204231)

They thought it was a child, and not a machine imitating a child with limited language skills? What's everyone quibbling about? You either think it's a machine, or you think it's a person. 1 in 3 people thought it was a person. That's a "pass".

Re:but that's the problem with the turing test... (0)

Anonymous Coward | about a month and a half ago | (#47204275)

I think there needs to be a reverse test where the AI, once able to get people to think it's human, then has to perform the turing test on other machines and get it right 70% of the time.

Re:but that's the problem with the turing test... (4, Insightful)

BasilBrush (643681) | about a month and a half ago | (#47204521)

The problem is that priming the judges with excuses about why the candidate may make incorrect, irrational, or poor language answers is not part of the test.

If the unprimed judges themselves came to the conclusion they were speaking to a 13 year old from the Ukraine, then that would not be a problem. But that's not what happened.

Re:but that's the problem with the turing test... (1)

drakaan (688386) | about a month and a half ago | (#47204747)

What should the program have claimed to have been? If it was a human extraneously telling the judges that the program was a person with language skills, then I would agree, but the task is to fool humans into thinking a program is a person, and that's what happened, isn't it?

The entire exercise is always one of trickery, regardless of how sophisticated the program is. I think the illustration is that it's not necessarily that difficult to fool people (which we already knew).

Re:but that's the problem with the turing test... (0)

Anonymous Coward | about a month and a half ago | (#47204587)

The point of the Turing test is that questions such as "what number rhymes with LIVE" or "in which city did the twin towers stand" are hard for a computer to understand and answer but easy for most humans. Good judges can easily invent hundreds of questions that no current AI (with perhaps the exception of Watson) could answer. What this bot did is answer all those questions with "I'm a 13 year old boy and don't speak English very well, so I don't know the answer to that". That's not a pass, that's a cop-out.

Re:but that's the problem with the turing test... (1)

danlip (737336) | about a month and a half ago | (#47204237)

And since when has 30% been the threshold? I always thought it was 50% (+/- whatever the margin of error is for your experiment, which is hopefully less than 20%)

Re:but that's the problem with the turing test... (1)

NoNonAlphaCharsHere (2201864) | about a month and a half ago | (#47204277)

It was always 30%: "human", "not human", and "not sure".

Re:but that's the problem with the turing test... (1)

Anonymous Coward | about a month and a half ago | (#47204359)

Human, Not Human, and File Not Found

Re:but that's the problem with the turing test... (0)

Anonymous Coward | about a month and a half ago | (#47204249)

They got 30% of the people to think they were texting with a child with limited language skills. I don't think that's what Alan Turing had in mind.

Well then I suppose he should have been more careful to define the problem and the test requirements.

Re:but that's the problem with the turing test... (1)

jkauzlar (596349) | about a month and a half ago | (#47204321)

This might say more about these judges than it does about the bot.

Re:but that's the problem with the turing test... (1)

Karmashock (2415832) | about a month and a half ago | (#47204459)

Bingo. The test will be closer to valid if they convince a majority of people that they're talking to a human being that actually has a reasonable grasp of the language being used.

Re:but that's the problem with the turing test... (4, Insightful)

TheCarp (96830) | about a month and a half ago | (#47204177)

I always thought of it as more a philosophical question or thought experiment. How do you know that anything has an internal consciousness when you can't actually observe it? I can't even observe your thought process, I just assume that you and I are similar in so many other ways (well I assume, you could be a chatbot, whreas I know I am definitely not)... and I have it, so you must too, after all, we can talk.

So... if a machine can talk like we can, if it can communicate well enough that we suspect it also has an internal consciousness, then isn't our evidence for it every bit as strong as the real evidence that anyone else does?

Re:but that's the problem with the turing test... (1)

tool462 (677306) | about a month and a half ago | (#47204393)

you could be a chatbot, whreas I know I am definitely no

That sounds like something a chatbot would say. Nice try, Carp.

Re:but that's the problem with the turing test... (5, Funny)

TheCarp (96830) | about a month and a half ago | (#47204703)

Please tell me more about like something a chatbot would say.

Re:but that's the problem with the turing test... (1)

just_another_sean (919159) | about a month and a half ago | (#47204779)

Vaguely off-topic but your post reminded me of an interesting NPR Radiolab [radiolab.org] episode I heard over the weekend. The upshot being "how do we even know the people we talk to every day are real" and how we all go through life making a series of small leaps of faith just to keep ourselves grounded in what we perceive as reality. Listening to it and then making the comparison to the Turing test makes it seem forever out of our reach to prove anything about consciousness, human or artificial.

Re:but that's the problem with the turing test... (1)

Anonymous Coward | about a month and a half ago | (#47204179)

The Clarke-Turing Law: Any sufficiently deceptive script is indistinguishable from AI.

Re:but that's the problem with the turing test... (5, Insightful)

Jane Q. Public (1010737) | about a month and a half ago | (#47204181)

It has nothing to do with actual artificial intelligence and everything to do with writing deceptive scripts. It's not just this incident, it's a problem with the goal of the Turing test itself. I always found the Turing test a kind of stupid exercise due to this.

Yes. TechDirt's points 3 and 6 are basically the same thing I wrote here the other day:

First, that the "natural language" requirement was gamed. It deliberately simulated someone for whom English is not their first language, in order to cover its inability to actually hold a good English conversation. Fail.

Second, that we have learned over time that the Turing test doesn't really mean much of anything. We are capable of creating a machine that holds its own in limited conversation, but in the process we have learned that it has little to do with "AI".

I think some of TechDirt's other points are also valid. In point 4, for example, they explain that this wasn't even the real Turing test. [utoronto.ca]

Re:but that's the problem with the turing test... (1)

mwvdlee (775178) | about a month and a half ago | (#47204803)

Second, that we have learned over time that the Turing test doesn't really mean much of anything. We are capable of creating a machine that holds its own in limited conversation, but in the process we have learned that it has little to do with "AI".

I disagree. All we've learned is that chatbots barely manage to fool a human, even when cheating the rules.
If anything, it demonstrates that chatbots simply aren't capable of holding a normal conversation and we need something better.

Re:but that's the problem with the turing test... (1)

AnOminusCowHerd (3399855) | about a month and a half ago | (#47204323)

This. The Turing Test itself is an over-hyped "test" that is mainly famous because journalists and bloggers can easily explain it to a general audience ... and perhaps, because it invokes a certain SkyNet-like spookiness.

No, not over-hyped at all... (1)

ZeroPly (881915) | about a month and a half ago | (#47204727)

The Turing Test is the ONLY test we have for artificial intelligence. Every other year we get some research team or other claiming that their system is as intelligent as a dog, and now it's just a matter of scaling. The Turing Test is analogous to the test the Patent Office has for perpetual motion machines - if you can't pass the test, then you're not there yet. Simple, and easy to measure.

Re:but that's the problem with the turing test... (1)

NotDrWho (3543773) | about a month and a half ago | (#47204387)

To quote Joshua, "What's the difference?"

Re:but that's the problem with the turing test... (1)

rudy_wayne (414635) | about a month and a half ago | (#47204427)

It has nothing to do with actual artificial intelligence and everything to do with writing deceptive scripts. It's not just this incident, it's a problem with the goal of the Turing test itself. I always found the Turing test a kind of stupid exercise due to this.

Exactly right.

Was Turing Test Legitimately Beaten, Or Just Cleverly Tricked? I see no difference between the two. Beaten is beaten, no matter how it is accomplished.

If the Turing Test can be "cleverly tricked" then it simply demonstrates that the Turing Test is flawed and meaningless.

Re:but that's the problem with the turing test... (1)

BasilBrush (643681) | about a month and a half ago | (#47204623)

Did Lance Armstrong really win the Tour De France 7 times, or did he cheat? You apparently can't tell the difference.

Did a student who smuggled some crib notes into an exam really pass the exam, or did he cheat? You apparently can't tell the difference.

You present a false dichotomy. The Turing test was neither beaten, nor tricked. The reality is a third option: it wasn't a real Turing test. Even putting aside questions about Kevin Warwick, and the lack of peer review, we know that the judges were primed with excuses about why the chatbot might make irrational, strange or poor English answers. Priming the judges with excuses for the chatbot is cheating every bit as much as Armstrong's drugs, and the exam cheat's crib notes. There is therefore no genuine result from any of these tests.

And none of these cheats mean that there is anything wrong with a bicycle race, an exam, or the Turing test per se.

I see. (1)

Anonymous Coward | about a month and a half ago | (#47204039)

Why do you ask if the Turing Test was legitimately beaten or just cleverly tricked?

But seriously, yes, it was 'legitimately beaten', just like it's been 'legitimately beaten' in times past, going back to ELIZA in the 60s.

Was it MEANINGFULLY beaten is the question to ask, and no, no it wasn't. Until the computer can actually 'understand' context to a meaningful degree, the answer to that will continue to be no.
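For reference, ELIZA-style "beating" amounts to pattern matching and pronoun reflection with no model of context at all. A minimal sketch (the patterns are simplified inventions, not Weizenbaum's original DOCTOR script):

```python
import re

# Reflect first-person words back at the user, ELIZA-style.
REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs, tried in order; the last rule is a catch-all.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*", re.I), "Please tell me more."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECT.get(word, word) for word in fragment.lower().split())

def eliza(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(message.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."
```

A call like `eliza("I feel lost because of my job")` just echoes the user's own words back with pronouns swapped, which is why it can hold a surface-level "conversation" while understanding nothing.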

Re:I see. (4, Funny)

RDW (41497) | about a month and a half ago | (#47204165)

But seriously, yes, it was 'legitimately beaten', just like it's been 'legitimately beaten' in times past, going back to ELIZA in the 60s.

How does that make you feel?

Re:I see. (4, Funny)

NatasRevol (731260) | about a month and a half ago | (#47204349)

I can't answer that right now.

Re:I see. (1)

houghi (78078) | about a month and a half ago | (#47204367)

It makes me feel I should enjoy my lawn, and I could if it wasn't for those meddling kids. Please get off my lawn.

open access to the AIs (3, Insightful)

dgp (11045) | about a month and a half ago | (#47204045)

I want to talk to these AIs myself! Give me a webpage or IRC chatroom to interact with it directly. It sounds fascinating even if it's only 'close' to passing the test.

Re:open access to the AIs (0)

Anonymous Coward | about a month and a half ago | (#47204093)

http://www.princetonai.com/bot/bot.jsp

but it's offline right now

Re:open access to the AIs (0)

Anonymous Coward | about a month and a half ago | (#47204203)

And you just gave it away. He is on the toilet right now, will be back in a bit.

Re:open access to the AIs (3, Insightful)

AthanasiusKircher (1333179) | about a month and a half ago | (#47204161)

I want to talk to these AIs myself! Give me a webpage or irc chatroom to interact with it directly.

It might be interesting, but when these things have been made available in the past, I've always been disappointed.

Example: Cleverbot [cleverbot.com] , which, as TFA notes, supposedly passed the Turing test by convincing people it was 59% human [newscientist.com] , as reported almost three years ago here [slashdot.org] .

The numbers for Cleverbot sounded a LOT better than this story, and yet -- well, chat with the damn thing for a couple minutes. See what you think. Try to "test" it with even some basic questions designed to fool an AI that even a relatively stupid 13-year-old could answer. It will fail. It comes across as an unresponsive idiot. It's only if you engage with its crap questions that it begins to seem anything like "conversation" -- if you try to get it to actually talk about ANYTHING, it will rapidly become apparent that it's useless.

I have no doubt this thing does something similar.

Re:open access to the AIs (0)

Anonymous Coward | about a month and a half ago | (#47204221)

Yeah, that thing sucks.

Hi, are you a person?
Yes. Do you like Led Zeppelin?
Why would you ask that?
Because you seem like one.

Re:open access to the AIs (1)

lister king of smeg (2481612) | about a month and a half ago | (#47204479)

How could anyone be fooled by cleverbot, let alone 59% of people?
My conversation with cleverbot:

me: What is your name.
cleverbot: Cleverbot.
me: Would you repeat that?
cleverbot: I did not ask for your name, Richard.

It lacks a consistent self-narrative, replies with nonsense, and randomly makes up a name for me. I could tell it was a computer with two queries.

Re: open access to the AIs (1)

p0p0 (1841106) | about a month and a half ago | (#47204693)

The fact it called you Richard means it was far more intelligent than we thought. Dick.

Re:open access to the AIs (1)

king neckbeard (1801738) | about a month and a half ago | (#47204571)

I always suspected that cleverbot was actually just randomly cycling me through confused humans every so often. Thus, it was more of a reverse turing test, convincing you that humans are bots.

successfully (0, Insightful)

Anonymous Coward | about a month and a half ago | (#47204053)

I have successfully written a chatbot that convinces people that it is a slime mold. It had to tell people it was a slime mold to make them do the mental gymnastics necessary to wave away all the absurd replies. But, it did manage to convince 90% of its conversational partners that it had the mental capacity of a slime mold. This is a striking success.

Re:successfully (1)

phantomfive (622387) | about a month and a half ago | (#47204145)

It's also worth mentioning that a lot of times, the way these tests are set up (with a human and a computer and the judge has to decide which), what really happens is the human manages to convince the judge that it's a computer, not the other way around.

Stupidly tricked, not clever (4, Informative)

gurps_npc (621217) | about a month and a half ago | (#47204059)

The Turing test is NOT supposed to be limited to 15 minutes, nor is it supposed to be conducted by someone who does not understand the main language claimed to be used by the computer.

Similarly, the computer must convince the judge it is a human with its full mental capacity, not a child, nor a mentally defective person, nor someone in a coma.

The test is whether a computer can, in an extended conversation, fool a competent human into thinking it is a competent human being speaking the same language, at least 50% of the time.

Re:Stupidly tricked, not clever (2, Interesting)

Trepidity (597) | about a month and a half ago | (#47204117)

Restricted Turing tests, which test only indistinguishability from humans in a more limited range of tasks, can sometimes be useful research benchmarks as well, so limiting them isn't entirely illegitimate. For example, an annual AI conference has a "Mario AI Turing test" [marioai.org] where the goal is to enter a bot that tries to play levels in a "human-like" way so that judges can't distinguish its play from humans' play, which is a harder task than just beating them (speedrunning a Mario level can be done with standard A* search, so isn't that interesting as an AI benchmark). This is useful as a benchmark for things like algorithms that try to mimic action styles in general (whether in games or elsewhere).

However it would definitely be misleading to claim passing these kinds of restricted Turing tests constitutes passing the Turing test in the sense that Turing had in mind: obviously playing Mario levels in a human-like way is not equivalent to full general intelligence, and serious researchers wouldn't claim that.
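As an aside, the "standard A* search" mentioned above is ordinary best-first search with an admissible heuristic. A generic sketch on a toy grid (the grid stands in for a level; nothing here is taken from the actual Mario AI benchmark):

```python
import heapq

# Standard A* on a toy grid: 0 = free cell, 1 = wall.
# Returns the shortest path from start to goal as a list of cells, or None.
def astar(grid, start, goal):
    def h(p):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]  # (priority, cost, cell, path)
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None
```

Finding a fastest route this way is mechanical search over states, which is the comment's point: finishing quickly is easy; playing in a human-like way is the hard part.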

Re:Stupidly tricked, not clever (1)

wisnoskij (1206448) | about a month and a half ago | (#47204715)

But this was not even a restricted test, it was a simple cop out. I could write a 500 line script that tricked people into believing it was a mentally retarded foreigner.

Re:Stupidly tricked, not clever (1)

Trepidity (597) | about a month and a half ago | (#47204797)

Or a therapist, for that matter...

Re:Stupidly tricked, not clever (1)

Sneftel (15416) | about a month and a half ago | (#47204147)

So if there were an AI system which genuinely had the intellect and communication capabilities of a 13-year-old Ukrainian boy (conversing in English), you would not consider it intelligent?

Re:Stupidly tricked, not clever (0)

Anonymous Coward | about a month and a half ago | (#47204251)

If I make a program that replies "Me no speak english" to everything does that make it intelligent?

Re:Stupidly tricked, not clever (2, Insightful)

Anonymous Coward | about a month and a half ago | (#47204261)

So if there were an AI system which genuinely had the intellect and communication capabilities of a 13-year-old Ukrainian boy (conversing in English), you would not consider it intelligent?

Not until I posed questions in Ukrainian.

Re:Stupidly tricked, not clever (0)

Anonymous Coward | about a month and a half ago | (#47204343)

No, kids are dumb. But let's see how it is when it's seven years older. If it doesn't get even dumber by the time it's 20, it's a fail.

Re:Stupidly tricked, not clever (0)

Anonymous Coward | about a month and a half ago | (#47204191)

This. So much.

It was a complete cheat.
We are still way WAY off an AI that is even remotely as good as what the turing test demands.
And even if we get it, it won't be revolutionary, the test itself is pretty meaningless with regards to how actual intelligence works.
You can make a simulation of conversation considerably easier than you can a working brain.

Re:Stupidly tricked, not clever (1)

meta-monkey (321000) | about a month and a half ago | (#47204329)

So you wouldn't be interested in testing out my new AI that simulates someone smashing their face against a keyboard?

"How are you doing today?"

"LKDLKJELIHFOIHEOI#@LIJUIGUGVPYG(U!"

"Pass!"

Re:Stupidly tricked, not clever (1)

nine-times (778537) | about a month and a half ago | (#47204355)

the computer must convince the judge it is a human with its full mental capacity,

And I'd like to suggest that this is a tricky qualifier, given the number of people reading Gawker and watching "Keeping up with the Kardashians".

No, seriously. Given some of the stupid things people say and do, it would make more sense if they were poorly written AIs.

Re:Stupidly tricked, not clever (0)

Anonymous Coward | about a month and a half ago | (#47204517)

Sometimes I think a variation on ELIZA would make a better governmental system than we have now.....

Re:Stupidly tricked, not clever (1)

HeckRuler (1369601) | about a month and a half ago | (#47204461)

The Turing test is NOT supposed to be limited to 15 minutes,

Whatever, you have to put some sort of time-limit on it just for feasibility of testing.

nor is it supposed to be conducted by someone that does not understand the main language claimed to be used by the computer.

Pft, you are not some sort of high cleric in charge of spotting bots.

Similarly, the computer must convince the judge it is a human with its full mental capacity, not a child, nor a mentally defective person, nor someone in a coma.

That's a decent point. It's certainly a valid issue to take with any bot that passes a Turing test in such a way. You could claim any blank terminal is indistinguishable from a coma patient, or that a gibberish machine is equivalent to the mentally ill.

Let's extend that. The first machines that "legitimately" pass a Turing test will not be super-insightful teaching gurus. They will not be fonts of wisdom. Just as there is a difference between a math teacher and the mentally ill, there is a difference between your typical math teacher and the likes of Einstein, Stephen Hawking, and Feynman. I suspect that AI chatbots will climb that axis incrementally, similar to how robotics has progressed.
The impact of such things is that the supply of simpletons to chat with will explode and the relative value of children and the mentally ill will plummet. And as they get better, the merely moderately intelligent will likewise plummet as only the intelligent are better than whiteCollarOfficeBotv7.4_3(noSextingMod).

The test is whether a computer can, in an extended conversation, fool a competent human into thinking it is a competent human being speaking the same language, at least 50% of the time.

There we go. That's the test right there. The "extended conversation" is still variable, and I think 15 minutes is fine. But the "competent human" is a refinement that's needed. It's implied in the original Turing test. It's also still rather subjective.

If only (1)

phantomfive (622387) | about a month and a half ago | (#47204071)

If only there were a method, where people could let others know about their findings, in enough detail so that the results could be reproduced. Just for fun, we could call this method "the scientific method."

Oh and hey, why don't we create a 'magazine,' where 'scientists' can submit their findings, that way they will be easy to find. We can call them 'scientific journals.' Extra benefit, the journals can make an attempt to filter out stuff that's not original.

Oh wait. Why didn't these guys submit to a journal? Probably because it adds nothing to what Joseph Weizenbaum did back in the 60s.

FFS Slashdot... not you too... (0)

Anonymous Coward | about a month and a half ago | (#47204081)

Stop using linkbait headlines... leave that to Gawker.

Clever? I'm sure Mr. Turing would agree that having to explain away the flaws in grammar and syntax by claiming to be a non-native English speaker fits well within his intended vision...

Program pretends to be foreign child, not adult (5, Informative)

dunkindave (1801608) | about a month and a half ago | (#47204083)

For those who haven't read the article (I read one yesterday and assume the details are the same): The program claimed to be a 13-year-old Ukrainian boy, a non-native English speaker, writing in English to English speakers. This allowed the program to avoid the problem of people using language to make judgements about whether the responses were from a person or a program. Also, since the program was claiming to be a boy instead of an adult, it greatly reduced what could be expected of the responses, again simplifying the program's parameters and reducing what the testers could use to test. So basically, the Turing Test is supposed to test whether a person can tell if the program acts like a person, but here the test was rewritten to see if the program acted like a child from a different culture who was not supposed to be speaking his native language. Many are apparently crying foul.

I personally agree.

Re:Program pretends to be foreign child, not adult (2)

iluvcapra (782887) | about a month and a half ago | (#47204159)

Foreign, no cultural context, limited language skills -- It sounds like this AI is ready to be deployed at Dell technical support. (You laugh today.)

Re:Program pretends to be foreign child, not adult (0)

Anonymous Coward | about a month and a half ago | (#47204509)

Foreign, no cultural context, limited language skills -- It sounds like this AI is ready to be deployed at Dell technical support. (You laugh today.)

Most offshore tech support can be easily replaced by a recording of "please unplug your [whatever it is the tech support is allegedly supporting] for 60 seconds to see if that fixes your problem." Apply whatever accent you think makes it sound more authentic.

The rare semi-quality tech support can usually be faked with one of those phone mazes that uses bad speech recognition instead of tone routing.

The amazingly odd offshore tech support is actually superior to the local. I had some trouble with Verizon DSL for a while that none of the US tech support would admit there being anything wrong, but the Indian tech support would run an automated line test (after a little prodding) and admit that something was wrong. Still took a month for them to fix anything, but the offshore group would actually listen.

Hmmm ... (1)

gstoddart (321705) | about a month and a half ago | (#47204085)

Maybe we need to formalize the Turing test more, to give it specific rigor?

That or come up with a whole new test ... I don't know, maybe call it the Voight-Kampff [wikipedia.org] test.

It's a Turing test if I know one of the candidates is, in fact, an AI. If you tell me it's a 13 year old, you're cheating.

given we're nerds who know this stuff... (1)

acroyear (5882) | about a month and a half ago | (#47204097)

...why didn't /. just wait for the skeptical posts calling the original news articles bullshit in the first place?

Seriously, weeding out the garbage posts, 3/4ths of the comments were calling bullshit when they saw it, and 1/4th were making pointless references to Skynet and HAL.

Re:given we're nerds who know this stuff... (0)

Anonymous Coward | about a month and a half ago | (#47204157)

"and 1/4th were making pointless references to Skynet and HAL."

And you had to pointlessly mention it.

What's the diff? (0)

Anonymous Coward | about a month and a half ago | (#47204107)

Legitimately beaten or cleverly tricked? Either one says it was beaten to me. Isn't a clever trick a legitimate way of winning? It is in real-life conflict.

Re:What's the diff? (0)

Anonymous Coward | about a month and a half ago | (#47204433)

The difference is that clever cheating results in a useless development.

Legitimately beating the Turing test would have resulted in an AI system that could replace humans in basically all service positions. It would be able to handle deep discussions on the subject matter comparably to a human who understands the material, which would mean, for example, that when you call in to complain about your computer not working and mention that all the lights are out, it can guess that you might be experiencing a power outage.

Isn't that the only way to beat it? (1)

jtownatpunk.net (245670) | about a month and a half ago | (#47204113)

That's the whole point. To cleverly trick the tester into believing something that isn't true. The test can't be beaten without clever tricking.

Re:Isn't that the only way to beat it? (1)

dunkindave (1801608) | about a month and a half ago | (#47204163)

OK, so suppose the program claims it is a mentally challenged child with poor grammar, who has lived a very sheltered life with almost no interaction with people. Now perform the same test with a real child with the same attributes. The Turing Test is supposed to be conducted where other similar conversations are also conducted, and where to pass the tester says it sounds like a real person more often than saying the same about a real person. Given these extreme and limiting conditions, would you say the test is a fair test? To me, passing such a test would have almost no meaning, and the test in the article is not much above it.

Re:Isn't that the only way to beat it? (3, Insightful)

Anonymous Coward | about a month and a half ago | (#47204185)

Actually, that's not the whole point -- it's not even the point at all, which is what most people here are pointing out.

The test CAN be beaten without clever tricking: it can be beaten with a program that actually thinks.

This was Turing's original intent. He didn't think, "I'm going to make a test to find someone who can write a program to trick everyone into thinking the program is intelligent." He thought, "I'm going to make a test to find someone who has written a program that is actually intelligent." See the difference?

The only reason we're in this stupid mess with the Turing test right now is that most laypeople (including reporters) can't see the difference between those two positions.

(posting AC because I lost my password)

Re:Isn't that the only way to beat it? (3, Insightful)

jkauzlar (596349) | about a month and a half ago | (#47204283)

This is a good point. I'm guessing every single one of the entries into these Turing test competitions since 'Eliza' has been an attempt by the programmer to trick the judges. Turing's goal, however, was that the AI itself would be doing the tricking. If the programmer is spending time thinking of how to manufacture bogus spelling errors so that the bot looks human, then I'm guessing Turing's response would be that this is missing the point.

Re:Isn't that the only way to beat it? (2)

JMZero (449047) | about a month and a half ago | (#47204289)

A legitimately intelligent computer wouldn't have to do much tricking. It'd have to lie, sure, if it was asked "are you a computer?" - but it could demonstrate its intelligence and basic world understanding without resorting to obfuscation, filibustering, and confusion. Those are "tricks".

By contrast, building a system that can associate information in ways that result in reasonable answers (eg. Darwin), is not so much a "clever trick" as a reasonable step in building an intelligent agent. Both are clever, but hardly in the same way.

The Turing test (4, Informative)

KramberryKoncerto (2552046) | about a month and a half ago | (#47204121)

... was not actually performed in the research. End of story.

Surprising responses... (0)

Anonymous Coward | about a month and a half ago | (#47204143)

Passed or tricked??? Same thing, here; that is the point. Computer tricks people.

I don't care (4, Insightful)

mbone (558574) | about a month and a half ago | (#47204197)

The first time I saw ELIZA in action, I realized that the Turing test is basically meaningless, as it fails on two fronts. We are not good judges for it, as we are hard-wired to assume intelligence behind communications, and Turing's assumption that the ability to carry on a reasonable conversation was a proof of intelligence was wrong.

This is not to fault Turing's work, as you have to start somewhere, but, really, after all of these years we should have a better test for intelligence.

Re:I don't care (4, Insightful)

AthanasiusKircher (1333179) | about a month and a half ago | (#47204657)

The first time I saw ELIZA in action, I realized that the Turing test is basically meaningless, as it fails on two fronts. We are not good judges for it, as we are hard-wired to assume intelligence behind communications, and Turing's assumption that the ability to carry on a reasonable conversation was a proof of intelligence was wrong.

But that wasn't Turing's assumption, nor was it the standard for the Turing test.

Turing assumed that a computer would be tested against a real person who was just having a normal intelligent conversation. Not a mentally retarded person, or a person who only spoke a different language, or a person trying to "trick" the interrogator into thinking he/she is a bot.

Note that Turing referred to an "interrogator" -- this was an intensive test, where the "interrogator" is familiar with the test and how it works, and is deliberately trying to ask questions to determine which is the machine and which is the person.

ELIZA only works if you respond to its stupid questions. If you actually try to get it to actually TALK about ANYTHING, you will quickly realize there's nothing there -- or perhaps that you're talking to a mentally retarded unresponsive human.

The "assumption" is NOT "the ability to carry on a reasonable conversation," but rather the ability to carry on a reasonable conversation with someone specifically trying to probe the "intelligence" while simultaneously comparing responses with a real human.

I've tried a number of chatbots over the years when these stories come out, and within 30 seconds I generally manage to get the thing to either say something ridiculous that no intelligent human would utter in response to anything I said (breaking conversational or social conventions), or the responses become so repetitive or unresponsive (e.g., just saying random things) that it's clear the "AI" is not engaging with anything I'm saying.

You're absolutely right that people can and have had meaningful "conversations" with chatbots for decades. That's NOT the standard. The standard is whether I can come up with deliberate conversational tests determined to figure out whether I'm talking to a human or computer, and then have the computer be indistinguishable from an actual intelligent human.

I've never seen any chatbot that could last 30 seconds with my questions and still seem like (even a fairly stupid) human to me -- assuming the comparison human in the test is willingly participating and just trying to answer questions normally (as Turing assumed). If somebody walked up to me in a social situation and started talking like any of the chatbots do, I'd end up walking away in frustration within a minute or two, having concluded the person is either unwilling to actually have a conversation or is mentally ill. That's obviously not what Turing meant in his "test."
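The reflection trick ELIZA relies on is easy to see in miniature. Here is a hypothetical sketch (a few illustrative rules, not Weizenbaum's actual DOCTOR script) showing both why it feels briefly convincing and why any probing question collapses it straight into a canned fallback:

```python
import re

# ELIZA-style pattern rules: match a phrase, reflect it back as a question.
# These rules are illustrative stand-ins, not the original ELIZA script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    # Swap pronouns so the user's words come back pointed at them.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza_reply(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(reflect(m.group(1)))
    # No rule matched: the canned fallback is where the illusion collapses.
    return "Please go on."

print(eliza_reply("I feel that my code hates me"))  # reflective, seems engaged
print(eliza_reply("Name three US presidents."))     # canned deflection
```

Feed it feelings and it mirrors them back plausibly; ask it anything with actual content, as an interrogator would, and every reply is the same deflection.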

Imagine a similar test for a prosthetic leg.. (1)

JMZero (449047) | about a month and a half ago | (#47204229)

Maybe you design an obstacle course that required the leg to function in a range of everyday scenarios, that tests its endurance, comfort, and flexibility.

These chat bots would be the equivalent of calling a helicopter a "prosthetic leg" and flying over the course.

In both cases, they're avoiding the meat of the challenge. Yes, arriving at the finish line is the goal, but it's how you got there that is the interesting part. That's not to say these are useless projects - they're fun, and there's some legitimately interesting stuff there. But it'll be a very different beast that truly passes the Turing Test.

Kevin Warwick (2, Insightful)

Anonymous Coward | about a month and a half ago | (#47204291)

Kevin Warwick is a narcissistic, publicity seeking shitcock.

of course you can rig it (1)

slashmydots (2189826) | about a month and a half ago | (#47204337)

Make him speak Spanish and make sure all the judges only speak Norwegian. See, you can cheat it. But anyway, they should be disqualified for the age. 13-year-olds are predictable, quite dumb, and easy to imitate. To be more scientific: between approximately ages 10 and 18, your brain doubles in overall processing power, and in the middle your frontal lobe can't process logical decisions very well. That's quite a cover story for an AI to pretend to be a human.

Chatbot transcript (2)

MobyDisk (75490) | about a month and a half ago | (#47204383)

I created a chat bot that emulates a 65-year-old grocery store clerk who speaks perfect English. Here is a sample transcript:

Tester: Hello, and welcome to the Turing test!
Bot: Hey, gimme one sec. I gotta pee really bad. BRB.
.
.
.
Tester: You back yet?
.
.
.
Tester: Hello?
.
.
.

Re:Chatbot transcript (1)

alphatel (1450715) | about a month and a half ago | (#47204545)

I created a chat bot that emulates a 65-year-old grocery store clerk who speaks perfect English. Here is a sample transcript:

Tester: Hello, and welcome to the Turing test!
Bot: Hey, gimme one sec. I gotta pee really bad. BRB.
.
.
.
Tester: You back yet?
.
.
.
Tester: Hello?
.
.
.

Profit?

Maybe... (0)

Anonymous Coward | about a month and a half ago | (#47204403)

Maybe Kevin Warwick himself is a bot, and this is a cleverly-designed incarnation of the Turing test to determine whether or not we realize it. If we do, it doesn't pass the test, and there's nothing to worry about. If we don't, the AI revolution is nigh, and we're all doomed.

Turing test not very good definition (0)

Anonymous Coward | about a month and a half ago | (#47204417)

The Turing test is not a very good 'test' in the first place.
Use the 'convergence' model instead: the system has to be trained to think like a 'mind' and subsequently behave like one. That is a superset of the Turing test's functionality and much more accurate.

F&%ken CS people (0)

Anonymous Coward | about a month and a half ago | (#47204423)

Holy shit, it is amazing the ignorance of computer science people as to what the Turing Test is and is not, and the 1,000+ years of philosophy of mind, language, epistemology, AI, linguistics, neurology, and so on that it is based on. Intelligent life in the computer science departments of any sort would be a nice start. The AI fantasy circle jerk seems to be a lot more fun.

I'll just take a couple of the more important ones for the moment:
1) The REAL Turing test never, ever ends. It cannot be beaten. Just as humans can be said to be "intelligent" only until we do something stupid (correct or mistaken), or just die, the computer must go on convincing the interrogators that it is "intelligent" (i.e., that it is a human) forever, until it does something wrong and fails to convince.

2) The use of "language" IS the test!!! There is no "tricking", because "tricking" a human in language is the trick. I'll sum this one up with one simple question. Try having a thought outside a language (not to say it is impossible, just that no one is sure how that would work). Now, if you manage that, try expressing it outside of a language so it can be evaluated. Now imagine building a computer to be "artificially" "intelligent" without a language. Even if there were some form of intelligence not based in language (by the way, not just talking about human language), how would you test it? How would that computer be "correct" or "mistaken"?

Thus, making the test about a linguistically challenged child is not taking the Turing test in the full-throated sense of the Turing test. In fact, it may not even qualify as a Turing Test lite.

You all need to quit wasting time and money randomly plugging in wires, and wander over to your local Philosophy departments to find out WHAT THE FUCK YOU ARE BUILDING (OR NOT)!!!!

Nothing in the history of man has had so many resources pissed away trying to build something WHEN WE DON'T EVEN KNOW WHAT IT IS WE ARE TRYING TO BUILD.

Turing himself, having come from an age where people got a bit more of a rounded education, would, I am sure, understand all of the above.

I know, I know. I post something like this every time Slashdot has a stupid "AI has been discovered" article. Every time, I get a pile of posts from all the people upset that their fantasy masturbation circle jerk might not be real. As you were.

Slippery Slope (1)

Prien715 (251944) | about a month and a half ago | (#47204475)

As the saying goes, "haters gonna hate," but really, it's a big accomplishment. To pass the Turing test, you'd need to choose some "identity" for your AI. The idea of using a kid with limited cognitive skills was clever, but not cheating -- though it's also not simulating a professor. If there is truly intelligent AI in the future, it's reasonable to expect its evolution to start with easier people to emulate before trying harder ones.

Re:Slippery Slope (0)

Anonymous Coward | about a month and a half ago | (#47204725)

It is not a big accomplishment, not even close. I've chatted with the bot and it isn't even close to other AIs out there, which themselves suck. I cannot imagine this thing fooling anyone after even three back-and-forths in the dialog, much less a full conversation.

Rigging the game is NOT a step forward in AI but perhaps is in social engineering.

To paraphrase Lincoln... (1)

jurgen (14843) | about a month and a half ago | (#47204483)

You can fool all the people some of the time, and some of the people all the time, but you haven't /really/ passed the Turing test until you can fool all of the people all of the time.

No really... Eliza fooled some of the people back in 1966. There is nothing really new to see here, move right along.

Go try the bot yourself. (0)

Anonymous Coward | about a month and a half ago | (#47204489)

http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/

Seriously, type with this thing for more than 5 phrases and tell me that this thing would even fool your grandma.

It reminds me of every ALICE bot I've ever seen on IRC, and I have a sneaking suspicion that its code is at most slightly modified from the ALICE bots, as it told me that it has a "Celeron 667" that is "nice" that it "plays games with", setting its likely date of origin somewhere around 1999/2000.

It does get partial extra credit, however, for attempting to convince me that I'm a computer. [xkcd.com]

Loebner prize anyone? (1)

Anonymous Coward | about a month and a half ago | (#47204491)

Let's see the bot first win the Loebner Prize. The "test" it won seems a little focused on free advertising for the research group.
The Loebner Prize, despite not being a true Turing test, is a long-established competition with clear rules and an evaluation process.

Answer to question in title: no. (0)

Anonymous Coward | about a month and a half ago | (#47204505)

Not beaten, and not cleverly anything. Warwick is a twit.

I need your help getting money out of Africa! (0)

Anonymous Coward | about a month and a half ago | (#47204507)

So does this mean that every bot that sends phishing scams and achieves some success passes the Turing test?

Warwick (1)

AdamWill (604569) | about a month and a half ago | (#47204621)

"Kevin Warwick gives the bot a thumbs up"

That's a point *against*, not a point in favour.

Adam's Law of British Technology Self-Publicists: if the name "Sharkey" is attached, be suspicious. If the name "Warwick" is attached, be very suspicious. If both "Sharkey" and "Warwick" are attached, run like hell.

Meeting strangers (0)

Anonymous Coward | about a month and a half ago | (#47204699)

I don't talk much, but I watch people a lot. I find it's easiest to truly find out about them when they're in difficult or novel situations. In games, for instance, this is the only way to get loads of information fast about others. Make dirty jokes, get political, be insulting with some and defensive with others, and you'll quickly find out a lot about them.
The reason they chose a 13-year-old boy was that you couldn't ask about politics, sex, global issues and other things that transcend national barriers.
This test, if it ever held any meaning, is pretty much a joke now. Our understanding of what a true AI implies has grown and changed all this time; the test ... not so much.

A much, much better test... (1)

tekrat (242117) | about a month and a half ago | (#47204793)

Would be to get two bots to talk to each other and see where the conversation goes after two minutes -- my guess is that all the code is biased towards tricking actual people in a one-on-one "conversation".

But when a machine converses with another machine, all that code no longer has an effect, and pretty soon the two machines will be essentially babbling *at* each other without actually having a conversation. An outside observer will immediately recognize that both of them are machines.
