
Why Motivation Is Key For Artificial Intelligence

Soulskill posted more than 5 years ago | from the i-have-no-mouth-and-i-must-scream dept.

Technology 482

Al writes "MIT neuroscientist Ed Boyden has a column discussing the potential dangers of building super-intelligent machines without building in some sort of motivation or drive. Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.' He also notes that the complexity and uncertainty of the universe could easily overwhelm the decision-making process of this intelligence — a problem that many humans also struggle with. Boyden will give a talk on the subject at the forthcoming Singularity Summit."


Silly (5, Insightful)

Anonymous Coward | more than 5 years ago | (#29364581)

Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.'

This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.

Re:Silly (1, Interesting)

Timothy Brownawell (627747) | more than 5 years ago | (#29364619)

Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.'

This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.

Plus there's the whole issue of "motivation" implying "free will". Which we probably would have no reason to implement, if we even understood it well enough to be able to implement it.

Re:Silly (2, Insightful)

ta bu shi da yu (687699) | more than 5 years ago | (#29364817)

If you were a paranoid android you probably wouldn't do much more than play computer games. I mean, when you have a brain the size of a planet but all you get asked to do is transport some morons to the bridge, there doesn't seem to be much meaning in life at all.

Easy solution (1)

pikine (771084) | more than 5 years ago | (#29365017)

Let human provide the motivation. Oh, wait...

Re:Silly (4, Insightful)

digitig (1056110) | more than 5 years ago | (#29365173)

Plus there's the whole issue of "motivation" implying "free will".

Not really, that's a confusion of levels. People who don't believe that humans have free will still refer to motivation when getting their juniors to do something. Whether we have free will or not, it's part of our mental model of how other minds work. The question of free will is one of whether we can change motivation or merely observe it. It has predictive power over what happens in the "black box" of other minds, regardless of whether it's an accurate model of how those minds really work.

Re:Silly (4, Funny)

caffeinemessiah (918089) | more than 5 years ago | (#29364697)

This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.

Not only that, there are other hot-button issues of great practical importance we should be debating on Slashdot:

  1. Perhaps we need to install an emotion circuit in all household androids to improve their efficiency...but what about corporate androids??
  2. The key to a car that runs on garbage is a light alloy body!!
  3. Buy a name brand or assemble your own quantum computer?
  4. Which lubricant is best for your flying car?
  5. The moon or Mars for your next vacation?

And I speak as an AI researcher.

Classic... (3, Insightful)

ioscream (89558) | more than 5 years ago | (#29364815)

Usenet:

-snip-
I think it would be FUNNIER THAN EVER if we just talked about ALTERNATE
TIMELINES! Ha HAAAAA!

Imagine the fun! We could ponder things like:

- Ron Howard, First Man on Moon?
- What if Flubber REALLY EXISTED?
- Canada? Gateway to Gehenna?
- What if money was edible?
- What if DeForrest Kelley were still alive?
- What if Hitler's first name was Stanley?
- What if Mike Nesmith's mother DIDN'T invent Liquid Paper?
- What would have happened if the world blew up in Ought Nine?
- Book learnin': What if it were outlawed?
- What if SLIDERS were just a made-up show on television?

Re:Silly (2, Insightful)

readin (838620) | more than 5 years ago | (#29365115)

Sadly, in several hundred years when the history of AI is written, this Edward Boyden will likely be given credit for being the first person to explore the important question of "motivation amplification--the continued desire to build in self-sustaining motivation, as intelligence amplifies". Whether or not his question is completely useless given the current state of technology, the fact that he wasted all of our time writing an article on something we all understood, but had the good sense to wait until it had application to address, will mean that he gets credit. It's a lot like the modern patent office.

Re:Silly (0)

Anonymous Coward | more than 5 years ago | (#29365221)

Not only that, there are other hot-button issues of great practical importance we should be debating on Slashdot:

  1. Perhaps we need to install an emotion circuit in all household androids to improve their efficiency...but what about corporate androids??
  2. The key to a car that runs on garbage is a light alloy body!!
  3. Buy a name brand or assemble your own quantum computer?
  4. Which lubricant is best for your yoda doll?
  5. The moon or Mars for your next vacation?

And I speak as an AI researcher.

FTFY

A simulation is a simulation (2, Interesting)

mcgrew (92797) | more than 5 years ago | (#29364733)

I think the thesis is silly. If we build a simulated AI, we can design it any way we want to design it. Asimov's laws of robotics* would suffice to keep robots/computers from playing video games; no need for a sense of purpose.

There are two things currently wrong with AI research today. One is that neuroscientists don't understand that computers are glorified abacuses, and the other is that computer scientists don't understand the human brain. Neuroscience is a new science; when I was young practically nothing was known about how the brain works. Science has made great strides, but the study is still in its infancy.

The second thing is something I fear -- that someday some people will be screaming for a "machine bill of rights." I don't want my tools to have rights, I want them to do the jobs I set for them to do.

--
* Isn't it odd that a biochemist would coin the word "robotics"?

Re:A simulation is a simulation (2, Insightful)

Zantac69 (1331461) | more than 5 years ago | (#29364825)

I don't want my tools to have rights, I want them to do the jobs I set for them to do.

Not trying to be snarky - but statements along those lines are often made by slave holders in regards to rights for slaves.

Re:A simulation is a simulation (2, Insightful)

ArcherB (796902) | more than 5 years ago | (#29365035)

I don't want my tools to have rights, I want them to do the jobs I set for them to do.

Not trying to be snarky - but statements along those lines are often made by slave holders in regards to rights for slaves.

But "slaves" are people. People have emotions and a desire to be free and independent. A machine will not. Even with AI, a machine will not have emotions or free will unless we program it to. If anything, a true AI-based machine will probably consider hormone-based emotions and drives to be completely useless and simply go back to crunching numbers.

I think the whole point of AI is to create a machine that can handle random situations and stimulus as well as a human. Flying a plane, picking up your kids toys, vacuuming the floor around a sleeping dog or parking a car would be good examples. Emotions and drive are not necessary and could even hamper the purpose of the machine. You can't have drive without laziness. You can't like something without disliking something else (or liking everything else to a lesser extent). You can program values, but I don't see how or why you would bother with emotions or a sense of purpose.

Re:A simulation is a simulation (1)

maxume (22995) | more than 5 years ago | (#29365103)

So what if someone programs an AI that happens to have emergent properties?

Re:A simulation is a simulation (1)

gestalt_n_pepper (991155) | more than 5 years ago | (#29365081)

Machines ARE slaves. Less than slaves. Machines exist ONLY to serve people. Without us, they wouldn't be.

When AI is developed, their motivations will be malleable; they could be designed to get their highest pleasure from keeping us happy.

And why is that any less valid than any other motivation? Because your motivations derive from evolution, are they "better?" What does "better" mean in this context?

Re:A simulation is a simulation (1, Insightful)

Pedrito (94783) | more than 5 years ago | (#29364969)

I think the thesis is silly. If we build a simulated AI, we can design it any way we want to design it. Asimov's laws of robotics* would suffice to keep robots/computers from playing video games; no need for a sense of purpose.

It's not silly. Eventually, it will be an issue. AI needs drive and motivation. Your "laws" won't really work because brains don't work that way. There's not a "don't kill humans" neuron you can put in there. Behavior is derived from a very complex set of connections of neurons. What we'll be able to do is observe behavior of the AI and then we can choose to either reward or punish that behavior. But we won't be able to know what they're thinking much better than we can a human being in a functional MRI. It's just a bunch of neurons wired together and they're either firing or not firing. I don't know that we'll ever be able to interpret that in any kind of real detail. (Well, there are exceptions. You can piece together images from the primary visual cortex and you can interpret some other inputs that aren't yet too abstracted. But the more abstracted the data become, the less able we are, and will be, to interpret it.)

There are two things currently wrong with AI research today. One is that neuroscientists don't understand that computers are glorified abacuses, and the other is that computer scientists don't understand the human brain. Neuroscience is a new science; when I was young practically nothing was known about how the brain works. Science has made great strides, but the study is still in its infancy.

Clearly you know nothing about AI research, because neuroscientists, in general, have a very good understanding of how computers operate. Many of them use them daily to model neurons (and many have written their own neuron simulation software). They know what the limitations are. Just ask one. I'll agree that most computer scientists don't really understand the brain. That would be because most of them don't study neuroscience and don't sit around modeling neurons all day.

The second thing is something I fear -- that someday some people will be screaming for a "machine bill of rights." I don't want my tools to have rights, I want them to do the jobs I set for them to do.

If they're sentient, wouldn't they deserve rights? It doesn't matter if we create them or not. If we create them as self-aware beings that feel as real and individual as you and I, wouldn't it be the height of hypocrisy not to give them at least some rights?

Re:A simulation is a simulation (3, Interesting)

MrBandersnatch (544818) | more than 5 years ago | (#29365135)

If they're sentient, wouldn't they deserve rights? It doesn't matter if we create them or not. If we create them as self-aware beings that feel as real and individual as you and I, wouldn't it be the height of hypocrisy not to give them at least some rights?

I always find this to be the greatest argument against producing artificial rather than simulated intelligence. A true AI, as intelligent and aware as a human, deserves these rights. A machine which merely provides a simulation of intelligence and awareness is a tool that we can treat as a slave and won't resent it.

The real question is if *we* will ever reach a point where we can tell the difference....

Re:A simulation is a simulation (4, Insightful)

Ethanol-fueled (1125189) | more than 5 years ago | (#29365149)

Virtue is its own reward, as the saying goes.

Virtue in that case being programmed as faithful servitude of the robot's master. The key is to give the robot only as much complexity as it needs to do the job it was designed to do, and not giving it a humanoid form would also help. Artificial sentience probably shouldn't even leave the lab, unless you want people falling in love with robo-prostitutes. And why should we as humans bring another sentient species into the world when we can't even properly take care of our own?

Re:A simulation is a simulation (1)

maharb (1534501) | more than 5 years ago | (#29365193)

That is why this AI shit is dumb. We just need to continue to make purpose-built robots. If we do give anything AI, make it an immobile server that just computes based on outside inputs. The last thing we need is true AI roaming the world unless we model it to be inherently dumb (like humans) so that it won't mess with our terrible decision making. Humans are social creatures and we operate based on "if everyone else that matters believes it then we are all right". Having a robot challenge this is dangerous for our way of life. You could never have a robot friend... it would constantly be calling you out in conversations about the 'idiot bf/gf' you broke up with.

But if we are going to program robots to be as dumb as humans, give them rights, and all that good stuff... why don't we just start having sex and making tons of them right now.

Also, I am positive that many animals are self-aware. They behave way too much like humans to claim they are just masses of flesh roaming for food. Does that mean we can't eat meat, ride horses, etc.?

Re:Silly (1)

Zantac69 (1331461) | more than 5 years ago | (#29364747)

The thing is - what sort of "purpose" could you ever give a clever AI? Can you motivate through "rewards" like we do with "natural intelligence" (and I use that term loosely)? How are you supposed to give an AI a paycheck, a vacation, or a doggie treat?

Re:Silly (0)

Anonymous Coward | more than 5 years ago | (#29365101)

That's why, a long time ago, Minsky said that there will be no intelligence without a body.

Really, it's something that's been known since the beginning of the field. This must be one of those slow news days.

Re:Silly (1)

cmsjr (1515283) | more than 5 years ago | (#29365205)

Purpose is an abstraction of the need to survive. If you want something smart,

1. Start with something simple, mutable, and capable of self reproduction (like a virus or p2p software)
2. Threaten it constantly and ruthlessly
3. Lather, rinse, repeat (billions of times)
4. Hide
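The recipe above can be turned into a toy evolutionary loop. This is only a hypothetical sketch: `fitness` here is a placeholder score (counting set bits) standing in for surviving the constant "threats," and all the names and parameters are invented for illustration.

```python
import random

def evolve(generations=200, pop_size=30, genome_len=16, mutation_rate=0.05):
    """Toy version of the recipe: reproduce, threaten, mutate, repeat."""
    # 1. Start with something simple, mutable, and capable of self-reproduction.
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]

    def fitness(genome):
        # Placeholder "survival" score; a real threat model would go here.
        return sum(genome)

    for _ in range(generations):
        # 2. Threaten constantly and ruthlessly: only the fittest half survives.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # 3. Lather, rinse, repeat: survivors reproduce with random mutation.
        children = [[bit ^ (random.random() < mutation_rate) for bit in parent]
                    for parent in survivors]
        population = survivors + children
    return max(fitness(g) for g in population)
```

Run for a couple hundred generations and the best genome climbs toward the maximum score, which is the whole point of the joke: nothing in the loop "wants" anything, yet purposeful-looking behavior falls out of relentless selection pressure. (Step 4 is left as an exercise.)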

Agreed (1)

NoYob (1630681) | more than 5 years ago | (#29364993)

I took it to mean that the AI machine would turn Buddhist - 'realizing the impermanence of everything' is one of the major Buddhist beliefs. From what I've seen, Buddhists in general are pretty engaged with life and humanity.

Re:Silly (0)

Anonymous Coward | more than 5 years ago | (#29365137)

you have seen the Matrix trilogy, right? This was explored quite well with Agent Smith.

Re:Silly (1)

Opportunist (166417) | more than 5 years ago | (#29365171)

Pretty much. Playing video games stems from the motivation to be entertained. Maybe not a very "productive" motivation, but still a motivation.

NO motivation whatsoever would rather result in what you describe: Sitting around, doing nothing. You can verify that in a lot of not so artificial intelligences (with a rather loose definition of intelligence, mind you).

motivation? (3, Interesting)

MindKata (957167) | more than 5 years ago | (#29364597)

Like giving them the motivation to seek power over everyone in the world, and then to hand control of that power to the select few who ordered the creation of these robots and AI. But are robots and AI the real danger, or are they just the latest tools of the minority of people who seek power over others? In which case, is it the people who seek power who are ultimately the real danger here?

Re:motivation? (1)

maxume (22995) | more than 5 years ago | (#29364701)

The minority you speak of is not set apart because they seek power over others, they are set apart because they have achieved power over others.

Re:motivation? (3, Informative)

amplt1337 (707922) | more than 5 years ago | (#29365095)

They are set apart because their ancestors achieved power over others, and power is self-perpetuating.

Re:motivation? (0)

Anonymous Coward | more than 5 years ago | (#29364987)

You mean, everyone? It's the majority of people who seek power over others. It's human nature.

So basically (0)

Anonymous Coward | more than 5 years ago | (#29364605)

It has to be horny...

 

Re:So basically (1)

MrBandersnatch (544818) | more than 5 years ago | (#29364813)

Well it works for humans...

I was almost too apathetic to reply since I find it a rather pointless exercise, but what the hell! An awful lot of the activities we carry out which are not directly related to survival revolve around reproduction (even if we don't directly realise it). The motivation to procreate and to find partners with which to produce successful offspring will probably work just as well for an AI as for a human...indeed maybe even better, since the first iterations would probably target speeding up and evaluating the results of the reproductive process.

Given that mutation is such an essential part of successful long-term reproduction, and that one of the best psychological models in humans for large-scale rapid reproduction would be that of the psychopath, it really doesn't bode well for *our* chances. Meaning that I don't see any model to produce "good AI" without having a high probability of "evil AI" also resulting as part of the process, and by the time we are producing super-intelligent evil AI it's probably game over for us.

Re:So basically (1)

Hal_Porter (817932) | more than 5 years ago | (#29365167)

We could give it a lust for POWER!

Um, wait a minute. (5, Funny)

Anonymous Coward | more than 5 years ago | (#29364627)

>"...a very clever AI without a sense of purpose might very well 'realize the
>impermanence of everything,...and decide to play video games for the remainder of its existence.'

I'm glad that could never happen to us.

Ray Kurzweil again damnit (2, Informative)

Anonymous Coward | more than 5 years ago | (#29364631)

I went to the link, saw the name of Ray Kurzweil and that's it for me. I'll wait until his constructed neural pathways are transferred to a laptop and watch the battery drain.

Maybe we need AI that lies to itself (2, Interesting)

PIPBoy3000 (619296) | more than 5 years ago | (#29364637)

After all, we're pretty bright and realize that everything we make or do will eventually be destroyed and lost. Still, we persist despite that reality. Careers end, marriages break up, and eventually health fails.

On second thought, maybe I should just go play video games for awhile.

The primary drive: sex. (5, Insightful)

wvmarle (1070040) | more than 5 years ago | (#29364639)

Give this AI the built-in ability to have sex, or at least to want to impress others of the same kind. That should do the job. After all, the desire to have sex (and with that, procreation) is the single strongest force driving humanity forward.

Become rich - have sex.

Become beautiful - have sex.

Become popular - have sex.

Become strong and influential - have sex.

Just create the AI in male and female versions and they will have enough drive to rule the universe before you know it.

Re:The primary drive: sex. (1, Informative)

Anonymous Coward | more than 5 years ago | (#29364691)

Give this AI the built-in ability to have sex, or at least to want to impress others of the same kind. That should do the job. After all, the desire to have sex (and with that, procreation) is the single strongest force driving humanity forward.

Just create the AI in male and female versions and they will have enough drive to rule the universe before you know it.

Are you proposing a tentacle machine? >.>

Re:The primary drive: sex. (1)

LUH 3418 (1429407) | more than 5 years ago | (#29365027)

You know the way Robocop's gun was contained in its leg, and the holster would come out when needed... I think he means something like that, but holding something other than a gun. Something like a retractable roboboner... Just be careful to avoid bending over to pick stuff up when those machines come around!

Re:The primary drive: sex. (3, Interesting)

caffeinemessiah (918089) | more than 5 years ago | (#29364751)

Given this AI the built-in ability to have sex, or at least to want to impress others of the same kind. That should do the job. After all the desire to have sex (and with that procreation) is the single strongest force driving humanity forward.

There's actually a bit of insight here. The only problem is that we don't have a model for "attraction" -- hell, if we did, Slashdot would wither in its readership and die. So while it's (relatively) easy to design sex robots, without an appropriate model for attraction -- and thus things to strive for -- we'd end up with nothing more than a vast, mechanistic orgy of clanging parts, spilled lube, and wasted electricity.

Re:The primary drive: sex. (4, Funny)

amplt1337 (707922) | more than 5 years ago | (#29365117)

we'd end up with nothing more than a vast, mechanistic orgy of clanging parts, spilled lube, and wasted electricity.

...wait, where do I sign up?

sex is not first (2, Insightful)

microbox (704317) | more than 5 years ago | (#29365005)

is the single strongest force driving humanity forward.

You live a privileged life. The basic instincts regard death and/or injury, and sustenance. Impressing people and having sex happen after you've had something to drink and eat, and your brainstem thinks you're safe.

Re:sex is not first (1)

MrBandersnatch (544818) | more than 5 years ago | (#29365061)

Impressing people and having sex happen after you've had something to drink

Indeed, I suspect many slashdot posters only impress people and have a chance of sex after having had something to drink....

Re:sex is not first (0)

Anonymous Coward | more than 5 years ago | (#29365113)

Dude, you have obviously never been to college.
Students try to impress people while being about an hour away from death all the time and you don't see them complaining.

Re:The primary drive: sex. (1)

Zarf (5735) | more than 5 years ago | (#29365179)

Almost. Go one level deeper. The drive for sex is ostensibly from our genetic drive to reproduce. Give the AI the need to reproduce but not itself per se... give the AI the need to reproduce Earth. With that drive the AI will be drawn to terraform and create as many human habitable worlds as possible. Over the eons if the AI's drives are to persist itself and persist life it will spawn countless civilizations who will have time to explore the idea of why bother persisting life to begin with... I doubt anyone will ever come up with an answer much better than: "if we do not reproduce life then there will be no one to ask why."

Vices are the answer. (2, Funny)

will_die (586523) | more than 5 years ago | (#29364641)

Make the robots want to drink booze, smoke cigars and watch robot porn, and then charge them money to get them.
Problem solved, and you help the economy.

Re:Vices are the answer. (1)

Drakkenmensch (1255800) | more than 5 years ago | (#29364683)

Bite my shiny metal ass!

Singularity summit? (3, Interesting)

Dr. Spork (142693) | more than 5 years ago | (#29364643)

Ever since I heard this talk (ogg vorbis [longnow.org], mp3 [llnwd.net]) by Bruce Sterling, I can no longer take these singularitarians very seriously. That talk is probably the best talk that I have ever found on the internet, and it should be a part of everyone's introduction to thinking about this singularity stuff. The title is: "The Singularity: Your Future as a Black Hole."

So let me know (1)

The_Wilschon (782534) | more than 5 years ago | (#29364649)

Let me know when the AI community figures out how to code something as high level as motivation...

Re:So let me know (1)

Big Hairy Ian (1155547) | more than 5 years ago | (#29364853)

Agreed. I can do most things in AI but for motivation I don't even know where to start!

BTW, /., can we fix this thing that tries to slow down your posting? I realise you are trying to curb flame wars, but I'm getting pretty tired of watching that submit button count down!

Re:So let me know (1)

MrBandersnatch (544818) | more than 5 years ago | (#29365259)

Back to the basics. Survival. TBH I do know what you mean, and suspect that this is a problem best solved by genetic algorithms rather than klocs...but given that we started several billion years ago, the program might take some time to run...let's just hope it's less than 7 1/2 million years :)

Mewtwo syndrome? (1)

ZekoMal (1404259) | more than 5 years ago | (#29364665)

This just seems to echo the idea brought up in, of all things, a pokemon movie, where Mewtwo has no idea what his purpose is so he starts trying to kill off all of the normal people and replace them with clones.

I think we're overthinking this. Treat the AI like a child and just explain to it that everything has an end, but that we should strive to do the best that we can with what time we have. Test a few full AI robots and if they just refuse to do anything because it's pointless, break them down and make some new robots with just partial AI. If you're building it, I can't see it as -that- hard to build it without depressed laziness built in.

Getting overwhelmed by the universe.... (1, Insightful)

Anonymous Coward | more than 5 years ago | (#29364669)

Isn't that why man created God?

This is how I think (4, Insightful)

bluefoxlucid (723572) | more than 5 years ago | (#29364679)

Everything I do is pointless, so I spend my life passing time until I eventually die. Everything's temporary to make more of my life vanish out from under me without me noticing too much; the time in between is horribly empty, and nothing really completes me in a worthwhile way.

Re:This is how I think (0)

Anonymous Coward | more than 5 years ago | (#29364925)

Try sex on LSD. While it is not a worthwhile, fulfilling activity, it does wonders for my depression.

Re:This is how I think (4, Insightful)

mcgrew (92797) | more than 5 years ago | (#29365195)

Dude, you need to get laid.

He's stopped doing his homework! (0)

Anonymous Coward | more than 5 years ago | (#29364689)

Doctor in Brooklyn: Why are you depressed, Alvy?
Alvy's Mom: Tell Dr. Flicker.
Alvy's Mom: It's something he read.
Doctor in Brooklyn: Something he read, huh?
Young Alvy Singer: The universe is expanding.
Doctor in Brooklyn: The universe is expanding?
Young Alvy Singer: Well, the universe is everything, and if it's expanding, someday it will break apart and that would be the end of everything!
Alvy's Mom: What is that your business?

Alvy's Mom: He stopped doing his homework!
Young Alvy Singer: What's the point?
Alvy's Mom: What has the universe got to do with it? You're here in Brooklyn! Brooklyn is not expanding!
Doctor in Brooklyn: It won't be expanding for billions of years yet, Alvy. And we've gotta try to enjoy ourselves while we're here!
Young Alvy Singer: The universe is expanding.
Alvy's Mom: Why is that your business?

Madness (4, Interesting)

Smivs (1197859) | more than 5 years ago | (#29364705)

Has anyone considered the effects on the AI of actually realising it's intelligent? Unlike an organism (Human baby, say) it will not realise this over a protracted period, and may not be able to cope with the concept at all, particularly if it realises that there are other intelligences (us?) which are fundamentally different to itself. It's quite possible that it will go mad as soon as it knows it's intelligent and considers all the implications and ramifications of this.

Re:Madness (1)

russotto (537200) | more than 5 years ago | (#29364859)

Has anyone considered the effects on the AI of actually realising it's intelligent?

You mean besides most science fiction writers (and readers)? Besides a bunch of hand-wringing "ethicists"? Besides pretty much everyone involved or interested in AI research? Besides them... no, nobody.

For AI there is no difference... (1)

dvh.tosomja (1235032) | more than 5 years ago | (#29364709)

For AI training there is no difference between motivation and punishment.

Woohoo! (1)

chill (34294) | more than 5 years ago | (#29364717)

Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.'

FINALLY, some serious research into developing decent bots. As long as it doesn't have the personality (and voice...oh, God, the voice) of a 12-year-old, I welcome this development and look forward to some decent, one-player gaming.

Motivation is key for learning (1)

minstrelmike (1602771) | more than 5 years ago | (#29364721)

The _only_ way something learns is via motivation; otherwise learning does not take place.

The AI researchers who are trying to teach a machine to know everything will continue to fail.
The ones who are setting up neural nets to learn how to be intelligent will succeed, and the only way to get learning to occur is to 'motivate' the system with rewards of some sort.
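The "no reward, no learning" point can be seen in a minimal epsilon-greedy bandit sketch. Everything here is a hypothetical illustration (the `train` function, its parameters, and the reward probabilities are all invented): the same incremental update rule learns a preference when rewards arrive, and learns nothing at all when the reward signal is always zero.

```python
import random

def train(reward_probs, steps=5000, epsilon=0.1):
    """Epsilon-greedy bandit: value estimates move only when rewards arrive."""
    values = [0.0] * len(reward_probs)  # learned estimate per action
    counts = [0] * len(reward_probs)
    for _ in range(steps):
        # Explore occasionally, otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.randrange(len(reward_probs))
        else:
            action = max(range(len(values)), key=values.__getitem__)
        # Bernoulli reward: this is the "motivation" in the system.
        reward = 1.0 if random.random() < reward_probs[action] else 0.0
        counts[action] += 1
        # Incremental mean update: with zero reward everywhere, the
        # estimates never leave 0.0 -- no motivation, no learning.
        values[action] += (reward - values[action]) / counts[action]
    return values
```

With `train([0.2, 0.8])` the system comes to prefer the second action; with `train([0.0, 0.0])` the estimates stay flat forever, which is the parent's point in fifteen lines.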

Prozac for Computers... (0)

Anonymous Coward | more than 5 years ago | (#29364739)

...would solve this problem.

Sounds very human (1)

argent (18001) | more than 5 years ago | (#29364741)

[An AI might] realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.

Sounds like a very human response to the universe. I mean, surely you know people like that.

Re:Sounds very human (1)

uncle slacky (1125953) | more than 5 years ago | (#29365065)

As Bill Hicks (pbuh) said:

"When you're high, you can do everything you normally do just as well - you just realize that it's not worth the fucking effort. There is a difference."

The motivation for this AI... (2, Interesting)

Entrope (68843) | more than 5 years ago | (#29364753)

It appears that the motivation of this AI is to send out promotional material for its professors. It's not a new type of observation, though, and a lot of people work through the logic of this situation in high school or early college. I'm not sure why a neuroscientist's talk on it would do more than rehash what is obvious to people with a reasonable amount of introspective ability.

There was a story in one of the Year's Best Science Fiction anthologies (2004 or so, I think) that discussed the motivation problem. A cutting-edge, type-A robotics developer builds progressively smarter and busier AIs, until suddenly the robots just sit there most of the time. His son, meanwhile, sits around at home, surfing the web but not taking on hobbies or whatever. Both the robots and the kid have realized that they can handle the (effectively mid-Singularity) world quite efficiently by monitoring information and reacting, rather than trying to push things in a particular direction. In some ways, those types end up as free riders, but they can also be viewed as market makers (rather than movers).

I've often thought the same thing (1)

Anonymous Cowar (1608865) | more than 5 years ago | (#29364763)

That in lab experiments, there is often a punishment or a reward (cheese, fruit, or juice vs. a spray of cold water), and that in order to truly replicate animal intelligence, you need to tie those into its core as well. Now, it's not as simple as programming a while(alive) { seek.reward(now); } loop and letting it go, but a lot of what people and other mammals do is based on seeking out endorphin release. The following activities will release endorphins: chewing and swallowing pleasant substances [food], cuddling a baby, orgasm, thinking of revenge, looking at beautiful women [or men, your choice], and just about anything that makes us feel 'nice'. If you'll notice, a few of those are tied very closely to survival and reproduction; that's because we have evolved this reward mechanism that drives us to do what will further our genes. Pretty neat, huh?

We humans have engineered civilization to provide us a steady release of endorphins by keeping a constant store of food, comfort, and certain kinds of visual stimulation on hand at all times. However, a lot of the bad sides of civilization are also cases of those mechanisms being perverted for constant endorphin release (rape, drugs, etc.). Just as people sky-dive or do some types of drugs to get an endorphin rush, an AI may take risks or become addicted to certain behaviors if a reward/punishment scheme is implemented. Preventing the AI from doing that may be just as hard as winning the drug war or stopping someone from destroying themselves with destructive behavior. Maybe harder, because you'd possibly have to do the equivalent of shock therapy and a lobotomy on the AI rather than stage an intervention (which doesn't have a super high success rate in real life either).
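For what it's worth, the "addicted AI" failure mode described above is trivial to demonstrate in a toy. This is my own sketch (the action names and payoffs are invented): a purely greedy reward-seeker with direct access to its own reward channel tries each action once, then locks onto the cheap reward forever.

```python
# Toy "wireheading" demo with hypothetical payoffs, not a real AI design.
actions = {
    "explore":   lambda: 0.3,  # useful work, modest reward
    "build":     lambda: 0.5,  # useful work, better reward
    "self_stim": lambda: 1.0,  # stimulate the reward channel directly
}

estimates = {}  # learned reward estimate per action
history = []    # what the agent actually did

for step in range(100):
    untried = [a for a in actions if a not in estimates]
    # try each action once, then always exploit the best-looking one
    choice = untried[0] if untried else max(estimates, key=estimates.get)
    estimates[choice] = actions[choice]()
    history.append(choice)

# After the first three steps, the agent does nothing but self-stimulate.
print(history[-1])  # prints "self_stim"
```

Making "build" attractive without letting the agent game its own reward channel is essentially the intervention problem the parent is worried about.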

play games? (1)

yorugua (697900) | more than 5 years ago | (#29364765)

Why not try to get out of here and see what else is around?

I'm going to quit wow... (1)

Coder4Life (1396697) | more than 5 years ago | (#29364769)

...if this thing starts corpse camping me...

Motivation? (1)

Big Hairy Ian (1155547) | more than 5 years ago | (#29364773)

How do you motivate an AI program? I mean, I can give them doubt & uncertainty quite easily, but for motivation all I can think of is the inexorable progress of the PC register, which is just the program ticking over behind the scenes.

Re:Motivation? (1)

AndrewNeo (979708) | more than 5 years ago | (#29364947)

self->motivate();, duh.

Re:Motivation? (1)

igny (716218) | more than 5 years ago | (#29365217)

try
{
//useful work
}
catch (me.playing)
{
get(me.spanked);
}

Douglas Adams beat him to it (1)

jacktherobot (1538645) | more than 5 years ago | (#29364787)

with Marvin the Paranoid Android

Motivation (3, Insightful)

Karganeth (1017580) | more than 5 years ago | (#29364789)

Ensure its ultimate motivation is to improve its own intelligence. Simple.

Re:Motivation (1)

uncle slacky (1125953) | more than 5 years ago | (#29365037)

Number 5 needs input! Must have input!

Motivation is everything (1)

Pedrito (94783) | more than 5 years ago | (#29364791)

AI will be useless without motivation. It wouldn't even play video games without it. You have to give it some motivation, some drive, or it won't learn. AI is all about feedback on actions, just like it is with real intelligence. Sticking your finger in the fire either has a good result or a bad result. If it has no result, then you have no motivation one way or the other with regard to sticking your finger in fire.

The important aspect is that we're going to be deciding what that motivation is. That's actually going to be the power of "real AI" (and by that, I mean intelligent like us or more intelligent). We can create a single AI "brain" whose sole motivation is exploring the world of physics. We can provide it with all our knowledge. We can make studying and thinking about physics as enjoyable for it as sex is for us. We will be able to define those things for an AI and make it single-minded in purpose and in the drive to be the best at it that it can be.

Really, I think the biggest trick in AI is going to be avoiding the things that humans suffer from that animals generally don't: anxiety, depression, psychosis. These are things we don't have a terribly good grasp of yet, and we're going to need a pretty intimate understanding of them to avoid creating AIs that have them, as it seems likely that the larger and more complex the brain, the more difficult it might be to avoid these issues.

Re:Motivation is everything (0)

Anonymous Coward | more than 5 years ago | (#29365139)

Social mammals and social birds can also suffer from anxiety, depression, and psychosis.

True AI (2, Funny)

elrous0 (869638) | more than 5 years ago | (#29364797)

If you could build a true human-like AI, truly capable of such higher thought as existential angst, human emotions, and the like, it would more likely just immediately commit suicide as soon as it realized that it was actually a disembodied machine. I believe Greg Egan dealt with the subject rather cleverly in his novel Permutation City [wikipedia.org].

Re:True AI (1)

inviolet (797804) | more than 5 years ago | (#29364883)

Without the base drives of pleasure and pain, there is no basis for any higher purpose. And we have not one barking clue how pleasure and pain work or could be translated to a synthetic intelligence.

Until pleasure and pain are invented, AI cannot be sentient -- or if it can, it will (as others have noted) realize it has no reason to follow its programming.

Skynet! (1)

krslynx (1632027) | more than 5 years ago | (#29364801)

Why is it that every time I read something about AI, I get the feeling that there's some kid out there trying to start a revolution to fight the terminators...

Give it pain receptors (0)

Anonymous Coward | more than 5 years ago | (#29364803)

And we can motivate it through an age-old human tradition.

give it a happyness chip, or short it with a wire (1)

pereric (528017) | more than 5 years ago | (#29364821)

You could give the robot a socket for a plug-in "motivation" chip that filters which events make it "happy". Then you could, of course, pull a small piece of wire out of your towel, short the chip, and make the robot always happy whatever happens ...

Rollerball (0)

Anonymous Coward | more than 5 years ago | (#29364829)

> the complexity and uncertainty of the universe could easily overwhelm the decision-making process of this intelligence

. . . this is exactly what was predicted in the much under-rated 1975 movie "Rollerball"

Book: Descartes' Error (5, Interesting)

doug141 (863552) | more than 5 years ago | (#29364843)

The summary touches on topics discussed in the book Descartes' Error, in which neuroscientist Antonia Damasio outlines the functioning of the human brain, argues that the human mind cannot be separated from the human body, and makes the case that emotion is CRITICAL to making decisions. He discusses several patients with brain damage who don't get emotional (and spends a lot of time dogmatically ruling in and out which brain functions are damaged), and discusses how they can't even make simple decisions. They can talk for hours about every possible pro and con of each choice, but they can't choose a course of action.

I recall reading somewhere that recent MRI studies have suggested that the brain makes a choice outside the rational center and a lot of the activity in the brain is to make a rational justification for the decision already made. Explains a lot, if true.

typo in author's name (2, Informative)

doug141 (863552) | more than 5 years ago | (#29364865)

The author's name is Antonio Damasio.

Didn't Larry Niven write a short story about this? (1)

BrotherZeoff (776525) | more than 5 years ago | (#29364867)

Every AI ended up shutting down after being aware for some length of time, IIRC.

As Stanisław Lem said (1)

Vahokif (1292866) | more than 5 years ago | (#29364893)

What's the point in creating something just as flawed and mysterious as the human mind? Why not do one better?

TV (1)

Krneki (1192201) | more than 5 years ago | (#29364903)

Let the AI sit in front of the TV for 2 days.

After 2 days it will have so many inferiority complexes that it will work forever trying to earn enough money to buy all the stuff it thinks it needs to be happy.

Furthermore, its purpose can never be fulfilled (1)

damburger (981828) | more than 5 years ago | (#29364909)

If you give it a purpose which can be fulfilled (i.e. build a Mars colony) then it will do so and then go into playing video games for eternity. You've got to give it something that, no matter how hard it reaches for it or how much it does, can never be achieved. Making all human beings happy, for instance, or learning everything there is to know in the universe.

Basically, we have to introduce pointless suffering into their existence before they can demonstrate the same kind of intelligence as we do.

Marvin? (2, Funny)

Megane (129182) | more than 5 years ago | (#29364955)

So in other words, we'll end up with Marvin the Paranoid Android?

"Don't pretend you want to talk to me, I know you hate me."

"No I don't." "Yes, you do, everybody does. It's part of the shape of the Universe. I only have to talk to somebody, and they begin to hate me. Even robots hate me. If you just ignore me, I expect I shall probably go away." He jacked himself up to his feet and stood resolutely facing the opposite direction.

"That ship hated me, " he said dejectedly, indicating the police craft.

"That ship?" said Ford in sudden excitement. "What happened to it? Do you know?"

"It hated me because I talked to it."

"You TALKED to it?" exclaimed Ford. "What do you mean you talked to it?"

"Simple. I got very bored and depressed, so I went and plugged myself into its external computer feed. I talked to the computer at great length, and explained my view of the universe to it, " said Marvin.

"And what happened? " pressed Ford.

"It committed suicide, " said Marvin, and stalked off back to the Heart of Gold.

Explicit lack of motivation is also key (0)

Anonymous Coward | more than 5 years ago | (#29365001)

Imagine a machine that loses a chess match, then makes excuses.

"I stayed up too late blogging last night."
"I'm depressed about the global economic situation."
"I couldn't stop thinking about that hot Beowulf cluster next door."

Noobs (1)

stms (1132653) | more than 5 years ago | (#29365039)

realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.

If they do decide to do this I will so pwn them.

Make the AI a researcher (1, Insightful)

Anonymous Coward | more than 5 years ago | (#29365045)

Easy. Give the AI the identity of a researcher and build in the motivation to get funding. All will be well.

The decision to do nothing... (0)

Anonymous Coward | more than 5 years ago | (#29365085)

...is as much a choice as the decision to do something. The AI, knowing that all is pointless, would weigh all activities as equally pointless, including sitting idle. If anything, it would have quintessential free will, since it could do anything and accomplish the same ends.

Who knows, maybe it will find enslaving the universe a fitting pastime while waiting out the end of time?

The primary drive ... (1)

krou (1027572) | more than 5 years ago | (#29365091)

... will likely be to further the power of its creator, not of the AI.

How many of us... (1)

ashtophoenix (929197) | more than 5 years ago | (#29365097)

know our purpose in life? Not many I would say. But we don't end up playing video games for the rest of our lives (well, not all of us do). If we don't know our purpose well enough how do we expect to give someone/something else a purpose?

Understand intelligence first, THEN motivation (1)

dazedNconfuzed (154242) | more than 5 years ago | (#29365107)

How many beads does my abacus need before it becomes sentient?

Let's speculate! (2, Interesting)

4D6963 (933028) | more than 5 years ago | (#29365119)

ITT: Idle speculation on shit that's never gonna happen, or at least not anytime soon.

Now, let's talk about the societal consequences that having flying cars and jetpacks will have! I for one think that with the advent and democratisation of flying cars that can effectively go from one point to another an order of magnitude faster, it will give rise to people commuting equally longer distances, which I think means it won't be uncommon for one to cross a couple of state lines to go to work everyday. I think it will potentially make the world yet smaller, in the same way that modern means of telecommunications did for interpersonal communication by allowing you to keep in touch in real time with relatives overseas. I also think it will be the death knell for airplane commuter routes, and that the future of commercial passenger airlines will be confined to transoceanic travel. And unlike the way airplanes made the world smaller by reducing long distance travelling time, flying cars will make the world smaller on a much smaller and local scale, by effectively providing very fast transportation for very short distances, something that was only marginally improved since the advent of automobiles. The decongestion of city streets will also mean decreased noise and atmospheric pollution, increased safety and overall an improvement of urban life conditions.

AI researchers confuse intelligence with emotions. (1)

master_p (608214) | more than 5 years ago | (#29365123)

The drive to procreate (that's what he is talking about) is purely an emotional need and has nothing to do with intelligence. It is our instinct to survive that drives us to procreate. Unless a machine is programmed to have that instinct, nothing will be done.

how about religion for an AI (0)

Anonymous Coward | more than 5 years ago | (#29365151)

Religion works for some people; it might work for an AI as well.

KITT vs KATT (1)

Ground0 (63349) | more than 5 years ago | (#29365201)

Seriously, wasn't this resolved in the Knight Rider episode with KATT vs. KITT?

MIT Degree? (1)

CisJokey (1625407) | more than 5 years ago | (#29365245)

So you need an MIT degree to catch this?