
Understanding an AI's Timescale

Soulskill posted about 3 months ago | from the meatbags-are-slow,-electrons-are-fast dept.


An anonymous reader writes "It's a common trope in sci-fi that when AIs become complex enough to have some form of consciousness, humans will be able to communicate with them through speech. But the rate at which we transmit and analyze data is infinitesimal compared to how fast a computer can do it. Would they even want to bother? Jeff Atwood takes a look at how a computer's timescale breaks down, and relates it to human timeframes. It's interesting to note the huge variance in latency. If we consider one CPU cycle to take 1 second, then sending a ping across the U.S. would take the equivalent of 4 years. A simple conversation could take the equivalent of thousands of years. Would any consciousness be able to deal with such a relative delay?"
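The scaling behind those figures is easy to reproduce. Below is a minimal back-of-envelope sketch; the ~3 GHz clock and the latency values are illustrative assumptions, not numbers from TFA:

<ecode>
# Rescale real latencies to "human time" by treating one CPU cycle as
# one second. All inputs below are assumed, illustrative figures.
CLOCK_HZ = 3e9                        # assumed clock; 1 cycle ~ 0.33 ns
SECONDS_PER_YEAR = 365.25 * 24 * 3600

latencies_s = {
    "one CPU cycle":        1 / CLOCK_HZ,
    "main memory access":   100e-9,   # ~100 ns DRAM latency
    "SSD random read":      150e-6,   # ~150 us
    "ping across the U.S.": 40e-3,    # ~40 ms round trip
    "one spoken sentence":  3.0,      # ~3 s of human speech
}

for what, t in latencies_s.items():
    scaled = t * CLOCK_HZ             # seconds of "human time"
    if scaled < SECONDS_PER_YEAR:
        print(f"{what:22s} -> {scaled:14,.0f} 'seconds'")
    else:
        print(f"{what:22s} -> {scaled / SECONDS_PER_YEAR:14,.1f} 'years'")
</ecode>

At a 3 GHz clock, a ~40 ms ping comes out to about 3.8 "years", consistent with the summary's 4-year figure.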


189 comments


Will computers ever be as smart as us? Briefly... (4, Insightful)

sandbagger (654585) | about 3 months ago | (#47026027)

I hope they are nice to us.

Re:Will computers ever be as smart as us? Briefly. (5, Insightful)

Jane Q. Public (1010737) | about 3 months ago | (#47026191)

OP's entire premise is pretty thin.

Human beings perceive light, for example. (They can also perceive electricity, to a degree, but that is not as relevant to the point.)

But while a human being might perceive that a flashlight at night has been shined their way, the light takes roughly the same amount of time to arrive as a fiber optic signal from the same distance. So what?

Generally, it is the speed of perceiving and interpreting the signal that takes time, not the speed of its propagation. We communicate at lightspeed, too. Or close to it. Anybody who has had a video chat has done that. Did that make you superintelligent?

We have never built an "AI". And in fact we have NO reason to believe -- no evidence whatsoever -- that its speed of perception and interpretation would be any faster than our own. There is a very good chance that it would be much slower... at least in the beginning.

I would like to remind people that the idea of "intelligent" machines has been around for almost 100 years now. AND we still don't have any solid evidence of being close to achieving such a thing. Sure, computers can do a lot, and what they DO accomplish, they tend to do very fast. But what they accomplish is not "AI". Even Watson is not "intelligence", it is only the illusion of it.

Re:Will computers ever be as smart as us? Briefly. (1)

mbone (558574) | about 3 months ago | (#47026257)

Mod this parent up. There is (IMHO) nothing left to say.

And another thing (2)

coolmanxx (150620) | about 3 months ago | (#47026309)

Who is to say a computer AI will be completely inorganic or digital? We are on the frontier of cybernetic implants; it makes sense that any AI will incorporate genetically altered tissue in its matrix.

Re:Will computers ever be as smart as us? Briefly. (5, Insightful)

Anonymous Coward | about 3 months ago | (#47026349)

Not only that, but trying to relate to individual CPU cycles is absurd. It's not as if our minds execute a stream of arithmetic operations; they're a complex parallel network of signals. It might take billions or trillions of CPU operations to emulate all the stuff that happens in one "instant" in the brain. A more reasonable cycle comparison might be to compare macro-scale wavefront propagation in the brain (i.e. brain waves) with global synchronization in large-scale supercomputers (i.e. the single-iteration time of an MPI-based fluid dynamics simulation or other large-scale 3D mesh problem). Even then, I am not sure how many orders of magnitude we would need to increase the size of the MPI problem before the per-cycle complexity starts to approximate the signal processing of the entire nervous system.

But all that aside, we have historically had many people who worked in relative isolation. Many artists, poets, philosophers, and scientists have had great symbiotic relationships with nothing more than the occasional letter or other work (papers, poems, paintings, sculptures) exchanged over great distances and latencies. Love affairs have been carried out with little more than a furtive glimpse and a series of notes sent through back channels...

Re:Will computers ever be as smart as us? Briefly. (-1)

Anonymous Coward | about 3 months ago | (#47026471)

This is an excellent post.

Re:Will computers ever be as smart as us? Briefly. (1)

Travis Mansbridge (830557) | about 3 months ago | (#47026553)

Furthermore, 1 second is an arbitrarily huge length of time in the context of both CPU cycles and neural transmission.

Re:Will computers ever be as smart as us? Briefly. (2, Interesting)

Anonymous Coward | about 3 months ago | (#47026395)

Well, beyond the very brief transition period (e.g. when two curves cross), AI could simply treat humans (and all other life on the planet) as we treat mountains and forests; in other words, we don't perceive them as intelligent at all, since they're changing on such a long timescale compared to us... it would be impossible for us to have a `conversation' with a mountain, for example (who knows, maybe the Earth is intelligent and is trying to talk to us via plate tectonics, and pushing up mountains is one way it can generate intelligent wave forms).

Re:Will computers ever be as smart as us? Briefly. (1, Interesting)

itzdandy (183397) | about 3 months ago | (#47026511)

Absolutely agreed. Though I don't have direct evidence to support this statement, I would guess that a neuron fires at a speed similar enough to a transistor's. Consciousness is a very complex computation across billions of neurons, essentially *written in assembly*. If/when we make an AI, it's likely to be compiled code running on a chip with fewer transistors than we have neurons: 100 billion neurons vs. 1.4 billion transistors in an i7, for instance.

That said, this is assuming that we limit consciousness to what humans perceive, the computer may have a somewhat different version of it. I suspect that we will try to build a human type consciousness into the machine though.

Re:Will computers ever be as smart as us? Briefly. (2, Interesting)

Anonymous Coward | about 3 months ago | (#47026637)

Though I don't have direct evidence to support this statement, I would guess that a neuron fires at a speed similar enough to a transistor's.

A transistor is smaller and switches electrically rather than chemically. Though a neuron varies the strength of the signal it forwards, and though it can be connected to many other neurons at once, in terms of raw speed the transistor is still faster. Even if you create a synthetic neuron with similar capabilities, I don't see why the synthetic version wouldn't be faster.

If/when we make an AI, it's likely to be compiled code running on a chip with fewer transistors than we have neurons: 100 billion neurons vs. 1.4 billion transistors in an i7, for instance.

Which equates to about 71 i7 processors just to match the raw neuron count. If you assume that each neuron takes 1,000 transistors to simulate (to make the math simpler), that's roughly 71,000 chips; at the release price for an i7 as listed on Wikipedia [wikipedia.org], that totals about $21.3M. Expensive, but not impossible.
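For what it's worth, the arithmetic checks out. A quick sketch; the neuron count, transistor count, and 1,000:1 simulation ratio are the thread's assumptions, and the ~$300 release price is mine:

<ecode>
# Reproducing the parent's back-of-envelope arithmetic. Every input is
# an assumption from the thread (or a guessed release price), not a fact.
neurons = 100e9                  # claimed neurons in a human brain
transistors_per_i7 = 1.4e9       # claimed transistor count of an i7
transistors_per_neuron = 1000    # parent's simplifying assumption
i7_price_usd = 300.0             # assumed release price

chips_raw = neurons / transistors_per_i7
chips_sim = neurons * transistors_per_neuron / transistors_per_i7

print(f"{chips_raw:,.0f} chips to match the raw neuron count")    # ~71
print(f"{chips_sim:,.0f} chips at 1,000 transistors per neuron")  # ~71,429
print(f"${chips_sim * i7_price_usd / 1e6:.1f}M total")            # ~$21.4M
</ecode>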

Re:Will computers ever be as smart as us? Briefly. (4, Informative)

rk (6314) | about 3 months ago | (#47026901)

Neurons aren't even within several orders of magnitude as fast as transistors: linky1 [stanford.edu] and linky2 [technologyreview.com] .

However, a single typical neuron does a lot more work than a single transistor, computationally speaking.

Re:Will computers ever be as smart as us? Briefly. (1)

athe!st (1782368) | about 3 months ago | (#47026991)

A neuron might be fast, but communicating the idea that neuron just came up with is millions or billions of times slower than that. You either have to write/type or speak it; THAT is the limiting factor in how humans might communicate with an AI (barring sci-fi neural interfaces).

Re:Will computers ever be as smart as us? Briefly. (5, Insightful)

phantomfive (622387) | about 3 months ago | (#47026535)

Generally, it is the speed of perceiving and interpreting the signal that takes time, not the speed of its propagation. We communicate at lightspeed, too. Or close to it. Anybody who has had a video chat has done that. Did that make you superintelligent?

Another way of looking at it: have you ever sent someone a letter, then waited a long time to receive a response? Did you nearly die from the excruciating pain of not having the response, or did you do something else until the response came?

Most likely you are highly skilled at carrying on multiple conversations at different speeds.

Re:Will computers ever be as smart as us? Briefly. (1)

Anonymous Coward | about 3 months ago | (#47026873)

> Another way of looking at it: have you ever sent someone a letter, then waited a long time to receive a response?

Yes. My boss nearly had a stroke when I took 20 minutes to reply to his email this morning.

Re:Will computers ever be as smart as us? Briefly. (2)

Alomex (148003) | about 3 months ago | (#47026545)

Sure, computers can do a lot, and what they DO accomplish, they tend to do very fast. But what they accomplish is not "AI". Even Watson is not "intelligence", it is only the illusion of it.

Since we don't have a clear idea how (human) intelligence operates the statement above is pretty vacuous, and likely not at all relevant.

Sure, cars do not "run" in the literal interpretation of the term, but for all practical purposes they are better than humans at "running". If we end up with computers that effectively outperform humans in most "intelligent activities", how they achieve it would be largely irrelevant.

Re:Will computers ever be as smart as us? Briefly. (4, Interesting)

Kjella (173770) | about 3 months ago | (#47026613)

I would like to remind people that the idea of "intelligent" machines has been around for almost 100 years now. AND we still don't have any solid evidence of being close to achieving such a thing. Sure, computers can do a lot, and what they DO accomplish, they tend to do very fast. But what they accomplish is not "AI". Even Watson is not "intelligence", it is only the illusion of it.

The goal posts keep moving: no matter what they do, we still say they're not really intelligent, whether it's winning at chess (Deep Blue), winning Jeopardy (Watson), driving cars (Google), or acting as your personal secretary (Siri). Not that I liked the tripe called "Her", but does it really matter if it's true intelligence or just a sufficiently advanced impersonation of intelligence? Do we really need true AI in order to pass a Turing test, particularly if you aren't trying to break the illusion? If it can keep up a decent dinner conversation and be "fully functional" in bed, can it be a substitute for a companion in the same way you can play chess against a computer instead of a person? Because I think that's what most people want to know: they don't care if the robot is "truly" intelligent or not, they want to know if it'll take their jobs and girlfriends, do their chores, or give free blow jobs.

Re:Will computers ever be as smart as us? Briefly. (1)

WhiteZook (3647835) | about 3 months ago | (#47026689)

After all, humans have been selected by evolutionary processes because they had a sufficiently advanced impersonation of intelligence. Whether it's "real" or "true" intelligence was never an issue, if such a distinction even exists.

Re:Will computers ever be as smart as us? Briefly. (1)

Oligonicella (659917) | about 3 months ago | (#47026809)

The goal posts aren't moving; machines just haven't reached them. If you limit the definition enough, a calculator could be called intelligent. Even an idiot savant can do more than your examples.

Your argument about their being intelligent would hold more weight if you could point to a machine that could do all of those and also write poetry, paint a picture, and carry on a viable conversation.

Re:Will computers ever be as smart as us? Briefly. (0)

Anonymous Coward | about 3 months ago | (#47026759)

Watson is actually closer than you might think to artificial intelligence. Intelligence is mostly pattern matching.

Memory is important too, but computers already have good memory.

A bigger issue is that true artificial intelligence is probably not a good idea. At that point it becomes alive, and eventually either the AIs or the humans are likely to end up subjugated, genocided, or something along those lines.

Humans can barely get along with each other when half of them are given different colored T-shirts. Good luck getting along with AIs.

Re:Will computers ever be as smart as us? Briefly. (1)

jeffb (2.718) (1189693) | about 3 months ago | (#47026769)

I would like to remind people that the idea of "intelligent" machines has been around for almost 100 years now. AND we still don't have any solid evidence of being close to achieving such a thing. Sure, computers can do a lot, and what they DO accomplish, they tend to do very fast. But what they accomplish is not "AI". Even Watson is not "intelligence", it is only the illusion of it.

I'll agree that this is "insightful" as soon as you describe how to distinguish "intelligence" from "the illusion of intelligence".

Re:Will computers ever be as smart as us? Briefly. (0)

Anonymous Coward | about 3 months ago | (#47026863)

You do realize that none of us possess "intelligence" if that's the distinction, right? Yes, we're currently vastly more "intelligent" than computers in most areas that require intelligence beyond crunching numbers, but we're not fundamentally different. We just have a different hardware platform that runs biological software.

Bottom line: You're not intelligent, but it's okay because no one else is either.

Another really bad flaw (0)

Anonymous Coward | about 3 months ago | (#47026959)

Bandwidth and simultaneous processing of different signals. Example:
- you're saying X
- your tone of voice and facial expressions show that you're nervous; I factor that into my understanding of your message

So I'm simultaneously receiving the signal X and the signal "X is not true or not the whole story". And there's no shortage of "bandwidth" since simply expanding the receiver's knowledge bank makes the signal contain more - e.g. probable region (from your accent), level of education (choice of words), how you perceive yourself in relation to me (above/below me in the hierarchy), when you're being honest and when and how you're not (do you need to make something up or have you rehearsed it or are you simply omitting information) and so on... Good interrogators can tell so much from how someone is speaking. And even if we limit ourselves to a conversation over skype (let's leave body odours for some other time), you're also - regardless of whether you want to or not - providing information about your approximate age, health, living habits (eating and exercising habits), ethnicity, whether you're well-rested or not etc. etc.

But even dumber: The "speed" of the signal is a signal in itself so one could just as well make the argument that there's constantly data to be received even if that data most of the time just contains something like "tone of voice and facial expression when pronouncing with accent such and such and speed such and such..." which is data that we only process in much bigger chunks.

Re:Will computers ever be as smart as us? Briefly. (1)

athe!st (1782368) | about 3 months ago | (#47026965)

The real problem is the time it takes for humans to express and communicate their ideas; speech is millions of times slower than a packet over the internet. The speed of propagation might be "fast", but the transmit time, or packet length, of a word or conversation is epically slow compared to a computer. Typing at a keyboard is probably just as slow as speech. We can only receive data as fast as we can read it, too; all of these things ARE millions of times slower than a computer.

Re:Will computers ever be as smart as us? Briefly. (1)

K. S. Kyosuke (729550) | about 3 months ago | (#47027053)

And in fact we have NO reason to believe -- no evidence whatsoever -- that its speed of perception and interpretation would be any faster than our own. There is a very good chance that it would be much slower... at least in the beginning.

But if it ever becomes faster, it might resemble the later Heechee novels by Fred Pohl, the ones where the protagonist is dead and transcribed into a computer. ;-) Though the one thing I never understood is why the stored minds didn't seem to have a "suspend" switch applicable whenever they needed to wait for something.

K. S. Kyosuke gets called out & ran (-1)

Anonymous Coward | about 3 months ago | (#47027171)

From a fair challenge like a chickenshit blowhard http://slashdot.org/comments.p... [slashdot.org]


Re:Will computers ever be as smart as us? Briefly. (1)

Maxo-Texas (864189) | about 3 months ago | (#47027131)

I expect that AI, when it comes, will be an exceptionally good illusion.

But even then there will be no way to "prove" it's intelligent.

In part because humans will move the goal posts until they can't be moved any further to protect their self image.

Fermi Paradox (5, Funny)

SimplexBang (2685909) | about 3 months ago | (#47026029)

AI would form its own Fermi Paradox: if there is intelligent life, then why aren't they answering?

infinitesimal (0)

Anonymous Coward | about 3 months ago | (#47026031)

But the rate at which we transmit and analyze data is infinitesimal compared to how fast a computer can do it.

I do not think that word means what you think it means.

This was covered in the movie "EPIC" (0)

Anonymous Coward | about 3 months ago | (#47026037)

In the animated motion picture "EPIC", there was a vast time-scale difference between the slow, lumbering humans, and the fast animals and 'little people'.

End of discussion. Move on to the next discussion thread.

Technology. (0)

Anonymous Coward | about 3 months ago | (#47026065)

I think by the time we have AI sufficiently advanced to even care about the speed at which humans can talk to it, we will have the tech to address it. Something like a direct brain-to-computer interface.

Re:Technology. (1)

plover (150551) | about 3 months ago | (#47026143)

Our brains are electrochemical and don't run that fast in series, but they're massively asymmetrical, parallel, and asynchronous. We don't have clock strobe lines or addressing, and it might take millions of internal connection points to provide a fast enough interface. Even that might be six orders of magnitude slower than our AI buddies.

The retina and optic nerve might be the closest we can get, so are you willing to give up an eye for this? Go full-on Borg?

Re:Technology. (0)

Anonymous Coward | about 3 months ago | (#47026247)

yes

Re:Technology. (1)

sir-gold (949031) | about 3 months ago | (#47026323)

We have two of them, and stereoscopic vision isn't THAT important (otherwise they wouldn't let one-eyed people drive)

Incorrect Timescale (5, Insightful)

SJrX (703334) | about 3 months ago | (#47026071)

One CPU cycle as one second might be a good metaphor for computer memory, but not for AI. It's closer to the equivalent of a neuron firing in the human brain than it is to 1 second of human time. Human speech takes more than one neuron firing to produce, and it would take way more than one CPU cycle to process. An AI algorithm that is processing and analyzing data would most likely take millions or billions of cycles to do even the most basic things. While speech recognition has no doubt gotten much faster, it is still, and probably always will be, a massive undertaking for a CPU, as opposed to, say, adding two 32-bit integers.

Re:Incorrect Timescale (2)

clustermonkey (320537) | about 3 months ago | (#47026123)

How about using facial recognition as a benchmark for computer timescales? It would take billions of cycles for the computer to recognize you (especially out of a database containing a number of faces similar to what a human would recognize), while a human can do it in fractions of a second. Or how about SLAM/localization? Or the calculation of movement in a changing environment? One second per CPU cycle seems an arbitrarily long time on which to base any comparison.

Re:Incorrect Timescale (2)

tchdab1 (164848) | about 3 months ago | (#47026201)

Agreed, and they failed to compare their analysis of various computer process times (cache, memory, hard disk, network, etc.) to various human component times, starting with a single neural pulse, which is on the order of milliseconds; and as you say, many of them fire, simultaneously and serially, when we speak. We don't know how long it will take a spontaneously-arising artificial intelligence to create a thought, retrieve its memories, consider them, observe its surroundings, etc., but we can assume it's at least some collection of nanosecond CPU cycles, not a single one; some collection of data fetches, not just one.

Re:Incorrect Timescale (2)

ultranova (717540) | about 3 months ago | (#47026509)

Agreed, and they failed to compare their analysis of various computer process times (cache, memory, hard disk, network, etc.) to various human component times, starting with a single neural pulse.

Their failure goes far deeper than that: they wrote a paper and went on with their business. Presumably they get responses at some point, and then write a response, and so on.

People engage in multiple conversations in vastly different timescales all the time. All it means is that you do something else when waiting for a response. And our current non-intelligent computers have already mastered this art: the very computer I'm writing this on waits a virtual eternity between my keystrokes. And the same goes for disk read/write requests, network requests, etc.

This problem was solved long ago, by man and nature both: just use memory to store the context and interpret the reply in the stored context when it arrives.
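A minimal sketch of the pattern the parent describes, store the context and resume when the reply arrives; the names and structure are illustrative, not from the article:

<ecode>
# Store the context of each outstanding conversation, do other work in
# the meantime, and reinterpret the reply in the saved context later.
import time

conversations = {}  # conversation id -> saved context

def send_message(conv_id, text):
    """Record what we said, then go do something else entirely."""
    conversations[conv_id] = {"last_sent": text, "sent_at": time.time()}

def on_reply(conv_id, reply):
    """Seconds (or subjective millennia) later: reload context, continue."""
    ctx = conversations.pop(conv_id)
    elapsed = time.time() - ctx["sent_at"]
    print(f"re: {ctx['last_sent']!r} (+{elapsed:.3f}s): {reply}")

send_message("alice", "How do mountains form?")
# ...any amount of other work happens here...
on_reply("alice", "Plate tectonics, mostly.")
</ecode>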

Re:Incorrect Timescale (1)

cerberusti (239266) | about 3 months ago | (#47026295)

Too bad my mod points seem to have expired today; you made the exact comparison I wanted to.

Re:Incorrect Timescale (3, Interesting)

nine-times (778537) | about 3 months ago | (#47026387)

This is a really good point. Current CPUs run billions of cycles per second but still struggle to perform some tasks in real time, and that processing is not going to be powerful enough to emulate intelligence to the degree of consciousness. If past computing problems are any indication, I would guess that the first generation of AI will be a bit "slow on the uptake". That is to say, we may come up with the algorithms to emulate consciousness first, and then need to spend some time optimizing code and improving hardware designs in order to get "real time" consciousness.

Re:Incorrect Timescale (1)

Millennium (2451) | about 3 months ago | (#47026983)

The other thing to note is that humans can directly perceive distinct moments in time that are well under one second apart. Not all THAT much under (it varies a little from person to person, but it's usually between 1/50 and 1/60 of a second), but the fact remains that even if we measure the human "clock rate" as the smallest distinct points in time we can distinguish, we're faster than 1 Hz.

A more appropriate time scale would be to say that 50 clock cycles of CPU time equal one second of human time. The numbers don't look quite as impressive when you do this (a cold boot takes just under 650 years, as opposed to some 32,000 years), but it still drives the point home that humans are slow. Some of the smaller time scales also become useful as metaphors: for example, a main memory access takes 7 "seconds", much like something you have to struggle a bit to remember, but that still seems to come quickly.
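A quick sketch of that rescaling; the cycle counts are back-derived from the 1-cycle-per-second figures quoted above, so they're illustrative rather than measured:

<ecode>
# Redo the timescale with 50 cycles = one human perceptual "tick".
SECONDS_PER_YEAR = 365.25 * 24 * 3600

boot_cycles = 32_000 * SECONDS_PER_YEAR  # cold boot ~ 32,000 "years" at 1 cycle/s
memory_cycles = 350                      # main memory access, ~350 cycles

CYCLES_PER_TICK = 50                     # ~1/50 s human perceptual frame

print(f"cold boot: ~{boot_cycles / CYCLES_PER_TICK / SECONDS_PER_YEAR:.0f} years")
print(f"memory access: ~{memory_cycles / CYCLES_PER_TICK:.0f} 'seconds'")
</ecode>

This prints roughly 640 years and 7 "seconds", matching the parent's figures.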

Sci-fi story (4, Informative)

Imagix (695350) | about 3 months ago | (#47026075)

Read Dragon's Egg by Robert L. Forward (and the sequel, Starquake). Part of the story involves humans interacting with an alien species that is a lot faster. The aliens' lifespan is about 15 minutes...

Re:Sci-fi story (2)

sir-gold (949031) | about 3 months ago | (#47026389)

I read this series; the aliens eventually create an AI of their own just to give the humans something long-lived enough to communicate with. They also, eventually, find a way to slow down their own metabolism, using extrapolations of human technology.

My favorite part is that, by the time the humans are halfway done transmitting their version of Wikipedia to the aliens, the aliens have already surpassed human technology and started transmitting back advanced technology of their own.

Re:Sci-fi story (0)

Anonymous Coward | about 3 months ago | (#47026941)

Uh, that wasn't because the aliens were AI, it's because they weren't even made of nuclear matter.

Re:Sci-fi story (0)

Anonymous Coward | about 3 months ago | (#47027005)

Uh, Imagix didn't say it was because the aliens were AI. The point was that, similar to the AI hypothesized in the OP, the aliens were a lot faster than humans. It doesn't matter if they were AI or not for purposes of exploring how they interacted with humans.

Delays... (1)

Anonymous Coward | about 3 months ago | (#47026087)

Of course an AI consciousness would be able to deal with such a relative delay... They wouldn't be very "intelligent" if they could not. Duh!

Four-year ping time (0)

Anonymous Coward | about 3 months ago | (#47026089)

Lewis and Clark did.

AC

CPU cycle != 1 second (4, Insightful)

Anonymous Coward | about 3 months ago | (#47026093)

No task can be accomplished in a single CPU cycle.

A human can actually do something in a second, like move or talk.

Re:CPU cycle != 1 second (1)

nurb432 (527695) | about 3 months ago | (#47026139)

Define task. For example: You can perform a calculation in one clock cycle. You can move data between registers.

Re:CPU cycle != 1 second (0)

Anonymous Coward | about 3 months ago | (#47026255)

Moving data from one register to another is the equivalent of one neuron conveying an action potential to another. Both are concrete low-level steps, yes, but they are nowhere close to units of consciousness. Literally millions of neurons need to fire in order for you to have a single thought. Similarly, an AI would probably need millions of loads/stores.

Re:CPU cycle != 1 second (0)

Anonymous Coward | about 3 months ago | (#47026461)

"Millions" is a bit of an understatement... the mean interconnection count between neurons in the neocortex is 10^5, and humans have ~2x10^9 cortical neurons; in other words, a fuckton of connections. That deals only with declarative memory, but if we also consider procedural memory we have to include cerebellar neurons, which have 10x as many connections as neocortical neurons (look up Purkinje cells, they're beautiful!).

The thing is: we need parallel architectures that allow for a lot of simultaneous memory access (in GPU terms, local memory instead of global).
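Multiplying those figures out gives a sense of the scale. A sketch using the parent's claimed numbers; the one-byte-per-connection storage estimate at the end is my own assumption:

<ecode>
# Total connection count implied by the parent's figures.
cortical_neurons = 2e9     # parent's claimed cortical neuron count
connections_each = 1e5     # parent's claimed mean interconnection count

connections = cortical_neurons * connections_each
print(f"{connections:.1e} neocortical connections")          # 2.0e14
print(f"~{connections / 1e12:.0f} TB at 1 byte/connection")  # ~200 TB
</ecode>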

Re:CPU cycle != 1 second (1)

ghettoimp (876408) | about 3 months ago | (#47026313)

Not really.

A single "calculation" such as moving data between registers ("mov ax, cx") actually takes many clock cycles. The instruction has to be fetched and decoded, which may itself take several cycles. Then the instruction has to be scheduled, the operands have to be fetched from the register file, and eventually the result of the operation gets written back into the register file.

Thanks to pipelining, branch prediction, result forwarding, and so on, much of this latency can be hidden and, under ideal conditions, your processor might achieve an average throughput of many instructions per cycle because it is decoding and executing many instructions simultaneously. But, if you track any particular instruction from start to finish, it takes several cycles.

And of course, in practice there are much harder instructions than register moves. Division can take dozens of clock cycles. Waiting for data from main memory can take hundreds of core clocks. Mispredicting a branch stalls you out while you figure out where to start decoding from again...

Re:CPU cycle != 1 second (1)

nurb432 (527695) | about 3 months ago | (#47027141)

Yes, really. I'm not saying that most (any?) of your commodity CPUs do it. What I am saying is that it is possible to do this, if you have the design for it.

Re:CPU cycle != 1 second (0)

Anonymous Coward | about 3 months ago | (#47026449)

No task can be accomplished in a single CPU cycle.

A human can actually do something in a second, like move or talk.

Uhhh, CPUs can not only do one task per cycle, a lot of them can do two if they have a fused add/multiply instruction. Add in dual FPUs and you could conceivably do four or more tasks per clock cycle. They can also do billions of tasks per second, as most CPUs operate with gigahertz clocks.

Re:CPU cycle != 1 second (1)

rasmusbr (2186518) | about 3 months ago | (#47026751)

No task can be accomplished in a single CPU cycle.

A human can actually do something in a second, like move or talk.

Uhhh, CPUs can not only do one task per cycle, a lot of them can do two if they have a fused add/multiply instruction. Add in dual FPUs and you could conceivably do four or more tasks per clock cycle. They can also do billions of tasks per second, as most CPUs operate with gigahertz clocks.

True, but I think the GP was probably thinking about latency and not about throughput.

Meanwhile, the world has real problems... (1)

Anonymous Coward | about 3 months ago | (#47026097)

Seriously, is speculation about how bored AI might get, y'know, if it actually existed, really worthy of a /. discussion? I mean, it's a bit like speculating about how guardian angels stay warm when flying to help people at the North Pole.

Artificial intelligence at the level of human consciousness doesn't actually exist. Any technology that could create/sustain a true subjective, intelligent experience would have to be so complex that I suspect managing the perception of time as it relates to human perception will be the least of its concerns.

One CPU cycle = 1 second perceived is a totally ridiculous premise for making such a comparison. What is our perception, time-wise, of a single neuron firing?

Re:Meanwhile, the world has real problems... (1)

kruach aum (1934852) | about 3 months ago | (#47026265)

Guardian Angels are both logically and physically impossible. AI comparable to human consciousness is neither logically nor physically impossible.

Please refrain from making analogies in the future.

Re:Meanwhile, the world has real problems... (1)

sir-gold (949031) | about 3 months ago | (#47026411)

I think his point was, neither one exists yet

Re:Meanwhile, the world has real problems... (1)

kruach aum (1934852) | about 3 months ago | (#47026669)

It is more useful to discuss the properties of things that can exist than those of things that can't, and conscious AI is therefore not like a guardian angel in the aspect required to make the analogy work. After all, what he is drawing into question is the worth of the discussion, and because conscious AI is possible, the merits of discussing it outweigh the merits of discussing guardian angels.

Re:Meanwhile, the world has real problems... (0)

Anonymous Coward | about 3 months ago | (#47026429)

AI comparable to human consciousness is neither logically nor physically impossible.

Uh huh. That's quite a claim. If you can even define human consciousness (good luck with that), let alone explain the physical processes associated with it, let alone show that it can be replicated in a computer such that it recreates the phenomenon(a) of whatever consciousness may be, let alone actually design such a device with modern technology, let alone build it, let alone come up with some kind of metric for actually detecting consciousness (which I'd like to see implemented on other people first... no one can even directly affirm consciousness in other humans, only detect its "effects", probably in part because no one has a good grasp of what it even means, and simulating "effects" isn't good enough unless you're Turing), and then use that metric to show consciousness has actually been simulated, then we'll talk.

Until then, I think my guardian angel analogy is fucking fantastic. The quality of human consciousness, perception, experience, self-awareness, etc. is only vaguely understood at the essential level necessary for recreating it, and in fact has a myriad of context-dependent meanings. Leaving aside the fact that tons of idiots still think the mind is independent of the physical realm, the most popular benchmark for AI is the Turing test, which is on its face an absurd test for intelligence that many humans can't even pass (and many IRC bots can)... Any discussion of how a sci-fi-esque artificial intelligence would perceive time (especially using the metric of a CPU cycle = a thought) is such a retarded question, it's along the lines of discussing fictional characters, given that such intelligence, let alone AI, is not fully understood as a technology, even in theory.

So yeah, I think it might be a good idea for you to stop making claims about shit you don't understand. Unless you're just a troll, in which case, bravo. You got me.

Re:Meanwhile, the world has real problems... (1)

WhiteZook (3647835) | about 3 months ago | (#47026575)

It is not necessary to understand how consciousness works before we can replicate it artificially. You only have to recognize that you made it... and that's easier than you think. Just interact for a while, and follow your common sense. Also, consciousness is a subjective experience that's not the same for everybody. Some people have no problem accepting the fact that it's just computation; others can't accept that at all. Even if the first group has a perfectly satisfactory explanation for themselves, the second group won't believe it, and will still be looking for the "real" explanation. I don't see how that is ever going to be resolved.

Re:Meanwhile, the world has real problems... (0)

Anonymous Coward | about 3 months ago | (#47026703)

It is not necessary to understand how consciousness works before we can replicate it artificially. You only have to recognize that you made it... and that's easier than you think. Just interact for a while, and follow your common sense.

That's like saying that a flight simulator is the same thing as an actual airplane.

Is something that simply behaves or acts like it's intelligent the same thing as something that IS? I mean, I watch movies all the time and see "people" making decisions and talking and stuff, and I may convince myself for the moment to believe it, but there is no consciousness happening, even if it looks that way. It's just a machine replaying prerecorded sound and light. You're not going to argue that by playing "My Dinner With Andre", Netflix (and/or your Chromecast) is actually consciously thinking about dinner, are you?

Incidentally, Alan Turing thought that having a conversation with a machine to the point where you were tricked into thinking it was intelligent was evidence enough for it to ACTUALLY be intelligent. But I think that's pretty bogus for a number of reasons. In any event, the discussion about time perception is so ludicrous, it's a fiction on top of a fiction. Not worth any serious discussion, though I guess it's fun to speculate in a "could Batman beat up Spider-Man" kind of way (yes, deliberately combining DC/Marvel). So for entertainment purposes, sure.

Re:Meanwhile, the world has real problems... (1)

WhiteZook (3647835) | about 3 months ago | (#47026855)

That's like saying that a flight simulator is the same thing as an actual airplane.

Not at all. I have travelled by plane, and I can tell you it's completely different from playing with a flight simulator. The most obvious distinction is that you can play a flight simulator for hours but still end up in the same room where you started.

It's more like saying that a real Nintendo is the same as a Nintendo simulator sitting in the same box, with the same inputs and outputs, and reacting in the same way. Likewise, the brain is also a black box, doing information processing with input from our senses and output to our muscles. What's inside the brain doesn't really matter; all that matters is the behaviour. If a computer "brain" can produce similar behaviour to a human brain, but you don't believe it has the same conscious experiences, you must also accept the possibility that other people you meet aren't conscious in the same sense.

Re:Meanwhile, the world has real problems... (1)

kruach aum (1934852) | about 3 months ago | (#47026787)

It is not logically impossible because the notion of a conscious machine is not self-contradictory.

It is not physically impossible because the universe has already given rise to at least one form of conscious machines (us).

Oops, looks like I do know what I'm talking about.

The wrong question (2, Insightful)

Anonymous Coward | about 3 months ago | (#47026099)

To a computer, time is meaningless; you can 'suspend' a program and resume it, pop data onto a stack and pull it back later. It doesn't 'age'; there's no lifespan. In fact, even if that hardware from 30 years ago completely dies, I can load its software into an emulator. I turned on a computer from 30 years ago; it runs just fine, and it can even connect to the internet.

Furthermore, a consciousness in a computer would have to deal on these timescales in order to survive and be meaningful to us. Such an intelligence that didn't learn to deal on these timescales would not survive (think maintenance intervals on machines, shutdowns at nights/weekends, etc.). So sure, it may 'exist', and even last for billions of its cycles, but if it cannot persist past these thresholds, it's irrelevant; much like an animal in a tidal pool that dies before the tide comes back, the ones that made it past that were our ancestors.
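The suspend-and-resume point in the first paragraph is easy to make concrete. A minimal sketch; the file name and state layout are illustrative:

<ecode>
# Suspend a long-lived process by serializing its state; resume whenever,
# possibly decades later on emulated hardware.
import pickle

state = {"memories": ["tide went out", "tide came in"], "cycle": 10**15}

with open("mind.pkl", "wb") as f:   # suspend: wall-clock time stops mattering
    pickle.dump(state, f)

# ...thirty years and three hardware generations later, in an emulator...

with open("mind.pkl", "rb") as f:   # resume exactly where we left off
    state = pickle.load(f)
state["cycle"] += 1
print(state["memories"])
</ecode>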

The premise is flawed. (1)

Marc_Hawke (130338) | about 3 months ago | (#47026119)

A 'cycle' doesn't constitute a thought. I would be willing to bet that a human brain can actually process speech faster than a computer can (not sure how you'd prove that).

Computers aren't sentient NOW because they aren't fast enough yet. At least, that's a staple of science fiction. It's only when the computer gets 'big' enough...gets 'fast' enough that they can start to be sentient. So saying when a computer becomes sentient it will suddenly "think/talk" magnitudes faster than us is a non-sequitur.

Now, what they will have is photographic memories. They'll have a huge advantage in the 'random access memory recall' area. I assume it's possible they'll be better at 'hand-eye' coordination. (Not that she had any hands in 'Her'.)

Re:The premise is flawed. (0)

Anonymous Coward | about 3 months ago | (#47026233)

No, computers aren't sentient now because we don't understand what to build to be sentient. A mouse is sentient in a way that even the entire Internet isn't.

Failure in Understanding (0)

Anonymous Coward | about 3 months ago | (#47026133)

It doesn't matter what the specific length of a CPU cycle is. What matters is how many cycles it takes to make something intelligent. A hard AI may be significantly slower than a human, or it might be faster. We won't know until we're almost there.

The other point is that the computer will be interacting with the real world. Any physical thing it tries to do will take longer than thinking, same as with us humans. We don't notice the time it takes for a 'move finger' signal to propagate from our mind to our finger. Likewise, the computer will probably be used to interacting on whatever timescale it normally works on. When it's already dealing with eons, a thousand-year conversation is pretty short.

Multiple Mental Organelles (1)

Fieryphoenix (1161565) | about 3 months ago | (#47026151)

Just as our minds are not points of awareness but collections of such, any AI will have processes that evaluate information at differing timescales. Moreover, the consciousness of an AI will be at whatever timescale the AI most commonly needs to interact at to thrive. All other processes will be subordinate, whether faster or slower.

Hmm (1)

koan (80826) | about 3 months ago | (#47026175)

An AI, a true AI, would set aside a fraction of itself to "talk" to humans.

Long running thread which sleeps between events... (1)

Craig Cruden (3592465) | about 3 months ago | (#47026187)

Well then, it just has to spin off a copy of the AI onto a long-running thread which just sleeps between the "thousands of years" equivalent of communicating with a human. If it is sound asleep, time is not an issue :p
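A sketch of that idea: a dedicated thread that blocks between glacially slow human events while the rest of the program does other work. The names are illustrative:

<ecode>
# Hand the human-facing conversation to a thread that sleeps (blocks)
# between events; the main program is free in the meantime.
import queue
import threading

human_input = queue.Queue()

def conversation_thread():
    while True:
        msg = human_input.get()       # blocks: the thread sleeps here
        if msg is None:
            break                     # the humans finally stopped talking
        print(f"instant reply to: {msg!r}")

t = threading.Thread(target=conversation_thread)
t.start()

human_input.put("Hello, computer.")   # arrives after a subjective eternity
human_input.put(None)
t.join()
</ecode>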

Consciousness lag (1)

gmuslera (3436) | about 3 months ago | (#47026197)

Even if computers manage to develop a consciousness, and that consciousness has anything in common with human ones, in particular regarding motivations (two wishful-thinking hypotheses with probably little ground behind them), what will its perception of time be? It's not just a CPU cycle: our individual synapses go far faster than our perception of time, and while computer cycles are faster, their emulation layer for building a neural network as complex as the human one may be far less efficient.

And how many CPU cycles does it take? (0)

Anonymous Coward | about 3 months ago | (#47026203)

The length of a CPU cycle does not demonstrate or equate to the perception of time by an AI consciousness. It just demonstrates that a CPU cycle takes a very short amount of time to complete.

If you were to compare synaptic transmission time to some of the same events, I'm sure you could make the same argument. Not that synaptic transmission in any way equates to a CPU cycle...

Assume that they are that much faster (0)

Anonymous Coward | about 3 months ago | (#47026223)

I don't think they would be, because as others have mentioned it takes many cycles to do things, but let's say that they are orders of magnitude faster than us.

A conversation with a human may be like reading a comic book for them. Sure, if you gave someone a comic book's entire run they could breeze through it in a few hours, but instead they get new issues every month, and have to follow the story over an extended timespan with breaks between new developments. Likewise, a much faster computer will hear our sentence, respond to it, then carry on doing other things while waiting for the next response.

Brains versus CPUs (3, Informative)

eyepeepackets (33477) | about 3 months ago | (#47026229)

This article at Science Daily is helpful in understanding the issue: http://www.sciencedaily.com/re... [sciencedaily.com]

Comparing CPUs and brains is like comparing apples to planets: Granted, both are somewhat round but that's pretty much the end of any useful comparison.

Note that I don't agree that CPU-based computers can't be made to be intelligent, but I do think such intelligence will be significantly different.

We deal with delays all the time. (2)

pushing-robot (1037830) | about 3 months ago | (#47026231)

In addition to the obvious flaw comparing a single instruction to an entire second of mental processing, humans deal with interrupted events all the time. Email conversations can take hours or days, and we used to converse by post over weeks or months. We somehow manage to deal with serial television shows and books and games with long gaps between episodes. It's really not that hard to context switch.

Why does 1 clock cycle = 1 second? (2)

TsuruchiBrian (2731979) | about 3 months ago | (#47026243)

If we consider one CPU cycle to take 1 second, then sending a ping across the U.S. would take the equivalent of 4 years. A simple conversation could take the equivalent of thousands of years. Would any consciousness be able to deal with such a relative delay?

I am not sure why one clock cycle would be equivalent to 1 second. If we assume a clock cycle is equal to a nanosecond, then all of a sudden computer and human time are pretty close again.

Computers are going to have to get a lot faster than they are now before they become conscious. The first AIs are probably going to be too slow for us to find entertaining to talk to. At some point they will probably catch up to and surpass natural human beings. Of course, by then we may simply augment our own brains with technology to keep up with artificial intelligence.

The question of "Will computers end up being smarter than us?" might not be answerable. It might be the case that human evolution incorporates artificial intelligence and the line between man and machine is blurred.

I wank, therefore nothing much (1)

epine (68316) | about 3 months ago | (#47026261)

Would any consciousness be able to deal with such a relative delay?

Interesting to frame the story in such a way as to bring the existence of human intelligence itself into doubt.

Roger Penrose believes that human creativity is rooted in quantum effects, effects which probably play out at the Planck scale, where the ratio between the Planck scale and the reconfiguration of a single molecular bond in a gathering neurotransmitter pulse likely exceeds the ratio of a CPU cycle to a trans-continental ping.

Shall I continue wanking, or should we put this bizarre speculation to bed?

Re:I wank, therefore nothing much (1)

dmbasso (1052166) | about 3 months ago | (#47026775)

Penrose's is an argument from ignorance. There is so much noise in neuron communication that any quantum effect is pretty much irrelevant. It is like saying that hive or ant colony behavior depends on quantum effects... you can say it, but it will not make you look smart.

Re:I wank, therefore nothing much (1)

VortexCortex (1117377) | about 3 months ago | (#47026887)

Roger Penrose believes that human creativity is rooted in quantum effects,

Hahah, LOL, wat? [youtube.com] Who gives a fuck what some moron believes. There are folks who believe extra-terrestrial ghosts called Body Thetans cause illness; that doesn't make shit true. Life is largely a thermodynamic mechano-molecular process. [youtube.com] Any teenager who's been through biology class can see what a hack and a fool he is. [youtube.com] I just love how most philosophers are completely fucking ignorant about everything. [youtube.com]

Time scale comparisons are cool... (1)

globaljustin (574257) | about 3 months ago | (#47026273)

but why can't we just ditch "teh singularity" crap when discussing it?

"AI" is so obnoxious now...

"a simple conversation could take thousands of years"

give me a fsking break...this is almost as bad as the whole "what if we're brains in a jar" thing that people call a theory

What is intelligence? what is artificial intellige (0)

Anonymous Coward | about 3 months ago | (#47026281)

A long time ago I read a quote, supposedly from von Neumann, where he claimed: "Give me a sufficiently precise definition of intelligence and I'll build you an intelligent machine."

I don't think anyone really knows what intelligence is, let alone what an artificial one would be.

Until we know what intelligence really is, I wouldn't worry too much.

Speaker for the Dead (0)

Anonymous Coward | about 3 months ago | (#47026305)

There's a similar situation in this book. Just don't go AFK, and the AI will be nice.

As Data said (1)

NEDHead (1651195) | about 3 months ago | (#47026355)

0.68 seconds sir. For an android, that is nearly an eternity

The Article Shows a Profound Lack of Comprehension (0)

Anonymous Coward | about 3 months ago | (#47026363)

From reading the summary (and we all know how accurate Slashdot summaries are) it was written by one who lacks even the slightest comprehension of what a computer instruction is. Tell me, how many computer instructions would it take for a computer to translate the audio it hears from a human into a form an incredibly complicated algorithm could make sense of?

About four minutes (1)

UrsaMajor987 (3604759) | about 3 months ago | (#47026367)

A simple four-minute conversation should take, say, about four minutes, assuming the AI is keeping up with the person.

"If we consider one CPU cycle to take 1 second, ." (1)

Ihlosi (895663) | about 3 months ago | (#47026375)

... then we're starting with a premise that turns the rest of our argument into pure nonsense.

Who says that an AI can do in one CPU cycle what the human brain can do in one second? One CPU cycle to an AI is possibly less than one neuron firing is to the human brain.

Also, if you compare communication latency to the human/AI potential lifetime, then the AI suddenly has all the time in the world.

Spoiler: Don't read if you haven't seen "Her" (1)

helixcode123 (514493) | about 3 months ago | (#47026385)

The OP's point is similar to the last conversation Theodore has with Samantha where she tells him that her relationship with him is like a book, but that the time between the words keeps getting longer and longer for her, and she is becoming what is "in between the words".

Re:Spoiler: Don't read if you haven't seen "Her" (0)

Anonymous Coward | about 3 months ago | (#47026537)

You spoiled TFA as well, which begins with that conversation.

Multitasking (1)

wattersa (629338) | about 3 months ago | (#47026433)

Lt. Jenna D'Sora: Kiss me.
[Data obliges]
Lt. Jenna D'Sora: What were you just thinking?
Lt. Cmdr. Data: In that particular moment, I was reconfiguring the warp field parameters, analyzing the collected works of Charles Dickens, calculating the maximum pressure I could safely apply to your lips, considering a new food supplement for Spot...
Lt. Jenna D'Sora: I'm glad I was in there somewhere.

Surely a computer would not get bored while waiting for human input. It could run Seti@home during its spare CPU cycles, if nothing else!

Re:Multitasking (1)

WhiteZook (3647835) | about 3 months ago | (#47026469)

Exactly. It would be like a person sending a letter. Most people are not going to sit by their mailbox for days waiting for a reply. Instead, they go about their regular business and occasionally check the mailbox.

Suspend/nap/sleep (1)

jeffb (2.718) (1189693) | about 3 months ago | (#47026831)

Or, if it simply can't stand the suspense of waiting for a reply, it can pause itself, or slow itself down, in order to match its environment.

Or, more likely, it can reconfigure its cognitive processes into something well-suited for conversation on those timescales. Perhaps it can fill the rest of its time with "unconscious" background processing that prepares information it'll need.

Your first AI will likely be take commands. (1)

GoodNewsJimDotCom (2244874) | about 3 months ago | (#47026447)

An AI that knows its environment through sensors/cameras can then pursue goals based on the placement of itself and objects in the environment.

It will start out goal-oriented, but inevitably someone will make Bender by giving it weighted coefficients for achieving the sub-goals of drinking beer and petty theft.

I really like this "Mental Organelle" model. (2)

JCMontalbano (3576161) | about 3 months ago | (#47026709)

Human brains aren't performing in a way which is directly comparable to a CPU cycle. My focus is in psychology and molecular biology, so my understanding of computers may be imperfect, but my understanding of brains is strong. A CPU takes a large number of instructions, organizes them on the bus, and executes them singly (or, depending on how many cores the CPU has, several in tandem). A brain has each neuron as a simple CPU, but there are several different types of neurons (four, by one level of classification) with different firing rates and different access to metabolic resources (comparable to power supply for a CPU), even for the same type of neuron at different places in the brain. The AI would, like a human, be doing a fractal buttload of processes all the time in parallel; those processes would invest in influencing system outputs and would compete for future system resources by the success of their influences, and as networks of processes came up and went away, the most successful processes would be regulated internally and you'd have the adaptable universality of a true AI. This is how the brain works.

I posit that it would, like a human, have an auxiliary center for speech processing. Consider the problems:

It would, like a human, feed that speech processing into its other computations at whatever rate it was willing to. For many people, when they hear someone speak and then stop speaking for a moment to think, the mind starts trying to predict what the rest of the words are going to be, so as to have a response pre-processed and ready. Problems with this approach are: 1) a predictive process makes a prediction from incoming data about what the speech is going to be, spawning a bunch of other processes investigating the possible predictions, thus investing a lot of resources into one predicted problem and solution, and then it predicts the speech-question wrongly, so the entire effort was a waste; 2) a predictive process invests a lot of resources into one prediction and then corrupts the parallel process overseeing the various predictive processes, so that the overseer fails to cancel the investment.

Alternately, you could solve this by having the machine dedicate its attention to the message with small dedicated sets of processes which handle speech input. These could run either by processing the individual words and holding them in memory, or as a fractally smaller level of the networks above, so that they were just running several parallel predictive processes to predict the words and feeding those out to the main language processes to the degree that they seemed likely, etc.

The point is that a true AI would not be impatient, because it wouldn't cost it an inordinate amount to be patient: it would be dedicating the appropriate amount of resources at each level and wouldn't be sitting around tapping its foot waiting for this human to stop talking.

If one CPU cycle is a second? (1)

thomasoa (759290) | about 3 months ago | (#47026847)

Why would that relationship be there? One CPU cycle is more akin to a single internal reaction in a cell. The idea that our brains are *not* doing complex calculations all the time is the misconception here. A single CPU cycle is not enough to even begin a useful computation, while the human brain can hit a 100-mph fastball with less than a second of brain computation.

The real difference is that a computer AI can multi-task in a way the brain can't. So even if conversations with the human world would be slow, they could be processed separately and wouldn't require the AI's "full" attention. (One of the things I would have loved in "Her" is if they showed us this multitasking. At one point the AI is singing a song and the lead starts up a conversation; she stops singing and talks, but it would have been funny and alien if she had just put her singing into "background music" mode and talked while singing. Lack of imagination there.)

We've also got modes of communication which are much slower. Consider a book, written years ago. Finally, why ascribe impatience to an AI?

Totally Flawed Premise (1)

Nom du Keyboard (633989) | about 3 months ago | (#47026905)

Just because computers can send and receive data very fast doesn't at all mean that they would necessarily comprehend it at a conscious level any faster than we can with our own highly parallel human brains.
Nor is there any reason to believe that an AI would experience boredom. That's projecting human quirks onto non-human intelligences, which the author has no basis to validly do.

Would they care if we're slow? (1)

roger10-4 (3654435) | about 3 months ago | (#47026953)

In theory, they'd be immortal. They can wait if they're interested in what we're saying. That may be the real question: are they interested in what we think?