Blind Man Test Drives Google's Autonomous Car

Soulskill posted more than 2 years ago | from the sight-to-site-transport dept.

Velcroman1 writes "'This is some of the best driving I've ever done,' Steve Mahan said the other day. Mahan was behind the wheel of a Toyota Prius tooling around the small California town of Morgan Hill in late January on a routine trip to pick up the dry cleaning and drop by the Taco Bell drive-in for a snack. He also happens to be 95 percent blind. Mahan, head of the Santa Clara Valley Blind Center, 'drove' along a specially programmed route thanks to Google's autonomous driving technology. Google announced the self-driving car project in 2010. It relies upon laser range finders, radar sensors, and video cameras to navigate the road ahead, in order to make driving safer, more enjoyable and more efficient — and clearly more accessible. In a Wednesday afternoon post on Google+, the company noted that it has hundreds of thousands of miles of testing under its belt, letting the company feel confident enough in the system to put Mahan behind the wheel."

In other news... (-1, Flamebait)

Anonymous Coward | more than 2 years ago | (#39507053)

Disabled man's life put on the line and exploited for corporate publicity stunt.

Re:In other news... (5, Interesting)

Anonymous Coward | more than 2 years ago | (#39507077)

Grow up. They have done 200,000 miles with a person sitting in the driver's seat to ensure they could take control if anything went wrong. On a pre-programmed route this is a very stable system, and he had someone beside him in the passenger seat (I also wouldn't be surprised if it was dual-control so the passenger has access to a brake pedal). Meanwhile this technology could eventually change the lives of millions upon millions of disabled people; damn right it deserves the publicity. With your attitude we'd never have wheelchairs or crutches or surgery, all things which, the first time out, could have resulted in injury but have been life-changing tech for millions.

Re:In other news... (-1, Troll)

SpinyNorman (33776) | more than 2 years ago | (#39507241)

The trouble with this type of driverless car tech is that it's going to be as brittle as the AI it's based on. It may work fine for normal situations, complex or not, but the day a child runs out in front of it in a way it's not been programmed to handle there's going to be a tragedy.

It's one thing if this is going to be used as a glorified cruise control, but another entirely if it's meant to be used without a qualified driver ready to take over whenever appropriate.

For this type of tech to be safe in an unrestricted environment (e.g. on public roads) it needs to be backed by human level AI (i.e don't hold your breath), not just an expert system using lasers and cameras to stay on the road and read the speed limit.

Re:In other news... (5, Insightful)

AntmanGX (927781) | more than 2 years ago | (#39507273)

Yes, because Google (and the authorities letting these cars on the roads) would have *never* thought of the possibility of pedestrians running in front of these cars.

Quick! Get in touch with them and bring this to their attention!

Re:In other news... (5, Insightful)

bgarcia (33222) | more than 2 years ago | (#39507341)

Exactly!

And to go a little further, technology doesn't get sleepy. Technology doesn't get distracted by cell phones, GPS systems, or the radio. Technology won't have a blind spot. This is going to be an incredible advance. I'm much less worried about a driverless car hitting a pedestrian than I am the average driver hitting one.

Re:In other news... (5, Interesting)

RoboJ1M (992925) | more than 2 years ago | (#39507435)

They can also drive safely millimetres (like inches but smaller) apart from each other, massively increasing the capacity of the existing road network.
I've seen that thing MERGE WITH MOTORWAY (freeway right?) TRAFFIC!!! 8@~~
It's bonkers clever. I want one. Where we all just sit around the table inside it having breakfast.

Re:In other news... (2)

ZiggieTheGreat (934388) | more than 2 years ago | (#39507613)

I know some human drivers who have no idea how to merge onto the freeway.

Frankly, I have much more faith in a Google computer driving my car than I do in the other humans on the road.

Re:In other news... (4, Informative)

TheRaven64 (641858) | more than 2 years ago | (#39507839)

They can also drive safely millimetres (like inches but smaller) apart from each other, massively increasing the capacity of the existing road network.

Stopping distance doesn't change that much. The reaction time becomes smaller, but the braking time stays the same. It's fine for normal use, but when the car in front collides with an oncoming vehicle or something falling off a bridge and comes to an abrupt stop, then your driverless car still needs almost as long as a human-operated car to come to a safe stop without hitting the vehicle in front.
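To put rough numbers on that: total stopping distance is reaction distance plus braking distance, and automation only shrinks the first term. A minimal Python sketch, assuming a dry-road deceleration of about 7 m/s^2 and purely illustrative reaction times of 1.5 s (human) and 0.2 s (computer):

    # Stopping distance = reaction distance + braking distance.
    # Assumptions: ~7 m/s^2 deceleration on dry tarmac; reaction times are illustrative.
    def stopping_distance(speed_kph, reaction_s, decel=7.0):
        v = speed_kph / 3.6                 # km/h -> m/s
        reaction_dist = v * reaction_s      # distance covered before the brakes bite
        braking_dist = v * v / (2 * decel)  # distance covered while braking
        return reaction_dist, braking_dist

    for reaction_s in (1.5, 0.2):           # human vs. hypothetical automated system
        r, b = stopping_distance(100, reaction_s)
        print(f"reaction {reaction_s}s: {r:.0f} m reacting + {b:.0f} m braking = {r + b:.0f} m")

    # At 100 km/h the ~55 m braking component is identical in both cases; only the
    # reaction component (roughly 42 m vs. 6 m) shrinks, which is the point made above.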

Re:In other news... (1)

lordbeejee (732882) | more than 2 years ago | (#39507947)

And a human driver uses the super-stop-brake-pedal and thus can avoid the situation better than the computer?

Re:In other news... (0)

Anonymous Coward | more than 2 years ago | (#39508029)

Wrong, quite a bit of the stopping distance is reaction time. Doesn't mean you can drive millimeters apart but perhaps 30% closer.

Re:In other news... (1)

ZeroSumHappiness (1710320) | more than 2 years ago | (#39508287)

More likely it's a step towards ganging the cars together to act as a train on long freeway hauls to gain the benefit of reduced gas consumption through better aerodynamics. The passenger cell of the car would have to be greatly stiffened in that case I think, but good crumple zones plus automated braking upon an accident should help that greatly, and ganging the cars together should be safer than a train in an accident since everyone would be in seat belts and their own safety cell.

Re:In other news... (4, Insightful)

msobkow (48369) | more than 2 years ago | (#39507471)

Automated cars are also unlikely to rip along at 80-90 kph in a 50 zone like the psycho-cabbie who nearly ran me down on my wake-up walk half an hour ago.

Re:In other news... (5, Funny)

Anonymous Coward | more than 2 years ago | (#39507645)

Was that you? Sorry, I was trying to get FP...

Re:In other news... (0)

Anonymous Coward | more than 2 years ago | (#39507623)

To your point, the other big reason I'm a fan of driverless cars: Technology doesn't get drunk (see also: Bender B. Rodriguez).

Re:In other news... (2)

Dhalka226 (559740) | more than 2 years ago | (#39507643)

Reaction time is also cut considerably, as is the time it takes to physically perform whatever act is deemed the best course of action. If "slam on the brakes" is the action, the car doesn't have to lift its foot off the pedal and move it over to slam the brake -- the car's already braking.

A child running in front of a car is a recipe for disaster either way, but the kid is probably safer with the driverless car.

Re:In other news... (1)

AmiMoJo (196126) | more than 2 years ago | (#39507813)

Speaking of GPS how will it deal with errors on maps? Apparently human beings have enough trouble noticing when their GPS directs their 40 tonne truck down a dirt track next to a perilous cliff.

Re:In other news... (0)

Anonymous Coward | more than 2 years ago | (#39507945)

Technology won't have a blind spot? I'm not so sure I'd agree with that. Potentially it won't, but implementations aren't perfect especially when trying to bring down the price.

Re:In other news... (2)

artfulshrapnel (1893096) | more than 2 years ago | (#39508063)

I think he's referring to the fact that a digital driver could have as many "eyes" in the form of cameras as it needs, arrayed in whatever way works best. It can have a 360 degree ring of cameras on the top of the car, for example, which has no blind spots at all. (I mean, unless you manage to crawl under the car via a sewer system or something....)

Compare that to a pair of forward facing eyes, with an elaborate system of mirrors to try and allow them to see behind the car as well as in front. Lots more blind spots, and they can only look in one direction at a time.

Re:In other news... (1)

jamesh (87723) | more than 2 years ago | (#39507359)

I asked elsewhere too... are there videos of Google cars reacting to this sort of situation?

Re:In other news... (0)

Anonymous Coward | more than 2 years ago | (#39507529)

I don't know if there is but from what I understand, this thing is incredibly intelligent. Screw kids running in front of the vehicle, it's capable of anticipating asshole drivers not paying attention and cutting you off. It has sensors in all directions, (like 360 degrees at all times instead of maybe 130 at a time) and watches everything. Wired ran a rather long article about it recently and it was quite the read. Whether this is "human level AI" or not, it is certainly a very clever machine. They aren't "Programming" these cars, they are "driving" it around and letting it learn on its own.

Re:In other news... (3, Funny)

jamesh (87723) | more than 2 years ago | (#39507649)

I don't know if there is but from what I understand, this thing is incredibly intelligent.

If it was truly smart it would stay in the garage and never come out. Driving is a curious game, the only way to win is not to play.

Re:In other news... (1)

Anonymous Coward | more than 2 years ago | (#39508151)

watch the TED talk with Sebastian Thrun - it shows the car making a left turn, stopping for pedestrians in the crosswalk, and continuing.

Re:In other news... (0)

ledow (319597) | more than 2 years ago | (#39507377)

Did they think of the possibility of driving over a cliff-edge while out of GPS reception?

Or what happens if a bridge collapses? Does the car detect the void underneath it and stop, or just think it's a steep hill and plummet over the edge?

Does it detect ice, snow, oil, sand before the wheels are there? What about fire? What about an accident happening to the tanker in front of you and you ploughing through the spilled petroleum because the car doesn't "see" it? What about kids throwing stones off the top of a bridge onto the passing cars (common problem in the UK - someone died just the other month from this)? Is the car looking UP too and determining their intent?

There are a BILLION and one problems that only happen once in a lifetime. But if that causes you (OR ANYONE ELSE - sod the blind person, I would complain to the highest authority if a blind person was driving a car around my area, with or without a permit, and risking pedestrians' and other drivers' lives) to die early, or be at raised risk of injury, there are a lot more things to consider than you can EVER detect with sensors OR ever account for in programming and testing.

This is why even a jumbo jet - some of the most highly automated and tested machines in the world - has TWO HUMAN OPERATORS. And even there, they have TWO because the first can't be trusted on their own (proven by that recent thing with the pilot).

If you honestly, seriously, think that you can reliably determine the outcome of a machine complex enough to obtain all that data, you're an idiot. You *CAN* verify a system like an airbag control, or ABS, because it's isolated and has the tiniest amount of actual code running the thing that you can (and DO) mathematically verify.

You can't verify a system on this scale. It's like trying to verify a Kinect. You just cannot guarantee what it will classify something as, just by a simple test of something similar. And this is orders of magnitude more complex, more important and more deadly than a stupid games console.

Re:In other news... (1)

RoboJ1M (992925) | more than 2 years ago | (#39507467)

Can you verify yourself?
I know I can't. I've even driven into the back of someone in heavy traffic because a sudden hail storm fogged up the window.
Fact of the matter is, get this system to cope with most situations and put a STOP!!! button on the dash and you'll solve most traffic and traffic accident problems.

Re:In other news... (0)

Anonymous Coward | more than 2 years ago | (#39507507)

If you honestly, seriously, think that you can reliably determine the outcome of a machine complex enough to obtain all that data, you're an idiot.

I don't think you can reliably determine the outcome. But that doesn't change when the 'machine' is a human. Let's run objective tests to find out whether the automated vehicle is more safe, less safe or about the same. I believe that's what's going on.

Re:In other news... (5, Insightful)

Smidge204 (605297) | more than 2 years ago | (#39507571)

Did they think of the possibility of driving over a cliff-edge while out of GPS reception?

If the Internet is to be trusted at all, I'll take the chance of a self-driving car careening off a cliff due to lack of GPS reception over the chance of a human careening off a cliff because of GPS reception.

Does it detect ice, snow, oil, sand before the wheels are there?

Humans certainly don't... but there are already automatic traction control systems that do an excellent job maintaining the vehicle's footing in all but the more extreme situations - I can't imagine it would be that hard to send that data to the pilot AI and have it react by slowing down. Also I'd imagine it would be easier for the computer to detect ice and such using sensor data (IR cameras to detect road surface temp, lasers reveal changes in surface reflective properties, etc.)
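One way that feedback could work, sketched in Python with invented numbers: if the sensors estimate a lower friction coefficient (ice, standing water), cap the speed so the car can still stop within the distance the sensors can reliably see. This is an illustration of the idea only, not any real system:

    import math

    G = 9.81  # m/s^2

    # Hypothetical: pick the highest speed at which reaction distance plus braking
    # distance (limited by the estimated friction) still fits inside sensing range.
    def max_safe_speed_kph(friction_estimate, sensing_range_m, reaction_s=0.2):
        a = 1.0 / (2.0 * friction_estimate * G)   # braking-distance coefficient
        b = reaction_s
        c = -sensing_range_m
        v = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # positive root, in m/s
        return v * 3.6

    print(f"{max_safe_speed_kph(0.7, 60):.0f} km/h")  # dry asphalt: roughly 99 km/h
    print(f"{max_safe_speed_kph(0.2, 60):.0f} km/h")  # icy surface: roughly 54 km/h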

What about kids throwing stones off the top of a bridge onto the passing cars (common problem in the UK - someone died just the other month from this)? Is the car looking UP too and determining their intent?

Again, I doubt humans would do much better. The radar systems on an automated car could conceivably be used to detect objects that may hit the car even from above and some evasive/mitigating action could take place - with better reaction times than a human driver.

This is why even a jumbo jet - so of the most highly automated and tested machines in the world - has TWO HUMAN OPERATORS. And even there, they have TWO because the first can't be trusted on their own (proven by that recent thing with the pilot).

Again, though I don't keep careful track of these things, there seem to be more incidents related to human error than automation error. Specifically, humans overriding the automated systems to correct for a problem that didn't actually exist.

If you honestly, seriously, think that you can reliably determine the outcome of a machine complex enough to obtain all that data, you're an idiot.

Humans are essentially machines much more complex than that, and have tens of thousands of years worth of historical precedent for doing incredibly stupid things despite having accurate information - yet somehow they are more trustworthy than a machine just by virtue of not being a machine? This kind of argument instantly refutes itself.

How do you test the system for these things? Tens of thousands of hours of real-world driving. Considering all a human needs to legally operate a 2-ton projectile is roughly twenty minutes worth of testing (if you're lucky!) I'll take my chances with the machine.
=Smidge=

Re:In other news... (1)

ZiggieTheGreat (934388) | more than 2 years ago | (#39507731)

"Humans are essentially machines much more complex than that, and have tens of thousands of years worth of historical precedent for doing incredibly stupid things despite having accurate information - yet somehow they are more trustworthy than a machine just by virtue of not being a machine? "

I believe it is the fact that we (humans) built it, that makes us not trust it. We know how stupid humans are, how could we create anything smarter than ourselves?

Re:In other news... (0)

arth1 (260657) | more than 2 years ago | (#39507767)

I'll take my chances with the machine.

You do that, but not on any road I drive on.

Yes, humans make mistakes. But those are mistakes. Machines only do what they are programmed to do, and go down in cascades when something unexpected happens. I would rather have an accident caused by human mistake every now and then than a future where all traffic stops and backs up for hours because a branch fell onto the road and the systems raise an alert for "unknown, may not be safe to pass", and don't understand the policeman who waves at you to go around it despite having to cross double yellow lines.

No thanks, you play in your playground as much as you like, but keep it to yourself, please.

Re:In other news... (1)

leonardluen (211265) | more than 2 years ago | (#39508069)

Why can't we have it both ways? The computer can safely handle 90% of the issues, but when it detects an "unknown, may not be safe to pass" the car comes to a stop and has the human take over. Remember there is still a human in the car; why not let the human handle the edge cases?

If the car sees it far enough ahead of time, it may be possible for it to warn the human of the upcoming hazard and have the human take over before they even reach the obstacle, without having to stop. And then once navigated past the obstacle the autopilot could be turned on again.

To be honest, most of the time I would trust a computer more than most of the human drivers out there. Humans show really bad judgement, especially during poor road conditions such as when there is snow and ice.

Re:In other news... (2)

cmdr_tofu (826352) | more than 2 years ago | (#39507583)

Most of the problems you cite are *highly improbable*. Nobody is claiming that a driverless car will never make a mistake, but the fact is that there are many automotive fatalities each year: http://en.wikipedia.org/wiki/List_of_countries_by_traffic-related_death_rate [wikipedia.org]
This is because humans make mistakes. Our current system is not 100% safe. A replacement system does not have to be 100% safe, just better.

To evaluate a driverless system's success it is not meaningful to look at the least likely cases, but at whether, overall, it will reduce or increase the number of deaths. If 90% of deaths are due to drunk drivers, and driverless cars only fail when, say, a bridge collapses (a very rare incident), then this will be a net win.

Re:In other news... (1)

leonardluen (211265) | more than 2 years ago | (#39508255)

I agree. Seriously, how many bridges does the GP cross that regularly collapse?

The only one I know of during my lifetime that was within 1000 miles of me is the I-35 collapse in MN about 5 years ago, and I still have never been anywhere near that bridge.

Man vs. machine (4, Informative)

DrYak (748999) | more than 2 years ago | (#39507781)

Did they think of the possibility of driving over a cliff-edge while out of GPS reception?

Or what happens if a bridge collapses? Does the car detect the void underneath it and stop, or just think it's a steep hill and plummet over the edge?

Google cars use GPS only to get directions. The actual driving is done with laser grids and radar (for near distance) and video cameras (for long distance). So the car doesn't drive according to what it "thinks" should be there according to the plan; it drives according to what it "sees" (with its sensors). Any of the situations you mention will end up with the car detecting a lack of drivable surface, stopping, asking its GPS for an alternative route, and going another way.
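That split (GPS and maps for routing, live sensors for every actual driving decision) can be pictured as a small control loop. A toy, runnable Python sketch; the segments, names, and reroute logic are all invented for illustration and are not Google's software:

    from dataclasses import dataclass

    @dataclass
    class Segment:
        name: str
        drivable: bool   # what the lidar/radar/camera stack reports right now

    def drive(route, reroute):
        """Follow the route segment by segment; if the sensors say the surface ahead
        is not drivable (collapsed bridge, washed-out road), stop and re-plan."""
        for i, seg in enumerate(route):
            if not seg.drivable:
                print(f"stopping before {seg.name}; asking for an alternative route")
                return drive(reroute(route[i:]), reroute)
            print(f"driving through {seg.name}")
        print("arrived")

    # Toy example: the planned route crosses a bridge the sensors report as missing.
    route = [Segment("Main St", True), Segment("old bridge", False), Segment("High St", True)]
    detour = lambda blocked: [Segment("detour via River Rd", True), Segment("High St", True)]
    drive(route, detour)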

Whether a human driver would react the same way, or would be too busy getting distracted by the on-board entertainment/smartphone/passengers/newspaper, etc. and fall into the hole, is left as an exercise to the reader. (Yup, we've already had stories on /. of clueless drivers wrecking their cars because the GPS told them to go a certain way.)

There are a BILLION and one problems, that only happen once in a lifetime.

And that's why you put the stuff through extensive testing. Already in the range of hundreds of thousands of miles on actual roads with actual traffic in the case of Google cars.
Yes, it won't take into account some really weird exceptions. But... humans make mistakes too, and mostly in normal everyday boring situations (because they are boring and the brain kicks into "autopilot" routine mode). (And chances are that some of the really weird situations are going to be "missed" by the human, not because the human wouldn't have reacted correctly, but because the human was too bored to pay attention.) Also, even if you, the human driver, can think of hundreds of situations which might be missed by an AI but where you think you'll be able to react correctly, I can probably think of situations where you drove perfectly well but still got into an accident because of some other driver.
At some point in time, we will reach the situation where an autonomous car (even including the accidents due to weird rare situations) will cause a lot fewer casualties than a human driver (who might just not be paying attention).

The point of this publicity stunt is to show that, given the current extensive testing the cars have undergone, this point in time is nearing soon.

And, also, the advantage of autonomous cars is that, as a weird situation happens, it can be analysed and the programming can be updated, leading to even fewer casualties in the long term.
Whereas, with human drivers, you can't just magically "programmatically remove from the road" asshole, idiot, and distracted drivers.

You can't verify a system on this scale. It's like trying to verify a Kinect. You just cannot guarantee what it will detect something as just by a simple test of something similar. And this is orders-of-magnitude more complex, more important and more deadly than a stupid games console.

You can't prove *mathematically* that the autonomous car will be perfect in absolutely every single situation (just because there is a potential infinity of such situations). But you can prove *statistically* that the autonomous car is better and safer than a human driver, based on the number of accidents and casualties caused by each. And overall this *will* mathematically increase the safety on roads.
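A minimal sketch of what "prove statistically" could mean in practice: compare crash rates per mile for the two kinds of driver, each with a rough Poisson confidence interval. The figures below are placeholders for illustration, not real fleet data:

    import math

    # Crashes per million miles with a rough 95% interval (normal approximation
    # to the Poisson count). All numbers here are placeholders, not real data.
    def rate_with_ci(crashes, miles):
        rate = crashes / miles * 1e6
        half_width = 1.96 * math.sqrt(crashes) / miles * 1e6
        return rate, rate - half_width, rate + half_width

    human = rate_with_ci(crashes=6_000_000, miles=3e12)   # of the order of published US figures
    robot = rate_with_ci(crashes=3, miles=2_000_000)      # hypothetical small test fleet
    print("human crashes per million miles: %.2f (%.2f to %.2f)" % human)
    print("robot crashes per million miles: %.2f (%.2f to %.2f)" % robot)
    # With a small fleet the intervals still overlap: the statistical case needs a
    # lot of test miles, which is exactly why the extensive testing matters.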

Re:In other news... (0)

arth1 (260657) | more than 2 years ago | (#39507589)

Yes, because Google (and the authorities letting these cars on the roads) would have *never* thought of the possibility of pedestrians running in front of these cars.

+1 Funny
The example was "the day a child runs out in front of it in a way it's not been programmed to handle". What part of "in a way it's not been programmed to handle" did you fail to comprehend?

You can only prepare for situations you think of in advance, while a human has the advantage of being able to judge new situations.
If you think it's possible to prepare for situations that you haven't even thought of, I have a car to sell you...

This type of system is, by its nature, going to be reactive. For every eventuality you can program it to handle, you can come up with an infinity of further situations that it doesn't handle.

  • A person on the other side of the street starts to walk out in traffic, and you see that the oncoming driver will swerve to miss him. You compensate. Will Google's car be programmed to handle this?
  • A lawn sprinkler at the side of the road is no danger, and its spray can safely be driven through. Is the Google car smart enough to realize this?
  • A trailer is driving very slowly, but signalling with its light that it's safe to pass. Does the Google car make a safe pass, or stay behind?
  • Someone has hung rubber ribbons from a bridge to get cars to stop before proceeding. Is the Google car programmed to handle this?
  • A piece of road has been washed out, and there are a couple of planks to drive over. Who programmed the Google car to handle this?
  • A policeman is directing traffic, and standing with one arm up facing you. What does the Google car do?
  • A policeman is directing traffic, and standing with one arm up facing away from you. What does the Google car do?

The thing is, there is an infinity of situations like this you can come up with, and no way for the car to be able to handle them all. No matter whether you can say to three or all of the above that "yes, the next version of the car can handle this", we can always think of more situations it cannot handle.

For this to be feasible, the roads have to be dumbed down to the level of what the car can handle. I.e. something like a Sci-Fi railway where only authorized cars are allowed on it.
Meanwhile, in the real world, people like their freedom.

Re:In other news... (1)

Kdansky (2591131) | more than 2 years ago | (#39507709)

Can the AI fail at very rare circumstances? Yes. Will we see AI failures? Yes. Will we see the AI repeating the same mistake? No. It can be patched. Will we see people driving while drunk in ten years, despite thousands having proven beyond all doubt that it's a really stupid idea? Definitely. AIs can fail, humans fail more. The AI just has to get to less than 1'000'000 deaths per year, and it's already an improvement.

Re:In other news... (1)

arth1 (260657) | more than 2 years ago | (#39507849)

The AI just has to get to less than 1'000'000 deaths per year, and it's already an improvement.

Not necessarily. Not if it means that the average speed of transportation goes way down. Or that the "rare" incidents that defeat the AI cause major traffic blocks, because the expert systems decide to STOP in the face of the unknown.

We have chosen to have a dangerous traffic system. It would be much safer if we limited all cars to do a max of 10 mph. I am certain that almost all the deadly accidents would go away. But we do not want that. We want the freedom and speed, despite the dangers.

If you want to live in a padded room, go ahead. But don't force others to, in the name of safety. Living is dangerous, and that's what makes it worth living.

Re:In other news... (5, Funny)

Anonymous Coward | more than 2 years ago | (#39507303)

Most likely an autonomous car can react to an obstacle running out in front of it faster than a human can.

And given the average human's driving ability, it probably fares no worse when it comes to being in the correct lane at complex junctions.

Maybe it will need two more orders of magnitude of testing and refinement before it can be included in cars that the blind person can be alone in, but progress is progress, and this is surely a milestone?

Of course I will jailbreak my car when it comes with such technology, so that I can add my own AI modules, such as "HunterKillerMod" that turns the car into a pedestrian killing machine. And "DestroyAllCyclists" too, obviously (who won't have that installed?).

Re:In other news... (1)

na1led (1030470) | more than 2 years ago | (#39507585)

I'd like to know how it can handle hazardous road conditions, like snow, ice, etc. Also, what if something goes wrong mechanically with the vehicle? How will the A.I. respond to that, or even notice it? If I smell smoke, or something burning, I'll pull over to check it out. There seem to be so many variables to consider that I think it will be decades before we have A.I. that can deal with all this.

Re:In other news... (1)

lyml (1200795) | more than 2 years ago | (#39507761)

If your crude inbuilt sensor system can smell smoke from the engine, the check-engine light has probably been on for the last two years.

Re:In other news... (1)

Goragoth (544348) | more than 2 years ago | (#39507769)

Thing is, for every problem you point out with an AI-driven car you can point out 5 problems with human drivers. Humans frequently mess up in hazardous conditions, especially if they aren't used to them, while an AI car is going to be programmed for all possible conditions before it will ever be released into the wild. As for something being wrong with the car, that's what sensors are for. We have to rely on imperfect cues like smell; the AI, on the other hand, should be plugged straight into the onboard computer and have an excellent overview of the car's health. It might miss corner cases, but once again, humans will miss many more. Also, humans will often suspect something is wrong and carry on anyway because they can't be bothered to check it out, while the AI can be forced to pull over and demand a fix before carrying on.

There will still be deaths on the road if we switch over to 100% AI controlled traffic but I'll be damned if it won't drop the road toll to a tenth or less than what it is now. That's a ton of lives that will be saved, as well as the added convenience of not having to drive yourself. Of course convincing people to give up control to a machine is going to be a tough sell.

Re:In other news... (1)

na1led (1030470) | more than 2 years ago | (#39508161)

If you're going to rely only on sensors, you're taking a big risk. Sensors can go bad, and in fact it often ends up fatal, e.g. space shuttle disasters, plane crashes, train wrecks, etc. I've had sensors go bad on my car many times when there was nothing actually wrong with the car. You can't compare a few lines of code and some sensors to the human brain! Even military drones need a human to make decisions for them. Until we see robots that can think for themselves, self-driving cars will be decades away.

Re:In other news... (1)

artfulshrapnel (1893096) | more than 2 years ago | (#39508095)

As the others point out, humans fare way worse than cars on this. If you smell smoke, you know that something is wrong with your car. If a computer is plugged into the standard OBDII port on your car, it can tell you exactly what is wrong by checking an array of sensors before the smoke even starts.
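For what it's worth, reading that sensor array over the OBD-II port is something you can already try from a laptop. A small sketch assuming the third-party python-obd package and a consumer ELM327 adapter plugged into the diagnostic port; whether these specific readings are available depends on the vehicle:

    import obd  # third-party python-obd package; assumes an ELM327 adapter is connected

    connection = obd.OBD()  # auto-detects the adapter's serial port

    # Poll a few health-related readings well before anything starts smoking.
    for cmd in (obd.commands.COOLANT_TEMP, obd.commands.ENGINE_LOAD, obd.commands.RPM):
        response = connection.query(cmd)
        if not response.is_null():
            print(cmd.name, response.value)

    # Stored diagnostic trouble codes, i.e. why the check-engine light is on.
    print("trouble codes:", connection.query(obd.commands.GET_DTC).value)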

Re:In other news... (1)

cpghost (719344) | more than 2 years ago | (#39507739)

Of course I will jailbreak my car when it comes with such technology (...)

It's also an excellent excuse in case something went wrong: "sorry, I couldn't help it: my car caught a virus!"

Re:In other news... (1)

TheRaven64 (641858) | more than 2 years ago | (#39507863)

And "DestroyAllCyclists" too

I believe this is enabled by default if you set the locale to en_US.

Re:In other news... (1)

dave420 (699308) | more than 2 years ago | (#39507313)

You should read about what the system is actually capable of.

Re:In other news... (1)

beelsebob (529313) | more than 2 years ago | (#39507345)

The trouble with this type of driverless car tech is that it's going to be as brittle as the AI it's based on. It may work fine for normal situations, complex or not, but the day a child runs out in front of it in a way it's not been programmed to handle there's going to be a tragedy.

I'm not sure many human drivers will have driven 200,000 miles without ever ever ever having had even the most minor of scrapes. To me, it sounds like even in the test phase this car is a lot less brittle than the wetware AIs we have driving cars already.

It's one thing if this is going to be used as a glorified cruise control, but another entirely if it's meant to be used without a qualified driver ready to take over whenever appropriate.

Not if the AI in those cars is less brittle than the AI in our brain, even if it is still somewhat brittle.

For this type of tech to be safe in an unrestricted environment (e.g. on public roads) it needs to be backed by human level AI (i.e don't hold your breath), not just an expert system using lasers and cameras to stay on the road and read the speed limit.

Your assertion is equivalent to "if computers are ever going to play chess properly, they'll need a human to stand by, and correct moves for them when they get it wrong". News flash – computers can't play chess perfectly, but they sure can play it better than 99.99999999% of humans.

Re:In other news... (0)

Anonymous Coward | more than 2 years ago | (#39507369)

"not just an expert system using lasers and cameras to stay on the road and read the speed limit"

I hope they implemented an expert system using lasers and cameras to follow the rules of a road.

Re:In other news... (1)

compgenius3 (726265) | more than 2 years ago | (#39508235)

I read an article a while back about the cars. It turns out, when they programmed them to follow the rules of the road exactly, they couldn't get anywhere because other drivers continuously broke the rules. So they had to reprogram the cars to allow for bending the rules in certain situations. The example that stuck in my mind was at a 4-way stop sign, the car has to inch forward to indicate its intent to pass through the intersection. Otherwise, the other drivers just ignore the car and keep going in turn, despite the rules of the road stating that's illegal.
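That "inch forward to claim your turn" behaviour is easy to picture as a tiny state machine layered on top of the normal right-of-way rule. A purely hypothetical Python sketch; the states and threshold are invented, not taken from the article or from Google:

    # Hypothetical 4-way-stop logic: obey right-of-way, but if other drivers keep
    # taking our turn, creep forward to assert intent (as described above).
    def four_way_stop(my_turn, others_moving, turns_skipped):
        if not my_turn:
            return "HOLD"    # not our turn yet
        if not others_moving:
            return "GO"      # intersection is clear, proceed
        if turns_skipped >= 1:
            return "CREEP"   # edge forward slowly to signal intent
        return "HOLD"        # yield once, then start asserting

    print(four_way_stop(my_turn=True, others_moving=True, turns_skipped=0))   # HOLD
    print(four_way_stop(my_turn=True, others_moving=True, turns_skipped=2))   # CREEP
    print(four_way_stop(my_turn=True, others_moving=False, turns_skipped=0))  # GO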

Re:In other news... (0)

Anonymous Coward | more than 2 years ago | (#39507375)

Actually a child running in front of the car is exactly what the car performs perfectly at avoiding.

What the car can't do is tell the difference between a stationary object and a human.

This is never an issue on a route the car has previously driven on; however, the issue is stationary objects on a completely new route, where the driver is unable to assist the car in understanding the difference between a stationary object and a stationary human.

Either way, my point is that the Google car is amazing at anything that is moving: people, cars, lights, or even stationary cars. It's really not that far from being perfect, and it's already amazing in areas it has already been.

Re:In other news... (3, Insightful)

Smidge204 (605297) | more than 2 years ago | (#39507601)

Why would it need to tell the difference between a human and a non-human? If it's not in the way, it's not in the way. If it's moving in a manner that will cause it to be in the way, then react accordingly. A human standing at the curb and suddenly running out into the street is no different, functionally, than an empty trash can getting thrown into the street by a gust of wind. An obstacle is an obstacle regardless of what it's made of and regardless of whether or not it's sentient.
=Smidge=
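The "an obstacle is an obstacle" logic comes down to geometry: project the object's motion forward and brake if its predicted position crosses the car's path before the car gets there. A toy, one-dimensional Python sketch with invented numbers:

    # Toy 1-D illustration: brake for anything whose predicted position ends up in
    # our lane when we arrive, whether it's a child, a trash can, or a shopping cart.
    def should_brake(obstacle_lateral_m, lateral_speed_mps, distance_ahead_m,
                     car_speed_mps, lane_half_width=1.8):
        time_to_reach = distance_ahead_m / car_speed_mps            # seconds until we're there
        predicted_lateral = obstacle_lateral_m + lateral_speed_mps * time_to_reach
        return abs(predicted_lateral) < lane_half_width

    # Something 3 m off the lane centre, moving toward the road at 2 m/s, 30 m ahead
    # of a car doing 15 m/s, will be in the lane when the car arrives:
    print(should_brake(3.0, -2.0, 30.0, 15.0))   # True  -> brake
    print(should_brake(3.0,  0.0, 30.0, 15.0))   # False -> a parked bin, carry on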

Re:In other news... (1)

arth1 (260657) | more than 2 years ago | (#39507679)

Why would it need to tell the difference between a human and a non-human?

No. Just plain no.
The water from a sprinkler at the side of the road is not a danger I need to stop for, even if it registers on radar. Stopping would cause a danger, not prevent one.
Neither are the soft branches of a fallen tree that stick out into the road. Nor do I worry about the leaves blown by a leaf blower. And the crow sitting pecking at the dead squirrel will take off before I get to it. The kid sitting at the side of the road trying to revive a dead squirrel won't.

And what about the policeman standing outside the road, signalling you to stop?

Or the guy in the broken down car in front of you who waves at you out the window to pass?

Expert systems are anything but.

Re:In other news... (4, Insightful)

TheRaven64 (641858) | more than 2 years ago | (#39507887)

Why would it need to tell the difference between a human and a non-human?

There are a few situations where this could be important. Consider: a cat runs into the road from the right, and a child runs into the road from the other side to get the cat out of the road. A human would typically prioritise not hitting the child. If the AI doesn't, and hits the child in preference to the cat, then it's not going to look very good. If there's only one obstacle, you want to avoid it. If there are two, then you want to avoid the most valuable ones, and generally we consider humans to be more valuable than anything else you're likely to collide with.
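When every available manoeuvre hits something, classification is what lets the planner break the tie with a per-class cost. A hypothetical Python sketch; the weights and options are made up for illustration:

    # Hypothetical: if a collision is unavoidable, pick the manoeuvre with the
    # lowest total "harm cost". The class weights are invented for illustration.
    HARM_COST = {"human": 1000, "animal": 50, "parked_car": 20, "debris": 1}

    def pick_least_harmful(options):
        # options maps a manoeuvre name to the list of object classes it would hit.
        return min(options, key=lambda m: sum(HARM_COST[obj] for obj in options[m]))

    options = {
        "brake_straight": ["human"],               # the child
        "swerve_left": ["animal"],                 # the cat
        "swerve_right": ["parked_car", "debris"],
    }
    print(pick_least_harmful(options))  # "swerve_right": dent a parked car, miss both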

Re:In other news... (0)

Anonymous Coward | more than 2 years ago | (#39507677)

This is never an issue on a route the car has previously driven on; however, the issue is stationary objects on a completely new route, where the driver is unable to assist the car in understanding the difference between a stationary object and a stationary human.

I can see cases where that matters - the car needs to swerve to avoid an accident and it can't decide that swerving one way hits something inanimate and swerving the other will hit a human. But in any normal circumstances the fact that there's a stationary obstacle of anything like human size (even a small human, even a baby) should mean the car either manoeuvres around, picks a new route, or stops while the human occupant of the vehicle decides what to do next. As much to protect the car and its occupants as the people/cats/etc who get in the way.

Re:In other news... (0)

Anonymous Coward | more than 2 years ago | (#39507451)

You have an unrealistically high opinion of the skill with which the average human drives a car. American drivers for example manage to kill roughly 40,000 people a year, that's a little over a hundred a day.

Re:In other news... (0)

Anonymous Coward | more than 2 years ago | (#39507455)

IAAR (I Am A Roboticist) and this car can handle random stuff in the environment. It has fairly simple algorithms doing object detection, sophisticated software figuring out where the road is and whether any objects are in the road, and a massive, massive data set of supporting information about the road telling it where all the relevant road stuff is (lights, signs, street markings, all of it). If it's not sure what to do, it stops, gently. The problem isn't random stuff in the environment ("Ooh! I see something! I'd better stop!") it's the fact that it won't run without the massive data set of supporting information about the road. It can only run on that road; that's where the brittleness comes in.
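The map-dependence described above could be pictured as a planning step that refuses to proceed without prior map coverage, and stops gently whenever perception confidence drops. A sketch with hypothetical names and data, not the actual Google stack:

    # Illustration of the brittleness the parent describes: without prior map data
    # for the current tile, the car does not drive at all. Data and names invented.
    PRIOR_MAP = {"El Camino tile 42": {"speed_limit_mph": 35, "lanes": 2}}

    def plan_step(map_tile, detection_confidence):
        if map_tile not in PRIOR_MAP:
            return "GENTLE STOP: no prior map coverage here"
        if detection_confidence < 0.9:
            return "GENTLE STOP: not sure what I'm seeing"
        tile = PRIOR_MAP[map_tile]
        return f"DRIVE at <= {tile['speed_limit_mph']} mph in {tile['lanes']} mapped lanes"

    print(plan_step("El Camino tile 42", detection_confidence=0.97))
    print(plan_step("unmapped country lane", detection_confidence=0.97))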

Re:In other news... (1)

bareman (60518) | more than 2 years ago | (#39507575)

"Human Level AI"? Humans aren't exactly doing a wonderful job out there on the streets.

I wouldn't be at all surprised if the current autonomous systems are already safer than human drivers. Are they going to be infallible? No. But we need to compare the overall safety rates of computer controlled vehicles vs. human controlled vehicles and see what is better.

Re:In other news... (1)

Kjella (173770) | more than 2 years ago | (#39507683)

The trouble with this type of driverless car tech is that it's going to be as brittle as the AI it's based on. It may work fine for normal situations, complex or not, but the day a child runs out in front of it in a way it's not been programmed to handle there's going to be a tragedy.

The question is, how prepared is the average driver? I don't mean hazardous driving, but according to stats (1 [dot.gov] , 2 [car-accidents.com] ) I found, there are about 3 trillion miles driven, 6 million crashes and 40,000 fatalities in the US each year. That's one crash for every 500,000 miles driven - that sounds surprisingly low to me, but the source does say 3,000 billion miles. Even if you consider that near-accidents and emergencies happen more often, they're many thousands of miles apart. Nobody is really able to stay alert that long for something that doesn't happen 99.99% of the time.

It's great what you learned at your driver's exam, but most people aren't there. At best they're simply cruising and need to snap into emergency mode, not counting all those who aren't paying sufficient attention, are tired, fiddling with the stereo, their phone, distracted by passengers, fail to react, panic, do stupid things like swerve into opposing traffic, on alcohol, on drugs and so on. Nor do they have a 360 degree view or IR vision or any of the other tools an automated car would use. By far most often the threat is obvious if observed and the solution usually equally so.

After all, there's only so much you can observe about the people around you too; you can see the child but you can't know what it's thinking. It's just the basics: that it's a child, what trajectory it's on, whether it's unaccompanied, and in all honesty there's not that much you manage to process while you're driving by at a fair speed. It's mostly attention: if that child runs in the direction of the road I have to react instantly and brake, which will cut a typical response time from 1.5 seconds to 0.2 seconds. It might just not be that relevant to a computer that's still analyzing all possible problems simultaneously.
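The arithmetic behind those figures, and what the 1.5 s versus 0.2 s response buys you in distance, works out roughly as follows (same ballpark numbers as quoted above):

    # Ballpark arithmetic from the figures quoted above (US, per year).
    miles_driven = 3e12
    crashes = 6e6
    fatalities = 4e4

    print("miles per crash:    %.0f" % (miles_driven / crashes))     # ~500,000
    print("miles per fatality: %.0f" % (miles_driven / fatalities))  # ~75,000,000

    # Extra distance covered during the response time at 50 km/h (~14 m/s):
    v = 50 / 3.6
    for response_s in (1.5, 0.2):
        print("%.1f s response at 50 km/h -> %.1f m travelled before braking starts"
              % (response_s, v * response_s))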

Re:In other news... (1)

mapkinase (958129) | more than 2 years ago | (#39507883)

My problem with these cars is that they won't break the law when necessary.

Right now every rational person who is not physically impaired is speeding, rolling through stop signs, or running stop signs where they are ridiculous. I speed at 20-25 mph in my complex when I do not see children around, but if I see signs of children anywhere close to the road (this happens at certain times of the day) I slow down to 5 mph (the rule here is that one has to assume that at any moment the stupid child will run into the road at its maximal speed). I also maximize the distance between my car and children by crawling on the wrong side of the in-complex road. An autonomous car will continue driving 10 or 15 mph in this situation.

An autonomous car will be like an old Asian lady who got her driver's license 3 months ago. The real old Asian ladies already have a major impact on traffic.

What I want from autonomous cars (any cars, for that matter) is to be able to form intelligent ad hoc trains able to drive in a very synchronized manner on the road, with almost no lag in accelerating or decelerating between front car and back car, because they are all connected and when the front car of the train has to brake, the signal goes immediately, at stock-market trading speed, to all the cars in the virtual train (most importantly, this happens when the front car accelerates). Only that solution will ease the traffic.

If we cannot cheaply expand our roads we need to make them much faster: not 55 mph (the speed limit on 270), not 70 mph (the actual average speed when traffic is still smooth), but 100 or 120 mph.
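A toy sketch of the ad hoc train idea: the lead car broadcasts every braking or acceleration event and the followers apply it almost immediately, so the whole platoon changes speed nearly in lockstep. The message format and the latency figure are invented for illustration:

    import time

    # Toy platoon: followers copy the lead car's speed changes after a small
    # network/actuation delay. The 20 ms figure and message format are invented.
    class Car:
        def __init__(self, name, speed_kph):
            self.name, self.speed_kph = name, speed_kph
        def apply(self, event):
            self.speed_kph = event["target_speed_kph"]
            print(f"{self.name}: now at {self.speed_kph} km/h")

    def broadcast(event, followers, latency_s=0.020):
        time.sleep(latency_s)   # stand-in for radio plus actuation latency
        for car in followers:
            car.apply(event)

    platoon = [Car(f"car{i}", 110) for i in range(1, 5)]
    # Lead car brakes for an obstacle; every follower matches it ~20 ms later,
    # instead of a chain of ~1.5 s human reaction delays rippling backwards.
    broadcast({"type": "brake", "target_speed_kph": 70}, platoon)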

Re:In other news... (1)

AdrianKemp (1988748) | more than 2 years ago | (#39508179)

I actually really like the U.K. proposal to do effectively this.

They want(ed) to link a train of cars to a lead car driven by a professional driver (only on motorways). You'd merge on, connect to the train (somehow, not physically) and bam, done.

Re:In other news... (1)

nashv (1479253) | more than 2 years ago | (#39507907)

The day a child runs out in a way the human behind a steering wheel is not able to react to due to their incompetence, blood alcohol levels, sleepiness, or distraction, there is always a tragedy.

This tragedy has nothing to do with whether a machine or a human is controlling the car. It's a tragedy of an unfortunate circumstance.

It is possible however that on an average the machine does better than or equal to a human. To determine if it is so, it requires testing. Which is being done. So what exactly is the problem? Why do you assume that the human level of intelligence is the end-all and be-all of doing everything? Time and again it has been demonstrated that human intelligence is biased towards certain kinds of tasks. And it is debatable if driving is one of those, considering it is something humans have started doing only in the last 150 years or so at most.

Re:In other news... (1)

ledow (319597) | more than 2 years ago | (#39507415)

When was the last time a standard wheelchair did 80mph when the user pressed a button/pedal? When was the last time a crutch was fitted with ABS to help it stop in time because it went so fast?

There's progress, and there's fecking ignorance of the scale of the problem.

Re:In other news... (1, Insightful)

aaaaaaargh! (1150173) | more than 2 years ago | (#39507555)

The problem is technical and social/moral.

We are willing to accept the fact that a human occasionally makes an error. We are, however, not willing to accept when a machine makes an error, not to speak of the occasional errors made by software engineers. That's the social or moral problem. Whose fault is it if there is a software or hardware glitch?

But there is also a technical issue. I personally would not drive in a car programmed by Google engineers, because I am not confident that these people have the experience to develop such high-integrity systems. I want to see the CVs of these engineers first. On how many high-integrity systems have they worked so far? I know plenty of people with an AI background, and trust me, I don't want these to program my car. I'd also need to know which programming languages and development tools they have used, see the source code, and would like to know which formal software and hardware verification methods were used to verify the code.

It would also be a good idea to publish the source code for software used in planes like those of Airbus, but unfortunately they haven't gotten so far yet. However, there are plenty of reasons to have more confidence in airplane safety than in the safety of autonomous vehicles developed by Google. BTW, one reason is that it's technically much easier to fly a plane (without landing, which is still mostly done manually) than to programmatically steer a car or make a robot walk.

Re:In other news... (0)

Anonymous Coward | more than 2 years ago | (#39507847)

"Meanwhile this technology could eventually change the lives of millions upon millions of disabled people, ..."

Another tech already did, it's called 'a cab'.

Re:In other news... (0)

Anonymous Coward | more than 2 years ago | (#39507959)

In case anyone is interested, Udacity [udacity.com] will be offering CS 373: PROGRAMMING A ROBOTIC CAR again starting in April. The first class is finishing up and Sebastian Thrun does an excellent job demonstrating the concepts required to make the self-driving car happen.

Re:In other news... (1)

DragonTHC (208439) | more than 2 years ago | (#39507979)

a wheelchair isn't a two ton death machine controlled by computers hurled through the universe.

Re:In other news... (0)

Anonymous Coward | more than 2 years ago | (#39508217)

I think that's a great tech for children as well.

Headline For This Story? (2)

Jeremiah Cornelius (137) | more than 2 years ago | (#39507097)

Boy, if that's not one of the most appropriate metaphors for our time...

Soon, they'll just jack us into our pods, and grow us for the power we generate. :-)

Re:Headline For This Story? (0)

Anonymous Coward | more than 2 years ago | (#39507563)

I for one welcome our new Robotic overlords.

Re:Headline For This Story? (2)

RoboJ1M (992925) | more than 2 years ago | (#39507807)

Turning us into this...
*turns a Duracell battery around in his hand*

Is this legal? (1)

Chrisq (894406) | more than 2 years ago | (#39507109)

In the UK you are not allowed to drive unless your eye-sight meets a minimum standard [direct.gov.uk] . Is it legal for a 95% blind man to drive in the USA?

Re:Is this legal? (2)

Alioth (221270) | more than 2 years ago | (#39507137)

However, in the UK, eyes are no longer tested after you do your driving test. So in reality there are many drivers on the roads with substandard vision who have not been tested in decades (I got rear-ended on my bike by one on a straight road, in good visibility, while wearing bright clothing. It was an elderly gentleman who had no corrective lenses - he just ploughed into the back of me). At least when I was in Texas you got an eye test for driving every 4 years, not a "squint at this numberplate" eye test, but one using an optician's machine.

Re:Is this legal? (2)

Chrisq (894406) | more than 2 years ago | (#39507213)

However, in the UK, eyes are no longer tested after you do your driving test. So in reality there are many drivers on the roads with substandard vision who have not been tested in decades (I got rear-ended on my bike by one on a straight road, in good visibility, while wearing bright clothing. It was an elderly gentleman who had no corrective lenses - he just ploughed into the back of me). At least when I was in Texas you got an eye test for driving every 4 years, not a "squint at this numberplate" eye test, but one using an optician's machine.

True up to the age of 70, after which it is part of the medical carried out every three years. In theory drivers are responsible for getting their eyes tested and reporting themselves to the DVLA if they cannot see well enough to meet the requirements. In practice many people who think that their sight is not good enough and cannot be corrected avoid being tested so that they can continue driving. I even knew someone who drove a 3-wheeler car as he had only passed a motorcycle test before his eyesight deteriorated.

Too true (5, Interesting)

Kupfernigk (1190345) | more than 2 years ago | (#39507245)

Many years ago, in the UK, my wife volunteered to do the school crossing patrol. She was nearly killed (along with several kids) when a man drove straight across the crossing without slowing down. But she got the number and called the police.

Later she was called to the police station to make a statement. The police had arrested the driver. He said he had not seen the crossing because there was thick fog (mildly overcast). Then they discovered that he was registered partially sighted. He had cataracts.

He was convicted of:

  • Careless driving
  • Driving while unfit
  • Driving while uninsured (because his insurance was invalid from the moment he lied on the form).

His comment to my wife at the police station? "You've spoiled my day". He simply did not realise how serious his offense was.

So I applaud what Google is doing, because I've worked with computers for nearly 35 years, and human beings for over 40, and if the system is properly designed I would trust the computer over the human being any day of the week, and double on Sundays (drunks with hangovers).

Re:Is this legal? (1)

Anonymous Coward | more than 2 years ago | (#39507153)

Isn't the point that the car is actually driving... given the range of sensors it's using, it could be argued that it has sight many times better than 100% human vision.

I'm sure this doesn't get around the law, but it does indicate that the law needs to be revisited when (not if, mind) these become practical.

Re:Is this legal? (2)

Chrisq (894406) | more than 2 years ago | (#39507251)

isn't the point that the car is actually driving.

But that would only be legal in Nevada [pcworld.com] .

Re:Is this legal? (2, Informative)

Anonymous Coward | more than 2 years ago | (#39507183)

The law currently states that it's illegal to drive when legally blind, which is defined as a visual acuity of 20/200 or less using the best correction possible. If your vision is better than 20/200 but still bad, you're assessed on a per-case basis. This suggests that anyone with visual acuity better than 20/200 may be allowed to drive using this technology (or future derivatives of it) if it is considered to be a corrective device. How they would measure such improvement is unknown, since visual acuity tests certainly don't involve any driving. This is speculative, of course, since there will have to be a review of driving law if this kind of thing becomes commonplace.

Re:Is this legal? (2)

jamesh (87723) | more than 2 years ago | (#39507311)

This is speculatory, of course, since there will have to be a review of driving law if this kind of thing becomes commonplace.

Interesting times ahead. For all my reservations, there will eventually come a time when a self-driving car is better in all driving situations than the average road user, and a generation after that, actually "driving" a car will be "retro" and a steering wheel will be something kids ask about when they see one in a movie.

Alternatively, by then kids will be plugged into their computers at birth and never move from their beds...

Re:Is this legal? (0)

Anonymous Coward | more than 2 years ago | (#39507249)

I believe Google received an exemption or permit in some states; I remember a story about it being featured on Slashdot although I can't seem to find it now.

Businessweek has a related story:
http://www.businessweek.com/articles/2012-02-21/how-to-get-a-permit-for-your-driverless-car

Re:Is this legal? (1)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#39507445)

They probably also do a lot of their R&D on private courses/test tracks.

Unless I'm much mistaken, you don't need jack in terms of license, registration (except for tax purposes in some jurisdictions), insurance, road-worthiness, or much of anything else if you want to do questionably sensible things away from public roads. If you fuck up and somebody dies, the situation may get a good deal less convivial, and you won't gain magic immunity from lawsuits; but most of that is there to ensure some vague minimum of operator skill and ability to pay to have the guy you just hit scraped off the road and stitched up, and doesn't apply at a closed, private facility.

Re:Is this legal? (4, Informative)

JasterBobaMereel (1102861) | more than 2 years ago | (#39507305)

He had a policeman sitting next to him ...

"Mahan has no driver's license, of course -- just one of the hurdles that had to be crossed: Google enlisted the aid of Sergeant Troy Hoefling with the Morgan Hill Police Department to accompany the drive."

Re:Is this legal? (0)

Anonymous Coward | more than 2 years ago | (#39507329)

In the UK you are not allowed to drive unless your eye-sight meets a minimum standard [direct.gov.uk] . Is it legal for a 95% blind man to drive in the USA?

Well this is the point; he's not really driving. The car drives itself so he's only test riding. So no, no legal issues, the only license required here is Slashdot's poetic one.

Re:Is this legal? (1)

TheRaven64 (641858) | more than 2 years ago | (#39507921)

I don't think that's the case in most of the world, although there are exemptions in some places. Irrespective of how automated the car is, the human in the driver's seat is legally responsible for it and counts as the driver, even if all the "driving" that they're doing is entering the destination coordinates.

Re:Is this legal? (2)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#39507405)

I'm not sure what the exact cutoff is(probably varies by state); but they do do some eye testing during the licensing process and you can lose your license for doing sufficiently dreadfully on the test. There are also certain conditions, it is my understanding, that can trigger a compulsory re-test.

Trouble is, the licensing tests are quite infrequent and people can go rather rapidly downhill between them(and, in much of the country, once you are too old to drive, you might as well go to the nursing home to die; because you are now about as independent as you were at 14...) The amount of risk that they pose to others is pretty selfish; but not having a license is Serious Shit in large areas of the US, in addition to the direct inconveniences of aging, so it is entirely understandable that people keep doing it for years after it stops being a good idea.

It is probably also the case that voting patterns really don't help: very young and very old drivers are both menaces to themselves and others. However, only the latter group votes. This, I suspect, is why much more scrutiny is given to the former(despite the fact that virtually all of them will become less dangerous as they gain experience), while the latter are substantially 'grandfathered in' by pro-forma renewals of decades-old licenses, despite the fact that they'll keep getting more dangerous until something eventually kills them or makes operating a vehicle physically impossible.

I stole this joke from someone, but... (0)

Anonymous Coward | more than 2 years ago | (#39507149)

For christ's sake, stop talking about the google car! Every time it's mentioned, anywhere, it pushes its release to two years in the future.

Re:I stole this joke from someone, but... (1)

Chrisq (894406) | more than 2 years ago | (#39507235)

For christ's sake, stop talking about the google car! Every time it's mentioned, anywhere, it pushes its release to two years in the future.

People will be driving them for years with "beta" written on the back

Re:I stole this joke from someone, but... (1)

artfulshrapnel (1893096) | more than 2 years ago | (#39508125)

If they start selling a "Beta fish" to replace the jesus fishes, I'll buy 20.

Blind Spot (4, Funny)

coinreturn (617535) | more than 2 years ago | (#39507255)

"'This is some of the best driving I've ever done,' Steve Mahan said the other day.

I guess he usually uses those pavement reflector thingies to drive by braille.

Re:Blind Spot (2)

c0lo (1497653) | more than 2 years ago | (#39507875)

"'This is some of the best driving I've ever done,' Steve Mahan said the other day.

I guess he usually uses those pavement reflector thingies to drive by braille.

Joking aside (... or not quite...), after staring (with your remaining eye) too much at the laser rangefinders of oncoming traffic, you will appreciate that braille pavement yourself.

Re:Blind Spot (2)

artfulshrapnel (1893096) | more than 2 years ago | (#39508229)

You do know that the rangefinders use rapidly moving lasers which are far less bright than, say, sunlight reflecting off a piece of chrome? Even if you were somehow able to stare into one for a long time, it wouldn't be bright enough to do anything to you.

My real worry is how well the car reacts to other cars' laser rangefinders. Do the lasers cause interference?

human factor (4, Insightful)

jamesh (87723) | more than 2 years ago | (#39507261)

Driving home tonight there was a young kid playing quite near the road, so I dropped my speed in anticipation of him doing something stupid. He didn't, but I did wonder about the Google car making those sorts of calls. I'm sure these Google guys are pretty clever and have thought of all these things... are there any videos of self-driving cars reacting to these sorts of situations?

Like that feeling you get when you see someone else on or near the road and you aren't completely sure that they have seen you and you react by lowering your speed to avoid a potential collision. It's got me out of trouble a few times. If there was an accident you probably wouldn't be at fault, but you've gone one better and seen the accident coming and avoided it.

I'd want to see lots of video evidence of a self drive car doing this sort of thing before I'd be happy sharing the road with one.

Re:human factor (1)

JustOK (667959) | more than 2 years ago | (#39507317)

what if the the video camera and the car were in cahoots?

Re:human factor (2, Insightful)

Anonymous Coward | more than 2 years ago | (#39507475)

I'm inclined to favor greatly improved reaction time and unerring robotic focus over your spidey sense.

Re:human factor (4, Funny)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#39507477)

The google car not only knows to slow down, it displays a tasteful unobtrusive contextual advertisement, based on the type of play being conducted, to the kid as it drives past...

Re:human factor (0)

Anonymous Coward | more than 2 years ago | (#39507861)

There's enough hysteria when what should be very deterministic systems go a bit mad and cause vehicles to not behave themselves (Toyota, was it?).

I'd love to know what happens the first time one of these gets involved in a crash - it's going to be the legal equivalent of a Mexican standoff.

Then the recalling will commence.

Re:human factor (1)

linkex (970789) | more than 2 years ago | (#39507871)

The situations you describe are when the other party is, or could be, being a retard. If you were "Sharing the road" with a google car, you would be the other party. So by your own logic, you have nothing to fear if you are not a retard. And if you are, who cares what you think?

Slowing down (5, Informative)

DrYak (748999) | more than 2 years ago | (#39507925)

As explained by others, the car *does* slow down, and will even eventually halt when exposed to a situation it thinks it can't handle.

Also, the car has much lower reaction times. So in some situations it doesn't really need to slow down; it will react immediately if needed, whereas a human driver will need to slow down to make room for slower reflexes.
(The distance between autonomous vs human-driven cars on the motorway, for example.)

Cant wait... (1)

johnsnails (1715452) | more than 2 years ago | (#39507737)

Can't wait till this project gets shelved in 3 years for no good reason and everyone needs to learn how to drive again...

Finally the ATM's will be used! (4, Funny)

John3 (85454) | more than 2 years ago | (#39507877)

The drive-up ATMs at Citibank branches in the NY area have had Braille labels on all the buttons for years. It seemed kind of silly up until now: you certainly would not want a blind person standing in the driveway using the ATM, and I certainly hope a person requiring Braille labels on an ATM would not be behind the wheel of a car. Not knowing Braille myself, I always assumed the labels said "Get out of the way!!! You're standing in a driveway!!!"

But now I realize that Citibank was preparing for the eventual release of the autonomous car.

Note to self... (1)

DdJ (10790) | more than 2 years ago | (#39508085)

...Steve Mahan != Steve Mann.

(Note to Google: a similar test with Steve Mann has the potential to be really, really interesting.)
