
Selectable Ethics For Robotic Cars and the Possibility of a Robot Car Bomb

Rick Zeman writes: Wired has an interesting article on the possibility of selectable ethical choices in robotic autonomous cars. From the article: "The way this would work is one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible. Philosophically, this opens up an interesting debate about the oft-clashing ideas of morality vs. liability." Meanwhile, others are thinking about the potential large-scale damage a robot car could do.
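If such a setting ever shipped, it would presumably be a policy switch feeding a cost function. A minimal sketch of that idea; the setting names, weights, and risk inputs below are invented for illustration, not taken from the Wired article:

```python
from enum import Enum

class EthicsSetting(Enum):
    """Hypothetical owner-selectable collision policies (illustrative only)."""
    PROTECT_OWNER = "protect_owner"            # jealously value the occupant's life
    MINIMIZE_HARM = "minimize_harm"            # value all lives equally
    MINIMIZE_LIABILITY = "minimize_liability"  # minimize legal exposure and cost

def maneuver_cost(setting: EthicsSetting, occupant_risk: float,
                  third_party_risk: float, liability_cost: float) -> float:
    """Score a candidate maneuver under the chosen policy (lower is better).
    The 10x occupant weight is an arbitrary stand-in for 'jealously'."""
    if setting is EthicsSetting.PROTECT_OWNER:
        return 10.0 * occupant_risk + third_party_risk
    if setting is EthicsSetting.MINIMIZE_HARM:
        return occupant_risk + third_party_risk
    return liability_cost
```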

Lasrick writes: Patrick Lin writes about a recent FBI report that warns of the use of robot cars as terrorist and criminal threats, calling the use of weaponized robot cars "game changing." Lin explores the many ways in which robot cars could be exploited for nefarious purposes, including the fear that they could help terrorist organizations based in the Middle East carry out attacks on US soil. "And earlier this year, jihadists were calling for more car bombs in America. Thus, popular concerns about car bombs seem all too real." But Lin isn't too worried about these threats, and points out that there are far easier ways for terrorists to wreak havoc in the US.
  • by garlicbready ( 846542 ) on Monday August 18, 2014 @12:09PM (#47695839)

    Hope you enjoyed the ride ha ha

    • MUCH easier. (Score:4, Interesting)

      by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday August 18, 2014 @12:27PM (#47696009)

      From TFA:

      Do you remember that day when you lost your mind? You aimed your car at five random people down the road.

      WTF?!? That makes no sense.

      Thankfully, your autonomous car saved their lives by grabbing the wheel from you and swerving to the right.

      Again, WTF?!? Who would design a machine that would take control away from a person TO HIT AN OBSTACLE? That's a mess of legal responsibility.

      This scene, of course, is based on the infamous "trolley problem" that many folks are now talking about in AI ethics.

      No. No they are not. The only "many folks" who are talking about it are people who have no concept of what it takes to program a car.

      Or legal liability.

It's a plausible scene, since even cars today have crash-avoidance features: some can brake by themselves to avoid collisions, and others can change lanes too.

      No, it is not "plausible". Not at all. You are speculating on a system that would be able to correctly identify ALL THE OBJECTS IN THE AREA and that is never going to happen.

      Wired is being stupid in TFA.

      • Re:MUCH easier. (Score:5, Insightful)

        by Qzukk ( 229616 ) on Monday August 18, 2014 @12:45PM (#47696239) Journal

        You are speculating on a system that would be able to correctly identify ALL THE OBJECTS IN THE AREA and that is never going to happen.

        It doesn't have to identify all the objects in the area, it simply has to not hit them.

        • by khasim ( 1285 )

          It doesn't have to identify all the objects in the area, it simply has to not hit them.

          Which is an order of magnitude EASIER TO PROGRAM.

          And computers can recognize an obstacle and brake faster than a person can.

          And that is why autonomous cars will NEVER be programmed with a "choice" to hit person X in order to avoid hitting person A.

          So the premise of TFA is flawed.

          • And that is why autonomous cars will NEVER be programmed with a "choice" to hit person X in order to avoid hitting person A.

            I completely, totally, utterly, and vehemently disagree with you on that.

            Given a choice, I think autonomous cars at some point WILL be programmed with such a choice. For example, hitting an elderly person in order to avoid hitting a small child.

            • Re:MUCH easier. (Score:4, Insightful)

              by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday August 18, 2014 @02:19PM (#47697095)

              Given a choice, I think autonomous cars at some point WILL be programmed with such a choice. For example, hitting an elderly person in order to avoid hitting a small child.

              Congratulations. Your product just injured Senator Somebody in order to avoid hitting a Betsy-wetsy doll.

              Senator Somebody has filed "lawsuit" against your company. It is super-effective. All your assets are belong to him.

        • Re:MUCH easier. (Score:4, Insightful)

          by Shoten ( 260439 ) on Monday August 18, 2014 @01:23PM (#47696583)

          You are speculating on a system that would be able to correctly identify ALL THE OBJECTS IN THE AREA and that is never going to happen.

          It doesn't have to identify all the objects in the area, it simply has to not hit them.

          Actually, since the whole question of TFA is about ethical choices, it does have to identify them. It can't view a trash can as being equal to a child pedestrian, for example. It will have to see the difference between a dumpster (hit it, nobody inside dies) and another car (hit it, someone inside it may die). It may even need to weigh the potential occupancy of other vehicles...a bus is likely to hold more people than a scooter.

          The question at its heart is not about object avoidance in the article...it's about choices between objects. And that requires identification.
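A toy sketch of why identification matters for the choice, with made-up object classes and occupancy estimates (none of this is from the article):

```python
# Assumed expected number of people put at risk per recognized object class.
# The numbers are invented for illustration.
EXPECTED_OCCUPANTS = {
    "trash_can": 0.0,   # property damage only
    "dumpster": 0.0,
    "scooter": 1.0,
    "car": 1.5,         # rough average occupancy
    "bus": 20.0,
    "pedestrian": 1.0,
}

def least_harmful_obstacle(unavoidable: list[str]) -> str:
    """Given object classes the car cannot avoid hitting, pick the one whose
    impact puts the fewest people at risk. This requires identification:
    without class labels, a dumpster and a school bus score the same."""
    return min(unavoidable, key=lambda obj: EXPECTED_OCCUPANTS.get(obj, 1.0))

# e.g. least_harmful_obstacle(["bus", "dumpster"]) -> "dumpster"
```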

  • Insurance rates (Score:3, Interesting)

    by olsmeister ( 1488789 ) on Monday August 18, 2014 @12:12PM (#47695865)
    I wonder whether your insurance company would demand to know how you have set your car, and adjust your rates accordingly?
    • Re: (Score:3, Insightful)

      by Twinbee ( 767046 )
      Car insurance companies will die off when car AI becomes mainstream.
      • Re:Insurance rates (Score:5, Insightful)

        by Lunix Nutcase ( 1092239 ) on Monday August 18, 2014 @12:23PM (#47695987)

        Hahahahahahahahaha. No, they won't. They will keep themselves around through lobbying efforts.

        • by Twinbee ( 767046 )
          Yeah, just like dealers will lobby hard against companies like Tesla. It won't be enough, they'll die off too.
          • Why would finance companies and state governments not still require you to carry insurance? No finance company is going to give you a car loan and not require you to insure it. Your post is hilariously naïve.

            Oh and the insurance companies are hugely greater in size than car dealerships. Car dealers are chumps in comparison.

            • by Twinbee ( 767046 )
Maybe you've been ripped off so much by the car insurance companies that you're missing the obvious. There in principle cannot be a car insurance market if cars don't crash anymore. If accidents fall to one tenth of what they were before, then the insurance premium will be about one tenth too (all else being equal, with maximum automation). At that point, the sheer paperwork will more than cancel out any benefit to anyone.
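That arithmetic can be made concrete: a premium is roughly expected claims plus fixed overhead, and the overhead is what stops the premium from scaling all the way down. All numbers below are invented:

```python
def premium(accident_rate: float, avg_claim: float, overhead: float) -> float:
    """Premium ~= expected annual claims + fixed per-policy overhead."""
    return accident_rate * avg_claim + overhead

today = premium(accident_rate=0.05, avg_claim=8000, overhead=150)   # 550.0
robot = premium(accident_rate=0.005, avg_claim=8000, overhead=150)  # 190.0
# Claims fell 10x but the premium only fell ~3x: the fixed "paperwork"
# overhead now dominates, which is the point about overhead swamping the gain.
```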
              • Re:Insurance rates (Score:5, Informative)

                by pla ( 258480 ) on Monday August 18, 2014 @01:21PM (#47696567) Journal
                There in principle cannot be a car insurance market if cars don't crash anymore.

                In the past 15 years, I have invoked my car insurance three times, and haven't had a single accident in that time.

                Insurance covers more than just liability - It covers a small rock falling from a dump-truck and breaking your windshield; it covers your car getting stolen; some policies even act as a sort of extended warranty, covering repair or replacement costs in the event of a breakdown.

And, even with a hypothetically "perfect" driver, some accidents will still happen - a front tire blowout at 75 MPH in dense traffic, a deer running from the woods into the road 10 ft in front of you, construction debris falling from an overpass, etc. Driverless cars will probably handle these events better than live humans do, but such events will still happen.

                All of that said, I would love for you to have it 100% correct, because I fucking loathe insurance companies, and deeply resent the government forcing me to pay them in order to drive. I just don't realistically see it happening.
                • by Twinbee ( 767046 )

                  All of that said, I would love for you to have it 100% correct, because I fucking loathe insurance companies

The vast majority of accidents are caused by bad judgment from the driver, and to a lesser extent by poorly maintained vehicles (which will be mostly resolved when EVs are mainstream anyway). Insurance was probably originally mandated because of the potential to wreck an innocent's car (you can decide whether to take the risk when only your own car is at stake).

Yes, okay, car insurance will still exist (contrary to my initial post), but it will be like, say, buildings insurance: very cheap, and not compulsory (people won't

      • Re: (Score:2, Insightful)

        by CanHasDIY ( 1672858 )

        Car insurance companies will die off when car AI becomes mainstream.

        Kind of like how representative democracy died off when we all got smart phones, right?

        No, dude, sadly middlemen will always exist, adding no value to things but taking your money anyway.

If you were financing car loans, would you do so without requiring the car to be insured? Not requiring it would be an extremely dumb thing to do.

        • by Twinbee ( 767046 )
          Send my best wishes then to the middlemen who WON'T exist when Tesla Motors and companies like them eventually sell direct.
          • Send my best wishes then to the middlemen who WON'T exist when Tesla Motors and companies like them eventually sell direct.

            Equally, send my regards to Carnac the Magnificent, since you seem capable of channeling him.

    • Re:Insurance rates (Score:5, Interesting)

      by grahamsz ( 150076 ) on Monday August 18, 2014 @12:44PM (#47696221) Homepage Journal

      More likely that your insurance company would enforce the settings on your car and require that you pay them extra if you'd like the car to value your life over other lives.

With fast networks it's even possible that the insurance companies could bid on outcomes as the accident was happening. Theoretically, my insurer could throw my car into a ditch to avoid damage to a BMW coming the other way.
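A hedged sketch of how such real-time bidding could produce exactly the ditch outcome: the party choosing the maneuver minimizes its own expected payout, not the policyholder's interest. Outcome names and payout figures are hypothetical:

```python
# Hypothetical candidate outcomes with the steering insurer's expected payout.
outcomes = {
    "brake_in_lane": {"my_insurer_payout": 30000},   # likely hits the BMW
    "swerve_to_ditch": {"my_insurer_payout": 5000},  # only my car is damaged
}

def insurer_choice(options: dict) -> str:
    """The insurer controlling the car minimizes its own expected payout,
    not the policyholder's welfare -- hence the ditch."""
    return min(options, key=lambda o: options[o]["my_insurer_payout"])

assert insurer_choice(outcomes) == "swerve_to_ditch"
```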

With fast networks it's even possible that the insurance companies could bid on outcomes as the accident was happening. Theoretically, my insurer could throw my car into a ditch to avoid damage to a BMW coming the other way.

I might get to see the first car get diverted into a school bus to avoid a 50-million-dollar superduperhypercar. I'll have to dress for the occasion with my best fingerless gloves and head-worn goggles.

    • Will not matter. (Score:5, Insightful)

      by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday August 18, 2014 @12:47PM (#47696265)

      I wonder whether your insurance company would demand to know how you have set your car, and adjust your rates accordingly?

      That does not matter because it won't be an option.

      That is because "A.I." cars will never exist.

They will not exist because they will have to start out as less than the 100% perfect that TFA requires. And that imperfection will lead to mistakes.

      Those mistakes will lead to lawsuits. You were injured when a vehicle manufactured by "Artificially Intelligent Motors, inc (AIM, inc)" hit you by "choice". That "choice" was programmed into that vehicle at the demand of "AIM, inc" management.

      So no. No company would take that risk. And anyone stupid enough to try would not write perfect code and would be sued out of existence after their first patch.

      • Those mistakes will lead to lawsuits. You were injured when a vehicle manufactured by "Artificially Intelligent Motors, inc (AIM, inc)" hit you by "choice". That "choice" was programmed into that vehicle at the demand of "AIM, inc" management.

        So no. No company would take that risk. And anyone stupid enough to try would not write perfect code and would be sued out of existence after their first patch.

        Considering how bloody obvious that outcome seems to be, it amazes me how some educated people just flat out don't get it.

        Or rather, it would amaze me, if I weren't fully aware of the human mind's ability to perform complex mental gymnastics in order to come to a predetermined conclusion, level of education notwithstanding.

  • by bobbied ( 2522392 ) on Monday August 18, 2014 @12:14PM (#47695877)

    BSOD starts to take on a whole new meaning..

As do crash dump, interrupt trigger, dirty block, and system panic...

    • by jpvlsmv ( 583001 ) on Monday August 18, 2014 @01:04PM (#47696433) Homepage Journal
      You're right, officer, Clippy should not have been driving.

      Now, what to do when my Explorer crashes...

      Click on the Start button, go to "All Programs", then go to "Brakes", right-click on the "Apply Brakes" button, and choose "Run as Administrator". After the 15-second splash screen (now with Ads by Bing), choose "Decelerate Safely".

We will need liability laws before we let them hit the road without any human drivers.

We can't let them use EULAs; even if there are some, it will be very hard to argue that a car crash victim said yes to one, much less have one stand up in a criminal court.

  • Scare of the day (Score:5, Insightful)

    by Iamthecheese ( 1264298 ) on Monday August 18, 2014 @12:16PM (#47695897)
Dear government, Please shut up about terrorism and get out of the way of innovation. Sincerely, an informed citizen
  • This exact topic has been on /. several times. I will not be in the least surprised to see the exact same collection of wildass FUD claims in the comments.

  • by gurps_npc ( 621217 ) on Monday August 18, 2014 @12:16PM (#47695901) Homepage
    Not news, not interesting.

1) The cars will most likely be set by the company that sold them - with few if any modifications legally allowable by the owner.

2) Most likely ALL cars will be told to be mostly selfish, on the principle that they cannot predict what someone else will do, and in an attempt to save an innocent pedestrian might in fact end up killing them. The article has the gall to believe the cars will have FAR greater predictive power than they will most likely have.

3) A human-drivable car with a bomb and a timer in it is almost as deadly as a car that can drive into x location and explode. The capability of moving the car another 10 feet or so into the crowd, as opposed to exploding on the street, is NOT a significant difference, given a large explosion.

4) The cars will be so trackable, and under such active, real-time security monitoring, that we will know who programmed it and when, probably before the bomb goes off. These are expensive, large devices that by their very nature will be wired into a complex network. It is more likely the cars will turn around and follow the idiot, their speakers screaming out "ARREST THAT GUY, HE PUT A BOMB IN ME!" the whole time.

I agree that the article is a waste of time, but you're a little off with point number 3: there are places one can drive but not park, where a bomb would be more secure. That said, it's not a large change to simply disallow driverless, passengerless cars where security is a concern.
    • Re: (Score:2, Informative)

      by Anonymous Coward

2) Most likely ALL cars will be told to be mostly selfish, on the principle that they cannot predict what someone else will do, and in an attempt to save an innocent pedestrian might in fact end up killing them. The article has the gall to believe the cars will have FAR greater predictive power than they will most likely have.

This is a thing that is starting to irritate me. This is a piece from the director of the "Ethics + Emerging Sciences Group".
Recently we have seen writeups about the ethics of automation from psychologists and philosophers who are completely clueless about what laws are already in place and what the best practices are when it comes to automation.
They go in with the assumption that a machine is conscious and will make conscious decisions, ignoring that it is impossible to get anything remotely resembling an AI throug

      • by RobinH ( 124750 )
        However, what's particularly weird, when I hear about software-based automotive recalls like the Toyota accelerator stack overflow bug, is that automotive companies don't seem to have to be certified to anything near the machine safeguarding standards we use to certify factory-floor automation. Nowadays a piece of equipment on the plant floor is pretty much provably safe to operate assuming you don't start disassembling it with a screwdriver. I don't see any such methodology being applied to vehicle contr
        • What? An idiot can always reach around safety gates. A slightly less stupid one can disable the gate switch and get himself killed.

    • Point 4 will never happen. A little duct tape over the security sensor. Sealed briefcase bomb.

      The rest of this is stupid. We have already put RC receivers into regular cars and used a Radio Shack car controller to drive. They did that on Blues Brothers 2000, and probably The Simpsons. We have real RC car races. You just need a Pringles can, a wire, and a car.

Sealing the briefcase doesn't stop drug-sniffing dogs.

Nor will it stop a simple chemical sensor designed to detect carbon monoxide, explosive residue, and the absence of a flow of fresh air.

Besides, I really like the idea of some hapless idiot wandering around being followed by a car screaming "HE PUT A BOMB IN ME!" It's enough to make me ROFL.

    • Too complicated.

      The more the car costs, the more evil it can do... after all, you can afford it.

  • What about maintenance settings?

    We can't let the car makers set them to only go to the dealer for any and all work.

    We can't can't jet jacks low cost auto cars push the limits of maintenance to being unsafe.

    • We can't can't jet jacks low cost auto cars push the limits of maintenance to being unsafe.

      That made lots of sense.

  • by i kan reed ( 749298 ) on Monday August 18, 2014 @12:17PM (#47695911) Homepage Journal

    Let's skip "car" because I can, in theory, attach enough explosives(and shrapnel) to kill a large number of people to a simple homemade quadrotor, run with open source software, give it a dead-reckoning path and fire and forget from a relatively inconspicuous location. Multiple simultaneously, if I have the amount of resources a car bomb would require.

    Automation is here. Being paranoid about one particular application of it won't help anyone.

    • Automation is here. Being paranoid about one particular application of it won't help anyone.

      Yea, what you say is true, but it really doesn't make good news to talk about things that way. At least until somebody actually does it, then we get weeks of wall to wall "breaking news" and "Alert" coverage and the hosts of MSNBC will pontificate about how we should have known this was going to happen and stopped it.

      • Yea, what you say is true, but it really doesn't make good news to talk about things that way. At least until somebody actually does it, then we get weeks of wall to wall "breaking news" and "Alert" coverage and the hosts of MSNBC will pontificate about how we should have known this was going to happen and stopped it.

        If your point is that the talking heads always talk about everything but the threat which will actually materialize, true. Not a deep insight, but true.

        OMG ROBOT BOMB CARZ is what's playing u

This is your reminder that anyone with a post-high-school grounding in chemistry could make pipe bombs with no difficulty. The ingredients for a self-oxidizing agent can be bought at a hardware store. Pipe bombings aren't common in the US in spite of that.

          There won't be an "epidemic" of automated bombings, because being a bomber takes a cause you personally see as being more important than not being a murderer. The right mixture of basically competent, ideologically dedicated, and morally flexible just isn't that

Will a robo-car be able to break the law to save someone from death or injury?

    • You seem to think that a self-driving car is a self-aware, subjective, thinking thing.

Within this particular field, the application of "AI" algorithms gives fuzzy answers to difficult questions, but only as inputs to boring, more traditionally algorithmic processes. Laws, conveniently, are codified in much the same way as those traditional algorithms (though, again, with fuzzy inputs).

      Any company even remotely trying to engage this would encode the laws at that level, not as something some AI tries to reaso
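A minimal sketch of the split the parent describes, assuming a perception layer that emits (label, confidence) pairs into a deterministic rule layer; the threshold, labels, and function names are invented for illustration:

```python
def perception_output():
    """Stand-in for the fuzzy layer: a classifier's (label, confidence)."""
    return ("pedestrian", 0.87)

# The codified-rule layer: boring, deterministic thresholds, no "reasoning".
PEDESTRIAN_BRAKE_THRESHOLD = 0.5  # assumed value for illustration

def controller() -> str:
    label, confidence = perception_output()
    if label == "pedestrian" and confidence >= PEDESTRIAN_BRAKE_THRESHOLD:
        return "brake"  # hard-coded rule, much like a codified statute
    return "continue"
```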

It will, if it's an Asimov car. Obeying the law would only be the Second Law; no death to humans is the First.

  • by Moof123 ( 1292134 ) on Monday August 18, 2014 @12:21PM (#47695961)

    It sure seems like such selectable ethics concerns are kind of jumping the gun. Regulatory behavior is going to clamp down on such options faster than you can utter "Engage!". Personally I would want my autonomous car to be designed with the most basic "don't get in a crash" goal only, as I suspect regulators will as well.

Far more important is the idea that we will have at least an order of magnitude or two increase in the amount of code running a car. If Toyota had trouble with the darn throttle (replacing the function of a cable with a few sensors and a bunch of code), how can we trust that car companies will be able to manage a code base this big without frequent catastrophe? Adding extra complexity to tweak the "ethics" of the car just sounds like gilding the lily, which increases the opportunities for bugs to creep in.

Even in a heavily regulated system with the most basic "don't get in a crash" goal, the car may still end up having to pick among crash options: say, one that does damage but has a low chance of injury, versus a maneuver with only a 5% chance of being crash-free.
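That comparison is just expected-cost arithmetic over the available maneuvers; a sketch with invented probabilities and severity units:

```python
# Candidate maneuvers: (probability of collision, cost if collision occurs).
# Severity costs are arbitrary units for illustration.
maneuvers = {
    "controlled_hit_low_injury": (1.00, 10),  # certain damage, low injury risk
    "risky_evasive_move": (0.95, 100),        # 5% chance crash-free, bad if not
}

def best_maneuver(options: dict) -> str:
    """Pick the maneuver with the lowest expected cost: p(crash) * severity."""
    return min(options, key=lambda m: options[m][0] * options[m][1])

# 1.00 * 10 = 10 versus 0.95 * 100 = 95: the "boring" controlled hit wins here.
```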

  • by Joe Gillian ( 3683399 ) on Monday August 18, 2014 @12:25PM (#47696001)

    I, for one, cannot wait for the day when I can set my car's logic system to different ethical settings, sorted by philosopher. For instance, you can set your car to "Jeremy Bentham", which will automatically choose whoever looks less useful to ram into when in a crash situation. You could also set it to "Plato", which will cause the car to ram into whoever appears less educated (just hope it doesn't happen to be you).

    Just make sure you don't set the car to "Nietzsche".

  • FBI: 1, Ethics: 0 (Score:5, Insightful)

    by some old guy ( 674482 ) on Monday August 18, 2014 @12:29PM (#47696039)

    So, the FBI is already making the case for, "We need full monitoring and control intervention capability for everybody's new cars, because terrorists."

Until a system to make automated vehicles feasible on public roads en masse is proposed and developed, and the related protocols and legal procedures are released, this is nothing but a scare topic making vague assumptions about things that aren't even in development yet.
Until a system to make automated vehicles feasible on public roads en masse is proposed and developed, and the related protocols and legal procedures are released, this is nothing but a scare topic making vague assumptions about things that aren't even in development yet.

      Not really. We already have self-driving cars, and we have a lot of data about traffic accidents and mortality. The cars aren't available at retail yet, but they exist. Teaching them to drive in a way that makes the right safety tradeoffs is appropriate. (E.g. driving slowly through a stoplight might cause more accidents and fewer deaths; that's a hunch, but we have lots of data so there's a moral calculation that should be made based on the data and desired outcomes.)

  • Wired has an interesting article on the possibility of selectable ethical choices in robotic autonomous cars. From the article: "The way this would work is one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible. Philosophically, this opens up an interesting debate about the oft-clashing ideas of morality vs. liability."

    Before we allow AI on the road, we'll need to have some kind of regulation on how the AI works, and who has what level of liability. This is a debate that will need to happen, and laws will need to be made. For example, if an avoidable crash occurs due to a fault in the AI, I would assume that the manufacturer would have some level of liability. It doesn't make sense to put that responsibility on a human passenger who was using the car as directed. On the other hand, if the same crash is caused by tampe

  • by aepervius ( 535155 ) on Monday August 18, 2014 @12:47PM (#47696253)
    "Patrick Lin writes about a recent FBI report that warns of the use of robot cars as terrorist and criminal threats, calling the use of weaponized robot cars "game changing." "

Only if the potential terrorists have never learned to drive. Because otherwise:
1) A criminal is far better off with a car that does not respect speed limits, red lights, or stop signs when trying to get away.
2) A terrorist can simply drive the bomb somewhere, set it to explode one minute later, and walk away. What difference does it make whether he drove it himself or not?

Terrorism is the least of the worries with robot cars.

As for point 1, laws and insurance will be setting your car's "ethics", not you personally.
  • My first thought upon reading this summary? What about the Mythbusters?

    In many episodes, they've rigged up a remote control setup to a car. Many times, it has been because testing a particular car myth would be too risky with a person actually inside driving the car. They've even gone so far as to have a camera setup so they could see where they were driving.

    I'm sure there's a learning curve here - not everyone could stop by their local hobby shop and remote control enable their car in an afternoon - but

There are already kits that turn cars into remote-controlled vehicles, so this has been possible for a while. Meanwhile, self-driving cars still need someone in the seat and still require heavy modification to perform the task; they make it no more attainable than it already is. Stop giving idiots ideas in news headlines, and stop pissing your pants every time there's new tech.
  • Select OS:
    1) Crush!
    2) Kill!
    3) Destroy!

While it's possible that a computer could be allowed to evaluate ethical limits - to play a version of Lifeboat - the lack of information will doom such optimization. The range of wild or unpredictable maneuvers is likely to be limited, with only simple avoidance options available (stop, avoid within legal lanes of travel). The use of a standard model is preferable; otherwise you would have to know all possible outcomes as well as all possible settings on nearby vehicles.

Autonomous cars will be slaves; they won't be making choices for themselves. They will follow the ruleset the Road Computer sets for them. Cars will be in constant contact with the road, with beacons giving them differing rulesets (speed limits, a school nearby, etc.). No person is going to have selectable ethics.
  • To date, there are literally dozens of groups of hobbyists who compete with FPV vehicles (both ground and air) to deliver large pyrotechnical devices to "goals", from over 4 km away. It's not even expensive or difficult...it is off the shelf and an amazon.com click away.

    To date, there are at least a dozen people who have equipped a vehicle with FPV transceivers and the simple servos required to navigate through actual city streets while miles away themselves. Latency is not the issue that some people who ha

  • by brunes69 ( 86786 ) <[slashdot] [at] [keirstead.org]> on Monday August 18, 2014 @01:53PM (#47696859)

This discussion is pointless mental masturbation, because none of these things will be real problems with autonomous cars. The people dreaming up these scenarios do not understand the fundamental paradigm shift that comes with autonomous vehicles.

- Firstly, any thoroughfare carrying autonomous cars should never have pedestrian access, because the cars will all be travelling at the maximum safe speed constantly - 110 km/h or more, even on city streets. These streets should be fenced off, with no pedestrians allowed.

- Secondly, in situations where pedestrians are involved, which are inherently unpredictable, the car will never drive faster than a speed from which it could stop without hitting ANY pedestrian (see the kinematics sketch below)... thus, this whole "choose 1 or 5" scenario is not possible.

- Finally, you won't be able to manually point the car at people and then later have the car "take over". You will not have any ability to drive the car manually, period. At least I bloody well hope not... once autonomous cars are standard, people should not be allowed to drive any more.
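The second point is plain kinematics: given a reliable sensing range and a braking deceleration, there is a closed-form maximum speed at which the car can always stop short. A sketch with assumed numbers (a real controller would add margins for road surface, sensor noise, and so on):

```python
import math

def max_safe_speed(sensing_range_m: float, decel_mps2: float,
                   reaction_s: float) -> float:
    """Largest v (m/s) such that v*t_react + v^2/(2a) <= sensing range.
    Solves the quadratic v^2/(2a) + v*t - d = 0 for its positive root."""
    a, t, d = decel_mps2, reaction_s, sensing_range_m
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

# Assumed numbers: 50 m of reliable pedestrian detection, 7 m/s^2 braking,
# 0.2 s of system latency -> roughly 25 m/s (about 90 km/h).
print(max_safe_speed(50, 7, 0.2))
```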


"Everything should be made as simple as possible, but not simpler." -- Albert Einstein

Working...