The two talk about Isaac Asimov's three
famous 70-year-old rules for robots and what this means
in a driverless car world. What if, Marcus posits, you're driving on
a bridge and a school bus veers in front of your car? Does your
driverless car save you or the school children?
Many (but not all) of these questions are short-circuited by
technology. In that future world, the bus doesn't veer into your
lane because fleets will be among the first equipped with active
safety electronics; they may not be driverless, but they won't allow
a driver to veer from a lane. Your driverless car will react a lot
faster than you would in that situation.
At the end of the day, though, the hand-wringing over "robot ethics"
is silly: It assumes our ethics are elegant and universal, when in
fact they are nearly as varied as humans and cultures are.
How many times have you played the ethics game with your kids or
friends, posing questions such as "would you steal food to feed your
starving family?" Not everyone answers the question the same way.
I think--given what we know about the likely arc of technology in
the next decade or so--that robot decision-making will be a whole
lot better for society than our own, widely varying ethical
ponderings. Begin with cars that won't start because they
sense too much alcohol on your breath.
Where the challenge will arise is farther down the arc, in places we
can't yet imagine, as such technology gets smarter, more "human" and
more pervasive. The ethics rule-making will be inevitable, and there
lie decisions we're not yet willing to confront.
This is where Asimov's first rule ("A robot may not injure a human
being...") takes on a stark meaning.
Why? Because in some cases, the robot may decide that killing you,
rather than a number of other people, is the best outcome.
It'll be hard for humans, who anguish enough these days over life-support decisions, to write those rules. Maybe we should
delegate that to the robots.
For the first time, I saw a Google self-driving car on the way in to work. The engineering curiosity in me made me want to pull in front of it and hit my brakes to see how it would respond... I resisted. But I did notice that when someone moved into the lane next to it, it slowed down a bit... I'd be curious to see the state diagram of their sensors, actions, etc...
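No such diagram is public, but the slow-down-when-someone-merges behavior described above suggests a simple state machine underneath. Here's a minimal, purely hypothetical sketch; the states, the `stopping_distance_m` threshold, and the transition rules are all invented for illustration and have nothing to do with Google's actual logic:

```python
from enum import Enum, auto

class DriveState(Enum):
    CRUISE = auto()   # clear road: hold speed
    CAUTION = auto()  # vehicle alongside: shed a little speed
    BRAKE = auto()    # gap ahead closing below stopping distance

def next_state(gap_ahead_m: float, vehicle_alongside: bool,
               stopping_distance_m: float = 30.0) -> DriveState:
    """Toy transition function -- invented for illustration, not Google's logic."""
    if gap_ahead_m < stopping_distance_m:
        return DriveState.BRAKE
    if vehicle_alongside:
        return DriveState.CAUTION
    return DriveState.CRUISE

# Replaying the observation above: a car merges into the adjacent lane.
print(next_state(gap_ahead_m=80.0, vehicle_alongside=True))  # DriveState.CAUTION
```

The real system is surely probabilistic rather than a handful of discrete states, but even this toy version shows why the car's reaction looked so legible from the outside.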
How is a robotic vehicle that allows you to read your paper during the morning commute different from a bus? Seriously, I wonder if public transportation will get wiped out by, or get a boost from, the advent of autonomous vehicles. There's so much one can do when the cars are interacting with each other and with some overall control---redirecting traffic around congestion, etc.---but at what point in this integration does a personal car begin to behave like an element of public transport?
I guess that's just the price you pay for the freedom to read the paper during your morning commute instead of actually "driving"...
Same argument applies to speed limits. Presumably your "ethical" car won't let you speed, and would even report other violators complete with timestamped video for the courts. The way most people drive, it wouldn't be long before most of us would be required to drive under "ethical" robot control or pay ever-increasing traffic fines.
We must remember that robots are just machines. I am not sure they will ever reach a point where ethics becomes part of their actions. We build robots to do a job. They will do that job within the parameters we set.
A robot will never be 100% safe under all conditions. Defects and untested decision paths will always exist.
Using a wrench as a hammer is not the fault of the wrench.
Just my opinion.