As driverless cars become a reality and start to capture the
imagination (the latest
news comes this week from Lexus), the conversation about
"morality" and robots is picking up steam. Consider Gary
Marcus' piece in the New Yorker ("Moral
Machines") and Robin Young's NPR
interview with him this week.
The two talk about Isaac Asimov's three
famous 70-year-old rules for robots and what they mean
in a driverless-car world. What if, Marcus posits, you're driving on
a bridge and a school bus veers in front of your car? Does your
driverless car save you or the school children?
Many (but not all) of these questions are short-circuited by
technology. In that future world, the bus doesn't veer into your
lane because bus fleets will be among the first vehicles equipped with active
safety electronics; they may not be driverless, but they won't allow
a driver to veer from a lane. Your driverless car will react a lot
faster than you would in that situation.
At the end of the day, though, the hand-wringing over "robot ethics"
is silly: It assumes our ethics are elegant and universal, when in
fact they are nearly as varied as humans and cultures themselves.
How many times have you played the ethics game with your kids or
friends, posing questions such as "would you steal food to feed your
starving family?" Not everyone answers the question the same way.
I think--given what we know about the likely arc of technology in
the next decade or so--that robot decision-making will be a whole
lot better for society than our own, widely varying ethical
ponderings. Begin with cars that won't start because they
sense too much alcohol on your breath.
Where the challenge will arise is farther down the arc, in places we
can't yet imagine, as such technology gets smarter, more "human" and
more pervasive. Ethics rule-making will be inevitable, and it will
demand decisions we aren't yet willing to confront.
This is where Asimov's first rule ("A robot may not injure a human
being...") takes on a stark meaning.
Why? Because in some cases, the robot may decide that killing you,
rather than a number of other people, is the best outcome.
It'll be hard for humans, who anguish enough these days over
life-support decisions, to write those rules. Maybe we should
delegate that to the robots.
What's your take?