"Roboethics" because it's the human ethics that robots' designers, manufacturers, and users need to have when designing, manufacturing, or interacting with a robot. It's not about the ethics of the robot but the ethics of the humans, which need to be in place.
"I don't see how having these actions carried out by robots rather than human operators really changes anything."
In a war scenario, some actions carried out by robots instead of humans may be beneficial. If, for instance, you send an autonomous vehicle with decision-making capacity into a war zone to deliver supplies, instead of a regular vehicle driven by a human, the human driver can be reassigned to a task where a human presence is more needed.
Ahhh, you are anticipating one of my next articles, namely building emotion into AI. Both building emotion and building ethics, which we are discussing now, are challenging, I believe, as human emotions and ethics are so often conflicting and inconsistent, far from perfect.
Yes, I see your point regarding I, Robot. Also, have you seen Spielberg's A.I. Artificial Intelligence? That's another good reference when discussing this type of research.
"a "NS-4 model" robot saved him instead of his 12 years old daughter as NS-4 analyzed 45% chance of his survival vs 11% chance of his daughter's survival."
That's a great example that you bring here. :)
The NS-4's decision was based on logic, according to its analysis, rather than on emotion. Will Smith's character was driven by a negative emotion, hate, as a consequence of that experience. For the NS-4, saving one human instead of letting two die was the best moral decision.
What do you think? Did NS-4 make the right decision?
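The NS-4's reasoning as described in the movie amounts to a purely utilitarian rule: rescue whoever has the highest estimated chance of survival. A minimal sketch of that rule (the function name and the numbers are merely illustrative, not anything from the film's canon):

```python
# Illustrative sketch of a purely utilitarian rescue rule: pick the
# person with the highest estimated survival probability.
# All names, numbers, and the function itself are hypothetical.

def choose_rescue(candidates):
    """candidates: dict mapping person -> estimated survival probability.
    Returns the person the rule would rescue."""
    return max(candidates, key=candidates.get)

choice = choose_rescue({"Del Spooner": 0.45, "Sarah": 0.11})
print(choice)  # the rule picks the 45% case over the 11% case
```

Note what the rule leaves out: it has no notion of age, of a parent's wishes, or of any value other than the probability estimate itself, which is exactly why Spooner's character finds the decision monstrous.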
Designing an ethical code is easy (example: the Ten Commandments). The difficulty lies in designing the exceptions to that code. Zeeglen, your list has merit (although it also offers a pretty tilted set of options). Asimov's Three Laws recognize conflicting situations rather than simple absolutes, but I am sure that in lawyer mode there could be a lot of room for interpretation.
Surely the robots will make these ethical decisions based on policies and procedures that humans have devised? I don't see how having these actions carried out by robots rather than human operators really changes anything. What's important is transparency: the policies implemented by the robots need to be publicly accessible and subject to legal challenge, not just hidden away in the computer code.
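One way to get the transparency argued for here is to express the policy as declarative data that can be published and audited separately from the robot's code, rather than as logic buried inside it. A minimal sketch under that assumption (all rule names and actions are hypothetical):

```python
# Sketch: an auditable policy expressed as plain data, so it can be
# published, reviewed, and legally challenged independently of the
# code that enforces it. Rule names and actions are hypothetical.

POLICY = [
    {"rule": "never-target-civilians", "action": "abort"},
    {"rule": "require-human-confirmation", "action": "escalate"},
]

def evaluate(triggered_rules):
    """Return the actions mandated by the triggered rules,
    in the order the policy lists them."""
    return [r["action"] for r in POLICY if r["rule"] in triggered_rules]

print(evaluate({"never-target-civilians"}))  # ['abort']
```

Because the policy is data, the exact document the robot enforces is the same document a regulator or court can read, which is the point being made above.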
A very interesting series I watched on AI morality is Ghost in the Shell: SAC, an idea way ahead of its time. A different outlook from most 'western' (no offence meant) concepts of robot morality.
It involved multiple autonomous semi-tanks called 'Tachikoma' which shared a single consciousness that synced among all of them every night. Definitely worth a watch, but it can be a bit of an investment in time :)
I always feel fiction is a great place to pick up hints on topics like these, especially a lot of Asimov's works.
"Designing autonomous, morally competent robots may be inspiring and fascinating, but it certainly will not be easy."...
I agree completely. This is a fascinating research topic, but it seems like an impossible task; I'm not sure how it could be achieved in the near future. What seems impossible to me is building "emotion" into the AI. If you have watched the movie I, Robot, it's relatively easy for me to explain: the very reason Del Spooner (Will Smith) used to hate robots was that, when he had a car accident along with his daughter Sarah, an 'NS-4 model' robot saved him instead of his 12-year-old daughter, as the NS-4 calculated a 45% chance of his survival vs. an 11% chance of his daughter's survival.