"Designing autonomous, morally competent robots may be inspiring and fascinating, but it certainly will not be easy."...
I agree completely. This is a fascinating research topic, but it seems like an impossible task, and I am not sure how it could be achieved in the near future. What seems impossible to me is building "emotion" into the AI. If you have watched the movie I, Robot, it is easier for me to explain. The very reason Del Spooner (Will Smith) used to hate robots: when he had a car accident along with his daughter Sarah, an NS-4 model robot saved him instead of his 12-year-old daughter, because the NS-4 calculated a 45% chance of his survival versus an 11% chance of his daughter's survival.
A very interesting series I watched on AI morality is Ghost in the Shell: SAC, an idea way ahead of its time. It offers a different outlook from most 'Western' (no offence meant) concepts of robot morality.
It involved multiple autonomous semi-tanks called 'Tachikomas', which shared a single consciousness that synced among all of them every night. Definitely worth a watch, but it can be a bit of an investment in time :)
I always feel fiction is a great place to pick up hints on topics like these, especially a lot of Asimov's works.
Ahhh, you are anticipating one of my next articles, i.e. building emotion into AI. Both building emotion and building ethics, which we are discussing now, are challenging, I believe, as human emotions and ethics are so often conflicting and inconsistent, far from perfect.
Yes, I see your point about I, Robot. Also, have you seen Spielberg's AI: Artificial Intelligence? That's another good reference when discussing this type of research.
"a "NS-4 model" robot saved him instead of his 12 years old daughter as NS-4 analyzed 45% chance of his survival vs 11% chance of his daughter's survival."
That's a great example that you bring here. :)
The NS-4's decision was based on logic, according to its analysis, rather than on emotion. Will Smith's character was driven by a negative emotion, hate, as a consequence of the experience. For the NS-4, saving one human instead of letting two die was the best moral decision.
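Just as an aside, the NS-4's rule is simple enough to write down. Here is a minimal sketch in Python (the 45% and 11% figures are the ones quoted from the film; the decision rule and field names are my own illustration, not anything from the movie or the article):

```python
# A purely utilitarian rescue rule, as the NS-4 appears to apply it.
# The survival probabilities are the figures quoted in the film; the
# decision rule itself is an assumption for illustration.

def choose_rescue(candidates):
    """Pick the candidate with the highest estimated survival probability."""
    return max(candidates, key=lambda c: c["p_survival"])

candidates = [
    {"name": "Del Spooner", "p_survival": 0.45},
    {"name": "Sarah",       "p_survival": 0.11},
]

print(choose_rescue(candidates)["name"])  # -> Del Spooner
```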
What do you think? Did the NS-4 make the right decision?
Surely the robots will make these ethical decisions based on policies and procedures that humans have devised? I don't see how having these actions carried out by robots rather than by human operators really changes anything. What's important is transparency: the policies implemented by the robots need to be publicly accessible and subject to legal challenge, not just hidden away in the computer code.
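To sketch what that transparency could look like in practice (my own toy illustration; the policy fields and values below are invented, not from any real system): the rules the robot executes are kept as plain, publishable data instead of being buried in code, so the exact document the machine runs is the one the public can read and challenge.

```python
# Toy illustration: the robot's rules of engagement live in plain,
# publishable data rather than in hidden logic. All fields are invented.

import json

PUBLISHED_POLICY = json.loads("""
{
  "may_engage": false,
  "may_deliver_supplies": true,
  "requires_human_signoff": ["use_of_force", "route_through_civilian_area"]
}
""")

def allowed(action):
    """Answer from the published policy document, not from hidden logic."""
    if action in PUBLISHED_POLICY["requires_human_signoff"]:
        return "needs human sign-off"
    return "allowed" if PUBLISHED_POLICY.get(f"may_{action}", False) else "denied"

print(allowed("deliver_supplies"))  # -> allowed
print(allowed("use_of_force"))      # -> needs human sign-off
print(allowed("engage"))            # -> denied
```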
Designing an ethical code is easy (example: the Ten Commandments). The difficulty lies in designing the exceptions to that code. Zeeglen, your list has merit (although it also has a pretty tilted set of options). Asimov's Three Laws recognize conflicting situations rather than simple absolutes, but I am sure that in lawyer mode there could be a lot of room for interpretation.
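For what it's worth, the "conflicting situations" structure of the Three Laws can be sketched as lexicographic preferences, where a higher law always outranks a lower one and the room for interpretation hides in how each condition is judged. This is my own toy encoding, not Asimov's and not from the article:

```python
# Toy encoding (my construction) of Asimov's Three Laws as lexicographic
# preferences: a higher-priority law always dominates, and lower laws only
# break ties. The "lawyer mode" ambiguity hides in how each flag gets set.

def rank(action):
    # Python sorts tuples element by element; False (0) beats True (1),
    # so the chosen action avoids violating the highest-priority laws first.
    return (
        action["harms_human"],     # First Law dominates everything
        action["disobeys_order"],  # Second Law, unless it conflicts above
        action["endangers_self"],  # Third Law, weakest of the three
    )

actions = [
    {"name": "stand down",   "harms_human": False, "disobeys_order": True,  "endangers_self": False},
    {"name": "shield human", "harms_human": False, "disobeys_order": False, "endangers_self": True},
    {"name": "open fire",    "harms_human": True,  "disobeys_order": False, "endangers_self": False},
]

print(min(actions, key=rank)["name"])  # -> shield human
```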
Making excuses and loopholes for sin is easy; it's obeying the law of God's righteousness that's the hard part. Going beyond excuses and deliberately designing exceptions to a holy law - well, you make the bed you sleep in.
The Ten Commandments are a good foundation, but it took thousands of years and God's own Son to sum it all up with loving God first and then loving your neighbor as yourself; and it took crucifixion itself to live up to it.
Military robot ethics falls under the latter rule about loving your neighbor. I'm pretty sure most people don't love themselves by having robot armies breaking down their doors and flying drones sniping off their loved ones, but if that's how we treat our international neighbors... well, that's not my bed.
"I don't see how having these actions carried out by robots rather than human operators really changes anything."
In a war scenario, some actions carried out by robots instead of humans may be beneficial. If, for instance, you send an autonomous vehicle with the capacity to make decisions to a war zone to deliver supplies, instead of a regular vehicle driven by a human, the human driver can be assigned a different task where a human presence is more needed.
"Roboethics" because it's the human ethics that robots' designers, manufacturers, and users need to have when designing, manufacturing, or interacting with a robot. It's not about the ethics of the robot but the ethics of the humans, which need to be in place.
Emotion can cloud judgement. Even humans try to leave emotion aside when making critical decisions, so why would we want to introduce emotion into robots? However, if we envision robots being able to learn and adapt, teaching robots emotion becomes inevitable. Looking further, I question what happens when emotion is introduced into a robot. Will it think for itself and believe it is the ultimate being of the world? It sounds like Terminator to me now. ;)
"In a practical scenario, an autonomous, morally competent medical transport acting should be able to determine if changing its route from checkpoint Alpha to checkpoint Beta is the best way to achieve its goal"
An ethics question would be about what the goal is. Is the goal delivering medical supplies to a disaster site or delivering a bomb to a school bus full of children?
In the scenario described, it was given that the goal was to deliver medical supplies to a disaster site or battlefield. But there is still an ethical question about the robot's goals and its ability to decide if changing its route from checkpoint Alpha to checkpoint Beta is the best way to achieve those goals. Suppose that one route choice has a higher probability of saving the greatest number of lives, but the other choice has a higher probability of saving certain VIPs?
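To make the dilemma concrete, here is a rough sketch of the comparison (every probability, count, and weight below is invented for illustration). The ethics doesn't live in the arithmetic; it lives in the weight the designers assign to a VIP life:

```python
# Rough sketch of the route dilemma (all probabilities and weights are
# invented for illustration). The robot just maximizes a number; the
# ethics is hidden in vip_weight, which its designers chose.

def expected_lives(route, vip_weight):
    return (route["p_save_crowd"] * route["crowd_size"]
            + route["p_save_vips"] * route["vip_count"] * vip_weight)

routes = {
    "Alpha": {"p_save_crowd": 0.8, "crowd_size": 20, "p_save_vips": 0.1, "vip_count": 2},
    "Beta":  {"p_save_crowd": 0.3, "crowd_size": 20, "p_save_vips": 0.9, "vip_count": 2},
}

for w in (1, 20):  # w = 1: every life equal; w = 20: a VIP "counts" as 20 lives
    best = max(routes, key=lambda name: expected_lives(routes[name], w))
    print(f"vip_weight={w}: choose checkpoint {best}")
```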
You read/copied only part of the sentence. The full sentence is as follows:
"In a practical scenario, an autonomous, morally competent medical transport acting should be able to determine if changing its route from checkpoint Alpha to checkpoint Beta is the best way to achieve its goal of delivering supplies to a disaster area or the battlefield."
"An ethics question would be about what the goal is."
The goal is there: to deliver supplies to a disaster area or the battlefield.
@Susan Fourtane "The goal is there: to deliver supplies to a disaster area or the battlefield."
I understand that, but because the goal is there I don't see any moral question. Deliver the supplies. My guess is that you meant to indicate that the robot is supposed to perform some sort of battlefield triage and decide who can't be saved, who can wait, who needs immediate help, and in what order to treat the casualties to save the most lives. Even here I don't see any ethical dilemmas.
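For the triage reading, the procedure itself is indeed just a sorting rule once a metric is fixed; here is a minimal sketch (casualty data invented, categories loosely following standard triage labels), which, to my mind, makes it procedure rather than an ethical dilemma:

```python
# Minimal triage sketch (casualty data invented; categories loosely follow
# the standard START triage labels). Sorting is the whole "procedure";
# committing to the PRIORITY table is the part others call an ethical choice.

PRIORITY = {"immediate": 0, "delayed": 1, "minor": 2, "expectant": 3}

casualties = [
    {"id": "A", "category": "delayed"},
    {"id": "B", "category": "expectant"},  # judged unsavable, treated last
    {"id": "C", "category": "immediate"},
    {"id": "D", "category": "minor"},
]

queue = sorted(casualties, key=lambda c: PRIORITY[c["category"]])
print([c["id"] for c in queue])  # -> ['C', 'A', 'D', 'B']
```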