Engineering autonomous, morally competent robots might one day mean crafting a conscience capable of distinguishing right from wrong and acting on that distinction. In the near future, artificial intelligence entities might be "better moral creatures than we are" -- or at least better decision makers when facing certain dilemmas.
Roboethics vs. machine ethics
In 2002, roboticist Gianmarco Veruggio coined the term roboethics -- the human ethics of robots' designers, manufacturers, and users -- to outline where research should be focused. At that time, the ethics of artificial intelligence was divided into two subfields.
- Machine ethics: This branch deals with the behavior of artificial moral agents.
- Roboethics: This branch addresses questions about the behavior of humans -- how they design, construct, use, and treat robots and other artificially intelligent beings. Roboethics ponders the possibility of programming robots with a code of ethics that would let them respond appropriately to the social norms that differentiate between right and wrong.
Robotics in the Tufts computer science department: Nao offers a greeting.
(Source: Alonso Nichols/Tufts University)
Naturally, before such morally autonomous robots can be created, researchers have to agree on some fundamental pillars: what moral competence is, and what humans would expect from robots working side by side with them and sharing decision making in areas like healthcare and warfare. At the same time, another question arises: What responsibility do humans bear in creating artificial intelligence with moral autonomy? And the leading research question remains: What would we expect of morally competent robots?