Researchers are exploring how to engineer ethics into artificially intelligent beings that would interact with and help humans as socially assistive robots.
What responsibility do humans bear in creating artificial intelligence with moral autonomy? And how can we engineer ethics into a robot responsibly?
Researchers from Tufts University, Brown University, and Rensselaer Polytechnic Institute, among others around the world, are investigating these and other questions together with the US Navy, with the goal of developing algorithmic morals that could be successfully engineered into a robot. To learn more about current research on embedding moral algorithms in robots, and to continue a series of articles on artificial intelligence (AI) and ethics (see Part 1, Ethical, Autonomous Robots of the Near Future), EE Times interviewed principal investigator Matthias Scheutz, PhD, professor of computer science at Tufts School of Engineering and director of the Human-Robot Interaction Laboratory (HRI Lab) at the university.
The search for answers to this topic, which links robotics and AI engineering, big-data analytics, and the philosophy of ethics, poses challenges for design engineers that should not be overlooked.
Nao sits quietly before Tufts Department of Computer Science Professor Matthias Scheutz's robotics programming class.
(Source: Alonso Nichols/Tufts University)
The first question EE Times asked him concerned the most significant challenge in designing robots with components of moral competence. "There are many challenges involved, from being able to recognize morally charged situations, to enabling the right kind of ethical reasoning that will allow [a robot] to resolve moral conflicts, to deciding on the proper course of action, to implementing all algorithms in a robotic architecture in a way that will make them work robustly in real time," explains Scheutz. His team is currently researching different design options for reasoning algorithms, such as augmented deontic logics (i.e., logics that formalize the notions of obligation and permission), as well as ways to detect morally charged situations, mostly based on language.
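To make the idea of deontic reasoning concrete: a minimal toy sketch, not Scheutz's actual system, might represent norms as obligations and prohibitions over named actions, then check an action's deontic status and flag conflicts (the case where an action is both obligatory and forbidden — precisely the kind of moral conflict the quoted research aims to resolve). The norms and action names below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    action: str
    kind: str  # "obligatory" or "forbidden"

def status(action: str, norms: list[Norm]) -> str:
    """Return the deontic status of an action under a set of norms."""
    kinds = {n.kind for n in norms if n.action == action}
    if {"obligatory", "forbidden"} <= kinds:
        return "conflict"        # O(a) and F(a) both hold: a moral conflict
    if "obligatory" in kinds:
        return "obligatory"
    if "forbidden" in kinds:
        return "forbidden"
    # In standard deontic logic, what is not forbidden is permitted.
    return "permitted"

# Hypothetical care-robot norms for illustration only.
norms = [
    Norm("administer_medication", "obligatory"),    # duty of care
    Norm("leave_patient_unattended", "forbidden"),
    Norm("administer_medication", "forbidden"),     # e.g., patient withdraws consent
]

print(status("administer_medication", norms))  # conflict
print(status("fetch_water", norms))            # permitted
```

A real architecture would go far beyond this lookup — conflicts would trigger reasoning about which norm overrides which, and "morally charged situations" would first have to be recognized from perception and language — but the sketch shows the basic obligation/permission vocabulary that deontic logics formalize.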
In his paper "The Need for More Competency in Autonomous Agents Architectures" (PDF), he discusses healthcare and the battlefield as two fields where autonomous robots will be deployed. The research also addresses challenges that robot designers and engineers face in deciding what choices these robots will make — and who would be responsible for an inadequate decision that causes an otherwise avoidable injury. EE Times asked Scheutz about this from the point of view of a robot designer.