
Engineering Ethics Into A Robot

[Video: Cindy and Transport in action in a preliminary lab test]

How to engineer ethics into a robot
A decision-making process that mimics what humans tend to do in morally challenging situations may be the answer to engineering ethics into a robot: first recognize that a situation is morally charged, then deploy reasoning strategies that draw on moral principles, norms, and values. This corresponds to the third of Prof. James H. Moor's kinds of ethical agent, the "explicit ethical agent":

    Explicit ethical agents can identify and process ethical information about a variety of situations and make sensitive determinations about what should be done. In particular, they are able to reach "reasonable decisions" in moral dilemma-like situations in which various ethical principles are in conflict.
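As a toy illustration only (not Scheutz's or Moor's actual algorithm), the two-step process above can be sketched as a rule-based agent: step one checks whether any moral principle is triggered by the current situation, and step two resolves conflicting principles by priority. All names and the priority scheme are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Situation:
    """A hypothetical world state the robot must act in."""
    facts: set = field(default_factory=set)

@dataclass
class Principle:
    """A moral principle: a trigger condition plus the action it obliges or forbids."""
    name: str
    trigger: frozenset  # facts that activate the principle
    verdict: dict       # action -> "obliged" | "forbidden"
    priority: int       # higher priority wins when principles conflict

def is_morally_charged(situation, principles):
    """Step 1: a situation is morally charged if any principle is triggered."""
    return any(p.trigger <= situation.facts for p in principles)

def decide(situation, actions, principles):
    """Step 2: score each action by the highest-priority principle that judges it."""
    def score(action):
        verdicts = [(p.priority, p.verdict[action])
                    for p in principles
                    if p.trigger <= situation.facts and action in p.verdict]
        if not verdicts:
            return 0
        _, verdict = max(verdicts)  # highest-priority verdict dominates
        return 1 if verdict == "obliged" else -1
    return max(actions, key=score)

# Example: the pain-medication case mentioned later in the article.
principles = [
    Principle("relieve_suffering", frozenset({"patient_in_pain"}),
              {"give_medication": "obliged"}, priority=1),
    Principle("avoid_overdose", frozenset({"patient_in_pain", "dose_limit_reached"}),
              {"give_medication": "forbidden"}, priority=2),
]
s = Situation({"patient_in_pain", "dose_limit_reached"})
print(is_morally_charged(s, principles))                          # the situation is morally charged
print(decide(s, ["give_medication", "call_physician"], principles))
```

The point of the sketch is the dilemma case: two principles conflict over `give_medication`, and the agent must reach a "reasonable decision" rather than fail, here by deferring to the higher-priority prohibition and choosing the unopposed alternative.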

Scheutz has argued that current advances in robotics and artificial intelligence have enabled the deployment of autonomous robots that can make decisions on their own. However, most robots currently deployed are fairly simple and their autonomy is limited; they risk becoming harmful machines because their decision-making algorithms take no moral aspects into account. EE Times asked Scheutz what exactly his fear is.

"If we do not endow robots with the ability to detect and properly handle morally charged situations in ways that are acceptable to humans, we will increase the potential for harm and human suffering unnecessarily," he says, "for autonomous robots will then inevitably make decisions that we deem 'morally wrong,' e.g. failure to provide a patient with pain medication when it was warranted."

In his paper "Think and Do the Right Thing -- A Plea for Morally Competent Autonomous Robots," Scheutz argues that there are actions robot developers could take to mitigate the problem of morally charged situations that social robots deployed in human societies will face. EE Times asked Scheutz what these actions could be.

Comments
Re: Ethical AI has Already Been Achieved
Susan Fourtané   7/20/2014 11:55:02 AM
Thanks, John L. Yes, I will, as I said. It interests me. Have you considered contacting Matthias Scheutz? -Susan

Re: Ethical AI has Already Been Achieved
lamuth   7/19/2014 5:31:51 PM
Thank you Susan...

Please consider my AI work for further coverage ^_^

I have been in contact with the Navy liaison and hope that the grantees (should they read this) might be open to a potentially mutually beneficial collaboration...


John L

Re: Ethical AI has Already Been Achieved
Susan Fourtané   7/17/2014 9:16:57 AM

No one is claiming this is the first research on ethics and AI. :)

On the contrary. The article focuses on current research on different design options for reasoning algorithms, such as augmented deontic logics, as explained by Professor Matthias Scheutz.

The good thing, I believe, is that research on ethics and AI has been going on and it is making progress.

I appreciate you adding all this information to the discussion and to my knowledge. I will keep it for reference.

It's by adding knowledge that we can build on a mutual interest. :) Thanks so much.


Ethical AI has Already Been Achieved
lamuth   7/16/2014 9:56:38 PM
How soon they forget!

Here is my EE Times article from 2003:

Inventor constructs 'ethical' artificial intelligence

Chappell Brown

7/30/2003 06:03 PM EDT


Hancock, N.H. - If successful, a Darpa initiative to develop technology for a "perceptive assistant that learns," or PAL, could kick off a new phase for artificial intelligence, enabling devices that would peruse large databases and assemble their own knowledge bases to assist people in decision-making.


As the 22 labs that have received initial funding from the Defense Advanced Research Projects Agency work out the thorny artificial intelligence (AI) issues to realize the agency's vision, a critical piece of the puzzle may already be in place, in the form of a patent granted last month to author and inventor John E. LaMuth for an "ethical" AI system. LaMuth said he has approached Darpa's Information Processing Technology Office about his expert system, but its proprietary nature has been a stumbling block.


The inventor believes his system addresses a crucial facet of any human-oriented automated personal assistant: an understanding of human motivation and ethics. "This AI patent allows for information processing in an emotive, motivational specialization. As such, it represents a quantum leap in the AI field, allowing for abstract reasoning and long-term planning in matters of a motivational nature," said LaMuth, who believes the personal assistant envisioned in the Darpa initiative would be an ideal first application for his expert system.


LaMuth said he has been talking to Ron Brachman, director of Darpa's Information Processing Technology Office, about the invention and asserted that Brachman has no problem with the expert system itself but is concerned because the technology is proprietary. "I fear that this shortsighted attitude could prove detrimental to America's current preeminence in the field," LaMuth said. "I feel strongly that the newly issued patent eventually will prove its merit."


The inventor's system tackles some of the less-defined areas of mental ability. Generally, expert systems take some well-defined area of expertise and implement rote rule-execution algorithms. Because emotion, ethics and motivation are relatively esoteric concepts that defy hard definitions, capturing them in a digital system that represents discrete rules and procedures is a challenge.


Rather than bytes as the basic unit of data, LaMuth uses the sentence. "This AI entity is readily able to learn through experience [by] employing verbal conversation or a controller interface," he said. "Technically it would not be [reflectively] aware of its own existence, as this is a strictly subjective determination. It would, however, be able to simulate this feature through language, thereby convincing others of the fact."


The system is based on affective language analysis, a branch of linguistics in which language is characterized in terms of goals, preferences and emotions. LaMuth has automated this aspect of linguistics using conventional ethical categories drawn from Western religion, philosophy and ancient Greek thought.


After working out a basic set of ethical categories, LaMuth created a hierarchy of definitions based on the human cognitive ability to construct emotional and motivational models of someone else's state of mind. For example, a customer talking to a salesperson at a car dealership will be aware of his or her own motivation and expectations but will also construct a model of the salesperson's motivation and ethical values to evaluate information presented about a car. The same two people might meet in different circumstances (say, at a party) and the ethical/motivational models they construct then would be different. But the process would be the same in both cases.


Part of the ethical model building involves successive levels of "indirection." In the car salesperson example, the customer might also construct a cognitive model of how the salesperson is thinking about the customer's own motivation and expectations. The human mind can only take this process a few steps, but logically it can be extended indefinitely. LaMuth's expert system uses a 10-level hierarchy, resulting in 32 pages of "schematic definitions" in the patent application.
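Purely as an illustration of the "levels of indirection" idea described above (not the patented system itself), the nesting of mutual models can be expressed as a short recursion in which the modeler alternates at each level:

```python
def nested_model(agents, depth):
    """Describe `depth` levels of mutual modeling between two agents.

    agents: two-element list, e.g. ["customer", "salesperson"];
    the roles swap at each additional level of indirection.
    """
    if depth == 0:
        return f"{agents[0]}'s own motivation"
    inner = nested_model(agents[::-1], depth - 1)  # swap roles, go one level deeper
    return f"{agents[0]}'s model of {inner}"

# Two levels reproduce the car-dealership example; the patent's
# hierarchy extends the same recursion to ten levels.
print(nested_model(["customer", "salesperson"], 2))
# customer's model of salesperson's model of customer's own motivation
```

The recursion makes the article's point concrete: humans can only follow a few such steps, but logically the construction extends indefinitely, which is why a fixed cutoff (ten levels in the patent) is needed.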

more at

patent # 6587846
