
Engineering Ethics Into A Robot

lamuth
User Rank
Rookie
Ethical AI has Already Been Achieved
lamuth   7/16/2014 9:56:38 PM
How soon they forget!

Here is my EETimes article from 2003

http://www.eetimes.com/document.asp?doc_id=1146269


Inventor constructs 'ethical' artificial intelligence
Chappell Brown
7/30/2003 06:03 PM EDT

Hancock, N.H. - If successful, a Darpa initiative to develop technology for a "perceptive assistant that learns," or PAL, could kick off a new phase for artificial intelligence, enabling devices that would peruse large databases and assemble their own knowledge bases to assist people in decision-making.

 

As the 22 labs that have received initial funding from the Defense Advanced Research Projects Agency work out the thorny artificial intelligence (AI) issues to realize the agency's vision, a critical piece of the puzzle may already be in place, in the form of a patent granted last month to author and inventor John E. LaMuth for an "ethical" AI system. LaMuth said he has approached Darpa's Information Processing Technology Office about his expert system, but its proprietary nature has been a stumbling block.

 

The inventor believes his system addresses a crucial facet of any human-oriented automated personal assistant: an understanding of human motivation and ethics. "This AI patent allows for information processing in an emotive, motivational specialization. As such, it represents a quantum leap in the AI field, allowing for abstract reasoning and long-term planning in matters of a motivational nature," said LaMuth, who believes the personal assistant envisioned in the Darpa initiative (see www.eet.com/at/news/OEG20030717S0040) would be an ideal first application for his expert system.

 

LaMuth said he has been talking to Ron Brachman, director of Darpa's Information Processing Technology Office, about the invention and asserted that Brachman has no problem with the expert system itself but is concerned because the technology is proprietary. "I fear that this shortsighted attitude could prove detrimental to America's current preeminence in the field," LaMuth said. "I feel strongly that the newly issued patent eventually will prove its merit."

 

The inventor's system tackles some of the less-defined areas of mental ability. Generally, expert systems take some well-defined area of expertise and implement rote rule-execution algorithms. Because emotion, ethics and motivation are relatively esoteric concepts that defy hard definitions, capturing them in a digital system that represents discrete rules and procedures is a challenge.
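
As a minimal illustration of the kind of rote rule execution described above (the facts and rules below are invented for this sketch and not taken from any particular system):

# Minimal sketch of conventional expert-system rule execution:
# facts plus if-then rules, applied until nothing new can be concluded.
# The facts and rules here are invented purely for illustration.

facts = {"engine_cranks", "no_spark"}

# Each rule: (set of required facts, fact to conclude)
rules = [
    ({"engine_cranks", "no_spark"}, "ignition_fault"),
    ({"ignition_fault"}, "check_ignition_coil"),
]

changed = True
while changed:                      # simple forward chaining
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))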

 

Rather than bytes as the basic unit of data, LaMuth uses the sentence. "This AI entity is readily able to learn through experience [by] employing verbal conversation or a controller interface," he said. "Technically it would not be [reflectively] aware of its own existence, as this is a strictly subjective determination. It would, however, be able to simulate this feature through language, thereby convincing others of the fact."

 

The system is based on affective language analysis, a branch of linguistics in which language is characterized in terms of goals, preferences and emotions. LaMuth has automated this aspect of linguistics using conventional ethical categories drawn from Western religion, philosophy and ancient Greek thought.

 

After working out a basic set of ethical categories, LaMuth created a hierarchy of definitions based on the human cognitive ability to construct emotional and motivational models of someone else's state of mind. For example, a customer talking to a salesperson at a car dealership will be aware of his or her own motivation and expectations but will also construct a model of the salesperson's motivation and ethical values to evaluate information presented about a car. The same two people might meet in different circumstances, say at a party, and the ethical/motivational models they construct then would be different. But the process would be the same in both cases.

 

Part of the ethical model building involves successive levels of "indirection." In the car salesperson example, the customer might also construct a cognitive model of how the salesperson is thinking about the customer's own motivation and expectations. The human mind can only take this process a few steps, but logically it can be extended indefinitely. LaMuth's expert system uses a 10-level hierarchy, resulting in 32 pages of "schematic definitions" in the patent application.
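
As a rough sketch of what such nested levels of indirection might look like in code (the class, fields, and depth limit below are assumptions made for this illustration, not taken from the patent):

# Illustrative sketch only: a toy nesting of motivational models, loosely
# inspired by the "levels of indirection" described above. Class names,
# fields, and the depth limit are assumptions, not taken from the patent.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class MentalModel:
    """One agent's model of another agent's motives, possibly nested."""
    holder: str                                    # who holds this model
    subject: str                                   # whose motives are modelled
    motives: list[str] = field(default_factory=list)
    model_of_me: Optional["MentalModel"] = None    # next level of indirection


def build_nested_model(holder: str, subject: str, motives: list[str],
                       depth: int) -> MentalModel:
    """Recursively build holder's model of subject's model of holder...,
    down to a fixed depth (the patent reportedly uses a 10-level hierarchy)."""
    model = MentalModel(holder, subject, motives)
    if depth > 1:
        # At the next level the roles swap: the subject's imagined model of the holder.
        model.model_of_me = build_nested_model(subject, holder, motives, depth - 1)
    return model


if __name__ == "__main__":
    # The customer's model of the salesperson's motives, two levels deep.
    m = build_nested_model("customer", "salesperson",
                           ["close the sale", "earn commission"], depth=2)
    print(m.holder, "models", m.subject, "->", m.motives)
    print("  which contains", m.model_of_me.holder, "modelling", m.model_of_me.subject)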

more at

http://www.angelfire.com/rnb/fairhaven/patent.html

patent # 6587846

Susan Fourtané
User Rank
Blogger
Re: Ethical AI has Already Been Achieved
Susan Fourtané   7/17/2014 9:16:57 AM
lamuth, 

No one is claiming this is the first research on ethics and AI. :)

On the contrary. The article focuses on current research on different design options for reasoning algorithms, such as augmented deontic logics, as explained by Professor Matthias Scheutz.
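
For readers who have not met the term, here is a toy illustration of the obligation/permission/prohibition distinctions that deontic logics formalize; this is only a simplified sketch, not Professor Scheutz's actual framework, and the norms listed are invented:

# Toy sketch of deontic-style checks (obligatory / permitted / forbidden).
# This only illustrates the general idea; it is not an implementation of
# augmented deontic logic or any real robot architecture.

from enum import Enum


class Deontic(Enum):
    OBLIGATORY = "obligatory"
    PERMITTED = "permitted"
    FORBIDDEN = "forbidden"


# Hypothetical norms a household robot might be given.
NORMS = {
    "report_injury": Deontic.OBLIGATORY,
    "fetch_medicine": Deontic.PERMITTED,
    "administer_drug_unsupervised": Deontic.FORBIDDEN,
}


def evaluate(action: str) -> Deontic:
    """Look up an action's deontic status; unknown actions default to permitted
    (a design choice a real system would have to justify)."""
    return NORMS.get(action, Deontic.PERMITTED)


if __name__ == "__main__":
    for act in ["report_injury", "administer_drug_unsupervised", "make_tea"]:
        print(f"{act}: {evaluate(act).value}")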

The good thing, I believe, is that research on ethics and AI has been going on and it is making progress.

I appreciate that you are adding all this information to the discussion and adding to my knowledge. I will keep this for reference.

It's by adding knowledge that we can build on a mutual interest. :) Thanks so much.

-Susan 

lamuth
User Rank
Rookie
Re: Ethical AI has Already Been Achieved
lamuth   7/19/2014 5:31:51 PM
Thank you Susan...

Please consider my AI work for further coverage ^_^

I have been in contact w/ the NAVY liaison and further hope that the grantees (should they read this) might be open to a potentially mutually-beneficial collaboration...

Appreciatively

John L

http://www.angelfire.com/rnb/fairhaven/patent.html

Susan Fourtané
User Rank
Blogger
Re: Ethical AI has Already Been Achieved
Susan Fourtané   7/20/2014 11:55:02 AM
Thanks, John L. Yes, I will, as I said. It interests me. Have you considered contacting Matthias Scheutz? -Susan

mhrackin
User Rank
CEO
And if they succeed....
mhrackin   7/24/2014 2:18:03 PM
...will they then try embedding ethical behavior into humans? Now THAT would really be worthwhile. It's a goal that our entire education, upbringing, and social culture "experts" seem to have largely abandoned in the past 40 years or so.

Susan Fourtané
User Rank
Blogger
Re: And if they succeed....
Susan Fourtané   7/25/2014 4:27:49 AM
mhrackin, 

I love your comment. I totally agree with you. That's exactly one of the points of interest in this subject.

You would expect that whoever is involved in embedding ethical behavior into robots must have demonstrated a high level of ethics themselves.

That is one of the reasons why I believe embedding ethics into robots carries a lot of responsibility. 

-Susan

 

David Ashton
User Rank
Blogger
Re: And if they succeed....
David Ashton   7/25/2014 5:08:28 AM
@MHRackin...that is almost identical to a comment I made on another blog about ethical autonomous robots here.  Glad to see I am not alone in my cynicism (or, as George Bernard Shaw would call it, accurate observation :-)

Susan Fourtané
User Rank
Blogger
Re: And if they succeed....
Susan Fourtané   7/25/2014 7:42:45 AM
David, 

That other article is the first part of this one. :) They belong to a series of articles aiming to explore current research on ethics, robotics, and AI.

Indeed, as you and MHRackin have observed, one of the thoughts that comes out of this topic is related to observing and analyzing the current state of human ethics.

And also, as I mentioned before, to considering the responsibility that working in this field should carry.

-Susan

David Ashton
User Rank
Blogger
Re: And if they succeed....
David Ashton   7/26/2014 2:52:12 AM
Hi Susan...sorry, did not realise this article was a continuation of the first one.

I tell you what DOES need some ethics - those self-serve supermarket checkouts. I tried them a couple of times but they were forever accusing me of not putting things in the bag, or taking things out of the bag, or putting extra things in the bag without checking them.   I was once about to punch the screen of the stupid thing.   They will have to get a LOT better before I use them again.   There's something for your ethical embedded programmers to start on!!

Crusty1
User Rank
CEO
Liar
Crusty1   7/26/2014 4:14:46 AM
Hi David, Susan. From what I have seen in these articles and others so far, we are well on our way to allowing logical entities to start lying.

A medical autonomous robot would not do the patient much good if it said, "You are 99.9% certain to die from your wounds." Ethically, the surgeon or nurse will easily bend logic to increase the will of the patient to live, but we all know they are lying for the best reasons.

David, I think the point-of-sale checkout computers get so bored with the speed of humans that they have fun with us at the till. Should we leave humour out of the autonomous robots' reactions?

David Ashton
User Rank
Blogger
Re: Liar
David Ashton   7/26/2014 4:43:18 AM
Hi Crusty.  Well those point of sale checkouts cause a serious sense of humour failure on my part (maybe that is what they are aiming at :-)

If they're so damn clever, they can get a robot arm to take the items off the belt, scan them, and put them in bags. That would save them from accusing me of cheating.

And if they are so damn clever, how come they need a human to supervise them?  (Oh, sorry, that's to stop me punching the screen :-)

If we could program the genius of Dr House into a medical robot, we'd be doing well....

I suppose human beings are fairly fuzzy, logic-wise, so the poor computers have a fair bit to put up with. But there's a rule that covers most of these situations:

To err is human.  To really stuff things up, takes a computer!

 

Crusty1
User Rank
CEO
Re: Liar
Crusty1   7/26/2014 6:04:13 AM
Hi David: To err is human.  To really stuff things up, takes a computer!


Ah well, if we are chopping logic, then by a circular argument humans made the computer, so they are responsible for the computer's inability to process correctly.

The trouble with point-of-sale programmes is that they most often do not get the right people to write the algorithms. It should be a computer-literate shopper who writes and tests the programme.

I remember when the first attempts at biological cell recognition were starting: the biologists did not understand computers and the computer coders did not understand biology. I got a few years of paid work doing bidirectional translation between the biologists and the hardware/software guys.

Personally, I will only shop in outlets that use hand scanners; these work well but do require an element of honesty from the shopper.

 

David Ashton
User Rank
Blogger
Re: Liar
David Ashton   7/26/2014 8:54:06 PM
@Crusty..."humans made the computer so they are responsible for the computers inability to process correctly."      Ywah, that other old rule, Garbage in, Garbage out.....

"It should be a computer literate shopper who writes and tests the programme."

He'd need to be literate to write it, but I would say you should use a computer ILLITERATE shopper to test it.

I think one of the problems I had was that I pulled something out of the bag, then put it back in, packing it better so I'd only need one bag.   That is the sort of thing that these @#$%^& auto-checkouts need to be able to cope with.

Re hand-scanners...I don't have a problem with the scanners, they work quite well, but the algorithms that check, only after scanning, whether I have put everything in the bags need a bit of tweaking. When the auto checkouts approach the level of friendliness and intelligence of even the dumbest human checkout person, then I'll use them, not before.

 

David Ashton
User Rank
Blogger
Re: And if they succeed....
David Ashton   7/25/2014 6:19:27 AM
@mhrackin....been thinking more about your comment....I always find it astounding that while the human race has progressed so much technically, socially it seems to have gone backwards.
