Hancock, N.H. - If successful, a Darpa initiative to develop technology for a "perceptive assistant that learns," or PAL, could kick off a new phase for artificial intelligence, enabling devices that would peruse large databases and assemble their own knowledge bases to assist people in decision-making.
As the 22 labs that have received initial funding from the Defense Advanced Research Projects Agency work out the thorny artificial intelligence (AI) issues to realize the agency's vision, a critical piece of the puzzle may already be in place, in the form of a patent granted last month to author and inventor John E. LaMuth for an "ethical" AI system. LaMuth said he has approached Darpa's Information Processing Technology Office about his expert system, but its proprietary nature has been a stumbling block.
The inventor believes his system addresses a crucial facet of any human-oriented automated personal assistant: an understanding of human motivation and ethics. "This AI patent allows for information processing in an emotive, motivational specialization. As such, it represents a quantum leap in the AI field, allowing for abstract reasoning and long-term planning in matters of a motivational nature," said LaMuth, who believes the personal assistant envisioned in the Darpa initiative (see www.eet.com/at/news/OEG20030717S0040) would be an ideal first application for his expert system.
LaMuth said he has been talking to Ron Brachman, director of Darpa's Information Processing Technology Office, about the invention and asserted that Brachman has no problem with the expert system itself but is concerned because the technology is proprietary. "I fear that this shortsighted attitude could prove detrimental to America's current preeminence in the field," LaMuth said. "I feel strongly that the newly issued patent eventually will prove its merit."
The inventor's system tackles some of the less-defined areas of mental ability. Generally, expert systems take some well-defined area of expertise and implement rote rule-execution algorithms. Because emotion, ethics and motivation are relatively esoteric concepts that defy hard definitions, capturing them in a digital system that represents discrete rules and procedures is a challenge.
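For contrast, the rote rule-execution approach is easy to sketch. The short Python fragment below is a generic illustration of a forward-chaining rule engine, not anything drawn from LaMuth's patent; the rules and facts are invented for the example.

```python
# Minimal sketch of a conventional rule-based expert system
# (illustrative only; the rules and facts here are hypothetical).

# Each rule pairs a set of required facts with a conclusion.
RULES = [
    ({"engine_cranks", "no_spark"}, "check_ignition_coil"),
    ({"engine_cranks", "no_fuel"}, "check_fuel_pump"),
]

def forward_chain(facts):
    """Repeatedly fire any rule whose conditions are all satisfied,
    adding its conclusion to the fact base, until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"engine_cranks", "no_spark"}))
# -> {'engine_cranks', 'no_spark', 'check_ignition_coil'}
```

Each pass fires any rule whose conditions are already in the fact base; the difficulty LaMuth is addressing is that emotive and motivational "facts" resist this kind of crisp enumeration.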
Rather than bytes as the basic unit of data, LaMuth uses the sentence. "This AI entity is readily able to learn through experience [by] employing verbal conversation or a controller interface," he said. "Technically it would not be [reflectively] aware of its own existence, as this is a strictly subjective determination. It would, however, be able to simulate this feature through language, thereby convincing others of the fact."
The system is based on affective language analysis, a branch of linguistics in which language is characterized in terms of goals, preferences and emotions. LaMuth has automated this aspect of linguistics using conventional ethical categories drawn from Western religion, philosophy and ancient Greek thought.
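The article does not disclose the patent's actual taxonomy, but the basic move of affective language analysis, tagging a sentence with goal and emotion categories, can be roughly sketched as follows. The category names and cue words here are invented for illustration.

```python
# Hypothetical sketch of affective language analysis: mapping a
# sentence onto coarse goal/emotion categories. The categories and
# cue words are invented; LaMuth's taxonomy is not described here.

AFFECT_CUES = {
    "desire":    {"want", "hope", "wish"},
    "gratitude": {"thanks", "grateful", "appreciate"},
    "aversion":  {"hate", "fear", "avoid"},
}

def classify_sentence(sentence):
    """Return the affect categories whose cue words appear in the sentence."""
    words = set(sentence.lower().split())
    return {cat for cat, cues in AFFECT_CUES.items() if cues & words}

print(classify_sentence("I hope the dealer will appreciate my offer"))
# -> {'desire', 'gratitude'}
```

A real system would need far subtler cues than keyword matching, but the output, a set of affective labels per sentence, is the raw material that the ethical categories would then organize.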
After working out a basic set of ethical categories, LaMuth created a hierarchy of definitions based on the human cognitive ability to construct emotional and motivational models of someone else's state of mind. For example, a customer talking to a salesperson at a car dealership will be aware of his or her own motivation and expectations but will also construct a model of the salesperson's motivation and ethical values to evaluate information presented about a car. The same two people might meet in different circumstances, say at a party, and the ethical/motivational models they construct then would be different. But the process would be the same in both cases.
Part of the ethical model building involves successive levels of "indirection." In the car salesperson example, the customer might also construct a cognitive model of how the salesperson is thinking about the customer's own motivation and expectations. The human mind can only take this process a few steps, but logically it can be extended indefinitely. LaMuth's expert system uses a 10-level hierarchy, resulting in 32 pages of "schematic definitions" in the patent application.
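One plausible way to represent such nested levels of indirection in software is as a recursive data structure, each level holding a model of the level below it. The sketch below is an assumption for illustration only; the field names are invented, and the patent's 10-level hierarchy of schematic definitions is presumably far richer.

```python
# Sketch of the "levels of indirection" idea: each level nests a model
# of what the other party believes about the level below it. The field
# names and the motive representation are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MindModel:
    holder: str                      # whose point of view this level captures
    motives: dict                    # e.g. {"goal": "fair price"}
    model_of_other: Optional["MindModel"] = None  # next level of indirection

def build_hierarchy(agents, motives_by_level, depth):
    """Build a depth-level chain: A's own motives, A's model of B,
    A's model of B's model of A, and so on, alternating holders."""
    model = None
    for level in reversed(range(depth)):
        model = MindModel(holder=agents[level % 2],
                          motives=motives_by_level[level],
                          model_of_other=model)
    return model

# Three of the ten levels, using the car-dealership example:
customer_view = build_hierarchy(
    ["customer", "salesperson"],
    [{"goal": "fair price"},              # level 0: customer's own motives
     {"goal": "close the sale"},          # level 1: model of the salesperson
     {"goal": "detect sales pressure"}],  # level 2: salesperson's model of the customer
    depth=3,
)
print(customer_view)
```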
...will they then try embedding ethical behavior into humans? Now THAT would really be worthwhile. It's a goal that our entire education, upbringing, and social culture "experts" seem to have largely abandoned in the past 40 years or so.
Hi Susan...sorry, did not realise this article was a continuation of the first one.
I tell you what DOES need some ethics - those self-serve supermarket checkouts. I tried them a couple of times but they were forever accusing me of not putting things in the bag, or taking things out of the bag, or putting extra things in the bag without checking them. I was once about to punch the screen of the stupid thing. They will have to get a LOT better before I use them again. There's something for your ethical embedded programmers to start on!!
Hi David, Susan. From what I have seen in these articles and others to date, we are well on our way to allowing logical entities to start lying.
A medical autonomous robot would not do the patient much good if it said, "You are 99.9% certain to die from your wounds." Ethically, the surgeon or nurse will easily bend logic to increase the patient's will to live, but we all know they are lying for the best of reasons, don't we?
David, I think the point-of-sale checkout computers get so bored with the speed of humans that they have fun with us at the till. Should we leave humour out of the autonomous robots' reactions?
Hi David: To err is human. To really stuff things up takes a computer!
Ah well, if we are chopping logic, then by a circular argument humans made the computer, so they are responsible for the computer's inability to process correctly.
Trouble with point-of-sale programmes is that they most often do not get the right people to write the algorithms. It should be a computer-literate shopper who writes and tests the programme.
I remember when the first attempts at biological cell recognition were starting: the biologists did not understand computers, and the computer coders did not understand biology. I got a few years of paid work doing bidirectional translation between the biologists and the hardware/software guys.
Personally, I will only shop in outlets that use hand scanners; these work well but do require an element of honesty from the shopper.
@Crusty: "humans made the computer, so they are responsible for the computer's inability to process correctly." Yeah, that other old rule: garbage in, garbage out...
"It should be a computer literate shopper who writes and tests the programme."
He'd need to be literate to write it, but I would say you should use a computer ILLITERATE shopper to test it.
I think one of the problems I had was that I pulled something out of the bag, then put it back in, packing it better so I'd only need one bag. That is the sort of thing that these @#$%^& auto-checkouts need to be able to cope with.
Re hand scanners: I don't have a problem with the scanners; they work quite well. But the algorithms that check, only after scanning, whether I have put everything in the bags need a bit of tweaking. When the auto-checkouts approach the friendliness and intelligence of even the dumbest human checkout operator, then I'll use them, not before.