A friend just finished a post-doc where they used genetic algorithms to modify the object code of a program, both to fix bugs and to change the program's behaviour. The project's informal name was Skynet, and it was successful. The work was done at UNM (the University of New Mexico).
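For anyone curious about how that kind of repair works, here's a rough sketch of the idea in Python. To be clear, this is my own toy illustration, not the actual project's code: the statement-list representation, the edit pool, and the test-based fitness function are all assumptions I'm making about the general genetic-algorithm approach.

```python
import random

# Toy "program": a list of statements; the bug is the wrong operator on line 1.
BUGGY = ["total = 0", "total = total - x", "return total"]  # should be '+'

# Edit pool: candidate statements the GA may splice in, analogous to
# reusing code already present elsewhere in the program.
POOL = ["total = 0", "total = total + x", "total = total - x", "return total"]

def run(program, x):
    """Execute the statement list as the body of a one-argument function."""
    src = "def f(x):\n" + "\n".join("    " + s for s in program)
    env = {}
    try:
        exec(src, env)
        return env["f"](x)
    except Exception:
        return None  # crashing variants simply fail their tests

TESTS = [(1, 1), (2, 2), (5, 5)]  # (input, expected): f should return x

def fitness(program):
    """Fitness = number of passing test cases."""
    return sum(run(program, x) == want for x, want in TESTS)

def mutate(program):
    """Replace one random statement with one drawn from the edit pool."""
    child = list(program)
    child[random.randrange(len(child))] = random.choice(POOL)
    return child

def repair(pop_size=20, generations=50):
    population = [mutate(BUGGY) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TESTS):
            return population[0]  # all tests pass: candidate patch found
        # Keep the fitter half, refill with mutated copies of survivors.
        survivors = population[: pop_size // 2]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return None  # no patch found within the generation budget

if __name__ == "__main__":
    print(repair())
```

The real work operated on object code rather than source statements, but the loop is the same: mutate candidates, score them against a test suite, and keep the fittest until one passes everything.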
Interesting that the military wants UAVs to have this capability instead of tanks. Read "Bolo" and the others in the series... http://en.wikipedia.org/wiki/Bolo_(tank) and http://en.wikipedia.org/wiki/Keith_Laumer
One of the other questions is: what does the UAV decide to do when it is damaged to the point of being unable to return home or complete the mission? An obvious choice is to pick a target and dive onto it.
I often wonder about reporting from a country with limited press freedom, and about its ultimate motives (talking about RT from Russia), but I have thoroughly enjoyed every one of these. Listening to Jody is inspiring, and I have to agree: war is so dreadful that you really don't want anything to make it easier. The US, with its incredible military and industrial strength, could find itself in a position (with these autonomous robots) where it could choose to wage war on anyone just because their leader made a wisecrack, and there would be almost no political backlash because no soldiers die. They might even have a future leader who is as full of himself as the current NK leader and decide that they don't have enough oil and maybe Canada's tar sands would be a good acquisition. Jody really brought up good points, and I think she's right on the money, even where she said Obama shouldn't have accepted the Nobel Peace Prize. I don't have anything against Obama, mind you, but for someone who has commanded many a military attack (commander in chief and all that) it wasn't good form. Re others' comments, I have to agree this has a definite Skynet thing about it.
There is a big difference between an unethical "dumb" unmanned weapon in a fixed location (e.g. a land mine) and an unethical "smart" unmanned weapon like a robot that can pick its own targets, move, and act on them. Without a human in the loop (which today's drones still have), one can imagine the whole plethora of new and wonderful unintended things that can go wrong. I could mention dozens of movies and sci-fi stories of the typically expected unexpected failures with robots, but why? Certain people are rushing to build killer robots anyway, no matter how good or bad the idea. Some will say it's safer, cheaper, or saves lives while increasing "target throughput", and from a practical POV they are right. The argument not to build such automatic killing machines will have to fall under ethics or even etiquette (touchy-feely human reasons). Mainly: is it ethical for some unthinking man-made object to take over the fundamental human decision of which other humans should die, letting people avoid having to justify their ethical and moral reasoning to others afterward, when a wrong decision has the same possible outcome as murder? This is almost like a high-tech Nuremberg defense ("not me, the machine did it"). Then, strangely, in the "etiquette" of war, it would frankly be discourteous to both sides to let some machine do the dirty work when a personal human touch is required (the warrior ethic). Finally, even after a successful friend-or-foe identification, will these "smart" machines be able to discern the nuance of someone surrendering, injured, mentally damaged, defecting, or wanting to negotiate?