The Terminator series has forever seared into our minds the idea of AIs turning into Skynet, the military AI that decided people were a threat to its existence and had to be eliminated.
The Janus Project is a story by James P. Hogan that raises a similar problem, but essentially one of unintended consequences. The story opens with surveyors on the moon asking the computer to estimate how long it would take to clear a mountain range for construction of a new linear-acceleration catapult. The computer answers, "15 minutes." Laughing, the surveyors tell the computer to execute the plan and are almost killed, because the computer directed the existing catapult to start dumping its loads onto the mountain range to blast it away. The rest of the book is devoted to trying to teach the computers the consequences of their actions.
These are fictional stories, but we may well see instances where AIs reach unexpected solutions to the problems they are given, the same way a child may produce a solution different from what you expect.
It's not going to be as easy as installing Asimov's Three Laws of Robotics. Besides, Asimov spent most of his stories illustrating how inadequate they were. Jack Williamson's With Folded Hands demonstrated that those rules, taken to the extreme, were not good either.
I don't want to sound down on AIs; I really like the idea. But we're fooling ourselves if we think they are going to turn out to be exactly what we expect.
In 1984, Scientific American published an article on a game called Core War. The premise was that two or more programs, each allowed to execute one instruction per turn, would try to eliminate all of the competitor programs. The game was won when only one program remained functional (or was the last program with instructions that could still be executed). The programs ran in a sandboxed, shared memory space and were written in an interpreted language (called Redcode). One of the first strategies developed was the use of replicating code.
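As a rough illustration (a simplified toy in Python, not real Redcode), here is the spirit of the classic one-instruction Core War replicator, the "imp": each turn it copies its own instruction into the next memory cell, and execution follows the copy around the circular core until the whole core is overwritten.

```python
# Toy model (an illustrative simplification, not actual Redcode) of the
# Core War "imp": the single instruction MOV 0 1 copies the contents of
# its own address into the next address, and execution follows the copy.
CORE_SIZE = 8
core = ["DAT"] * CORE_SIZE   # DAT = inert cell; executing one kills a program
core[0] = "MOV 0 1"          # the imp starts at address 0

pc = 0                           # the imp's program counter
for _ in range(CORE_SIZE):       # run enough turns to circle the core
    if core[pc] == "MOV 0 1":
        core[(pc + 1) % CORE_SIZE] = core[pc]  # copy self to the next cell
        pc = (pc + 1) % CORE_SIZE              # execution moves with the copy
    else:
        break  # the imp hit a DAT cell and died

print(core)  # every cell now holds the imp's instruction
```

A single imp can never be killed by overwriting the core behind it, which is why replicating code was such an effective early strategy.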
It was not long after this that the first virus and worm programs appeared. While written by humans, these pieces of code were designed to replicate themselves and spread, similar to early life. We have seen these evolve from purely destructive functions to stealthy infections which are designed to compromise data and security without user awareness. Much of today's malware collects information and passes it along to a server using the internet.
Programming itself has evolved a long way from the rudimentary single-task functions of early computing. We now have distributed programs which harness the power of many computers connected through the internet to accomplish truly herculean tasks. The programs which form the basis of our interface to computers now have hundreds of thousands to millions of lines of code, and many of the applications we use are the same.
All of this mimics the evolution of life, condensed from billions of years to only a few decades. Life evolved in large measure through the errors encoded into the programming of life. Will real computer awareness result from a similar mechanism? For now at least, we still control the "food" and the means of replication for machines.
I had focused on AI in graduate school, but for the last 25+ years I've been doing processor design. Admittedly, I'm impressed by our ability to keep doubling the number of devices in the same area about every two years, but I just think it's total poppycock (you young engineers go look that up in a dictionary, um, or google it :-) to say we're going to have some kind of sentient AI that can best us.
Every time I see some press release out of the likes of "The Singularity" conference or Ray Kurzweil, etc., my eyeballs roll back into my head.
My first thought is: oh boy, they must be needing additional funding or something from DARPA, and so here's the hype machine going again!
To me it's simple: look at the human brain. It has around 100 billion neurons, and most of those neurons have thousands of connections to neighboring neurons.
What's the best we have in a computer chip today, 2.5 billion transistors? Most of those are six transistors coupled together to form a bit cell. In logic gates, typical fanout (connections) is around 2 or 3, not the thousands of connections an analog neuron has.
While I oversimplify, I believe the comparison is fair in showing how very, very far we are from needing to worry about some cognizant machine tweaking with us.
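For what it's worth, the back-of-envelope comparison above can be made explicit (assuming a round 1,000 connections per neuron, the low end of "thousands"):

```python
# Rough comparison of brain connectivity vs. a large chip, using the
# figures quoted above. All numbers are order-of-magnitude estimates.
neurons = 100e9              # ~100 billion neurons
synapses_per_neuron = 1000   # "thousands" of connections; use 1,000 as a floor
brain_connections = neurons * synapses_per_neuron   # ~1e14 synapses

transistors = 2.5e9          # a big chip of this era
fanout = 3                   # typical logic fanout of 2 or 3
chip_connections = transistors * fanout             # ~7.5e9 connections

ratio = brain_connections / chip_connections
print(ratio)  # the brain leads by roughly four orders of magnitude
```

Even with the most generous assumptions for the chip, the gap in raw connectivity is about four orders of magnitude, before accounting for the analog, adaptive behavior of each synapse.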
Don't get me wrong, I hope we one day get to that point, I for one have a positive outlook on our new robot overlords keeping us in check!
Living things self-replicate with variation and this over time gives rise to evolution. Machines, on the other hand, are a result of cooperative assembly - it takes many machines to make a machine. How can the whole system of machines evolve together rather than selfishly compete? Machines only exist because humans decide what machines to make and set everything up.
Also, why would a machine want to make more like itself? We are wired that way because we are the result of millions of years of evolution that favoured creatures that were good at reproducing. But machines don't have to be wired any particular way at all. Every time you imagine a rogue machine that wants to take over the world, just imagine another machine that wants to stop it. Machines can want whatever we tell them to want.
Chrisw270 & Poppycock: My article was intended to raise the question rather than to predict our future. I think the key to the possibility of it happening in the extreme is the need for what I have called Synergistic Evolution. If you can find an example in natural evolution where one species has aided the evolution of another, then it might be possible for it to happen again. As you will note from my Fig 2, I suggested using the variable Bovine Excrement on a log scale from left to right to indicate the possibility of a particular event occurring. I hope that conveyed my sentiment that sophisticated tools, body parts, and weapons are the most likely near-term outcome.
You claim that machines only do what we want them to do. I think you must accept that it is possible, after an upset in hardware or complex software, for equipment not to do what is intended. Let me give you an example from my background in rad hardening. I will use the much-simplified example of launch-and-leave strategic weapons, where the target information is held in memory. Assume the memory is not radiation hard; then incident radiation from whatever source could change the memory and change the target. You can extend the example to drones with people-recognition equipment if you want. Yes, I know about triple redundancy and error detection and correction (although of late, in dealing with claims about the capability of FEC and memory, some personal doubts have crept in).
I use the radiation-effects example because, along with other environmental effects, radiation may have played a part in modifying the human genetic structure to produce small species variants that had a better chance of surviving, and so helped our species on its way.
I think the important thing to keep in mind is that until now machines were very limited in what they could do. Autonomous machines, however, are something we must address, because they will become a reality in the near future; we already have the technology to create autonomous attack drones. See http://news.sky.com/story/1259885/ban-killer-robots-before-they-even-exist
What is clear is that ultimately machines will design machines. If computers can be designed to render themselves obsolete by designing their successors, it will lead to recursive self-improvement; in other words, the computer will be able to make adjustments to its own capabilities without human intervention, resulting in ongoing improvements. In effect, each improvement could be more significant than the last, leading to a rate of evolution that could beat anything possible in nature in terms of speed. It is this theory, that each step will yield exponentially more improvement than the previous one, that is the basis of the theory of the singularity.
I don't believe in this recursive self-improvement idea. All big improvements in technology are the result of a diverse mix of technologies coming together and involve extensive experimentation in the real world. It won't just happen inside a computer. Also the real world sets hard limits on how far you can go.
Computers can store huge amounts of data and manipulate it very quickly. That's a nice trick but useful applications are not limitless. You can label certain types of computation as intelligence, if you like, but that doesn't change anything. Evolution shows that intelligence is not universally favoured, it just enhances survival chances in certain scenarios.
Evolution can be programmed into a computer... However, that evolution would be limited by the explicit or implicit parameters of the original program, much like mammals are limited to a bone structure with four limbs... (not completely true, but I hope you get the point).
This basically means a 28nm machine "species" wouldn't be able to "evolve" into a 20nm machine without a priori knowledge of physics and material properties (not likely to be achieved anytime soon), not without the help of human researchers.
What would evolution be useful for? Software development and digital design? Possibly. An evolutionary algorithm could be used to develop a better evolutionary algorithm, or a better CPU so that the evolutionary algorithm can run faster. But that wouldn't be a true AI, as it would still be limited by the parameters of the initial program.
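A toy sketch of the point (the target, fitness function, and parameters here are hypothetical, chosen purely for illustration): an evolutionary algorithm can only ever search within the representation, mutation rate, and fitness function its programmer fixed up front.

```python
import random

random.seed(1)  # make this toy run deterministic

# Toy evolutionary algorithm: evolve a 16-bit genome toward an all-ones
# target. Everything the program can ever "evolve" is bounded by the
# parameters chosen here: genome length, mutation rate, fitness function.
TARGET = [1] * 16

def fitness(genome):
    # count positions that match the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
best = max(population, key=fitness)
for generation in range(300):
    if fitness(best) == len(TARGET):
        break
    # elitism: keep the best individual, fill the rest with its mutants
    population = [best] + [mutate(best) for _ in range(19)]
    best = max(population, key=fitness)

print(generation, fitness(best))
```

The population reliably converges on the target, but no run of this program will ever produce anything outside the space of 16-bit strings it was given; the "species" cannot escape its initial parameters, which is exactly the limitation described above.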
wc0 mentioned a great point: computer viruses replicate and spread "like life". However, it wouldn't really be "like life" until they start to be programmed to self-mutate to adapt to various environments, and possibly enter into symbiosis with other forms of viruses in order to keep self-replicating. This aspect of mutation is, to my knowledge, still absent from computer programs, and until programs start to self-mutate (and possibly "die" in order to free resources for later generations), I won't be able to consider a program "alive", let alone intelligent.
Would it be good to write such a program? It depends... Could we force the Three Laws of Robotics into such a program?
The future is going to be very interesting, with machines trying to become more like humans and humans trying to become more like machines. It's just the emotions and intelligence that will separate them. I guess many sci-fi movies are going to become real.
I think machines are not conscious enough to be judged for anything; humans are. The people who make these things have to evaluate the potential conflicts of interest in adding more functionality to a system when it can lead to dangerous side effects, or study in depth what can be done without harming others, such as anti-malware operating systems, safe-by-construction languages, and ethical matters. Anyway, this has been done in nuclear research with mild results, so it should be done with good judgment; in the hands of psychopaths this is not looking good, for either humans or machines.
Preposterous. The thought that "evolution can be programmed into a computer" is so absurd that I immediately don't know how to react to it. I've programmed a lot of computers, but the thought of including evolution in computers, which is an extremely valuable human abstract concept, provided by Darwin and Huxley in the 19th century, is beyond my imagination. How would you do that? Certainly you could not input a bunch of character strings, or operations on and manipulations of same. That would be, on its face, absurd.
Are you in charge of the Universe, or what? Truly, how would you do that?
For sure, the "mechanics" (what a loaded word) of genetic reproduction are hardly understood at all. Consider the remarkable complexity of the fruit fly transcriptome. People at UC Berkeley found about 100 genes that can encode hundreds or even thousands of different types of proteins, which they do selectively, in a completely unknown way, in response to environmental stress tests performed on them. And that's only a fruit fly.
Sofianitz: Wow, first-order predicate logic! You may well be right. However, would you say the division and replication of strands of DNA matches your requirements as a predicated logic activity? The intention of whoever "designed" it (no, we don't do God) was to effect perfect replication, yet somehow this process can introduce the effects and changes which allowed the evolution that got you from primitive cells to where you are today, writing in EETimes. I think you must allow for the fact that when the level of complexity is high, unintended and unpredicted changes can be introduced.
I think the point I tried to make in my, as you say, "absurd" diagrams was the absolute necessity of a period of synergistic evolution for the machine branch to exist. I think my question, fact or fantasy, was clear.
Machines can't think. Most digital computers can encode and perform rudimentary mathematical operations on a small subset of the rational numbers, and they can encode and store character strings provided by people (like these here), and aggregate, count, and arrange them in various ways, following algorithms provided by a person or persons. But they don't know anything, and it doesn't mean anything (to them). Meaning is a relation between two or more persons and some object, usually (but not always) some kind of a semiotic entity.
Symbols (e.g., words) are far too simple to mean. Thought is also too simple to mean. When you or I think, that does not mean anything, either, because there is no relation between two people, a necessary condition.
If I "interact" ( a word I believe inappropriate) with a computer running say a neural net symbol manipulator, I am actually interacting with the algorithm writer (another person, or persons).
Cosmologists tell us that the star we call the Sun appeared 4.6 billion years ago, and the Earth around 100 million years later. Life appeared maybe a billion years after that.
Humans like us have been around 150,000 years or so, and there hasn't been much important evolutionary change in us since then (blue eyes?). Evolutionary change does not occur in a time dimension that is the same as that in which social discourse occurs. Lots of things change (we learned to read and write about 4-5 thousand years ago), but that has nothing to do with evolution.
I have enormous respect for people working in the field "Artificial and Machine Intelligence", but you guys sure do talk funny. Confusing.
PS: John Lilly answered Wittgenstein at Spencer-Brown's Esalen seminar in 1973:
"Whereof one cannot speak [as yet], Thereof one must be silent." Wittgenstein
"The province of the mind has no limits; its own contained beliefs set limits that can be transcended by suitable meta-beliefs (like this one)." Lilly
I understand Dijkstra's comment to mean that it's pointless to argue about whether computers can think because everybody has a different understanding of what thinking means. If one argues that a submarine cannot swim, then one would have to argue that an airplane cannot fly. Birds fly, and so do airplanes. Fish swim, but for some reason submarines don't? It's just a question of what semantics you want to assign to the words, and is about as useful as wondering why the plural of mouse is mice, yet the plural of house is houses.
Personally, I like the Turing test. Every day people communicate with other entities on the Internet, and have no way of knowing whether the entity on the other end is another human, an intelligent computer, someone from another planet, or a dog :-)
Update: Besides, one can speak of a vessel as swimming, as in my father's favorite line from Joseph Conrad's Narcissus, which refers to a disabled ship at sea: "As long as she swims I will cook!"
Oh God, that's ridiculous. We communicate with programs written by humans (computers can't communicate; what an absurd idea). We communicate with algorithms that supply us with character strings that we can recognize, often provided reasonably directly to us by other humans, but ultimately always provided by humans, even if indirectly. Truly, who else could possibly provide us with these symbols from which we generate meaning? And if they were not intentional human communications, how would we possibly recognize them?
That is false. Everybody has the same understanding of thinking. That is, I believe that there are other beings like me out there. This is the first principle of faith, the one that makes human communication possible. But if you don't understand the difference between "thinking" and "what thinking means"? Gosh, try to crawl out of the mud.
As a result of the AI article above author Larry Kilham was kind enough to provide me with an advance copy of his AI related book and as a summer project I have been working my way through it. In this, his latest very readable and entertaining book, Winter of the Genomes, author Larry Kilham explores and explains almost all aspects of the current state of development of robots and artificial intelligence (AI) and poses some very important questions: Where will humans fit in? Can the human relationship with robots and AI machines ever really proceed beyond one of master and slave or will AI inflict some very different kind of changes in society? An excellent contribution to our AI debate and a very worthwhile read. I understand it is due for publication sometime in October 2014.
I now close my comments on this question. I hope they have prompted some thoughts among those working on the problem of "machine intelligence". Truly, it is one of humanity's "hard" problems, and I honor all of you working on it.