Machines can't think. Most digital computers can encode and perform rudimentary mathematical operations on a small subset of the rational numbers, and they can encode and store character strings provided by people (like these here), and aggregate, count, and arrange them in various ways, following algorithms provided by a person or persons. But they don't know anything, and it doesn't mean anything (to them). Meaning is a relation between two or more persons and some object, usually (but not always) some kind of a semiotic entity.
Symbols (e.g., words) are far too simple to mean. Thought is also too simple to mean. When you or I think, that does not mean anything, either, because there is no relation between two people, a necessary condition.
If I "interact" (a word I believe inappropriate) with a computer running, say, a neural-net symbol manipulator, I am actually interacting with the algorithm writer (another person, or persons).
Cosmologists tell us that the star we call the Sun appeared 4.6 billion years ago, and the Earth around 100 million years later. Life appeared maybe a billion years after that.
Humans like us have been around for 150,000 years or so, and there hasn't been much important evolutionary change in us since then (blue eyes?). Evolutionary change does not occur in a time dimension that is the same as that in which social discourse occurs. Lots of things change (we learned to read and write about 4-5 thousand years ago), but that has nothing to do with evolution.
I have enormous respect for people working in the field of "Artificial and Machine Intelligence", but you guys sure do talk funny. Confusing.
PS: John Lilly answered Wittgenstein at Spencer-Brown's Esalen seminar in 1973:
"Whereof one cannot speak [as yet], thereof one must be silent." Wittgenstein
"The province of the mind has no limits; its own contained beliefs set limits that can be transcended by suitable meta-beliefs (like this one)." Lilly
Sofianitz: WOW, First Order Predicate Logic. You may well be right. However, would you say the division and replication of strands of DNA matches your requirements as a predicated-logic activity? The intention of whoever "designed" it (no, we don't do God) was to effect perfect replication. Yet somehow this process can introduce effects and changes which allowed the evolution that got you from some primitive cells to where you are today, writing in EETimes. I think you must allow for the fact that when the level of complexity is high, unintended and un-predicated changes can be introduced.
I think the point I tried to make in my, as you say, "absurd" diagrams was the absolute necessity of a period of synergistic evolution for the machine branch to exist. I think my question, fact or fantasy, was clear.
I think machines are not conscious enough to be judged for anything; humans are. The people who make these things have to evaluate their potential conflicts of interest regarding adding functionality to a system that can lead to dangerous side effects, or study in depth what can be done without harming others, such as anti-malware operating systems, safe-by-construction languages, and ethical matters. Anyway, this has been done in nuclear research with mild results, so it should be done with good judgment. But in the hands of psychopaths this is not looking good, for either humans or machines.
The future is going to be very interesting, with machines trying to become more like humans and humans trying to become more like machines. It's just the emotions and intelligence that will separate them. I guess many sci-fi movies are going to become real.
Evolution can be programmed into a computer... However, that evolution would be limited by the explicit or implicit parameters of the original program. Much like mammals are limited to a bone structure with four limbs (arms/legs)... (not completely true, but I hope you get the point)
This basically means a 28nm machine "species" wouldn't be able to "evolve" into a 20nm machine without a priori knowledge of physics & material properties (not likely to happen anytime soon), not without the help of human researchers.
What would evolution be useful for? Software development & digital design? Possibly. An evolutionary algorithm could be used to develop a better evolutionary algorithm, or a better CPU so that the evolutionary algorithm can run faster. But that wouldn't be true AI, as it would still be limited by the parameters of the initial program.
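The "limited by the parameters of the initial program" point can be sketched with a toy evolutionary algorithm. Everything here (the target, the fitness function, the genome length) is hypothetical and chosen for illustration; the point is that the population can only ever explore the search space those initial parameters define.

```python
import random

# Toy evolutionary algorithm: evolve a fixed-length bit string toward a target.
# The "species" can never leave the space the program defines (10-bit strings
# scored by this fitness function) -- which is the limitation described above.

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # arbitrary stand-in for "fitness"

def fitness(genome):
    """Count positions matching the target; higher is fitter."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each bit with probability `rate`."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=20, generations=100, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break  # a perfect match was found
        # Keep the fitter half, refill with mutated copies of the survivors.
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

No matter how long this runs, it can only ever produce a 10-bit string scored against the built-in target; "evolving" anything outside that space would require changing the program itself.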
wc0 mentioned a great point: computer viruses replicate and spread "like life". However, it wouldn't be "like life" until they start to be programmed to self-mutate to adapt to various environments, and possibly enter into symbiosis with other forms of viruses in order to keep self-replicating. This aspect of mutation is, to my knowledge, still absent from computer programs, and until programs start to self-mutate (& possibly "die" in order to free resources for later generations), I won't be able to consider a program "alive". Let alone intelligent.
Would it be good to write such a program? It depends... could we force Asimov's Three Laws of Robotics into such a program?
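The mutate-and-die cycle described above can be sketched as a toy artificial-life simulation. This is purely illustrative (the pool size, genome length, and error rate are made up); it is not a claim about how real self-replicating programs work.

```python
import random

# Toy "digital organisms": byte-string genomes replicate with copying errors,
# and the oldest die when a fixed resource pool fills up, freeing room for
# later generations -- the mutation-and-death cycle described above.

POOL_LIMIT = 50  # fixed "memory" available to the whole population

def replicate(genome, error_rate=0.05):
    """Copy a genome, corrupting each byte with probability `error_rate`."""
    return bytes(random.randrange(256) if random.random() < error_rate else b
                 for b in genome)

def run(generations=200, seed=0):
    random.seed(seed)
    pool = [bytes(8)]  # a single ancestor of eight zero bytes
    for _ in range(generations):
        pool.append(replicate(random.choice(pool)))  # imperfect self-copy
        if len(pool) > POOL_LIMIT:
            pool.pop(0)  # the oldest organism "dies" to free resources
    return pool

pool = run()
print(len(set(pool)))  # number of distinct genome variants produced
```

Even this toy shows the two ingredients the comment asks for, mutation on copy and death under resource pressure, while still being nothing more than a program doing what it was told.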
I don't believe in this recursive self-improvement idea. All big improvements in technology are the result of a diverse mix of technologies coming together, and involve extensive experimentation in the real world. It won't just happen inside a computer. Also, the real world sets hard limits on how far you can go.
Computers can store huge amounts of data and manipulate it very quickly. That's a nice trick but useful applications are not limitless. You can label certain types of computation as intelligence, if you like, but that doesn't change anything. Evolution shows that intelligence is not universally favoured, it just enhances survival chances in certain scenarios.
I think the important thing to keep in mind is that until now machines were very limited in what they could do, but autonomous machines are something we must address, because they will become a reality in the near future. We already have the technology to create autonomous attack drones; see http://news.sky.com/story/1259885/ban-killer-robots-before-they-even-exist
What is clear is that ultimately machines will design machines. If computers can be designed to render themselves obsolete by designing their successors, it will lead to recursive self-improvement: in other words, the computer will be able to make adjustments to its own capabilities without human intervention, resulting in ongoing improvements. In effect, each improvement could be more significant than the last, leading to a rate of evolution that could beat anything possible in nature in terms of speed. It is this theory, that each step will yield exponentially more improvement than the previous one, that is the basis of the theory of the singularity.