The Terminator series has forever seared into us the idea of AIs turning into Skynet, the military AI that decided people were a threat to its existence and had to be eliminated.
James P. Hogan's novel The Two Faces of Tomorrow (whose central experiment is the Janus project) brings up a similar problem, but basically one of unintended consequences. The story starts with surveyors on the moon asking the computer to estimate how long it would take to clear a mountain range for construction of a new linear-accelerator catapult. The computer answers, "15 minutes." Laughing, the surveyors tell the computer to execute the plan and are almost killed, because the computer directed the existing catapult to start dumping its loads onto the mountain range to blast it away. The rest of the book is devoted to trying to teach the computers the consequences of their actions.
These are fictional stories, but we may well see instances where AIs reach unexpected solutions to the problems they are presented with, the same way your child may produce a solution different from what you expect.
It's not going to be as easy as installing Asimov's Three Laws of Robotics. Besides, he spent most of his stories illustrating how inadequate they were. Jack Williamson's "With Folded Hands" demonstrated that those rules, taken to an extreme, were not good either.
I don't want to sound down on AIs; I really like the idea. But we're fooling ourselves if we think they are going to turn out to be exactly what we expect them to be.
In 1984, Scientific American published an article on a game called "Core War". The premise was that two or more programs, each of which was allowed to execute one instruction per turn, would try to eliminate all of the competitor programs. The game was won when only one program remained functional (or was the last program with instructions that could still be executed). These programs ran in a "sandbox" of allocated execution space, using an interpreted programming language. One of the first strategies developed was the use of replicating code.
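The mechanics described above can be sketched in a few lines. This is a toy simplification of my own, not real Redcode: a circular core of instructions, one instruction executed per turn, and the classic single-instruction "imp" that wins by endlessly copying itself one cell ahead.

```python
# Toy Core War-style sandbox (my own simplified opcode set, not Redcode).
CORE_SIZE = 20

core = [("DAT",) for _ in range(CORE_SIZE)]  # DAT = empty cell; executing it is fatal

# The classic "imp": a single instruction that copies itself one cell
# ahead, so it crawls through the core, overwriting everything in its path.
core[0] = ("MOV", 0, 1)  # copy the cell at offset 0 (itself) to offset 1

def step(pc):
    """Execute one instruction; return the next pc, or None if the process died."""
    instr = core[pc % CORE_SIZE]
    if instr[0] == "DAT":
        return None                      # executing DAT kills the process
    if instr[0] == "MOV":
        _, src, dst = instr
        core[(pc + dst) % CORE_SIZE] = core[(pc + src) % CORE_SIZE]
    return (pc + 1) % CORE_SIZE

pc = 0
for _ in range(5):
    pc = step(pc)

# After five turns the imp occupies six cells: its original plus five copies.
print(sum(1 for c in core if c[0] == "MOV"))  # prints 6
```

A real match interleaves two such programs turn by turn; the replicating strategy works because every cell the imp passes through becomes another live copy of it.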
It was not long after this that the first virus and worm programs appeared. While written by humans, these pieces of code were designed to replicate themselves and spread, much like early life. We have seen them evolve from purely destructive functions to stealthy infections designed to compromise data and security without the user's awareness. Much of today's malware collects information and passes it along to a server over the internet.
Programming itself has evolved a long way from the rudimentary single-task functions of early computing. We now have distributed programs which harness the power of many computers connected through the internet to accomplish truly herculean tasks. The programs which form the basis of our interface to computers now run to hundreds of thousands or millions of lines of code, and many of the applications we use are the same.
All of this mimics the evolution of life, condensed from billions of years into only a few decades. Life evolved in large measure through the errors encoded into its programming. Will real computer awareness result from a similar mechanism? For now at least, we still control the "food" and the means of replication for machines.
I focused on AI in graduate school, but for the last 25+ years I've been doing processor design. Admittedly, I'm impressed by our ability to keep doubling the number of devices in the same area about every two years, but I think it's total poppycock (you young engineers go look that up in a dictionary, um, or google it :-) to say we're going to have some kind of sentient AI that can best us.
Every time I see a press release out of the likes of "The Singularity" conference or Ray Kurzweil, etc., my eyeballs roll back into my head.
My first thought is: oh boy, they must need additional funding or something from DARPA, and so here's the hype machine going again!
To me it's simple: look at the human brain. It has around 100 billion neurons, and most of those neurons have thousands of connections to neighboring neurons.
What's the best we have in a computer chip today, 2.5 billion transistors? Most of those are coupled in groups of six to form a bit cell. In logic gates, the typical fanout (number of connections) is around 2 or 3, not the thousands of connections an analog neuron has.
While I oversimplify, I believe the comparison is fair in showing how very, very far we have left to go before we start worrying about some cognizant machine tweaking us.
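The back-of-envelope comparison can be made explicit. All of the figures below are the rough numbers quoted above (100 billion neurons with thousands of synapses each, versus 2.5 billion transistors with a fanout of about 3), so this is an order-of-magnitude sketch, not a measurement:

```python
# Rough connection-count comparison using the ballpark figures above.
neurons = 100e9             # ~100 billion neurons in a human brain
synapses_per_neuron = 1e3   # "thousands" of connections each (low end)
brain_connections = neurons * synapses_per_neuron   # ~1e14

transistors = 2.5e9         # a large chip of the era
fanout = 3                  # typical logic-gate fanout
chip_connections = transistors * fanout             # ~7.5e9

print(brain_connections / chip_connections)  # roughly a 13,000x gap
```

Even with these generous simplifications (treating every transistor as if it were a computing element), the brain comes out four orders of magnitude ahead on raw connectivity alone.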
Don't get me wrong: I hope we one day get to that point. I for one have a positive outlook on our new robot overlords keeping us in check!
Living things self-replicate with variation and this over time gives rise to evolution. Machines, on the other hand, are a result of cooperative assembly - it takes many machines to make a machine. How can the whole system of machines evolve together rather than selfishly compete? Machines only exist because humans decide what machines to make and set everything up.
Also, why would a machine want to make more like itself? We are wired that way because we are the result of millions of years of evolution that favoured creatures that were good at reproducing. But machines don't have to be wired any particular way at all. Every time you imagine a rogue machine that wants to take over the world, just imagine another machine that wants to stop it. Machines can want whatever we tell them to want.
Chrisw270 & Poppycock: My article was intended to raise the question rather than indicate our future. I think the key to the possibility of it happening in the extreme is the need for what I have called Synergistic Evolution. If you can find an example in natural evolution where one species has aided the evolution of another, then it might be possible for it to happen again. As you will note from my Fig. 2, I suggested using the variable Bovine Excrement on a log scale from left to right to indicate the possibility of particular events occurring. I hope that conveyed my sentiment that sophisticated tools, body parts, and weapons are the most likely near-term outcome.
You claim that machines only do what we want them to do. I think you must accept that it is possible, given an upset in hardware and complex software, for equipment not to do what is intended. Let me give you an example from my background in rad hardening. I will use the much-simplified example of launch-and-leave strategic weapons, where the target information is held in memory. Assume the memory is not radiation hard; then incident radiation from whatever source could change the memory and change the target. You can extend the example to drones with people-recognition equipment if you want. Yes, I know about triple redundancy and error detection and correction (although of late, in dealing with claims about the capability of FEC and memory, some personal doubts have crept in).
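The triple-redundancy mitigation mentioned above can be illustrated in a few lines. This is a generic sketch of triple modular redundancy (bitwise majority voting), not any particular flight system: it shows why a single radiation-induced bit flip is masked, while the same bit upset in two of the three copies defeats the vote.

```python
# Triple modular redundancy (TMR) sketch: store three copies of a word
# and read back the bitwise majority.
def vote(a: int, b: int, c: int) -> int:
    """Bitwise two-out-of-three majority of three redundant copies."""
    return (a & b) | (a & c) | (b & c)

target = 0b1010_1100                 # the stored "target information"
copies = [target, target, target]    # three redundant memory copies

copies[1] ^= 0b0000_0100             # single-event upset flips one bit in one copy
assert vote(*copies) == target       # masked: majority still reads correctly

copies[2] ^= 0b0000_0100             # the same bit flips in a second copy
assert vote(*copies) != target       # two-of-three corruption defeats TMR
```

This is exactly the caveat in the comment: redundancy raises the bar but does not make an upset impossible, since correlated or accumulated flips can still out-vote the good copy.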
I use the radiation-effects example because, along with other environmental effects, radiation may have played a part in modifying the human genetic structure to produce small species variants that had a better chance of surviving, and so helped our species on its way.
I think the important thing to keep in mind is that until now machines have been very limited in what they could do. Autonomous machines, however, are something we must address, because they will become a reality in the near future; we already have the technology to create autonomous attack drones (see http://news.sky.com/story/1259885/ban-killer-robots-before-they-even-exist).
What is clear is that ultimately machines will design machines. If computers can be designed to render themselves obsolete by designing their successors, it will lead to recursive self-improvement: the computer will be able to adjust its own capabilities without human intervention, resulting in ongoing improvements. In effect, each improvement could be more significant than the last, leading to a rate of evolution that could beat anything possible in nature in terms of speed. It is this theory, that each step yields exponentially more improvement than the previous one, that is the basis of the theory of the singularity.
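The compounding claim can be made concrete with a toy model (my own illustration, with made-up gain figures): if each redesign improves not just the machine but also its ability to redesign, growth is faster than exponential.

```python
# Toy model of recursive self-improvement: the improvement factor itself
# improves with each generation. The 10% and 5% figures are arbitrary
# illustrations, not predictions.
capability = 1.0
gain = 1.1              # the first self-redesign yields a 10% improvement
history = []
for generation in range(10):
    capability *= gain
    gain *= 1.05        # each redesign also improves the ability to redesign
    history.append(capability)

# The ratio between successive generations keeps widening, i.e. growth
# is super-exponential rather than a fixed compound rate.
assert history[-1] / history[-2] > history[1] / history[0]
```

The counter-arguments in the replies below amount to attacking the model's premise: in the real world the gain factor is not free, because each improvement step requires experimentation against physical limits.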
I don't believe in this recursive self-improvement idea. All big improvements in technology are the result of a diverse mix of technologies coming together and involve extensive experimentation in the real world. It won't just happen inside a computer. Also the real world sets hard limits on how far you can go.
Computers can store huge amounts of data and manipulate it very quickly. That's a nice trick but useful applications are not limitless. You can label certain types of computation as intelligence, if you like, but that doesn't change anything. Evolution shows that intelligence is not universally favoured, it just enhances survival chances in certain scenarios.
Evolution can be programmed into a computer... However, that evolution would be limited by the explicit or implicit parameters of the original program, much as mammals are limited to a bone structure with four limbs... (not completely true, but I hope you get the point).
This basically means a 28nm machine "species" wouldn't be able to "evolve" into a 20nm machine without a priori knowledge of physics and material properties (not likely to be achieved anytime soon), not without the help of human researchers.
What would evolution be useful for? Software development and digital design? Possibly. An evolutionary algorithm could be used to develop a better evolutionary algorithm, or a better CPU so that the evolutionary algorithm can run faster. But that wouldn't be a true AI, as it would still be limited by the parameters of the initial program.
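A minimal evolutionary algorithm makes the limitation visible. In the sketch below (a generic genetic algorithm of my own, not from the comment), the representation, the mutation operator, and the fitness function are all fixed by the initial program, so no amount of "evolution" can ever produce anything outside the space those parameters define:

```python
import random

# Minimal evolutionary algorithm: evolve a fixed-length bitstring toward
# an all-ones target. The search space is baked in by LENGTH, mutate(),
# and fitness() -- the point made above about initial-program limits.
random.seed(0)
LENGTH, POP, GENERATIONS = 16, 20, 60
target = [1] * LENGTH

def fitness(genome):
    return sum(g == t for g, t in zip(genome, target))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]                       # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in parents]

best = max(population, key=fitness)
print(fitness(best))  # climbs toward LENGTH, but can never exceed it
```

However well this search performs, its best possible result is a 16-bit string that the fitness function already implied; evolving a genuinely new representation would require changing the program itself.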
wc0 mentioned a great point: computer viruses replicate and spread "like life". However, it wouldn't really be "like life" until they are programmed to self-mutate to adapt to various environments, and possibly enter into symbiosis with other forms of viruses in order to keep self-replicating. This aspect of mutation is, to my knowledge, still absent from computer programs, and until programs start to self-mutate (and possibly "die" in order to free resources for later generations), I won't be able to consider a program "alive", let alone intelligent.
Would it be good to write such a program? That depends... could we force the Three Laws of Robotics into such a program?
The future is going to be very interesting, with machines trying to become more like humans and humans trying to become more like machines. It's just emotions and intelligence that will separate them. I guess many sci-fi movies are going to become real.
I think machines are not conscious enough to be judged for anything; humans are. The people who make these things have to evaluate the potential conflicts of interest in giving a system more functionality, which can lead to dangerous side effects, and to study in depth what can be done without harming others, such as anti-malware operating systems, safe-by-construction languages, and ethical safeguards. This has been done in nuclear research with mixed results, so it should be done with good judgment; in the hands of psychopaths, though, this is not looking good, for either humans or machines.