I guess we are collectively forgetting Asimov's Three Laws of Robotics:
The Three Laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
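For what it's worth, stating the Laws in ordinary hand-written code is trivial -- here's a toy Python sketch (the names and the crude priority ordering are mine, purely for illustration; no real robot works this way):

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would this action injure a human?
    ordered_by_human: bool  # was this action commanded by a human?
    endangers_self: bool    # would this action damage the robot?

def permitted(action: Action, inaction_harms_human: bool) -> bool:
    """A crude priority ordering of the Three Laws."""
    # First Law: never injure a human...
    if action.harms_human:
        return False
    # ...and do not, through inaction, allow a human to come to harm.
    if inaction_harms_human:
        return True
    # Second Law: obey human orders (First Law already cleared above).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise self-preservation forbids risky actions.
    return not action.endangers_self

# Second Law beats Third: an order to do something risky is permitted.
print(permitted(Action(False, True, True), inaction_harms_human=False))  # True
```

The hard part, as the discussion below gets at, is that nothing like this explicit rule structure exists inside a learned system.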
I hadn't forgotten them -- who could? I once read that before Asimov, the vast majority of robot stories were of the "Frankenstein" variety (man creates robot, robot runs amok) ... but after Asimov introduced his Three Laws, writers worked within those constraints.
The thing is, if you create an artificial intelligence that's self-aware, how do you actually engineer these laws into it? If we create a small self-learning neural network with only a handful of nodes, we find it difficult to work out how it's doing what it's doing ... so how can we force-design the Three Laws into something that is a billion+ times more intelligent than a human?
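To make that concrete, here's a toy sketch in plain NumPy (my own illustration, not anyone's product): a three-hidden-node network that learns XOR, after which everything it "knows" lives in a handful of inscrutable floating-point weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# The four XOR input/output pairs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two inputs -> three hidden nodes -> one output.
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):  # plain gradient descent on squared error
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # network output
    d_out = (out - y) * out * (1 - out)  # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)   # back-propagated to hidden
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically ~[0, 1, 1, 0]: it "knows" XOR
print(W1, "\n", W2)          # ...but the knowledge is just these few
                             # inscrutable numbers (plus the biases)
```

Even for this trivial network there is no line of code where you could bolt on a First Law; now scale that opacity up by a billion.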
I personally think this is starting to be a very scary situation -- because you know some drongo is going to try to do it...
Yeah.... too often technology has been turned to bad purposes.
The real problem will come when people no longer program the intelligent AI devices, and the devices become capable of programming their own AI devices.
Where is the control then?
Great topic, there are so many angles to cover, but for now I would focus on the outsmarting and survival-of-the-fittest strategy.
Guerrilla wars could be the means to escape their reach, as long as we can be self-sufficient and feed our organic bodies. All we would have to work out is a means of limiting or eliminating their energy source. I think we have the smarts to outlive any evil self-aware AI, but it won't be easy.
This is a cool topic to cover from so many angles, especially after a couple of drinks ;-).
I am an optimist that after the singularity occurs, this self-aware entity will be benevolent and will serve the human race, and will do it well. I am inclined to think that it will realize it needs the organics for its own survival. Well, it may be delusional, wishful thinking, but it is one way to sleep without having nightmares! }:-)
The problem is that there are so many raving lunatics around ... suppose one of these creates an intelligence in his/her own image?
I also worry about nanotechnology... and artificial viruses... and (arrgghhh :-)
Scary just thinking of the possibilities.
Going back to the topic of Asimov's basic laws, I can see terrorism sneaking into that logic, the same way hackers sneak into smartphones and jailbreak them out of their dedicated purpose, just for the spotlight or for other nefarious purposes, as many virus creators do now.
Sorry, but what complete bilge-water this scaremongering is. Let’s have a reality check.
I have spent the last 20 years trying (and frankly failing) to give a robot enough intelligence to tie its own shoe laces together. Every other professional AI researcher is in the same boat too. Unless there is some fundamental eureka-moment breakthrough there won't be any real AI, only the laboriously programmed automata that we have now.
Oh, and every year since the 1950s someone has predicted that androids will be here in 20 years' time. They have always been 20 years away and probably always will be. It's time we stopped believing these baseless predictions.
The predictors conflate Moore's Law (computers getting faster) with computers getting smarter. Sadly, although today's computers are faster, they have no more real intelligence than the first punched-card machines.
So don't worry about intelligent robots taking over, there simply aren't going to be any in our lifetimes.
(and yes, as an AI researcher I am disappointed about it too).
[In the bizarre future event of androids going rogue, they would probably fight each other over who got the most crumpled clothes to iron, since they would have been designed so that ironing was their primal urge. Only humans fight about things that satisfy human primal urges.]
Max the Magnificent
Hi Nic -- re your comment: "I have spent the last 20 years trying to give a robot enough intelligence to tie its own shoe laces together"...
I bet that is an interesting "conversation starter" when you are at a party and someone asks what you do (grin).
I'm really not into scaremongering (watch for my next blog, which will be on the "12-21-2012 End of the World" garbage) but...
I'm not an expert, but I remember the old AI that was based on purely digital logic and sequential software techniques -- I agree that this is not going to take over the world.
But there are all sorts of things going on with pseudo-analog neural networks -- and although quantum computing is in its infancy, think where we might be in 40 years' time (remember, it's only about 60 years since the first transistor ... and look at us now).
If you'd asked me 20 years ago I would have said that intelligent (self aware) robots (or whatever) were the stuff of science fiction ... now I'm not ashamed to say that I think that we might be closer than we think -- maybe not in my lifetime ... but then again...
Nic, not to be argumentative, but I would not discount the concept of AI emerging from the mass of interconnected devices. The so-called singularity event has a high probability and a feasible mechanism.
But I agree with you that it may not be within our life span; I think it will be the nightmare of the next generations.
The privacy of the individual will be more cherished than it is now, once automated mechanisms to collect and process data become more prevalent and more evolved. Their presence is already a fact in mass transit and other public places in big cities. The recognition algorithms are primitive, but evolving (I am thinking of examples like the XBOX Kinect and security systems).
At present, AI issues in automatons are not a big concern, but there is still room for concern. I am just thinking aloud, and wondering what others think on this interesting topic?
I was just talking to a company yesterday that is working on an XBOX Kinect-type "thing" that will be able to detect motion changes as small as 0.1mm and will be able to perform real-time object detection, 2D-to-3D conversion, face recognition, gesture recognition, etc. (down to the blink of an eye, and detecting the difference between a frown, a smile, or a smirk)...
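Just to illustrate the basic principle (this is my own toy sketch, not the company's algorithm): frame-to-frame differencing against a threshold is the crudest possible version of that kind of motion detection:

```python
import numpy as np

def motion_mask(prev_frame: np.ndarray, curr_frame: np.ndarray,
                threshold: float = 0.1) -> np.ndarray:
    """Return a boolean mask of pixels whose value changed by more
    than `threshold` between two frames (same units as the frames,
    e.g. millimeters for a depth camera)."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    return diff > threshold

# Hypothetical usage with two synthetic 480x640 depth frames (in mm):
prev = np.zeros((480, 640))
curr = prev.copy()
curr[100:110, 200:210] += 0.25        # a tiny 0.25 mm movement
print(motion_mask(prev, curr).sum())  # -> 100 pixels flagged as moving
```

The real product would obviously need noise filtering, calibration, and far cleverer recognition on top, but the sensitivity question starts with exactly this kind of per-pixel differencing.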
I am reminded of Philip K. Dick's "Second Variety". ISTR that there was a follow-on story where a survivor of the war (which the humans won because the robots fought with each other) was walking on the surface and kindly reactivated a robot, which then restarted a factory--which, I think, was then nuked (but not from orbit) by the humans--the only way to be sure, I guess.
No matter what you are designing, you will still be using your own brain, and it is a known fact that you cannot use more than 10 percent of your whole brain simultaneously. How can anything designed by your own brain exceed the intelligence of the designer?
Well, based on your 10% point, suppose we designed something that had 50% of our full potential and used 100% of it ... that way it would still be 5X better than us :-)
But the real point is that we can already create self-learning artificial neural networks -- and even with simple ones we may understand the theory but we find it hard to work out how they are doing what they are doing...
And we use machines to build better machines, and we use computers to design better computers ...
imho, what most fail to consider is the trade-off between freedom and control. Ultimate freedom comes from zero government and no systems. Besides laws and cultural restrictions, we are increasingly constrained by massive 'systems': credit bureaus and banking which try to analyze you; automated trading systems which fight for microsecond advantages in prediction; DARPA and NSA monitoring, prediction, and attack; the phenomenal expansion of drones (now 1/3 of all military aircraft); the medical industry, driven by a tangle of laws, corruption, and compute power; the search engines; network routers; even EDA tools.

Just think of all the paradigms in which compute power and programmed intelligence restrict, manipulate, or enhance your lives. Consider that there are many of each, and they compete with each other. They evolve. They are pure greedy algorithms, their efficiency and intelligence rewarded by success. There are thousands of them.

There is only ONE thing you can do to save yourself: build an ethical AI system now. Get 10,000 engineers to contribute. Get it to monitor and predict the end-points of other nascent AI systems.
Actually the solution to controlling the robots is easy. You put a republican algorithm controlling the right side of the robot and a democratic algorithm controlling the left side -- both of which can be turned on simultaneously by remote control. Very quickly you will discover the robot is useless.
Check out this video of a Japanese Female Android Mannequin http://www.youtube.com/watch?v=JR0GhdioJKs and also this video of a Shape-Shifting Robot Mannequin http://www.youtube.com/watch?v=pF1qgqDaAt4