
Expert Panel Debunks AI Hype

Neural networks seen as huge but limited
6/26/2017 11:51 AM EDT
6 comments
The FANN
Katie OQ   6/28/2017 3:44:16 PM
The short history of artificial neural networks has made many a false start. Learning and inference, and all of a human brain's magnificent attributes, including Curiosity, Insight, Inventiveness and many more, are elusive and may never – or at least not in our lifetimes – be emulated to equal what can be done with an immensely rich parallel organic net of neuronal elements.

But to conclude that a "real" – that is, a Fully-Analog Neural Network, or FANN – is unable to achieve such capabilities seems a little short-sighted. Consider what a honey bee can do with its estimated 1 million neurons: http://jonlieffmd.com/blog/the-remarkable-bee-brain-2

This is a humbling reminder that "real intelligence" is clearly the product of a neural assembly of even moderate size – and at a time when contemporary IC-level CPUs routinely use 5 billion transistors, with 10 billion at the research stage, the honey bee's brain runs on less than a microwatt and is packed into less than a 0.5 mm cube.

The problems facing the realization of FANNs of even modest capacity are doubtless immense. But they seem to me to be soluble, and sooner solved than setting several billion dumb, dead and unconscious transistors loose on a wild goose chase. Even bird-brains can discriminate between left and right: https://en.wikipedia.org/wiki/Bird_intelligence#Brain_anatomy

Barrie Gilbert

Re: AI or AT
rstjobs   6/27/2017 11:00:55 PM
Totally agree. Machine learning is basically machine training: we still need millions of data points to train a machine to recognize something.

This needs to change.

AI or AT
resistion   6/27/2017 6:50:10 PM
What's called AI really should be named automated training, or automated teaching (AT). The method for recognizing a pattern and recording it is automated. Of course, any automated recognition can be defeated by some exception, just as in regular learning. Indeed, any conceivable pattern has exceptions; the exceptions, by definition, do not follow the pattern.

AI and humans
Jayna Sheats   6/27/2017 1:51:53 PM
Alan Turing's most famous contribution (the "Turing Test") is equally applicable to the equivalence (or superiority) of AI to humans. When a machine can do everything that a human can do, humans become superfluous, since the machine can easily do more (compute with large numbers, for example).
So when computing machines can have babies and provide for them, among other human capabilities, our obsolescence will be within the horizon. And there is no need for us to worry about that, because said machines will be far more sensitive and understanding about how to take care of us than we now are about each other.
Until then I am not worried.

To err is human,...
Olaf Barheine   6/27/2017 4:20:30 AM
...and in the future it will also be machine-made.

The real benefit of AI research
perl_geek   6/26/2017 2:05:59 PM
Is that it teaches us what was wrong in what we thought we knew.

By concentrating attention on particular aspects of thought or knowledge, it forces us to examine our assumptions, and reveals the discrepancy between them and reality. Things which seemed to be well understood turn out to be deeply obscure, while other areas prove to have less than meets the eye. (In some areas, it turns out to be easier to model a PhD than a two-year-old, possibly because the academic's behaviour is still at a conscious level.)

There is probably a lot of benefit, too, in the interaction between the studies of psychology, biological systems and the attempts to emulate them with computers and robots. Each field informs the other about what works and what doesn't.

The extraordinary variation in rates of progress across the field is another interesting factor. ELIZA (1966) will soon be pensionable, yet chatbots generally can't be trusted not to morph into Hitler Youth. Machine translation was supposed to follow shortly after ELIZA, yet the results are still frequently risible.
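For perspective on how little machinery that first generation of chatbots needed, here is a minimal sketch of ELIZA's pattern-substitution approach in Python (the rules below are illustrative stand-ins, not Weizenbaum's original DOCTOR script):

```python
import re

# A few ELIZA-style rules: (pattern, response template).
# These rules are invented for illustration only.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return the first matching rule's response, else a stock deflection."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I need a holiday"))   # Why do you need a holiday?
print(respond("Nothing works"))      # Please go on.
```

The brittleness is visible immediately: there is no state, no meaning, only surface pattern matching and canned deflection, which is why six decades of progress beyond this baseline has been so uneven.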

Sometimes the question may be "Are we training the systems, or are they training us?" If I can find information with online searches, how much is due to Google's databases and algorithms, and how much to my skill at constructing queries?

As a pure speculation, I'll suggest machine intelligence will emerge from the sort of layered architecture described by Herbert Simon in "The Sciences of The Artificial", combining agents in the manner of Minsky's "Society of Mind". They won't be able to introspect or explain their processes any better than humans, but they're going to be much easier to instrument.
