
Deep Learning: Achilles Heel in Robo-Car Tests

10/3/2016 09:00 AM EDT
DanC550   10/6/2016 2:12:00 AM
Up to a point, cars can learn the way machine learning works now, like small kids learning alone by playing. Beyond that point, kids are taught. Cars can also be taught ; ). Why not!?

Brittleness, machine learning's GOFAI problem
Mapou   10/4/2016 12:54:38 AM
Great article. It goes to the heart of the problem, which is brittleness. AI's biggest success, deep learning, is GOFAI redux. A deep neural network is actually an old-fashioned rule-based expert system. AI programmers just found a way (gradient descent, fast computers and lots of labeled or pre-categorized data) to create the rules automatically. The rules are of the form "if A then B," where A is a pattern and B is a label or symbol representing a category.

The biggest problem with expert systems is that they are brittle. Presented with a situation for which there is no rule, they fail catastrophically. This is what happened back in May to one of Tesla's cars while on autopilot. The neural network failed to recognize a situation and caused a fatal accident. This is not to say that deep neural nets are bad per se. They are excellent in controlled environments, such as the factory floor, where all possible conditions are known in advance and humans are kept at a safe distance. But letting them loose in the real world is asking for trouble.
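The "automatically created rules" and the brittleness point above can be sketched in a few lines. This is a toy example of my own (a one-layer logistic classifier, not any production system): gradient descent ends up encoding pattern-to-label rules, and the model still answers confidently on an input unlike anything it was trained on, rather than saying "no rule applies."

```python
# Toy sketch (illustrative only): a one-layer "neural net" trained by
# gradient descent behaves like an automatically learned rule base:
# if pattern A, then label B. It also shows the brittleness described
# above: a wildly out-of-distribution input still gets a confident
# label, never an "I don't know".
import math

# Toy training data: pattern -> label (two small 2-D clusters)
data = [((1.0, 1.0), 1), ((1.2, 0.9), 1), ((0.9, 1.1), 1),
        ((-1.0, -1.0), 0), ((-1.1, -0.8), 0), ((-0.9, -1.2), 0)]

w = [0.0, 0.0]
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_prob(x):
    """Probability that pattern x gets label 1."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# Gradient descent: this loop is what "writes the rules" automatically.
for _ in range(200):
    for x, y in data:
        err = predict_prob(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# The learned "rules": if pattern near cluster A, then label 1, etc.
print(round(predict_prob((1.0, 1.0))))    # 1
print(round(predict_prob((-1.0, -1.0))))  # 0

# Brittleness: a pattern far from anything seen in training still
# gets a near-certain answer instead of "no rule applies".
print(predict_prob((50.0, 50.0)))  # essentially 1.0, maximal confidence
```

The point of the last line is not that the answer is "wrong" in some absolute sense, but that the model has no mechanism for flagging that the input lies outside everything it was trained on.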

Guaranteeing the safety of neural nets in self-driving cars is a pipe dream. The human brain can instantly see a new pattern or comprehend a new situation it has never seen before. Neural nets are blind to such things. I'm afraid that truly autonomous vehicles that can safely drive around in our cities under existing traffic conditions are impossible to achieve with the current state of the art. We will need to emulate the capabilities of the human brain. Unfortunately, nobody in the mainstream AI community knows how to do this.

realjjj   10/3/2016 2:24:49 PM
On the software-update side, one has to think of it as an entire fleet, not individual vehicles. Fleet learning in near real time must not be hindered.

Re: Why doesn't my car behave like it's on tracks?
realjjj   10/3/2016 2:19:22 PM
Collecting data won't take that long with the right strategy.

Tesla's fleet just reached 3 billion miles, but their fleet is relatively small. Given their production plans, they could exit 2018 with their fleet traveling 1 billion miles per month, and that with a fleet of some 800k vehicles. By, let's say, 2025, a CaaS with a fleet of 10 million vehicles could do 3 billion miles in a day.
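The per-vehicle mileage these projections imply is easy to check (a quick sketch; the fleet sizes and totals are the commenter's assumptions, not measured data):

```python
# Arithmetic check of the fleet projections above (all figures are
# the commenter's assumptions, not reported data).
fleet_2018 = 800_000               # projected vehicles exiting 2018
miles_per_month_2018 = 1e9
per_vehicle_per_day_2018 = miles_per_month_2018 / fleet_2018 / 30
print(f"{per_vehicle_per_day_2018:.0f} miles/vehicle/day")  # 42

fleet_2025 = 10_000_000            # projected CaaS fleet
miles_per_day_2025 = 3e9
per_vehicle_per_day_2025 = miles_per_day_2025 / fleet_2025
print(f"{per_vehicle_per_day_2025:.0f} miles/vehicle/day")  # 300
```

So the 2025 figure assumes each vehicle drives around 300 miles a day, which is robotaxi-style utilization rather than private-car usage.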

Adding a few short vids where the computer saves the day would help, too.




Why doesn't my car behave like it's on tracks?
sixscrews   10/3/2016 12:32:26 PM
A bit of history is always useful when opening a debate as wide ranging as this one.

When automobiles were first being developed there were four main modes of transportation available:
  1. Shanks' mare (walking).
  2. Real mare/gelding/stallion pulling wheeled contrivance or ridden by human.
  3. Boat/ship/something that floated and could be moved from one place to another by human or mechanical means and steered by human.
  4. Railroad - assumed that the consist (train) stayed on the rails and the rails were clear of obstructions (mis-set switch, other trains, train robbers, etc.).  This was rapidly shown to be an incorrect assumption, but that's another debate.

So our highways were designed in relation to (4) above - the vehicles would stay on the road and the road would be free of obstructions.  Reference the Hutchinson Parkway in NY - designed for 'modern' 45 mph traffic and now traversed by vehicles moving at 60+ mph - not a nice place to be if you can't stay on the road: curbs, narrow exit/entrance ramps, stalled traffic, etc.

Now we are dealing with a situation that mixes (4) with (1), (2) and (3).  The recent tragedy in NJ and a long history of RR crashes show that the assumptions in (4) are faulty.

IMHO it's foolish to mix autonomous vehicles with human-operated vehicles given all the issues raised by the US DOT and resulting comments.

Humans are creative; computers do what you tell them to do, not what you want them to do.  Human vs. computer will always favor the human in a situation the computer was not designed to handle - of course, the human may make the wrong decision, with possibly catastrophic consequences, but the computer will always make the wrong decision in a situation it cannot recognize or was not designed for.

Having the computer 'give up' and surrender control to the human (who may not be prepared to deal with whatever situation they are faced with) is a rather cowardly way out but, at present, the only one available (I guess the computer could halt the car or run it off the road, but that's not a safe option in most cases).  Perhaps we can classify this as a 'wrong' decision, but at least it takes the computer out of the decision loop.
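The "give up and hand over" fallback described above amounts to very simple policy logic. A sketch (purely illustrative; the function name, confidence threshold and action labels are my own assumptions, not any vendor's API):

```python
# Hypothetical sketch of the fallback policy described above: when the
# system's confidence in its own assessment drops below a threshold,
# the only current options are handing off to the human or executing
# a minimal-risk stop. Threshold and names are illustrative only.
def fallback_action(confidence, driver_ready, threshold=0.9):
    if confidence >= threshold:
        return "continue_autonomous"
    if driver_ready:
        return "hand_off_to_human"
    # Halting or pulling over - often not a safe option mid-traffic.
    return "minimal_risk_stop"

print(fallback_action(0.95, driver_ready=False))  # continue_autonomous
print(fallback_action(0.50, driver_ready=True))   # hand_off_to_human
print(fallback_action(0.50, driver_ready=False))  # minimal_risk_stop
```

The hard part, of course, is everything this sketch hides: estimating `confidence` honestly, and knowing whether the driver is actually ready in the second or two available.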

I would rather see autonomous vehicles in their own lanes on the highways.  Convert the express lanes to autonomous-vehicle lanes for a few years, gather data and see what we have.  This is not a 'Pittsburgh left' test, but it would give us a lot of vehicle-mile data under controlled conditions.

But this isn't going to happen - maybe I should just stay on my Wisconsin farm where the biggest moving vehicle risk is my Amish neighbor's buggies and work horses.  

But I'm sure one of my non-Amish neighbors is going to buy an autonomous tractor one of these days - and it will come straight out of my woods after taking a wrong turn on his south 40...oops.

