Experts Weigh in on Mobileye's AV Safety Model

10/26/2017 00:01 AM EDT

Greg504
User Rank
Author
AV Safety
Greg504   11/3/2017 10:52:44 AM
Following the rules and not being at fault is not sufficient. Many states apply the last-clear-chance doctrine, under which a driver who had a last clear chance to avoid an accident can still be held liable. Furthermore, tort law asks what a reasonable person would have done, in order to second-guess the driver. We have already seen a case where a car turned in front of an automated vehicle and they crashed. The driver who turned in front of the automated vehicle was found at fault. However, the automated vehicle was driving the speed limit alongside another lane of traffic that was stopped. Reasonable people would generally drive a bit slower in that situation, which might not prevent the accident but could lessen its severity.
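
That kind of reasonableness test is not hard to express as a rule. As a minimal sketch, with every name and threshold below invented for illustration (none of it comes from the Mobileye paper), a planner could cap its speed whenever an adjacent lane is effectively stopped:

def defensive_speed_cap(speed_limit_mph, adjacent_lane_speeds_mph,
                        stopped_threshold_mph=5.0, reduction_factor=0.7):
    """Back off from the posted limit when any adjacent lane is
    effectively stopped. All numbers are illustrative assumptions."""
    if any(v <= stopped_threshold_mph for v in adjacent_lane_speeds_mph):
        # A reasonable driver slows next to stalled traffic, trading a
        # little travel time for reduced crash risk and severity.
        return speed_limit_mph * reduction_factor
    return speed_limit_mph

# Example: 45 mph limit, the lane to the right at a standstill.
print(defensive_speed_cap(45.0, [0.0, 44.0]))  # -> 31.5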

It would not surprise me to see tort lawyers go after manufacturers if the above situation happens often. Also, the public will not accept buying a car that behaves this way, since they would feel the car endangers them. The requirement will be that it handle situations better than a human driver would. In the above situation, most reasonable humans would have handled it better than the software did.

In the end, when it comes to consumer acceptance, it does not matter that you bought a car that is never at fault. Your car not being at fault does not stop you from being dead, and if a human handling the situation would reasonably have kept you alive, then society will find automated driving to be unacceptable.

bobinorlando
User Rank
Rookie
Mobileye paper assumptions outdated
bobinorlando   10/31/2017 5:20:23 PM
It's great to see papers being written and more people thinking through the issues of self-driving cars and all that they imply. However, I can't help but notice that many of the fundamental assumptions in the Mobileye paper appear to be outdated. Take the concept of fault or blame: the insurance industry has largely moved past fault to no-fault policies. Or the concept of accident: having just completed four hours of pain and suffering through a driver's education course to avoid getting a few points on my license for a simple speeding ticket, I can tell you that the driver safety industry has disavowed the concept of accident entirely and instead uses the term crash. They use crash because accident implies something unexpected occurred, whereas most "accidents" turn out to have been bound to happen due to driver impairment caused by alcohol, drugs, or distraction.

And that raises the other fundamental assumption that appears to be outdated: the idea that you're engineering a vehicle. You're not. You're actually engineering a Driver. And therein lies all the difference. Is an AV capable of only being driven by itself, or can it be driven by a human? If it can also be driven by a human, then the vehicle merely needs to be drivable; in other words, its capabilities and design standards should be like those of other vehicles. When it comes to engineering the Driver, the driving capability should start by complying with the traffic laws, which are quite detailed about how one should drive and about best practices for avoiding, as the traffic safety people say, crashes.

So can the AV pass a driver's license exam, both the written test and the road test? That would be a good standard to code against. If it can pass those tests, then we will welcome the AV to the road and consider it ready to learn from experience like the rest of us. If it can't, then it belongs on the test track (which should not be local streets and highways!).

junko.yoshida
User Rank
Author
Re: I wrote a blog post about this
junko.yoshida   10/31/2017 9:45:36 AM
@Yoav Hollander, thanks for the link to your blog. Much appreciate your additional thoughts!

Yoav Hollander
User Rank
Rookie
I wrote a blog post about this
Yoav Hollander   10/30/2017 3:18:21 PM
Koopman and Cummings said most of what I wanted to say about the topic. Please see my blog post, summarizing this and adding a few more comments, here: 

https://blog.foretellix.com/2017/10/30/on-mobileyes-formal-model-of-av-safety/

realjjj
User Rank
CEO
....
realjjj   10/29/2017 6:47:22 AM
With ownership, cars are in use only some 10% of the time. Is anyone considering using the idle time to put the available compute to good use, such as running simulations? A one-million-vehicle fleet could run through a million corner cases in an instant. At the very least, the idle time could be used to test and validate software updates.
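
As a rough sketch of that idea, with all figures assumed for illustration (the 90% idle share follows from the 10% utilization above; the per-vehicle simulation rate is invented), the aggregate throughput is simple to estimate:

def fleet_sims_per_hour(fleet_size, idle_fraction, sims_per_vehicle_hour):
    """Aggregate simulation throughput of the idle share of a fleet.
    Inputs are illustrative assumptions, not measured figures."""
    return int(fleet_size * idle_fraction) * sims_per_vehicle_hour

# One million vehicles, ~90% idle, each running (say) 10 corner-case
# simulations per idle hour:
print(fleet_sims_per_hour(1_000_000, 0.9, 10))  # -> 9,000,000 per hour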

Bert22306
User Rank
Author
Proving safety
Bert22306   10/26/2017 9:19:29 PM
On just the point of proving safety, I've used one criterion that seems pretty good: calculate the "first expected error." If that expected error occurs after the life expectancy of the product, the product is reasonably safe. In practice, as the reliability of each component is improved to meet this criterion, you'll soon calculate the first expected error to be way beyond the life expectancy of the product, which hopefully accommodates any modeling errors. (Modeling errors are inevitable, and designers should be well aware of this.)
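
As a back-of-the-envelope version of that criterion (the failure rates and product life below are invented for illustration): if component failures are independent with constant rates, the expected time to the first error is the reciprocal of the summed rates, and the check is simply whether that time clears the product's life expectancy.

def first_expected_error_hours(failure_rates_per_hour):
    """Expected time to first error for independent components with
    constant failure rates (exponential model): 1 / sum of rates."""
    return 1.0 / sum(failure_rates_per_hour)

# Illustrative rates only: one failure per 10M, 50M, and 100M hours.
mttf = first_expected_error_hours([1e-7, 2e-8, 1e-8])
product_life = 15 * 365 * 24  # assume a 15-year product life, in hours
print(mttf, mttf > product_life)  # ~7.7e6 hours, True: criterion met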

In my experience, this approach has worked extremely well, to the extent that problems which had been experienced in the field vanished completely.

The other point I would make is that the safety of autonomous driving hardware and software should be treated like that of all other vehicle hardware and software, mixed in with the way culpability is assigned in human driving situations. I think the biggest question here is the actual driving performance of the algorithms. So run the algorithms through any number of driving tests, to the point of creating accidents, and then determine whether the fault WOULD HAVE BEEN placed on a human driver in the same scenarios. Such an approach should intrinsically incorporate variables such as sensor malfunctions or sensor inadequacies, just as in human driving.
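
A harness for that kind of evaluation might look like the sketch below. Everything in it is a hypothetical stand-in: the toy stopping-distance model, the 30 mph blame rule, and the scenario format are mine, not anything from the article or from real tort practice.

from dataclasses import dataclass

@dataclass
class Outcome:
    crashed: bool
    av_speed_mph: float

def run_scenario(scenario):
    # Toy dynamics: crash if the AV cannot stop within the gap ahead.
    gap_ft, speed = scenario["gap_ft"], scenario["av_speed_mph"]
    return Outcome(crashed=speed * 2.5 > gap_ft, av_speed_mph=speed)

def would_blame_human(outcome):
    # Toy legal test: a human crashing above 30 mph is found at fault.
    return outcome.av_speed_mph > 30

scenarios = [
    {"gap_ft": 60, "av_speed_mph": 45},   # crash; human would be blamed
    {"gap_ft": 200, "av_speed_mph": 45},  # no crash
    {"gap_ft": 60, "av_speed_mph": 20},   # no crash
]
outcomes = [run_scenario(s) for s in scenarios]
crashes = [o for o in outcomes if o.crashed]
blamed = [o for o in crashes if would_blame_human(o)]
print(len(crashes), len(blamed))  # -> 1 1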

realjjj
User Rank
CEO
Really large Roombas?
realjjj   10/26/2017 11:32:19 AM
On the hardware side, brute force can be a solution with CaaS or trucks and buses. Nvidia's two Xaviers plus two large discrete GPUs is brute force; Tesla is the opposite, with fewer sensors and not much compute.

If the 99% is the easy part, brute-force data collection and testing tries to solve the 1% that matters. A large fleet with sensors and the AI running in shadow mode means a lot of miles in a short timeframe: a one-million-vehicle fleet at 40 miles per day reaches 1 billion miles in 25 days. It is viable for a car maker with the right strategy, one that adds the hardware and connectivity to high-volume vehicles. The hard part is putting all that data to good use; for folks that can't do that, spreading the load is the obvious solution. How does one deal with corner cases and predict how other actors will behave in any given situation, if not with brute force? Wasn't this why AI was required in the first place?
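
That fleet arithmetic checks out and is easy to parameterize. A one-line sketch (the numbers are the ones above; the function name is mine):

def days_to_target_miles(fleet_size, miles_per_vehicle_day, target_miles):
    """Days for a fleet to accumulate a given mileage target."""
    return target_miles / (fleet_size * miles_per_vehicle_day)

# 1,000,000 vehicles at 40 miles/day reach 1 billion miles in 25 days.
print(days_to_target_miles(1_000_000, 40, 1_000_000_000))  # -> 25.0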
