I'd also expect an autonomous car to know that the presence of snow on the road implies a longer stopping distance. Similarly for ice, water, etc. on the roadway.
Actually, a sensor to detect black ice would be really useful. My pickup gives me a warning but it's based only on temperature with no regard for the presence of ice/snow on the road.
The car must be able to be autonomous in all weather conditions in which a human driver is capable of driving. In winter, even light snow on the ground, without new snow coming down, will cover up road markings. To work properly, the system must be able to identify the edges of the road without assistance from road markings. A human driver can identify the road edge by the change in contour from the presence of curbs under the snow, the difference in appearance between driven-upon snow on the road and undriven snow to the side of it, the presence of snow piled up along the roadside by snow plows, etc. The car must be able to do the same.
There was a Northern California Tesla "incident" just before this accident. The San Francisco-Oakland Bay Bridge was gridlocked (as usual) and a Tesla failed to move forward when traffic opened up. Other drivers called 911. The police found the Tesla driver passed out in the seat. The car wouldn't move forward without hands on the wheel. The driver blew twice the legal limit for alcohol. The driver said "That's OK. The car was on autopilot". The driver was arrested and the car impounded. Actual tweet from the CHP: "no, it didn't drive itself to the tow yard." http://www.sfgate.com/bayarea/article/Inebriated-Tesla-driver-arrested-on-Bay-Bridge-12510750.php
The Tesla does not steer to avoid obstacles (the feature is practically a lane-keep assist), so the only choice was to slow down and then brake. Under normal conditions it should come to a full stop in less than 1.5 seconds. The following distance to the vehicle in front is set by the driver; the setting ranges from 1 to 7, and the unit is time-based.
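A time-based gap setting translates into distance simply as speed times gap time. A minimal sketch, assuming hypothetical gap times (the actual seconds behind Tesla's 1-7 setting aren't something I know):

```python
# Sketch: convert a time-based following gap into a distance buffer.
# The gap times below are illustrative assumptions, NOT the actual
# mapping of Tesla's 1-7 follow-distance setting to seconds.
def following_distance_m(speed_mps, gap_s):
    """Distance buffer (m) for a given speed (m/s) and time gap (s)."""
    return speed_mps * gap_s

speed = 29.0  # m/s, roughly 65 mph
for gap in (1.0, 2.0, 3.0):
    print(f"{gap:.0f} s gap -> {following_distance_m(speed, gap):.0f} m buffer")
```

At 65 mph, even a 1-second gap is about a 29 m buffer, which is why the time-based unit scales sensibly with speed.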
As I see it, the car needs to detect the obstacle and decide that it's not a false positive, but it also needs to know where the road is. I doubt it can count on the map, so the camera needs to see the road markings. At that point it could start slowing down and alert the driver, but it can only fully apply the brakes when an impact is certain (there is no time for the driver to steer).
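That alert-early, brake-late behavior can be sketched as a simple time-to-collision check. The thresholds here are my own illustrative assumptions, not any vendor's actual values:

```python
# Toy decision sketch of the behavior described above: alert and slow
# early, apply full braking only once time-to-collision (TTC) is so
# short that impact is effectively certain and the driver can no
# longer steer around the obstacle. Thresholds are assumptions.
def action(distance_m, closing_speed_mps,
           alert_ttc_s=4.0, brake_ttc_s=1.5):
    if closing_speed_mps <= 0:
        return "cruise"           # not closing on anything
    ttc = distance_m / closing_speed_mps
    if ttc < brake_ttc_s:
        return "full_brake"       # impact certain, no steering time left
    if ttc < alert_ttc_s:
        return "alert_and_slow"   # warn driver, shed speed gently
    return "cruise"

print(action(160, 29))  # obstacle at radar range
print(action(100, 29))  # closing in
print(action(40, 29))   # too late to steer
```

The hard part, of course, is not this logic but deciding that the detection is not a false positive in the first place.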
The vehicle was going 65 mph before the crash, ~= 29 m/s. With a 160 m radar range, that would leave about 5.5 seconds for reaction. A general rule for humans is a 3-second following distance. Given that an automated system has millisecond instead of second reflexes, closer would have been reasonable, especially since a 3-second gap on an LA freeway (or any other metro area) means vehicles are constantly cutting you off.
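The arithmetic above checks out, and it's worth adding the stopping distance for comparison. Assuming roughly 0.8 g of hard braking (my assumption, not a measured figure for this vehicle):

```python
# Check the time budget from radar range, and compare against a rough
# stopping distance assuming ~0.8 g hard braking (an assumption for
# dry pavement, not a Tesla specification).
G = 9.81                      # m/s^2
speed = 65 * 0.44704          # mph -> m/s, ~29.1 m/s
radar_range = 160.0           # m

budget_s = radar_range / speed           # time until a stationary obstacle
stop_dist = speed**2 / (2 * 0.8 * G)     # v^2 = 2*a*d -> d = v^2 / (2a)

print(f"{budget_s:.1f} s budget, ~{stop_dist:.0f} m needed to stop")
```

So at 65 mph the car needs on the order of 54 m to stop hard, well inside a 160 m detection range; the budget is consumed by classification and the decision to brake, not by the physics.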
However, if the Tesla was following a vehicle that suddenly slowed down, it would have used the following distance as a buffer. If the vehicle suddenly moved right, the Tesla would have not seen the fire truck until just before the impact.
At that point, the Tesla faces the "trolley problem" from ethics. Does it slam on the brakes and hit the obstacle, or does it veer to the right, possibly causing a major chain accident?
As self-driving cars get more mileage, and press, it will likely become evident to the manufacturers that Level-5 (all-or-nothing) performance may be necessary and may have to be active all the time, to avoid nuisance lawsuits from humans blaming their inattention on automation that was not activated. I would find it hard to believe the sensors could not pick up all the flashing lights and the HUGE red truck :-).
Can't or should not? Is anyone doing more with radar for Level 1/2?
Volvo notes in their manual:
"Pilot Assist is not a collision avoidance system. The driver must intervene if the system does not detect a vehicle in front. Pilot Assist does not brake for humans or animals, and not for small vehicles such as bicycles and motorcycles. Nor for low trailers, oncoming, slow or stationary vehicles and objects."
These discussions tend to become unfocused, bringing in many tangential topics. Let us first of all assume that autopilot was active. My thinking is that, just like the Florida case, the problem is very, very basic. Whatever sensors are used, radar, lidar, or cameras, there is never an excuse for not detecting objects in the path, stationary or moving.
The vague claim that stationary objects have to be ignored is nonsense. Of course, yes: ignored when they are not in the direct path. Ignored if they are vehicles parked on the side. Ignored if they are road signs. Not ignored when they block the path completely.
Clearly, sensors have to be able to differentiate between an object on the side of the road and an object right in the car's trajectory. In the Florida case, the conjecture was that the very big, wide trailer blocking the path was "mistaken" to be an overhead road sign. Maybe, maybe not, but now I would have to be convinced that the Florida case was not identical to this one. Huge stationary object, directly in front of the car's path, apparently unseen.
Discussions about L2 and L3 are fine and good, or warnings in the owner's manual, but they become a distraction at a certain point. We should focus on what set of conditions creates this baffling state of blindness.
The other point is that, just looking at the photo, it seems unlikely the car hit the stationary truck while traveling at 65 mph. At that speed, it would either have been sandwiched under the truck or bounced up in the air and landed in some random position. Just guessing.
You cannot ignore a stationary object in the lane. The object's type and mass have to be discerned. You have to either avoid it or, in some cases, plow right through it. An example being a box you discern to be empty, which may be less of an issue to hit than swerving into something else or being rear-ended.