Autonomous cars. Will the car have insurance cover in its own right? I can see getting what we call third-party liability cover in England could be interesting. The discussion with the insurance company could sound almost as good as a Bob Newhart comedy recording.
"Hi, Mr. Crusty, you want insurance?" "Yes, please." "Are you the named driver?" "Well, no, it's Google." "How long has Mr. Google held a driving licence? How's Mr. Google's eyesight?"
The class actions against the autonomous software could be fun and go on for years.
Seriously speaking, I am actually puzzled because many industry analysts I talked to believe that insurance companies would "lower" the rate if you drive a car with ADAS features. The assumption is that by getting the human out of the equation, there will be fewer driving errors, thus more safety is assured.
Certainly, the NCAP star rating also indicates it.
I wonder if the assumption is that if carmakers get ADAS features correctly implemented, surely they will make much "safer" autonomous cars?
That's an interesting consideration, that the industry will get it right. Seems to me that they have had their fair share of turkeys already. It's surprising that, with automated landing and takeoff, planes still require a captain to fly them.
Seems to me that they have had their fair share of turkeys already.
I am with you on that one. As you pointed out, the comparison of autonomous cars with autopiloted aircraft is appropriate. And yet, at this point, we have not seen or heard of anything equivalent to the FAA to provide oversight of autonomous cars.
I think it depends on the mindset of the manufacturer.
Cars programmed by Toyota, or any auto manufacturer for that matter, I would trust less than a system programmed by, say, Google.
The auto industry has management that doesn't get this newfangled software aspect. They are too used to their old methods, and will be making silly design decisions and staring at their watches, expecting the software to be cranked out the same way they expect fenders to be cranked out.
I used to work for a company that was dragged kicking and screaming all the way into the computer age and by God they weren't about to change their methods of project management. Which led to some spectacular failures that were blamed on the programmers.
There are definitely different phases of automatic driving and I agree that it will be a long time before we get fully automatic cars.
One of the things I think about is 'unwritten rules'.
As an example, I remember driving in this New England town where there was a left turn lane at the light but no left turn light and the light didn't have car sensors. It would get busy during rush hour, which means that it would be near impossible to ever make a left turn. So what drivers would do is turn left immediately after the light turns green and before oncoming traffic would cross the street. At least two or three cars could make it through this way. Everyone knew the situation, so the oncoming cars would always start slow to let the few cars get through.
If they didn't do this, then the left turning cars would never be able to make their turn. A Google self-driving car would be sitting there forever. On the other hand, if you tried this in California, you'd probably get yourself killed.
Would even a semi-autonomous car let you do this or would it slam on the brakes when it sees the oncoming cars slowly approaching?
Every state, and especially every country, has a large and different set of unwritten rules that we don't consciously think about when we drive. We usually pick them up after being in an area for a while. I don't know how a fully autonomous car could do this.
Specifically on this turning left at a stoplight issue, this is another one of those things that creates road rage.
Everywhere I've been, the correct way to turn left at a stoplight is, when it turns green, you move up to the middle of the intersection, blinkers on, and wait for a clearing in oncoming traffic. Then, worst case, the light turns red in your direction, and the two or perhaps three cars that are waiting patiently to turn left skedaddle through the intersection before being blocked by traffic.
Instead, you get people who forgot their driver training, and plant themselves in your way, potentially going through several cycles before they make it through.
But very good point on local customs. The first time I drove in Cyprus, years ago now, I was surprised to hear everyone honking their horns at me, because I was waiting for the light to turn green. Turns out, Cypriots look for a yellow in the other direction, and then they start moving through the red light. Okay, I thought, I can work with this. Never had a problem since.
This thread of discussion on unwritten rules actually proves the point of why Google is doing driving testing one city at a time. They are now focused on driving in Mountain View, Calif., but there are just so many more cities they need to "master"!
junko.yoshida wrote: ... there are just so many more cities they need to "master"!
Mountain View is a decent place to start, especially at non-rush hour. But at some point they need to try out the "Bloody Bayshore" AKA US 101. We'll see if they're able to survive that one any better than the human drivers.
One of the things I think about is 'unwritten rules'.
Similar to my drive home in the suppertime rush hour. There are a couple of busy intersections which are preceded by driveways into a post office and a community college. These friendly north Texas drivers NEVER block these driveways while waiting for a red light; they have all learned to hang back and let traffic in and out of the driveways. Everybody gets to move.
Would a Google car have enough smarts to give other drivers a break?
My guess is that within about five years, aftermarket self-driving kits will start to show up in the backs of car magazines. They will be advertised as "for off-road use only," just like a lot of the aftermarket ROM chips are.
The biggest thing holding back the car manufacturers will be liability concerns. Eventually, the NHTSA will start requiring greater and greater levels of autonomy.
As far as the complexity and issues involved go, consider just how much information a phone collects and manages today. Consider that, while not yet fully refined, some cars have self-parking, lane holding, emergency braking, adaptive cruise control, etc. It won't be long at all before all of these seemingly insurmountable challenges are better handled by computer.
I agree with you, Junko, that autonomous cars are probably decades away. But that might also depend on how we define autonomous. To me, it means you enter a destination address, and then the car does its own thing while you're taking a nap or eating a three-course dinner. Or sound asleep lying down in the back seat.
Other people, in particular those who benefit from the hype, instead seem to define active ADAS as autonomous driving.
In your series of articles, and comments written responding to them, I've come to this conclusion. We're using different definitions. And never mind the refrain about lawsuits.
The future of autonomous vehicles is package delivery. Think UPS or Pizza Hut. Here's a vehicle with no passengers. Only drives local roads in predefined routes at less than 30 mph. Couple it with a drone to drop the package on your doorstep. The drone only has to carry a small weight a short distance before returning to a charging station on the vehicle. It's a totally automated delivery system.
It may be slightly early to start using autonomous cars, but that's where the future is going. Anything we are highly dependent on, like our phones, automobiles, or homes, is going to be very, very smart in the future.
As the article points out, there are societal, legislative, and liability hurdles to overcome. It's difficult for the car makers to predict if and when (and by what measure) they will be able to jump over these hurdles. But from the technical implementation perspective there are some known hurdles, specifically when we talk about ADAS systems taking control of the car dynamics, be it braking, steering, whatever.
As the IHS contributor mentioned, there are many components inside these ADAS systems: microprocessors, FPGAs, DSPs, memories, and so on. If the ADAS system is to really take control of the car, then the components used in the decision making need to be robust enough to guarantee operation over the life of the car. Similarly, there needs to be built-in redundancy to ensure that should a component fail, it does so in a safe way (so-called graceful degradation of vehicle functionality).
The functional safety standard for automotive systems is ISO 26262, and so far not many ADAS processors are capable of being assessed against this standard, nor are they typically robust enough to guarantee operation over the life of an automobile. So the component manufacturers and tier-1 integrators have some work to do in order to deliver truly fail-safe systems for semi-autonomous cars. I think this is the first challenge facing the industry.
Yes, I agree AEC-Q100 (at least grade 1, preferably grade 2) would be required. For camera systems, ISO 26262 ASIL B is typically required for functional safety today. For sure this will increase for semi-autonomous driving. The adaptive cruise control systems available today based on radar technology seem to be assessed at ASIL C minimum, with a preference for ASIL D.
There's a new application area emerging where the information from radar and camera sensors is fused to create a kind of 3D environmental map of the vehicle's surroundings. Scenarios are run against a long list of potential risk factors, and "safe" decisions are made about vehicle dynamics. This requires a LOT of processing power, and car makers are trying to figure out how to achieve ASIL safety at very high performance without too much redundancy (since redundancy costs money).
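The redundancy-versus-graceful-degradation idea can be illustrated with a toy voter over redundant sensors. This is only a sketch, nothing like an ASIL-rated implementation; the sensor names, tolerance, and fallback policy are all invented for illustration:

```python
def vote(readings, tolerance=0.5):
    """Return a trusted obstacle distance (meters) from redundant sensors.

    readings: dict of sensor name -> distance, or None if that channel failed.
    Strategy: if at least two valid channels agree within `tolerance`,
    use their median; otherwise degrade gracefully to the most pessimistic
    (smallest) value, so the vehicle errs on the side of braking.
    """
    valid = sorted(v for v in readings.values() if v is not None)
    if len(valid) >= 2 and any(
        abs(a - b) <= tolerance for a, b in zip(valid, valid[1:])
    ):
        return valid[len(valid) // 2]  # median of valid readings
    if valid:
        return min(valid)  # degraded mode: assume the closest obstacle
    return 0.0  # total sensor loss: treat obstacle as immediate, stop

# Normal operation: radar, camera, lidar roughly agree.
print(vote({"radar": 12.1, "camera": 12.3, "lidar": 12.2}))  # 12.2
# One channel failed: two agreeing channels remain, system still works.
print(vote({"radar": 12.1, "camera": None, "lidar": 12.2}))  # 12.2
```

The point of the sketch is the cost tradeoff the comment mentions: every extra channel you add to make the vote trustworthy is another qualified component on the bill of materials.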
Absolutely agree with Junko that the fully autonomous car is 30+ years away. There are so many hurdles ahead even if we are technologically ready. Just a couple of days ago, Google's self-driving cars hit 1 million miles. It is an impressive milestone. However, "self-driving" doesn't mean unattended: there is a person sitting in the driver's seat just in case. Why is there a driver in a driverless car?
Folks in transportation departments are typically conservative. There is a good reason for it: better safe than sorry. Getting those folks to approve a fully autonomous car, without a driver actively monitoring the environment, isn't an easy task. Proof of concept has to be confirmed with mileage. Is 1 million miles enough? Probably 10 or 100 million will give the general public more confidence. A billion will surely impress the regulatory agencies.
In addition to regulatory restrictions, sorting out responsibility in case of an accident isn't an easy task. Insurers, car makers, and owners will all need to agree on terms, and drafting those terms and conditions will definitely require attorneys expert in the area.
Next, the general public (the market) needs to have enough confidence in, and demand for, the product. The Toyota Prius took years to penetrate the market, and that change was simply the addition of an alternative fuel and a different drivetrain. Imagine people's willingness to adopt this technology. In general, people are excited before a product hits the shelves; once it is on the shelves, they become indecisive. There will no doubt be early adopters, but hitting the mass market will nonetheless take time.
In Silicon Valley, people talk a lot about self-driving cars and show excitement about them. In my experience, people in the valley are more inclined toward new technology. You will see more Teslas in the valley than anywhere else in the nation, and most of the iProducts spread through the valley pretty rapidly. I can see the first mass-market self-driving car appearing in the valley first. Will it roam a crowded city or just drive itself along the highway? Only time will tell.
While I agree that the days when all vehicular traffic is composed of autonomous vehicles are far away, I don't agree that there will need to be uniform worldwide standards on those vehicles. I think the rollout will come in stages. It might start with a highway lane being made available only to autonomous cars, and autonomous operation only allowed on highways. Then once a level of comfort with the vehicle performance is achieved, it might open to city traffic, then suburbs, then rural areas. Or rural first, but in stages regardless.
And if crossing a boundary means that different regulations are in effect, well, the car knows where it is and can be instructed to stop at the border and insist that the human take over because it is not allowed to drive in the place they are entering. No need for uniformity, just the ability to adapt to the different requirements.
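That border-handover idea can be sketched very simply: a position lookup against a table of per-region rules. The region names, the boundary, and the rule table below are entirely hypothetical:

```python
# Hypothetical per-region rule table; a real system would carry signed,
# regularly updated regulatory data, not a hard-coded dict.
REGION_RULES = {
    "stateA": {"autonomous_allowed": True},
    "stateB": {"autonomous_allowed": False},  # human must take over here
}

def region_of(lat, lon):
    """Map a GPS position to a region (stub: split on one longitude line)."""
    return "stateA" if lon < -100.0 else "stateB"

def may_drive_autonomously(lat, lon):
    """Default to 'not allowed' for any region we have no rules for."""
    rules = REGION_RULES.get(region_of(lat, lon), {"autonomous_allowed": False})
    return rules["autonomous_allowed"]

# Crossing the boundary flips the answer, so the car can warn the human
# to take over before it actually enters the restricted region.
print(may_drive_autonomously(32.0, -101.0))  # True
print(may_drive_autonomously(32.0, -99.0))   # False
```

Defaulting to "not allowed" for unknown regions is the conservative choice the comment implies: no uniformity needed, just a per-jurisdiction lookup and a safe fallback.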
We worry a lot about possible failure of our autonomous systems, and in these early days of the technology perhaps those concerns are valid. But I am convinced that the systems will ultimately perform better than the average driver and a whole lot better than some of the yahoos that infest the roadways around my home. Ultimately, I think autonomous vehicles will prove far safer in passenger miles per accident than with human drivers, at which point autonomous will become mandatory rather than special case.
@DanielMast: In 30 years, a robot will be driving antique cars built today.
They should organize the first test drives in Sheffield, England.
My 83-year-old mother recently purchased a Smart Car (or something similar). Now she happily tootles around all over the place. You can see when she's recently driven past somewhere by observing the "civilians" climbing down from trees and extracting themselves from bushes and hedgerows.
By the time your robot comes along, the combination of my mother and evolution will have created a city full of people with hair-trigger reflexes who can leap out of the way of a robot-controlled car at a moment's notice LOL
As far as I'm concerned, fully autonomous cars will never happen. A human driver can drive off-road and do anything he wants in unmapped places. In order for a machine to be able to do the same, it would require a full AI: not going to happen.
The best I'm hoping for is a car that keeps driving itself on known roads, and asks for assistance from the human driver when it enters or approaches a difficult situation (detects a deadlock with other autonomous cars, needs to go off-road/unmapped-path, or even through a difficult intersection)
This will enable the driver to take a phone call (or whatever) 99% of the time, drive through a tough spot for 5 seconds and resume the autopilot.
The machine is never bored and never falls asleep, and the human driver can take over for short periods of time when needed. But to make sure the autopilot doesn't do anything stupid, you need a robust ADAS to guarantee the safety of the car, passengers, and environment. That's what the ADAS is for. Put it in "active mode" when the machine is driving (it actually prevents the machine from doing something stupid, like running over an old lady or crashing into a tree) and in "warning mode" when the human takes control (so that he can do something "dangerous" to clear a difficult situation if needed, like driving "far too close" to an obstacle or ignoring some rules in order to resolve a deadlock).
Drones are, in essence, flying autonomous vehicles. The pros and cons surrounding drones today may well foreshadow the debate over the development of self-driving cars. In the context of a strongly regulated aviation industry, "self-flying" drones pose a fresh challenge. How safe is it to fly drones in different environments? Should drones be required to maintain visual line of sight, as piloted airplanes are? Join EE Times' Junko Yoshida as she moderates a panel of drone experts.