Cars that can "see," offer "advice" to drivers, and even react autonomously to changing road, traffic, and weather conditions are in our future.
Much of the required technology to put the necessary automotive vision in motion to achieve such capabilities already exists. Two key technologies, digital still cameras and programmable digital signal processor chips (DSPs) that turn raw images into actionable information, have reached price/performance targets that place automotive vision on the verge of practicality. However, the engineering challenges we will be facing in the future have more to do with product definition, system integration, and building a smart-vehicle infrastructure. We are a long way away from the day when our cars will not need us to safely drive from point A to point B, but there are many opportunities for today’s automobile.
Although automotive vision still has more than a few engineering hurdles in its path, the biggest challenge may lie in defining the human/machine boundary. More autonomous vehicles mean less autonomous drivers. This raises many questions about safety, convenience, and even the role cars play in our daily lives.
Will the family vehicle of the future be an extension of the living room? Or, is the song of the open road inextricably linked to a foot on the accelerator and eyes on an ever-changing landscape? Are these two visions of the drive-by-wire future mutually exclusive? Only time will tell. One thing is certain: Automotive vision is a good case study in the intelligent application of technology to the real world.
A technology roadmap for automotive vision starts with a plan for rolling out the technology. That, in turn, starts with an understanding of the ways vehicles and occupants can interact. There are three levels:
Collect and display. The vehicle acts primarily as an extension of the driver’s senses, collecting visual and other pertinent information and displaying it for the driver, most likely on an LCD screen. This basic level of interaction requires that the vehicle be aware of its surroundings through a variety of sensors and be able to convert the data it receives into information that is meaningful to the driver. But the driver is solely responsible for making decisions.
Interpret and interact. The vehicle engages in basic decision making and advises the driver of a condition that merits attention, or even recommends a specific action. Example: monitoring the driver for drowsiness and issuing an alert. In order to do this effectively, the vehicle’s decision-making technology must have a pre-programmed reaction for every possible situation. It must also be capable of acting quickly enough, within milliseconds, for the interpreted information to be useful to the driver. In this case, the driver makes the decision given the recommendation from the automotive vision system.
Act autonomously. The vehicle collects and interprets information and executes a change in its operation without intervention from the driver. A simple example of this third level of interaction would be a vehicle steering itself back into its traffic lane when the car drifts out of the lane. The performance bar for speed, accuracy, and machine intelligence (ability to react to all possible scenarios) is quite high for this level of interaction.
One could argue that there is a fourth level: that of monitoring the driver. This concept is used in commercial vehicles to monitor the activities of the driver and/or vehicle while it is in use. But we will not consider it further here.
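As a rough sketch, the three levels of interaction can be seen as escalating responses to the same sensor reading. The lane-offset scenario, threshold value, and function names below are illustrative assumptions, not real automotive logic:

```python
from enum import Enum

class InteractionLevel(Enum):
    COLLECT_AND_DISPLAY = 1     # show information; the driver decides
    INTERPRET_AND_INTERACT = 2  # advise the driver; the driver acts
    ACT_AUTONOMOUSLY = 3        # the vehicle acts on its own

def handle_lane_offset(offset_m: float, level: InteractionLevel) -> str:
    """Respond to a measured lateral lane offset (meters) at a given level.

    The 0.5 m drift threshold is an invented value for illustration only.
    """
    DRIFT_THRESHOLD = 0.5  # assumed offset at which the car has left its lane
    if level is InteractionLevel.COLLECT_AND_DISPLAY:
        # Level 1: just present the measurement to the driver.
        return f"display: lane offset {offset_m:.2f} m"
    if level is InteractionLevel.INTERPRET_AND_INTERACT:
        # Level 2: interpret the measurement and advise the driver.
        if abs(offset_m) > DRIFT_THRESHOLD:
            return "alert: drifting out of lane"
        return "no alert"
    # Level 3: the vehicle issues a steering correction itself.
    if abs(offset_m) > DRIFT_THRESHOLD:
        return "steer: correct back toward lane center"
    return "hold course"
```

The point of the sketch is that the sensing pipeline is the same at all three levels; only the final stage, display versus advise versus act, changes.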
Features of automotive vision
More than a dozen functions are being considered as automotive vision applications. Some are already in use in commercial fleet vehicles and some will be introduced in luxury models in the next few years. They include:
Occupant monitoring. If asked to define the particulars of automotive vision, the average person would not likely suggest that the car should observe its occupants. But this could easily be one of the first automotive vision applications. It is expected to yield impressive safety results, and since the car’s interior is a predictable environment, it is also a relatively straightforward implementation, with the possible exception of the inference software required.
Monitoring drivers for drowsiness, inattentiveness, or intoxication is likely to roll out in commercial fleet vehicles as early as 2006. A more difficult aspect of occupant monitoring calls for the vehicle to observe the position and posture of the passengers and, in the case of an accident, deploy airbags accordingly. Still in the research phase, vision-based smart airbag deployment will take several years to perfect.
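One common drowsiness measure in the research literature is PERCLOS, the fraction of recent video frames in which the driver’s eyes are closed. The sketch below assumes an upstream per-frame eye-state classifier (not shown) and uses an illustrative alert threshold:

```python
def perclos_drowsy(eye_closed_flags, threshold=0.15):
    """Flag drowsiness from a window of per-frame eye-closure booleans.

    `eye_closed_flags` would come from a hypothetical camera-based eye-state
    classifier; the 0.15 threshold is an assumption for illustration, not a
    validated automotive value. Returns True when an alert should be issued.
    """
    if not eye_closed_flags:
        return False  # no data yet in the observation window
    closed_fraction = sum(eye_closed_flags) / len(eye_closed_flags)
    return closed_fraction > threshold
```

In a real interpret-and-interact system this check would run continuously over a sliding window, and a True result would trigger the in-cabin alert described above.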
Infrastructure monitoring. A vehicle’s ability to recognize and interact with stationary objects that exist around it or have been added as part of the highway infrastructure can add to the safety of the occupants of the vehicle and the convenience of the driver. Such things as embedded RFID chips in the ubiquitous reflectors set in traffic lanes, for example, could enable the car to alert the driver that it has drifted out of its lane. Or, with that RFID or vision-based information, the car could actually steer itself back into the proper lane. Similarly, sensors such as cameras or radar units mounted on the car could detect objects such as median barriers, trees, buildings, and people, in addition to infrastructure elements equipped with their own sensors or tags.
Other specific applications that fall into the category of infrastructure monitoring include: parking assist proximity sensors for parallel parking (already in production in Europe); rear view cameras instead of mirrors; and blind spot cameras, which are also useful in monitoring other vehicles. Still another aspect of infrastructure monitoring is the interaction of automotive vision technology with existing technologies such as GPS. Knowing a vehicle’s position could be helpful in accident avoidance if, for example, it could advise of an accident or road hazard in the vicinity.
Monitoring other vehicles. This category is distinguished from the previous category by the inherent difficulty of tracking other moving objects. Examples include intelligent cruise control (to maintain a safe distance between vehicles), blind spot monitoring, rear view observation, and night vision.
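Intelligent cruise control is often framed as holding a constant time gap behind the lead vehicle. A minimal sketch of that idea follows; the proportional-control approach, gain, and two-second gap are assumptions for illustration, not production control logic:

```python
def cruise_speed_command(ego_speed_mps, lead_distance_m,
                         desired_gap_s=2.0, gain=0.3):
    """Nudge the speed command to hold a time gap behind the lead vehicle.

    All parameter values are illustrative. `ego_speed_mps` is the current
    speed in m/s; `lead_distance_m` is the measured distance to the vehicle
    ahead (from radar or a camera). Returns a new speed command in m/s.
    """
    # At the desired gap, distance scales with speed: d = v * t_gap.
    desired_distance = ego_speed_mps * desired_gap_s
    # Positive error: we are too far back and can speed up; negative: too close.
    error = lead_distance_m - desired_distance
    return max(0.0, ego_speed_mps + gain * error)
```

For example, at 25 m/s with a two-second gap the target distance is 50 m; a measured gap of only 30 m produces a reduced speed command, closing in on the safe following distance described above.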
Safety and forensics. Some applications do not fit easily into the three categories mentioned above. Black-box recorders that can be used for accident reconstruction, for example, are already being used in some commercial applications. Technologies that will help autonomous vehicles learn how to avoid an accident or how to react in a crash situation are being developed in research labs.
The term automotive vision suggests mimicking the visual perception of the human eye with cameras. But vehicles encounter many conditions in which a “few extra eyeballs” distributed around the vehicle are not enough to provide all the safety and convenience possible.
There is general agreement among engineers in the automotive and associated industries that a variety of sensors can be used to collect a more complete set of useful information. These sensors could possibly include radar, laser, and infrared in addition to digital cameras. Data from all of these sensors will be combined and interpreted in a concept called data fusion.
In the near term, however, the focus will be on cameras because they offer good price/performance and can effectively collect-and-display information for the driver. Data fusion will be an important aspect of automotive vision in the future; but for this article the types of data are not as important as how the data are used. The industry is still grappling with several questions about the location of sensors, their resolution, and where the intelligence that will interpret the data collected should reside.
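One standard way to combine readings from dissimilar sensors, and a common building block in data fusion, is inverse-variance weighting: each sensor’s estimate is weighted by how much it is trusted. The sensor pairing and numbers below are illustrative assumptions, not a description of any production system:

```python
def fuse_range_estimates(estimates):
    """Fuse independent range readings by inverse-variance weighting.

    `estimates` is a list of (distance_m, variance) pairs, e.g. one reading
    from a camera and one from a radar unit observing the same object.
    Returns the fused distance and its (smaller) variance.
    """
    # A low-variance (trusted) sensor gets a proportionally larger weight.
    weights = [1.0 / var for _, var in estimates]
    fused_var = 1.0 / sum(weights)
    fused_dist = fused_var * sum(w * d for w, (d, _) in zip(weights, estimates))
    return fused_dist, fused_var
```

For instance, fusing a camera estimate of 40 m (variance 4.0) with a radar estimate of 42 m (variance 1.0) yields a result pulled toward the more precise radar reading, with lower uncertainty than either sensor alone.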