Tomorrow's smart car is likely to give drivers considerable control over their interaction with the telematics interface. The driver will most likely use a voice recognition interface when driving and a touch-screen interface when stopped. After all, driving by itself makes many demands of the human in the driver's seat. So it's important to ask:
- What is likely to happen when telematics brings a stream of additional sensory detail into the vehicle?
- Will the system turn out to be more hindrance than help? Will drivers have enough control over the systems to protect themselves from distracting input?
- If drivers don't have control, who or what will make those decisions and on what basis?
These and similar questions must be answered before we can realize the full potential of telematics. But to get the answers, we must first ground ourselves in the findings of human factors research, especially research into how the brain deals with input.
Most such research related to driver and passenger behavior relies on indirect measures to quantify the effects of potential distractions. The majority of these measures involve the eyes, given the importance of vision to driving. The duration, frequency and scanning patterns of drivers' glances are fairly common measures. Also useful are driver-vehicle performance measures, such as lane-keeping, speed maintenance, car-following performance and driver reaction times to objects and events.
Another category of measures focuses on driver control actions, such as steering wheel inputs, accelerator modulations, gear shifting, brake pedal applications and hands-off-the-wheel time. These are used to infer the extent to which the driver is distracted during an experiment.
Finally, measures of task completion time have been used, or are proposed for use, as an index of the distraction potential of a device. As a last resort, subjective assessments of driver workload and device design are brought to bear.
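To make the first of these measures concrete, here is a minimal Python sketch of how glance duration and frequency might be computed from eye-tracker output. The sample format and the off-road classification are illustrative assumptions, not any particular lab's pipeline.

```python
# Hypothetical sketch of two common glance measures computed from
# eye-tracker samples; the data format is an illustrative assumption.

# Each tuple: (timestamp in seconds, True if gaze is off the road)
samples = [(0.0, False), (0.5, True), (1.5, True), (2.0, False),
           (4.0, True), (5.2, True), (5.5, False)]

glances = []          # durations of individual off-road glances
start = None
for t, off_road in samples:
    if off_road and start is None:
        start = t                     # glance begins
    elif not off_road and start is not None:
        glances.append(t - start)     # glance ends
        start = None

total_time = samples[-1][0] - samples[0][0]
print("glance count:", len(glances))
print("mean glance duration (s):", sum(glances) / len(glances))
print("glance frequency (per s):", len(glances) / total_time)
```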
Recent research in which IBM participated at the University of Michigan Transportation Research Institute has started to explore the differences in the level of driver distraction based on the type of speech (recorded or synthetic) as well as type of message (navigational messages, short e-mail messages or longer news stories).
The current research into human factors leads us to think of the role of technology in terms of three broad categories: safety, comfort and efficiency.
Safety comes first. Ideally, a telematics system will implement diagnostic routines to let you know when your car is in a dangerous state, such as an overheating engine, and you have to pull off the road. If you have an accident, an automatic collision-notification component of the system will try to analyze accident severity, report your location and transmit an automatic distress call. In case of vehicle breakdown, your location and the car's diagnostic information will be reported automatically to a dispatcher.
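As a rough illustration of these safety functions, the sketch below shows how such a diagnostic-and-notification loop might be structured. The sensor names, thresholds and dispatcher interface are all hypothetical, not details of any production system.

```python
# Hypothetical sketch of a telematics safety monitor; the sensor names,
# thresholds and dispatcher call are illustrative assumptions.

ENGINE_TEMP_LIMIT_C = 110  # assumed red-line coolant temperature

def check_diagnostics(sensors):
    """Return a list of warnings that require the driver's attention."""
    warnings = []
    if sensors["coolant_temp_c"] > ENGINE_TEMP_LIMIT_C:
        warnings.append("Engine overheating: pull off the road")
    if sensors["oil_pressure_kpa"] < 100:
        warnings.append("Low oil pressure: stop the engine soon")
    return warnings

def send_to_dispatcher(report):
    # Placeholder: a real system would use a cellular or satellite link.
    print("DISPATCH:", report)

def on_collision(severity, gps_position):
    """Automatic collision notification: report severity and location."""
    send_to_dispatcher({
        "type": "collision",
        "severity": severity,      # e.g. derived from airbag/accelerometer data
        "location": gps_position,  # (latitude, longitude)
    })

def on_breakdown(gps_position, diagnostic_codes):
    """On breakdown, report location and diagnostic data automatically."""
    send_to_dispatcher({
        "type": "breakdown",
        "location": gps_position,
        "codes": diagnostic_codes,
    })

# Example: a simulated overheating reading triggers a pull-over warning.
print(check_diagnostics({"coolant_temp_c": 118, "oil_pressure_kpa": 250}))
```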
In terms of comfort, telematics systems will expand on the role already played by embedded systems in many vehicles. The car's temperature, the seats and the positions of the steering wheel and mirrors will all be adjusted through the system. You will get traffic updates and accident reports combined with a navigation system to route you around problem areas.
To successfully address the many roles technology can play as a driver's assistant, designers must develop a framework for human-computer interaction.
One of the main problems is the potential for information overload from all the incoming messages and notifications. Therefore, a notification framework must be developed that specifies an organized, systematic way for the device to communicate with the driver. Such a framework for an in-vehicle driver information system (Ivis) must take into account two distinct types of communication possible between the driver and the Ivis: synchronous and asynchronous.
In synchronous communication the driver acts and the Ivis responds with visual or audible feedback. Because the driver initiates synchronous communication, the response from the Ivis is expected. By contrast, in asynchronous communication the Ivis notifies the driver of an event, such as an emergency, a phone call or new e-mail. The driver does not anticipate this communication and therefore it must be disseminated carefully, maybe even sparingly, to reduce the likelihood of distracting the driver.
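A minimal sketch, assuming hypothetical class and method names, of how an Ivis might keep these two channels separate:

```python
# Hypothetical sketch of the two Ivis communication paths; all names
# are illustrative, not part of any actual IBM interface.

class Ivis:
    def __init__(self):
        self.pending = []  # asynchronous notifications awaiting the driver

    # Synchronous: the driver acts and the system responds immediately;
    # the feedback is expected, so it can be delivered right away.
    def handle_driver_command(self, command):
        if command == "read e-mail":
            return self.speak("You have 2 new messages.")
        return self.speak(f"Command '{command}' not recognized.")

    # Asynchronous: the system initiates, so the event is queued and
    # disseminated carefully rather than spoken the moment it arrives.
    def notify(self, event, priority):
        if priority == "emergency":
            self.speak(event)           # emergencies interrupt immediately
        else:
            self.pending.append(event)  # everything else waits for the driver

    def speak(self, text):
        print("IVIS:", text)  # stand-in for speech synthesis
        return text

ivis = Ivis()
ivis.handle_driver_command("read e-mail")     # synchronous, expected reply
ivis.notify("Tire pressure low.", "routine")  # queued, not spoken yet
ivis.notify("Collision ahead!", "emergency")  # interrupts immediately
```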
Driver notification in today's conventional cars is limited to sound cues and a few status lights on the dashboard. The driver is notified that something requires attention, but few details are available until the driver takes the car to a service center. In an automobile with a speech-based Ivis it will be tempting to have the Ivis begin speaking as soon as the system becomes aware of a problem or concern. But how many of these can the driver handle and still drive safely? To answer this question we need to consider how drivers manage their tasks and, specifically, how they respond to speech-based vs. visual-based input.
One approach to the brain's processing of input focuses on the ways drivers manage their tasks. Some initial data suggests that drivers may not prioritize and manage speech-based tasks effectively. For example, studies have shown that a concurrent verbal task may increase a driver's propensity to take risks and that he or she may not compensate appropriately when verbal interactions slow reaction times. In short, drivers may not fully recognize the distractions of speech-based interaction and may fail to compensate for the distraction it causes. Unfortunately, research on speech communication with cell telephones and standard concurrent verbal tasks is not helpful, because those interactions are fundamentally different from talking to a computer.
The special demands of navigating a complex menu structure may introduce a cognitive load that may compete with the spatial demands of driving in a way that a conversation would not. In addition, current in-vehicle computers cannot modulate their interaction with the driver as a function of the immediate driving situation.
Interaction with a speech-based system may prove to be less distracting than conversation with a passenger because it's easier to end a conversation with a machine. Further research is needed to clarify these matters before we fully implement telematics systems.
What we know about driver task management makes clear that telematics designs must take into account the brain's processing of speech-based vs. visual-based input. But the research on the suitability of one over the other is mixed. One set of answers is based on what's called the "multiple resource" capability of the brain. According to this theory, speech-based interaction will distract a driver less than a visual display will from the primary task of manually controlling the car. Speech-based interaction demands the resources associated with auditory perception, verbal working memory and vocal response, while driving itself demands the resources associated with visual perception, spatial working memory and manual response. Because these resources are independent, time sharing should be quite efficient.
However, other points of view are in ready supply. Some researchers, for example, are concerned that the attention demands that speech-based interaction may place on common central-processing resources might undermine time sharing and compromise driving safety. We'll be able to better understand how speech-based interaction might undermine safety if we set aside theories about time sharing and focus instead on theories that see the brain as a single-channel entity with limited capacity.
A long-term goal of telematics human factors engineers is the establishment of a workload-management framework. Much effort in this regard is in progress at the University of Michigan. A workload-management component would decide what information to give the driver and when to give it. Decisions would be based on a wide range of variables, including weather, road conditions, driving speed, driver's heart rate, time of day and importance of the message. The workload manager would decide whether to allow a distraction to reach the driver.
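A minimal sketch of such a decision function appears below. The workload signals, weights and thresholds are illustrative assumptions, not the University of Michigan model.

```python
# Hypothetical workload-manager sketch; the inputs, weights and
# thresholds are illustrative assumptions only.

def estimate_workload(context):
    """Combine driving-context signals into a rough workload score (0..1)."""
    score = 0.0
    score += 0.3 if context["weather"] == "rain" else 0.0
    score += 0.3 if context["road"] == "winding" else 0.0
    score += 0.2 if context["speed_kmh"] > 100 else 0.0
    score += 0.2 if context["heart_rate_bpm"] > 100 else 0.0
    return min(score, 1.0)

def allow_message(context, importance):
    """Let a message through only when its importance outweighs the load."""
    return importance > estimate_workload(context)

# Example: an unimportant e-mail is held back in demanding conditions.
context = {"weather": "rain", "road": "winding",
           "speed_kmh": 120, "heart_rate_bpm": 95}
print(allow_message(context, importance=0.4))  # False: driver is too busy
print(allow_message(context, importance=0.9))  # True: urgent enough
```

The point of the design is that a message's importance relative to the driver's current load, not its arrival time, determines whether it reaches the driver.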
While ideas about workload management are brought to maturity, IBM is working on an Ivis that alerts the driver to pending information with a subtle tone (earcon) or by turning on a status light (driver's choice). This lets the driver decide when to listen to the message.
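A rough sketch of this "alert first, play on request" idea, with hypothetical stand-ins for the earcon, status light and speech playback:

```python
# Hypothetical sketch of an Ivis pending-message alert; the earcon,
# status-light and playback calls are stand-ins, not IBM's design.

class PendingMessageAlert:
    def __init__(self, alert_mode="earcon"):
        self.alert_mode = alert_mode   # "earcon" or "light": driver's choice
        self.queue = []

    def message_arrived(self, message):
        self.queue.append(message)
        if self.alert_mode == "earcon":
            print("(subtle tone)")      # stand-in for playing an earcon
        else:
            print("[status light on]")  # stand-in for a dashboard indicator

    def driver_requests_playback(self):
        # The driver decides when it is safe to listen.
        while self.queue:
            print("IVIS:", self.queue.pop(0))

alerts = PendingMessageAlert(alert_mode="earcon")
alerts.message_arrived("New e-mail from the office.")
alerts.driver_requests_playback()
```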