

Transportation Research Part F 18 (2013) 83–93


Risk prediction model for drivers' in-vehicle activities – Application of task analysis and back-propagation neural network

1369-8478/$ - see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.trf.2012.12.013

Yang-Kun Ou a, Yung-Ching Liu a,*, Feng-Yuan Shih b

a Department of Industrial Engineering and Management, National Yunlin University of Science and Technology, 123 Section 3, University Road, Touliu, Yunlin 640, Taiwan
b F14 MOS2, Manufacturing Division, Taiwan Semiconductor Manufacturing Limited, Tainan, Taiwan

* Corresponding author. Tel.: +886 5 5342601x5020; fax: +886 5 5312073. E-mail address: [email protected] (Y.-C. Liu).

Article info

Article history: Received 24 February 2010; Received in revised form 1 October 2012; Accepted 14 December 2012

Keywords: Back-propagation neural network (BPNN); Driving simulator; Driver's behavior; Glance time; Movement time; Risk model; Task analysis

Abstract

This study aims to develop a risk prediction model for in-vehicle tasks performed by drivers by using two methods: task analysis (TA) and back-propagation neural networks (BPNNs). Sixty-six volunteers were divided into two groups with different in-vehicle secondary tasks (traditional vs. in-vehicle information system/IVIS) and participated in a driving experiment simulating low/high driving load road conditions. We assessed driving performance (i.e., longitudinal velocity and lateral acceleration variance), hand movements (i.e., number of movements and movement durations), visual judgment behaviors (i.e., glance duration and glance frequency), and response time. Task analysis results allowed us to generate input and output variables for further BPNN modeling. The overall risk prediction accuracy rate of our model was as high as 60%. In addition, an analysis of variable importance demonstrated that longitudinal velocity was the most important variable in predicting traditional in-vehicle tasks, whereas the number of glances was the most important variable for predicting IVIS in-vehicle tasks. This study may help researchers better understand safety considerations related to in-vehicle secondary tasks and in-vehicle interface design.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

In order to reduce the risk of accidents, drivers should not engage in any unrelated activities that may be distracting, unless these activities affect the driver in a positive way, for example, by keeping the driver alert on a monotonous country road with low stimulation. However, in reality, almost all drivers engage, to a greater or lesser extent, in activities that are not related to controlling their vehicle (e.g., adjusting the air conditioner, changing the radio channel). In addition, in-vehicle information systems (IVIS) are now common and are used by many drivers (e.g., to read or input navigational information). Drivers have become accustomed to these in-car actions, and because they are usually not linked with immediate and significant danger, drivers often ignore any associated risk. Anttila and Luoma (2005) found that, in an urban environment, performing surrogate in-vehicle information system (S-IVIS) tasks affected driving performance more during left turns than right turns. The visual task added unnecessary waiting time when there was no other vehicle. In addition, some drivers accepted short gaps when engaged in the visual task, creating potential danger. These results are not limited to the urban environment: drivers performing visual tasks also showed shorter time-to-collision and reduced anticipation of braking requirements in a rural environment. These results illustrate the potential dangers of visual in-vehicle tasks (Jamson & Merat, 2005). Such secondary tasks inevitably increase the driver's glance duration and result in less attention on the road centre area ahead, thus distracting the driver (Victor, Harbluk, & Engstrom, 2005). In light of this, the effect of these secondary tasks on driver performance is the focus of our investigation.

Inattention during driving and driver distraction account for a substantial number of traffic accidents (Dingus et al., 2006; Klauer, Dingus, Neale, Sudweeks, & Ramsey, 2006; Wierwille, 1995). The National Highway Traffic Safety Administration (NHTSA) and the Virginia Tech Transportation Institute (VTTI) reported that nearly 80% of all crashes and 65% of all near-crashes involved driver inattention just prior to (i.e., within 3 s of) the onset of the conflict. Prior estimates of driver inattention as a contributing factor had been in the range of 25% of all crashes. Inattention was a contributing factor in 93% of rear-end-striking crashes (Dingus et al., 2006). In Taiwan, statistics show that most traffic accidents are caused by inappropriate driving behaviors, and one of the main causes of accidents is drivers not paying attention to the road ahead (MOTC (Ministry of Transportation), 2008). IVIS units can often occupy already-limited driver attention resources, increase driver workload, and thus increase the risk of traffic accidents (Collins, Biever, Dingus, & Neale, 1999; NHTSA, 2000; Stutts, Reinfurt, Staplin, & Rodgman, 2001; Wickens & Hollands, 2002).

Most research on drivers' attention resource requirements and workload has analyzed drivers' visual behavior and driving behavior (Dingus, Antin, Hulse, & Wierwille, 1989). In visual behavior analysis, task analysis involving "glance duration" and "glance frequency" metrics to assess driver attention is a widely adopted and accepted approach (ISO (International Standards Organization), 1995). This is appropriate because driving is a highly vision-dependent task (Hills, 1980). With regard to task analysis, in accordance with the working activity, Arnaut and Greenstein (1990) divided movement time into three types: (1) gross movement time, (2) fine adjustment time, and (3) total movement time. In our study, we used these three types of movement time to analyze hand activities.

The duration of a driver's glance is related to the task to be completed, and the average glance lasts 1.25 s (Rockwell, 1987). When the task is difficult, a driver will compensate by increasing glance frequency, even though the duration of individual glances remains quite uniform (Bhise, Forbes, & Farber, 1986). Dingus et al. (1989) reported that "total glance time" increased or decreased in relation to the difficulty of the task. For example, reading speed required 0.78 s, whereas deciding which way to turn and finding the road name when using a route guidance system required 10.63 s. Drivers do not necessarily capture the information they need to complete tasks with a single glance but adopt a "sampling" strategy, glancing back and forth at the same information a number of times. As road complexity increases, drivers complete a task by gradually increasing their glance frequency and shortening their glance duration. When a driver's gaze leaves the road for one second, the driver's driving behavior (e.g., lane position keeping) is affected. If a glance away from the road lasts longer than 2 s, the vehicle makes an obvious lane deviation, which underscores the importance of keeping the driver's eyes on the road; even within 2 s of distraction, the driving risk increases significantly (French, 1990; Zwhalen, Adams, & DeBald, 1988).

Driving behaviors, including both longitudinal control measures (e.g., speed variance and mean speed) and lateral control measures (e.g., variances of the lateral acceleration, lateral lane position, and steering wheel angle), are valid bases for evaluation and have been widely used in studying drivers' attention resources. McDonald and Hoffman (1980) found that if a driver makes a single steering wheel reversal of more than six degrees, his or her attention is very likely overloaded. In addition, a driver's attention can be treated as overloaded if he or she exhibits excessive lateral acceleration variance (Dingus et al., 1989, 1997), a lowered mean vehicle speed (Srinivasan & Jovanis, 1997), large lateral lane position variance (Liu, 2000), wide variation in the steering wheel angle (Liu, 2001, 2003), or large vehicle speed variance (Antin, Dingus, Hulse, & Wierwille, 1990).

Drivers' glancing and driving behaviors both reflect the attention load requirement when drivers carry out non-driving secondary tasks, and significant research has either focused on one or used both as indicators of driver attention load. However, glancing behavior and driving behavior should exhibit a before-and-after relationship in reflecting driver attention requirements while carrying out in-vehicle secondary tasks, as visual behavior should logically occur before driving behavior. Consequently, establishing a relationship model for these two types of behavior will help with evaluating and predicting the negative impact on driver safety of carrying out non-driving in-vehicle tasks.

Previous studies have modeled driving behavior using stochastic approaches, in which developing a suitable driving behavior model requires a probabilistic method (e.g., the hidden Markov model [HMM] and Bayesian networks) (Kumagai & Akamatsu, 2004; Liu & Salvucci, 2001). However, these are cumbersome statistical models that necessarily require many hypotheses (e.g., regarding multicollinearity and population distribution). If the hypotheses are wrong, then the assessment of an incident's results is very likely to be wrong as well (Lewis, 2000). Those restrictions often make the probabilistic driving model impractical. By contrast, the artificial neural network (ANN) approach does not require many hypothetical assumptions. In addition, ANNs use input and output variables to learn the internal correspondence rules between them. The ANN method seems to be an effective tool; it has been used in different fields of research (Hagan, Demuth, & Beale, 1996), and it is gradually being applied to transport problems.

Yang, Kitamura, Jovanis, Vaughn, and Abdel-Aty (1993) adopted the ANN to analyze driver route choice behavior, first using an experimental design to collect driver route choice data over 32 days and then building an ANN model to predict route choice behavior when a driver used an advanced traveler information system. ANNs have also been used to create a travel model to compare the requirements of male and female travelers (Shmueli, Salomon, & Shefer, 1996). In combination with fuzzy functions, this approach has been employed to investigate discrete choice behavior, with car and train time, cost, gender, and age data used to build an effective prediction model (Vythoulkas & Koutsopoulos, 2003). Chang (2005) used road accidents, road information, and season and weather parameters to develop an ANN model to predict the seriousness of road accidents. Results confirmed that the aforementioned ANN models were very accurate when applied to this kind of non-linear prediction scenario. Using the ANN model, the overall accuracy was at least 70% for correct prediction of intersection accidents (Abdel-Aty & Pande, 2005); ANNs can thus be effectively applied in accident analysis (Dia & Rose, 1997; Mussone, Ferrari, & Oneta, 1999) to determine the relationship between the seriousness of an accident and environmental factors (Sohn & Lee, 2003).

As a result of the ANN's effectiveness in building prediction models, a substantial amount of research has been conducted into using ANNs to predict driver behavior and traffic accidents (Dougherty, 1995). Similarly, our research effort aims to develop driver risk prediction models for drivers who perform different types of in-vehicle secondary tasks.

2. Methods

2.1. Participants

Sixty-six volunteers, each with a valid driver's license, were divided into two groups (traditional in-vehicle task group: 15 males and 15 females; IVIS in-vehicle task group: 22 males and 14 females). Participants were all between 24 and 35 years of age and had to meet requirements for vision (at least 0.8, or 0.8 after correction), hearing (able to communicate with experimenters while the simulator vehicle was traveling at a speed of 90 km/h), and color acuity (able to pass the Ishihara color card blindness test). None of the participants had previous experience in the use of a driving simulator or a head-up display (HUD). Upon completion of the experiment, participants were paid US $10.

2.2. Apparatus

2.2.1. Driving simulator
An interactive STI® low-cost, fixed-base driving simulator (developed by Systems Technology, Inc., Hawthorne, CA, USA) was used in this study. The simulated vehicle cab, a VOLVO 340 DL, featured all standard automotive displays and controls (steering, brakes, and accelerator) found in a vehicle with automatic transmission. Different driving scenarios were projected onto a 200 cm (L) × 150 cm (W), i.e., 100-in., screen located 3 m in front of the driver, with sound effects of vehicles in motion broadcast by a set of two-channel amplifiers and speakers.

2.2.2. Head-up display
Driving-related information, such as speed, and task instructions from the experimenter were projected on the HUD located 3.1 m in front of the driver. The vertical projection angle was between 6° and 12° below the driver's horizontal line of vision, and the HUD area measured about 32 (W) × 22 (H) cm (approximately 15 in.²). The display resolution was 800 × 600 dpi, and the presentation font (icon) size was 10 × 10 cm (approximately 1.8°).
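As a quick check on the display geometry reported above, the visual angle subtended by the 10 × 10 cm HUD icon at the 3.1 m projection distance can be computed with the standard 2·atan(size / (2·distance)) formula; the snippet below is only an illustrative calculation (not part of the original apparatus) and reproduces the roughly 1.8° figure quoted in the text.

```python
import math

def visual_angle_deg(size_m: float, distance_m: float) -> float:
    """Visual angle (degrees) subtended by an object of a given size
    viewed at a given distance: 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# 10 cm x 10 cm HUD icon viewed from 3.1 m
print(round(visual_angle_deg(0.10, 3.1), 2))  # ~1.85 degrees, consistent with the ~1.8 deg quoted above
```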

2.2.3. Video cameras
Four video cameras installed in the simulator were used to record the drivers' movements during the entire experiment. The first video camera, which recorded drivers' eye movements toward the dashboard, was located on top of the dashboard panel. The second camera recorded drivers' right-hand activities and was located at the height of the driver's right shoulder when seated in the driver's seat. The third camera recorded drivers' left-hand activities and was located at the height of the driver's left shoulder. The fourth camera recorded drivers' views of the road and was located at the driver's eye height on the windshield.

2.3. Driving scenario descriptions

The driving environment scenario was developed using STI SDL (Scenario Definition Language) V.8.0 and was divided into low/high driving load conditions. The driving load condition was manipulated using the factors considered by Liu (2001) (i.e., lane width, number of sharp curves, speed limits, number of intersections, density of roadside buildings, and location of roadside buildings). Participants were asked to drive 45 km at a speed limit of 90 km/h on the high driving load road. Under the low driving load condition, they were asked to drive 30 km at a speed limit of 60 km/h. Each scenario took approximately 30 min to complete.

2.4. In-vehicle information system (IVIS) display

The IVIS display used in this study was based on the design guidelines proposed by Green (1996). As shown in Fig. 1, the IVIS display layout was divided into five information blocks: traffic sign (upper left), navigation information (upper right), warning sign (lower left), current vehicle speed (lower middle), and audible announcements (lower right). The figure is an example showing the full information content of the display. Usually the driver perceives only the navigation and vehicle speed information; for road sign and road condition warning information, the display notifies the driver at the appropriate time.

Fig. 1. Example of a screen layout sketch of an IVIS display with full information content. The display includes five timely information-presenting areas, from left to right and top to bottom: the road sign information (e.g., one-way road), the navigation-related information (e.g., go straight and cross two streets, the compass north, the distance of 1.2 km and the time of 1 min to the next turn), the road condition monitoring information (e.g., sharp curvy road ahead), the current vehicle speed (e.g., 33 km/h), and a speaker icon indicating that aural information is coming.

2.5. Tasks

Tasks were divided into two types: driving tasks and in-vehicle tasks.

2.5.1. Driving tasks
Participants were asked to drive within the simulated road environment, complying with all traffic rules and driving within the speed limit. Participants had to observe the speed limit (high driving load road: normal – 90 km/h, curve – 60 km/h; low driving load road: normal – 60 km/h, curve – 40 km/h).

2.5.2. In-vehicle tasks
The first set of in-vehicle tasks referred to normal in-car activities (e.g., turning on the air conditioner and switching the radio channel), which we call "traditional in-vehicle tasks"; the other task set simulated the driver's use of the in-vehicle information system and the in-vehicle behaviors it requires (e.g., looking at the navigation display and paying attention to warning information). We term this segment the "IVIS in-vehicle tasks."

(1) Traditional in-vehicle tasks

The traditional in-vehicle tasks included hand movement tasks and visual judgment tasks (a total of 38 traditional in-vehicle tasks were designed for this study). For the visual judgment tasks, participants only paid attention to the targets and verbally responded to prompts (e.g., speed observation, mileage observation). For the hand movement tasks, in addition to searching for targets, participants also moved their hands (e.g., repositioning the safety belt and adjusting the rear-view mirror). These tasks appeared within 30–60 s of each other in random order, and each task appeared only once within each road environment. Participants were asked to hold the steering wheel with both hands before being informed of the task. To guarantee consistency and appropriate driver compliance with the traditional in-vehicle tasks, every target (i.e., the knob, button, or other control device) was labeled with a yellow sticker to make it clear, for the purposes of (1) making sure that each subject reached the same position and distance and (2) easily studying the driver's movements while conducting the motion analyses.

(2) IVIS in-vehicle tasks

The IVIS in-vehicle tasks included a traffic sign search task and a navigation task. In the traffic sign search task, participants were first notified by the HUD that an intersection lay approximately 1 km ahead; when participants reached the intersection, they were required to respond verbally to the prompt. In the navigation task, the experimenter asked participants to give the name of the next road; participants were required to search the HUD and verbally respond to the question. There were eight IVIS in-vehicle tasks in total, with each task appearing four times randomly within each driving load road scenario.

2.6. Experimental design

The research consisted of a 2 (high/low driving load, within-subjects) × 2 (traditional in-vehicle/IVIS in-vehicle tasks, between-subjects) mixed-factorial experiment. In this study, we counterbalanced "high driving load" against "low driving load," testing two orders: (1) "high driving load" first, then "low driving load," and (2) "low driving load" first, then "high driving load." The movement tasks required within the traditional and IVIS conditions appeared in random order, and the subjects were also randomly assigned their participation sequence.

2.7. Procedures

Participants first had to meet the vision and hearing requirements. Then the experimenter played a pre-recorded 10-min introduction video for briefing purposes. After signing an informed consent document, participants were permitted approximately 5 min of practice driving to become familiar with the simulator controls, road environment, and the aforementioned tasks. Once the experiment began, there was a 5-min break between each road condition change.

2.8. Data collection

Drivers' movements while performing the in-vehicle tasks were recorded by the cameras and were analyzed afterwards with the Observer® 5.0 software to obtain the number of movements and movement times, visual judgment behaviors, and response times. The number of movements was a simple count of total movements, from when a driver's hand began to move, reached the target object, and completed the task, to when it moved back to the steering wheel. Relevant movement times, following Arnaut and Greenstein (1990), were divided into three types: (1) gross movement time (in s): from the time the participant's hand begins to move until it reaches the target object; (2) fine adjustment time (in s): from the time the participant's hand touches the target object, through task completion, until it leaves the target object; and (3) total movement time (in s): the sum of times (1) and (2).

Relevant visual behaviors included single glance duration (in s), number of glances (total count), and total glance time (in s, comprising the total time from when the participant's eyes move to the target until task completion). The response time (in s) measured each participant's reaction performance from the time the target traffic sign appeared to when the verbal answer was given. The "traffic sign target" was used in one of the IVIS tasks to measure the driver's response time. The task situation was as follows: while driving on the road, a traffic sign icon appeared on the head-up display, which the driver needed to detect; thereafter, a real traffic sign posted on the roadside 1000 m ahead appeared on the HUD. The driver was asked to respond to the roadside traffic sign verbally as soon as he or she recognized it. In this way, the response time measured each participant's reaction performance from the onset of the task (traffic sign search task) to the time when the verbal answer was given.
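These timing measures follow directly from the coded video events. Purely as an illustration (the authors used the Observer 5.0 package; the event format below is a hypothetical simplification, not its actual output), the following sketch derives the gross, fine, and total movement times and the three glance measures from time-stamped annotations.

```python
from typing import List, Tuple

def movement_times(hand_start: float, touch_target: float, leave_target: float):
    """Gross, fine, and total movement time (s) per Arnaut and Greenstein (1990),
    from three coded time stamps: hand starts moving, hand touches the target,
    and hand leaves the target after completing the task."""
    gross = touch_target - hand_start
    fine = leave_target - touch_target
    return gross, fine, gross + fine

def glance_measures(glances: List[Tuple[float, float]]):
    """Number of glances, mean single glance duration (s), and total glance time (s)
    from a list of (start, end) intervals during which the eyes are on the target."""
    durations = [end - start for start, end in glances]
    n = len(durations)
    mean_single = sum(durations) / n if n else 0.0
    return n, mean_single, sum(durations)

# Hypothetical example: one radio-tuning task coded from video
print(movement_times(12.0, 12.7, 14.3))               # (0.7, 1.6, 2.3)
print(glance_measures([(12.1, 12.9), (13.4, 14.0)]))  # (2, 0.7, 1.4)
```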

In this study, we collected the driver's driving performance data three times per second. The unit for speed is m/s; for acceleration, m/s²; and for steering wheel angle variance, degrees. Longitudinal acceleration is defined as the linear acceleration of the vehicle along its longitudinal (X) axis. Lateral acceleration is defined as the linear acceleration of the vehicle along its lateral (Y) axis. The steering wheel angle variance is the variation in the angular movements of the steering wheel made by the driver while driving.
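Given the 3 Hz sampling, the variance-type driving measures can be computed per task window from the logged series. The sketch below is a generic reconstruction with numpy; the column layout and sample data are assumptions, not the simulator's actual export format.

```python
import numpy as np

SAMPLE_RATE_HZ = 3  # driving performance logged three times per second

def driving_measures(speed_mps, lat_accel_mps2, steer_deg):
    """Summary measures for one task window: mean longitudinal velocity,
    lateral acceleration variance, and steering wheel angle variance."""
    speed = np.asarray(speed_mps)
    lat = np.asarray(lat_accel_mps2)
    steer = np.asarray(steer_deg)
    return {
        "mean_speed_mps": speed.mean(),
        "lat_accel_var": lat.var(ddof=1),
        "steer_angle_var": steer.var(ddof=1),
    }

# Hypothetical 10 s window (30 samples at 3 Hz)
rng = np.random.default_rng(0)
window = driving_measures(
    speed_mps=24 + rng.normal(0, 0.5, 30),
    lat_accel_mps2=rng.normal(0, 0.3, 30),
    steer_deg=rng.normal(0, 1.2, 30),
)
print(window)
```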

3. Results

3.1. Results of the task analysis

Table 1 shows the mean and standard deviation values for driver performance while performing in-vehicle tasks. Traditional in-vehicle tasks were divided into hand movements, visual judgments, and related driving performance. For the IVIS in-vehicle tasks, hand movements were substituted with response times.

3.2. Development of back-propagation neural network (BPNN) risk prediction models

The neural network model was divided into three layers: an input layer, a hidden layer, and an output layer. The input variables can be either numerical values or binary codes. The hidden layer determines the weights on the connections between the input and the hidden layer. There are no particular rules for determining the number of neurons in the hidden layer; it is generally determined through experimentation, with the range being from half to two or three times the number of input variables (Berry & Linoff, 1997). The output layer presents the network output as determined by the experimenter.
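The Berry and Linoff (1997) rule of thumb quoted above (from roughly half the number of input variables to two or three times that number) can be written as a small helper that lists candidate hidden-layer sizes; this is only an illustration of the heuristic, not the procedure the authors report.

```python
def hidden_layer_candidates(n_inputs: int, max_multiple: int = 3):
    """Candidate hidden-layer sizes from roughly half the input count
    up to max_multiple times the input count (Berry & Linoff heuristic)."""
    lower = max(1, n_inputs // 2)
    upper = n_inputs * max_multiple
    return list(range(lower, upper + 1))

print(hidden_layer_candidates(12))  # traditional-task model: 6 .. 36 hidden neurons
print(hidden_layer_candidates(9))   # IVIS-task model: 4 .. 27 hidden neurons
```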

Table 1
Driver in-vehicle task analysis results.

                                                       High driving load                 Low driving load
Measure                                                Mean    S.D.    Max.     Min.     Mean    S.D.    Max.     Min.

Traditional in-vehicle task – Hand movements
  Number of movements                                  4.03    1.63    9.00     2.00     4.03    1.63    9.00     2.00
  Gross time                                           0.68    0.47    6.90     0.19     0.74    1.36    44.00    0.18
  Fine time                                            1.58    2.23    28.33    0.03     1.52    1.95    21.46    0.01
  Total task time                                      2.26    2.34    29.32    0.39     2.26    2.41    44.38    0.41
Traditional in-vehicle task – Visual judgment
  Number of glances                                    1.05    0.67    6.00     0.00     1.12    0.71    5.00     0.00
  Single glance time                                   0.70    1.25    33.60    0.00     0.81    0.63    4.27     0.00
  Total visual task time                               1.79    2.00    33.60    0.00     1.80    1.73    19.20    0.00
Traditional in-vehicle task – Driving performance
  Longitudinal velocity                                78.44   9.71    115.74   42.46    54.89   8.08    119.99   27.85
  Longitudinal acceleration variance                   3.54    1.04    5.90     0.08     1.03    0.90    7.61     0.02
  Lateral acceleration variance                        1.18    1.58    10.62    0.005    0.79    0.70    5.98     0.01
  Longitudinal acceleration due to use of throttle     1.17    1.08    4.82     0.00     0.98    0.86    4.28     0.00
  Steering wheel angle                                 1.58    1.59    15.29    0.00     1.19    1.33    20.18    0.00
IVIS in-vehicle task – Hand movements
  Response time                                        2.96    1.54    12.57    0.73     2.83    1.44    8.52     0.63
IVIS in-vehicle task – Visual judgment
  Number of glances                                    4.95    3.31    17.00    1.00     5.58    4.05    17.00    0.00
  Single glance time                                   1.20    0.76    6.07     0.27     1.54    0.89    6.03     0.00
  Total visual task time                               9.08    5.13    19.73    0.87     11.88   7.56    29.47    0.00
IVIS in-vehicle task – Driving performance
  Longitudinal velocity                                85.25   4.76    115.64   65.79    60.50   3.56    76.87    48.96
  Longitudinal acceleration variance                   0.33    0.56    4.34     0.00     0.30    0.72    4.29     0.00
  Lateral acceleration variance                        1.88    1.20    7.11     0.00     0.86    1.32    8.00     0.00
  Longitudinal acceleration due to use of throttle     0.43    0.84    4.51     0.00     0.30    0.72    4.30     0.00
  Steering wheel angle                                 1.45    0.84    4.72     0.00     1.16    0.78    4.33     0.00


Bailey and Thompson (1990) concluded that the back-propagation neural network (BPNN) offers the highest accuracy.

3.2.1. Construction of BPNN models
The Clementine Client 12® software package was used to develop the BPNN driving behavior risk model. The relevant BPNN training and testing processes were structured as shown in Fig. 2. To improve the BPNN model performance and avoid over-fitting problems, the data were randomly divided into five groups: four were randomly chosen as the training groups, and the one left out made up the testing group. Once that group was tested, that group and three other randomly chosen groups were selected as the training groups, and again the group left out served as the testing group. This process was repeated until all the groups had been tested.
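The authors built the networks in Clementine Client 12; purely to illustrate the five-fold rotation described above, the sketch below reproduces the same train/test scheme with a scikit-learn multilayer perceptron as an assumed stand-in, using the learning rate and sigmoid transfer function mentioned later in this section.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

def five_fold_bpnn(X, y, hidden_neurons=15, epochs=500):
    """Five-fold rotation: each group serves once as the test set while the
    remaining four groups are used for training (as in Fig. 2)."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        net = MLPClassifier(hidden_layer_sizes=(hidden_neurons,),
                            activation="logistic",   # sigmoid transfer function
                            solver="sgd",            # classic gradient-descent back-propagation
                            learning_rate_init=0.3,  # learning rate reported in the paper
                            max_iter=epochs)
        net.fit(X[train_idx], y[train_idx])
        scores.append(net.score(X[test_idx], y[test_idx]))
    return np.mean(scores)

# Hypothetical stand-in data: 500 task observations, 12 input variables, 3 risk classes
X = np.random.rand(500, 12)
y = np.random.randint(1, 4, size=500)
print(five_fold_bpnn(X, y))
```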

The input and output parameters used in this BPNN model are shown in Table 2. The input variables comprise 12 neurons representing the traditional in-vehicle task factors and 9 neurons representing the IVIS in-vehicle task factors. Among the input variables, the driving load was defined as a categorical variable (0 or 1), and the other input variables were numerical. The output variable was defined by three categories representing three risk conditions associated with driving behavior (i.e., safe, cautious, and hazardous), defined in terms of standard deviations. We used the concept of a control chart, borrowed from statistical quality control, to define the level of crash risk and thus determine which of the three conditions the driving fell under. With the control chart divided into a center line, an upper line, and a lower line, the steering wheel angle varies within one standard deviation of the center line under safe driving conditions, between one and two standard deviations of the center line under cautious conditions, and by more than two standard deviations under hazardous conditions.
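The control-chart labelling can be made explicit in code. The sketch below is an illustrative reconstruction (the paper gives no code): each observation is assigned to the safe, cautious, or hazardous class by comparing its steering wheel angle deviation from the centre line against one and two standard deviations.

```python
import numpy as np

def risk_label(steer_angle, center, sigma):
    """Control-chart style risk class for one observation:
    1 = safe (within 1 sigma of the centre line),
    2 = cautious (between 1 and 2 sigma),
    3 = hazardous (beyond 2 sigma)."""
    deviation = abs(steer_angle - center)
    if deviation <= sigma:
        return 1
    if deviation <= 2 * sigma:
        return 2
    return 3

steer = np.array([1.1, 2.9, 6.4, 0.2])       # hypothetical steering wheel angles (deg)
center, sigma = steer.mean(), steer.std(ddof=1)
print([risk_label(s, center, sigma) for s in steer])
```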

A learning rate of 0.3 and the sigmoid transfer function were set in the BPNN models. We tested several numbers of learning cycles (100, 200, 300, 400, 500, and 600) in the BPNN risk models. Because the effectiveness of the BPNN training algorithm depends on the number of neurons in the hidden layer, various numbers of hidden neurons (5, 10, 15, 20, 25, and 30) were tested. The mean squared error (MSE) and R squared are the measures most commonly used to evaluate neural network forecasting ability (Campolo, Andreussi, & Soldati, 1999; Lion, Lim, & Paudyal, 2000). Because the MSE squares the errors, it gives more weight to larger errors; R squared is the coefficient of determination, as in multiple regression.

The mean squared error (MSE), as defined in Eq. (1), was used to evaluate our training and testing results (Hagan et al., 1996):

MSE = \frac{1}{N - K} \sum_{i=1}^{N} (y_a - y_p)^2 \qquad (1)

where y_a is the steering wheel output value, y_p is the model output value, K is the number of output neurons, and N is the number of observation data points. The coefficient of determination is given by Eq. (2):

R^2 = 1 - \frac{\sum (y_p - y_a)^2}{\sum (y_a - \bar{y}_a)^2} \qquad (2)

where \bar{y}_a is the mean of the observed y values.
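Equations (1) and (2) translate directly into code; the small functions below (numpy, illustrative only) mirror these definitions for scoring training and testing results.

```python
import numpy as np

def mse(y_actual, y_pred, n_outputs=1):
    """Mean squared error per Eq. (1): sum of squared errors divided by N - K,
    where N is the number of observations and K the number of output neurons."""
    y_actual, y_pred = np.asarray(y_actual), np.asarray(y_pred)
    return np.sum((y_actual - y_pred) ** 2) / (len(y_actual) - n_outputs)

def r_squared(y_actual, y_pred):
    """Coefficient of determination per Eq. (2)."""
    y_actual, y_pred = np.asarray(y_actual), np.asarray(y_pred)
    ss_res = np.sum((y_pred - y_actual) ** 2)
    ss_tot = np.sum((y_actual - y_actual.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Hypothetical observed vs. predicted values
y_true = [1.0, 2.0, 3.0, 2.0]
y_hat = [1.1, 1.8, 2.9, 2.2]
print(mse(y_true, y_hat), r_squared(y_true, y_hat))
```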

Fig. 2. Illustration of the BPN training and analysis processes. I. The data were randomly split into five groups. II. One group was selected to represent the test set, with the remaining four used for training. III. The optimal learning cycles for the BPN training model were found through experimentation.

Table 2
Input (x) and output (y) ANN parameters.

Code    Traditional in-vehicle tasks                             IVIS in-vehicle tasks                                    Categorical/numerical code
X1      Hand movements                                           –                                                        Numerical value
X2      Gross time                                               –                                                        Numerical value
X3      Fine time                                                –                                                        Numerical value
X4      Total task time                                          Response time                                            Numerical value
X5      Driving load                                             Driving load                                             1 for high driving load, 0 for low driving load
X6      Number of glances                                        Number of glances                                        Numerical value
X7      Single glance time                                       Single glance time                                       Numerical value
X8      Total visual task time                                   Total visual task time                                   Numerical value
X9      Longitudinal velocity                                    Longitudinal velocity                                    Numerical value
X10     Longitudinal acceleration variance                       Longitudinal acceleration variance                       Numerical value
X11     Lateral acceleration variance                            Lateral acceleration variance                            Numerical value
X12     Longitudinal acceleration due to use of the gas pedal    Longitudinal acceleration due to use of the gas pedal    Numerical value
y1      Risk condition                                           Risk condition                                           1 for safe condition, 2 for cautious condition, 3 for hazardous condition


We concluded that the optimal number of neurons in the hidden layer was 15 for the traditional in-vehicle task, with the optimal solution obtained after 500 learning cycles. For the IVIS in-vehicle tasks, the optimal number of neurons in the hidden layer was 5, and the best solution was found after 300 learning cycles. Table 3 shows the results.

In selecting the required number of training cycles for the ANN, we relied on whether the training results were stable.

Table 3
Prediction performance of the BPN models.

                   Traditional in-vehicle tasks (network 12-15-1)             IVIS in-vehicle tasks (network 9-5-1)
Learning cycles    Training MSE    Training R2    Testing MSE    Testing R2    Training MSE    Training R2    Testing MSE    Testing R2
100                0.298           0.288          0.289          0.319         0.199           0.567          0.269          0.396
200                0.341           0.185          0.315          0.260         0.183           0.602          0.239          0.463
300                0.343           0.180          0.335          0.213         0.168           0.635          0.183          0.590
400                0.343           0.181          0.333          0.217         0.173           0.624          0.270          0.393
500                0.229           0.453          0.250          0.411         0.167           0.637          0.267          0.401
600                0.341           0.184          0.335          0.211         0.186           0.595          0.282          0.368

Note: The optimal settings and solutions (italicized in the original) are 500 learning cycles for the traditional-task model and 300 learning cycles for the IVIS-task model. Network code 12-15-1 denotes 12 input neurons, 15 neurons in the hidden layer, and one output neuron.

Table 4
Testing results of the ANN used five times.

Traditional in-vehicle tasks
          Learning cycles:    100      200      300      400      500      600
BPNN-1    MSE                 0.295    0.270    0.265    0.273    0.257    0.289
          R2                  0.307    0.366    0.377    0.357    0.398    0.319
BPNN-2    MSE                 0.282    0.273    0.280    0.267    0.250    0.254
          R2                  0.336    0.358    0.342    0.372    0.411    0.402
BPNN-3    MSE                 0.260    0.271    0.264    0.259    0.251    0.261
          R2                  0.387    0.361    0.380    0.392    0.410    0.386
BPNN-4    MSE                 0.258    0.287    0.255    0.258    0.255    0.251
          R2                  0.393    0.325    0.399    0.393    0.401    0.410
BPNN-5    MSE                 0.260    0.286    0.251    0.253    0.267    0.265
          R2                  0.389    0.328    0.410    0.405    0.372    0.377
Mean      MSE                 0.271    0.277    0.263    0.262    0.256    0.264
          R2                  0.362    0.348    0.382    0.384    0.398    0.379
S.D.      MSE                 0.017    0.008    0.011    0.008    0.007    0.015
          R2                  0.039    0.020    0.026    0.019    0.016    0.036

IVIS in-vehicle tasks
          Learning cycles:    100      200      300      400      500      600
BPNN-1    MSE                 0.224    0.201    0.194    0.211    0.200    0.230
          R2                  0.497    0.548    0.564    0.525    0.551    0.484
BPNN-2    MSE                 0.207    0.215    0.197    0.205    0.215    0.249
          R2                  0.535    0.517    0.559    0.541    0.517    0.440
BPNN-3    MSE                 0.207    0.191    0.192    0.221    0.210    0.222
          R2                  0.535    0.571    0.569    0.504    0.528    0.502
BPNN-4    MSE                 0.216    0.184    0.187    0.206    0.197    0.222
          R2                  0.515    0.587    0.579    0.538    0.559    0.502
BPNN-5    MSE                 0.187    0.187    0.183    0.194    0.197    0.214
          R2                  0.579    0.579    0.590    0.564    0.559    0.520
Mean      MSE                 0.208    0.196    0.191    0.208    0.207    0.227
          R2                  0.532    0.561    0.572    0.533    0.535    0.489
S.D.      MSE                 0.014    0.013    0.005    0.009    0.008    0.014
          R2                  0.031    0.028    0.013    0.020    0.019    0.031


As shown in Table 4, for the traditional in-vehicle task the learning curve reached its lowest point (the smallest MSE value) and became more stable (a much smaller standard deviation) after 500 training cycles; for the IVIS task, the MSE and standard deviation values became much smaller and more stable after 300 training cycles.

The results in Table 4 indicate that suitable prediction models were achieved for both the traditional and IVIS in-vehicle tasks. The input variables listed in Table 2 were then examined for the importance of their contributions to the prediction models. As can be seen in Table 5, all the variable importance values within each in-vehicle task model were greater than 0, indicating that each variable contributed to some degree to developing the prediction model. Notably, for predicting traditional in-vehicle task performance, the longitudinal velocity was the most important factor (variable importance: 0.26), while for predicting IVIS in-vehicle task performance, the best predictive factor was the number of glances (variable importance: 0.21).
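Clementine reports variable importance directly; one commonly used surrogate for reproducing such values is permutation importance, sketched below. This is an assumption about method for illustration only, not necessarily what the package computes internally.

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-in data: 12 input variables, 3 risk classes
X = np.random.rand(400, 12)
y = np.random.randint(1, 4, size=400)

net = MLPClassifier(hidden_layer_sizes=(15,), max_iter=500).fit(X, y)
result = permutation_importance(net, X, y, n_repeats=10, random_state=0)

# Rank variables by how much shuffling each one degrades model accuracy
for idx in np.argsort(result.importances_mean)[::-1]:
    print(f"X{idx + 1}: {result.importances_mean[idx]:.3f}")
```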

Table 5
Variable importance of the BPN models.

Traditional in-vehicle tasks                             Importance    IVIS in-vehicle tasks                                    Importance
Driving load                                             0.07          Driving load                                             0.08
Number of movements                                      0.16          –                                                        –
Gross time                                               0.03          –                                                        –
Fine time                                                0.10          –                                                        –
Total task time                                          0.06          Response time (IVIS task)                                0.13
Number of glances                                        0.11          Number of glances                                        0.21
Single glance time                                       0.03          Single glance time                                       0.09
Total visual task time                                   0.04          Total visual task time                                   0.09
Longitudinal velocity                                    0.26          Longitudinal velocity                                    0.11
Longitudinal acceleration variance                       0.05          Longitudinal acceleration variance                       0.07
Lateral acceleration variance                            0.04          Lateral acceleration variance                            0.10
Longitudinal acceleration due to use of the gas pedal    0.04          Longitudinal acceleration due to use of the gas pedal    0.12

Table 6
Classification results of the BPN model for the traditional in-vehicle tasks.

                     Actual results
Predicted results    <1σ     <2σ     >2σ     Correctly predicted (%)
<1σ                  263     8       9       93.93
<2σ                  90      27      20      19.71
>2σ                  45      4       51      51.00

Note: The diagonal cells give the numbers of correctly predicted events (italicized in the original).

Table 7
Classification results of the BPN model for the IVIS in-vehicle tasks.

                     Actual results
Predicted results    <1σ     <2σ     >2σ     Correctly predicted (%)
<1σ                  62      25      7       65.96
<2σ                  22      90      13      72.00
>2σ                  15      23      31      44.93

Note: The diagonal cells give the numbers of correctly predicted events (italicized in the original).


3.2.2. Prediction performance of the BPNN models
The classification results of the BPNN models for both the traditional in-vehicle tasks and the IVIS in-vehicle tasks are shown in Tables 6 and 7. In these tables, the "actual results" were obtained from the experimental driving behaviors, the "predicted results" are the values calculated by the BPNN model, and the diagonal cells indicate the numbers of correctly predicted outcomes. The overall prediction accuracy of the traditional in-vehicle task model was 65.96%, slightly better than that of the IVIS in-vehicle task model; the accuracy of the two task models was almost the same. In addition, the best performance of the traditional in-vehicle task model was found when predicting safe driving conditions (predictive rate of 93.93%), while in the IVIS in-vehicle task model, the best predictive performance was for the cautious driving condition (predictive rate of 72.00%), followed by the safe driving condition (predictive rate of 65.96%).
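The per-class rates and overall accuracy reported here follow from the confusion matrices in Tables 6 and 7. As a worked check using the Table 6 counts, the snippet below recovers the 93.93% rate for the safe condition and the 65.96% overall accuracy.

```python
import numpy as np

# Table 6: rows = predicted (<1 sigma, <2 sigma, >2 sigma), columns = actual
confusion = np.array([[263, 8, 9],
                      [90, 27, 20],
                      [45, 4, 51]])

per_class = confusion.diagonal() / confusion.sum(axis=1)   # correct rate per predicted class
overall = confusion.diagonal().sum() / confusion.sum()     # overall prediction accuracy

print(np.round(per_class * 100, 2))  # approx. [93.93, 19.71, 51.00]
print(round(overall * 100, 2))       # 65.96
```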

4. Discussion

Previous studies have mainly utilized probabilistic methods to develop behavior-related driver models for testing and verification (Kumagai & Akamatsu, 2004; Liu & Salvucci, 2001). However, those statistical models are complex, and a number of assumptions have to be made, making most of them rather impractical and possibly flawed (Lewis, 2000). The advantages of artificial neural networks are that they are able to process a large amount of data without requiring that many constraint hypotheses be met, they exhibit outstanding non-linear model building capability, and they can effectively handle connections between variables. However, when building a neural network model, one must be very cautious when selecting input variables, because the model's predictive ability is based on the connection between the input variables and the results. In this study, the input variables were chosen on the basis of their temporal precedence, considered from two angles. The first was based on findings reported in the literature confirming that the attention workload deteriorates first and the driver's driving behaviors and visual glance behaviors are affected afterwards, or that the driver's visual glance behavior is interrupted first and the driving behavior is affected afterwards. The other angle was based on the videos we taped of the experiment: we conducted motion studies of the sequences of the driver's movements while he or she carried out a given in-vehicle task. Based on these two considerations, we are confident that the selection of our input variables was valid.

The driving simulator adopted in this study was able to effectively and safely assess the characteristics of driver behavior (Lee, Cameron, & Lee, 2003). Data related to each driver's in-vehicle secondary tasks were collected and analyzed via the task analysis method, and the back-propagation neural network algorithm was implemented to establish risk prediction models for driving behavior. The results indicate that changes in driving performance can be effectively inferred from the behaviors that precede them when drivers carry out secondary tasks, demonstrating that the behavioral factors in those two models are logically valid. In addition, with a predictive accuracy of over 60%, slightly higher than that reported in other research (Chang & Chen, 2005; Greibe, 2003), these models offer high predictive validity.

With regard to the primary influencing factors, in the risk prediction model for everyday traditional in-car tasks, the driver's longitudinal velocity carried the greatest variable importance (0.26), whereas in the risk prediction model for in-car information system related tasks, the most important variable was the number of driver glances (0.21). Excessive glancing can affect a driver's lateral lane position variance and negatively impact driving safety (Zwhalen et al., 1988). Furthermore, when attention load is high, a driver's total glance time will increase, creating significant accident risk (Hankey, Dingus, Andrews, Hanowski, & Wierwille, 2000). This provides evidence that the two primary influential factors recruited in our models for predicting driver behavior are effective and reasonable. Moreover, the precedence order of the influence of secondary task behavior on driving behavior confirms the validity of our factor selection: the action came first, followed by its influence on driving behavior.

When evaluating in-car interfaces, our predictive models allow researchers to calculate the risks that may be produced by a certain task. Our approach requires obtaining the main factor data and analyzing the level of risk resulting from the use of a specific in-car interface. In this way, a traditionally complicated evaluation experiment can be simplified.

Future research should consider how to improve the predictive validity of the BPNN model, exploring the actual relationships between factors in more detail and more precisely. For example, by changing a given BPNN model's structure (e.g., using binary output variables) and input variable selection (e.g., character traits, mood, motivation, and weather conditions), a more accurate driver risk prediction system could be constructed.

Acknowledgement

The work described in this paper was partially funded by the National Science Council (Taiwan) under contract number NSC-92-2213-E-224-025.

References

Abdel-Aty, M., & Pande, A. (2005). Identifying crash propensity using traffic speed conditions. Journal of Safety Research, 36(1), 97–108.
Antin, J. A., Dingus, T. A., Hulse, M. C., & Wierwille, W. W. (1990). An evaluation of the effectiveness and efficiency of an automobile moving-map navigational display. International Journal of Man-Machine Studies, 18, 581–594.
Anttila, V., & Luoma, J. (2005). Surrogate in-vehicle information systems and driver behaviour in an urban environment: A field study on the effects of visual and cognitive load. Transportation Research Part F, 8, 121–133.
Arnaut, L. Y., & Greenstein, J. S. (1990). Is display/control gain a useful metric for optimizing an interface? Human Factors, 32(6), 651–663.
Bailey, D., & Thompson, D. (1990). How to develop neural-network applications. AI Expert, 5(6), 34–47.
Berry, M. J. A., & Linoff, G. (1997). Data mining techniques for marketing, sales, and customer support. New York, NY: John Wiley and Sons.
Bhise, V. D., Forbes, L. M., & Farber, E. I. (1986). Driver behavioural data and considerations in evaluating in-vehicle controls and displays. Presented at the Transportation Research Board 65th annual meeting, Washington, DC.
Campolo, M., Andreussi, P., & Soldati, A. (1999). River flood forecasting with a neural network model. Water Resources Research, 35(4), 1191–1197.
Chang, L. Y. (2005). Analysis of freeway accident frequencies: Negative binomial regression versus artificial neural network. Safety Science, 43, 541–557.
Chang, L. Y., & Chen, W. C. (2005). Data mining of tree-based models to analyze freeway accident frequency. Journal of Safety Research, 36, 365–375.
Collins, D. J., Biever, J. W., Dingus, T. A., & Neale, V. L. (1999). Development of human factors guidelines for advanced traveler information systems (ATIS) and commercial vehicle operations (CVO): An examination of driver performance under reduced visibility conditions when using an in-vehicle signing and information system (ISIS). Publication No. FHWA-RD-99-130, US Department of Transportation Federal Highway Administration.
Dia, H., & Rose, G. (1997). Development and evaluation of neural network freeway incident detection models using field data. Transportation Research Part C, 5(5), 313–331.
Dingus, T. A., Antin, J. F., Hulse, M. C., & Wierwille, W. W. (1989). Attention demand requirements of an automobile moving-map navigation system. Transportation Research, 23A, 301–315.
Dingus, T. A., Klauer, S. G., Neale, V. L., Petersen, A., Lee, S. E., Sudweeks, J., et al. (2006). The 100-Car naturalistic driving study, phase II – Results of the 100-car field experiment. Technical Report No. DOT HS 810 593. National Highway Traffic Safety Administration, Washington, DC.
Dingus, T. A., Hulse, M. C., Mollenhauer, M. A., Fleischman, R. N., McGehee, D. V., & Manakkal, N. (1997). Effects of age, system experience, and navigation technique on driving with an advanced traveler information system. Human Factors, 39, 177–199.
Dougherty, M. (1995). A review of neural networks applied to transport. Transportation Research Part C, 3(4), 247–260.
French, R. L. (1990). In-vehicle navigation – status and safety impacts (pp. 226–235). Technical Papers from ITE's 1990, 1989, and 1988 Conferences, Institute of Transportation Engineers, Washington, DC.
Green, P. (1996). In-vehicle information: Design of driver interfaces for route guidance. Paper presented at the TRB annual meeting. Washington, DC: National Academy of Sciences, Transportation Research Board.
Greibe, P. (2003). Accident prediction models for urban roads. Accident Analysis and Prevention, 34, 273–285.
Hagan, M. T., Demuth, H. B., & Beale, M. (1996). Neural network design. Boston, USA: PWS Publishing Company.
Hankey, J. M., Dingus, T. A., Andrews, C., Hanowski, R. J., & Wierwille, W. W. (2000). In-vehicle information systems behavioral model and design support: Task F, Final Report (Contract No. DTFH61-96-C-00071). Center for Transportation Research, Virginia Tech, Blacksburg, VA.
Hills, B. L. (1980). Vision, visibility, and perception in driving. Perception, 9(20), 183–216.
ISO (International Standards Organization) (1995). Road vehicle ergonomics TICS-MMI working group ISO TC22 SC13 WG8. Draft standard: Driver visual demand measurement method.
Jamson, A. H., & Merat, N. (2005). Surrogate in-vehicle information systems and driver behaviour: Effects of visual and cognitive load in simulated rural driving. Transportation Research Part F, 8, 79–96.
Klauer, S. G., Dingus, T. A., Neale, V. L., Sudweeks, J. D., & Ramsey, D. J. (2006). The impact of driver inattention on near-crash/crash risk: An analysis using the 100-car naturalistic driving study data. Technical Report No. DOT HS 810 594. National Highway Traffic Safety Administration, Washington, DC.
Kumagai, T., & Akamatsu, M. (2004). Modeling and prediction of driving behavior. In The second international symposium on measurement, analysis and modeling of human functions (pp. 357–361), Genoa, Italy.
Lee, H. C., Cameron, D., & Lee, A. H. (2003). Assessing the driving performance of older adult drivers: On-road versus simulated driving. Accident Analysis and Prevention, 35, 797–803.
Lewis, R. J. (2000). An introduction to classification and regression tree (CART) analysis. Torrance, CA: Department of Emergency Medicine, Harbor-UCLA Medical Center.
Lion, S. Y., Lim, W. H., & Paudyal, G. N. (2000). River stage forecasting in Bangladesh: Neural network approach. Journal of Computing in Civil Engineering, 14(1), 1–8.
Liu, Y. C. (2000). Effect of advanced traveler information system displays on younger and older drivers' performance. Displays, 21, 161–168.
Liu, Y. C. (2001). Comparative study of the effects of auditory, visual and multimodality displays on drivers' performance in advanced traveller information systems. Ergonomics, 44(4), 425–442.
Liu, Y. C. (2003). Effects of using head-up display in automobile context on attention demand and driving performance. Displays, 24(4–5), 157–165.
Liu, A., & Salvucci, D. (2001). Modeling and prediction of human driver behavior. In Proceedings of the 9th HCI international conference (pp. 1479–1483), New Orleans, LA.
McDonald, W. A., & Hoffman, E. R. (1980). Review of relationships between steering wheel reversal rate and driving task demand. Human Factors, 22(6), 733–739.
MOTC (Ministry of Transportation and Communications) (2008). <http://www.motc.gov.tw/mocwebGIP/wSite/mp?mp=1> Retrieved 20.04.09.
Mussone, L., Ferrari, A., & Oneta, M. (1999). An analysis of urban collisions using an artificial intelligence model. Accident Analysis and Prevention, 31(6), 705–718.
NHTSA (2000). NHTSA driver distraction expert working group meetings: Summary & proceedings. Washington, DC: National Highway Traffic Safety Administration.
Rockwell, T. H. (1987). Spare visual capacity in driving – Revisited: New empirical results for an old idea. In A. G. Gale, M. H. Freeman, C. M. Haslegrave, P. Smith, & S. P. Taylor (Eds.), Vision in vehicles II (pp. 317–324).
Shmueli, D., Salomon, I., & Shefer, D. (1996). Neural network analysis of travel behavior: Evaluating tools for prediction. Transportation Research Part C, 4(3), 151–166.
Sohn, S., & Lee, S. (2003). Data fusion, ensemble and clustering to improve the classification accuracy for the severity of road traffic accident in Korea. Safety Science, 41(1), 1–14.
Srinivasan, R., & Jovanis, P. P. (1997). Effect of in-vehicle route guidance systems on driver workload and choice of driving speed: Findings from a driving simulator. In Y. I. Noy (Ed.), Ergonomics and safety of intelligent driver interfaces: Human factors in transportation (pp. 97–114). Mahwah, NJ: Lawrence Erlbaum.
Stutts, J., Reinfurt, D., Staplin, L., & Rodgman, E. (2001). The role of driver distraction in traffic crashes. Washington, DC: AAA Foundation for Traffic Safety.
Victor, T. W., Harbluk, J. L., & Engstrom, J. A. (2005). Sensitivity of eye-movement measures to in-vehicle task difficulty. Transportation Research Part F, 8, 167–190.
Vythoulkas, P. C., & Koutsopoulos, H. N. (2003). Modeling discrete choice behavior using concepts from fuzzy set theory, approximate reasoning and neural networks. Transportation Research Part C, 11(1), 51–73.
Wickens, C. D., & Hollands, J. G. (2002). Engineering psychology and human performance. NJ: Prentice-Hall Inc., pp. 447–454.
Wierwille, W. W. (1995). Development of an initial model relating driver in-vehicle visual demands to accident rate. In Proceedings of the third annual mid-Atlantic human factors conference (pp. 1–7). Blacksburg, VA: Virginia Polytechnic Institute and State University.
Yang, H., Kitamura, R., Jovanis, P., Vaughn, K. M., & Abdel-Aty, M. A. (1993). Exploration of route choice behavior with advanced travel information using neural network concept. Transportation, 20, 199–223.
Zwhalen, H. T., Adams, C. C., Jr., & DeBald, D. P. (1988). Safety aspects of CRT touch panel controls in automobiles. In A. G. Gale, M. H. Freeman, C. M. Haslegrave, P. Smith, & S. P. Taylor (Eds.), Vision in vehicles II (pp. 335–344). Amsterdam: Elsevier.