Rater Reliability to Assess Driving Errors in a Driving Simulator

Published: June 1st, 2015

Category: Peer Reviewed Publications

Classen, S., Yarney, A., Monahan, M., Platek, K., & Lutz, A. (2015). Rater reliability to assess driving errors in a driving simulator. Advances in Transportation, Section B 36, 99-108.

Synopsis:

Inter-rater reliability is essential for studies that rely on rater judgments. This study measured inter-rater reliability in scoring simulated driving assessments across two rounds, with one primary and two secondary raters. Because the raters did not reach the 90% intra-class correlation cut point after round one, the secondary raters received further training. After training, inter-rater reliability improved significantly, indicating that identifying discrepancies between raters and applying strategies such as problem identification can substantially improve agreement.
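As an illustration of the kind of statistic involved, the sketch below computes a two-way ICC(2,1) (absolute agreement, single rater) from a subjects-by-raters score matrix. This is one common ICC model; the specific model and software used in the study are not stated in the synopsis, so treat the function name and formula choice here as assumptions.

```python
def icc2_1(ratings):
    """Two-way random-effects ICC(2,1), absolute agreement, single rater.

    ratings: list of rows, one row per subject, one column per rater.
    (Hypothetical helper for illustration; not the study's actual code.)
    """
    n = len(ratings)      # number of subjects
    k = len(ratings[0])   # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

    # Partition the total sum of squares into subject, rater, and error terms.
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((ratings[i][j] - grand) ** 2
                   for i in range(n) for j in range(k))
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)             # between-subjects mean square
    msc = ss_cols / (k - 1)             # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)


# Perfect agreement between two raters yields an ICC of 1.0.
print(icc2_1([[1, 1], [2, 2], [3, 3]]))  # → 1.0
# A constant offset between raters lowers absolute-agreement ICC below 1.
print(icc2_1([[1, 2], [2, 3], [3, 4]]))
```

Against a 90% cut point, raters whose scores drift apart (as in the second example) would fail the criterion and, as in the study, prompt retraining.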