Examining Design and Inter-Rater Reliability of a Rubric Measuring Research Quality across Multiple Disciplines

Marilee J. Bresciani, San Diego State University - Imperial Valley Campus
Megan Oakleaf, Syracuse University
Fred Kolkhorst, San Diego State University
Camille Nebeker, San Diego State University
Jessica Barlow, San Diego State University
Kristin Duncan, San Diego State University
Jessica Hickmott, Weber State University

Description/Abstract

This paper presents a rubric for evaluating the quality of research projects. The rubric was applied during a competition spanning a variety of disciplines at a two-day research symposium at one institution in the southwestern United States. It was collaboratively designed by a faculty committee at the institution and was used to score 204 undergraduate, master's, and doctoral oral presentations by approximately 167 evaluators, 147 of whom received no training or norming on the rubric prior to the competition. The inter-rater reliability analysis nevertheless reveals substantial agreement among the judges, a finding that contradicts literature asserting that formal norming must occur before substantial levels of inter-rater reliability can be achieved. By presenting the rubric along with the methodology used in its design and evaluation, it is hoped that others will find it a useful tool for evaluating documents and for teaching research methods.
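The abstract's central statistic, inter-rater agreement, is often quantified with Cohen's kappa (for pairs of raters), with values of 0.61-0.80 conventionally labeled "substantial" on the Landis and Koch scale. The sketch below shows one common way such an analysis might be computed; the ratings are invented for illustration and are not data from the study.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters assigning categorical scores.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e the agreement expected by
    chance from each rater's marginal category frequencies.
    """
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of each rater's marginal proportions.
    p_e = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical rubric scores (1 = poor, 2 = fair, 3 = good) from two judges.
judge_1 = [3, 2, 3, 1, 3, 2, 3, 3]
judge_2 = [3, 2, 2, 1, 3, 2, 3, 3]
kappa = cohens_kappa(judge_1, judge_2)  # ~0.78, "substantial" agreement
```

For the study's design (many judges, each seeing a subset of presentations), a multi-rater statistic such as Fleiss' kappa or Krippendorff's alpha would be the more likely choice; the two-rater case above simply illustrates the underlying observed-versus-chance logic.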