Learning assessment and program evaluation: Examining the efficacy of Informal Reading Inventory practices using Running Reading Records and retellings

Date of Award


Degree Type


Degree Name

Doctor of Education (EdD)


Department

Teaching and Leadership


Advisor

Joseph Shedd


Keywords

Literacy, Reading instruction, Curricula, Teaching, Educational evaluation

Subject Categories

Educational Assessment, Evaluation, and Research


For the past decade, concerned educators have demanded changes in conventional assessment and evaluation practices. Teachers need assessment procedures that yield timely information and contribute directly to students' learning. Administrators need information that allows them to make policy decisions and evaluate programs affecting large groups of students over time.

This study is the first of several designed to determine whether common classroom assessment practices currently used in schools can be structured to yield information that simultaneously supports learning and assessment at the student, classroom, and program levels. The focus of this initial study is "How effectively can two specific Informal Reading Inventory (IRI) practices, Running Reading Records and retellings, profile a young reader's reading proficiency?" Within the broad context of this question, three questions are addressed directly: (a) Are narratives with lower readability easier to decode than narratives with higher readability? (b) Are simpler narratives easier to comprehend than complex narratives? and (c) Does scoring narrative retellings inferentially reveal a different student proficiency profile than scoring narrative retellings literally?

To explore these questions, the focus was on the Running Reading Records and 1,014 scripted retellings taken from 206 children during their first- and second-grade years. A procedure was devised that used Mosenthal's (1994) implication hierarchy to determine the narrative complexity level of the 26 assessment passages and to score the retellings literally. Cohesion inferences (Mosenthal & Kirsch, 1994) guided the inferential scoring of the retellings. Each retelling was therefore judged twice, with its complexity level determined both literally and inferentially. Statistical analysis revealed that narratives with lower readabilities are not easier to decode, that simpler narratives are not easier to comprehend, and that inferential scoring of retellings provides a richer profile of a student's retelling performance than literal scoring.

The results of this study inform the design and use of Informal Reading Inventories. More broadly, they should contribute to practices that provide assessment profiles that accurately reflect a student's reading proficiency, provide information about individuals and groups of students at the classroom level, and allow these same data to be used for policy making and program evaluation.
