Date of Award

May 2019

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Design

Advisor(s)

Nick L. Smith

Keywords

Confirmatory Factor Analysis, Essential competencies for program evaluators, Evaluation practice, Multiple indicators multiple causes (MIMIC), Professional competency, Structural equation modeling

Subject Categories

Education

Abstract

The study aims to examine the interplay of two critical constructs in evaluation: essential evaluator competency and evaluator practice. The research questions in this study address what Smith (2008) defined as "fundamental issues in evaluation." These issues fall into one or more of the four aspects identified in the fundamental issues in evaluation framework: theory, practice, method, and profession. The intertwined nature of these aspects implies interactive relationships between the two constructs. The study uses structural equation modeling (SEM), first to examine the construct validity and psychometric properties of the measurement scales, and then to explore how the two latent variables of evaluator competencies and evaluator practice interact when evaluators conduct evaluations.

A random sample of 2,000 members was drawn from the American Evaluation Association membership directory (N = 7,700), and 459 evaluators from a variety of backgrounds responded. After analyses in the exploratory, confirmatory, and structural phases, the study confirmed five competency dimensions: evaluative practice, meta-competencies, evaluation knowledge base, project management, and professional development. In addition, analytical results confirmed the factor structures of the eight evaluator practice subscales and revealed four distinct practice patterns, similar to previous research results (Shadish & Epstein, 1987). Despite a small number of significant effects of covariates such as years of experience and evaluation background, multiple indicators multiple causes (MIMIC) model results indicated that the measurement models were largely invariant across population groups. Lastly, the structural phase analyses showed that the relationship between evaluator self-assessed competencies and evaluator practice patterns is interactive. The findings from the SEM model with self-assessed competencies as predictors indicated that evaluators with higher self-assessed evaluative practice competencies tend to engage in the academic and method-driven practice patterns, while evaluators with higher self-assessed meta-competencies tend to engage in the use-driven practice pattern more frequently. Conversely, when evaluator practice patterns served as predictors, the results showed that evaluators engaging more often in the academic pattern tended to rate their evaluative practice, meta-, and evaluation knowledge base competencies higher, and evaluators engaging in the use-driven practice pattern tended to rate their competencies higher in all areas except the evaluation knowledge base.

The study extends previous research by confirming the factor structures of two critical constructs in the evaluation field and providing empirical support for future studies. The findings contribute to a better understanding of several fundamental issues in evaluation, to evaluation professionalization, and to the general knowledge base of the field.

Access

Open Access

