Date of Award
1-24-2024
Degree Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
Department
Instructional Design, Development and Evaluation
Advisor(s)
Tiffany Koszalka
Keywords
construct validity; deeper learning; Delphi technique; expert instructional designers; learning resources; rubric
Subject Categories
Education
Abstract
Central to the landscape of education is the understanding that instruction is fundamentally purposeful, aiming to facilitate effective learning. Grounded in this premise, instructional design emerges as a complex and multifaceted profession. In the digital era, Instructional Designers (IDs) harness technological advancements to shape robust learning resources. However, with a burgeoning array of resources of varying efficacy, there is a pressing need for designs firmly anchored in evidence-based learning principles. The challenges IDs encounter, especially in crafting resources that accentuate deeper learning, cannot be overlooked. Existing rubrics, while extensive, often fall short in addressing the nuances imperative for designing interactive and engaging learning resources. To bridge this theory-to-practice chasm, this study proposes the Learning Resources Rubric (LRR), a tool tailored for IDs. Derived from the principles of three well-established deeper learning theories, Generative Learning Theory (GLT), Cognitive Flexibility Theory (CFT), and Reflection Theory (RT), the LRR offers a scaffold that elucidates the design, selection, and evaluation of learning resources that foster deeper learning. The genesis of the LRR can be traced to the Research in Designing Learning Resources (RIDLR) working group, a confluence of scholars and practitioners who, through an integrative inquiry approach, navigate the creation of resources informed by GLT, CFT, RT, and the overarching theme of learner engagement. In this dissertation, an online three-round modified Delphi method was used to validate the LRR (Version 3), building upon a prior study (Wang & Koszalka, 2023). Of 576 potential expert IDs identified via social media, a university alum listserv, and referrals, 351 completed Round 1. Given the consensus reached in Round 1, Round 2 was bypassed, leading to six focus groups in Round 3 with 22 experts.
Round 1 featured a 74-item Qualtrics survey, rated on a 5-point Likert scale, that provided insights into the LRR’s validity. Round 3’s focus group interviews, guided by open-ended questions, gathered qualitative data on the rubric’s integration into instructional practices. The mix of quantitative and qualitative data collected across the two rounds helped establish the rubric’s validity, and multiple methods, including surveys, focus groups, and online document analysis, cross-validated the findings. The dissertation honed the LRR, establishing its construct validity and its potential to guide IDs in applying it in real-life scenarios. Construct validity, rubric scores, and insights from surveys and focus groups are highlighted. Data, analyzed via Confirmatory Factor Analysis (CFA) and Structural Equation Modeling (SEM), endorsed the LRR indicators’ measurement model, with SEM underscoring their theoretical alignment with the GLT, CFT, and RT frameworks. The findings solidified the LRR’s effectiveness in steering IDs to enhance learning resources. The updated rubric and suggested next steps hold the potential to advance instructional design practice and scholarship.
Access
Open Access
Recommended Citation
Wang, Lei, "Construct Validation of a Learning Resources Rubric [LRR]: A Modified Delphi Study" (2024). Dissertations - ALL. 1846.
https://surface.syr.edu/etd/1846