To make informed decisions about professional learning, educators use data gathered from formative and summative evaluations. In an era of rapid-cycle change, common in continuous improvement work within communities of practice, school improvement efforts, and cycles of inquiry, evaluation becomes a core part of the process. Assessing the impact of well-designed professional learning, as well as the enactment of the learning experiences, is crucial. Too often the mere act of checking off completed actions is treated as a substitute for or approximation of the intended results. In other words, the plan for professional learning or its implementation is not the same as its effects.
Effectiveness1 and effectiveness2
The evaluation of professional learning measures the effectiveness or quality of the learning experience (effectiveness1) and the impact or results of the learning program (effectiveness2). If, for example, district leaders want to evaluate a district’s coaching program, the evaluation may focus on the implementation of coaching, how coaches spend their time, how much coaching teachers are receiving, or how designated resources are used. Asking questions such as these allows policy and decision makers to determine whether the coaching program is being implemented as planned and meets the criteria established for a high-quality coaching program.
Measures of implementation, resource use, and identification and handling of unanticipated consequences provide information for program improvement. They are measures of the quality or merit of a coaching program, yet they are not measures of its impact. Another common measure is the perceived value or worth of professional learning. Teachers may appreciate the support they are receiving, yet their appreciation is not evidence that coaching has changed their practice or student achievement. Principals may appreciate coaches and value their contributions to the school staff, yet an examination of how coaches spend their time may reveal that they spend little time in classrooms supporting or refining the implementation of instructional practices.
While effectiveness1 measures are essential for managing and improving professional learning, assessing impact is crucial to determine if it is changing instruction or student learning. Assessing the impact of professional learning requires data to measure effectiveness2, whether changes are occurring for both primary and secondary beneficiaries.
Outcomes
Measures of outcome attainment provide evidence of impact. To assess the impact of professional learning, those designing, implementing, and managing professional learning first invest in designing a sufficient, well-resourced, and sustained professional learning program that intends to accomplish specific outcomes. Outcomes describe the expected changes professional learning seeks to achieve for both primary and secondary beneficiaries. They include changes in knowledge, attitudes, skills, aspirations, or behaviors (Killion, 2018). When delineated clearly, outcomes shape the professional learning, from its content and learning designs (including duration) to implementation support and necessary resources.
Primary and secondary beneficiaries
Delineating primary and secondary beneficiaries of professional learning specifies the immediacy and sequence of change. Primary beneficiaries of professional learning are the educators engaging in learning; secondary beneficiaries are those who are affected by educators’ learning. If teachers are participating in professional learning, they are the primary beneficiaries and their students are the secondary beneficiaries. If principals are engaging in professional learning, teachers and students are the secondary beneficiaries.
An 8-step process
The evaluation of professional learning follows an eight-step process. Each step, listed below, incorporates a series of decisions to make and processes to follow to conduct a fair, reliable, and thorough evaluation of professional learning.
- Assess evaluability to determine whether the professional learning program is ready to be evaluated.
- Formulate evaluation questions. The questions an evaluation attempts to answer shape the evaluation’s design.
- Construct an evaluation framework to determine the evidence needed to answer the evaluation questions, the data sources, the data collection methodology, the logistics of data collection, and the data analysis methods.
- Collect data.
- Organize, analyze, and display data.
- Interpret data to determine merit, worth, and/or impact and to make recommendations for improvement.
- Report and use evaluation findings. Identify the audiences to receive findings, determine the most appropriate format for communicating findings to each, and disseminate the findings.
- Evaluate the evaluation. Reflect on the evaluation process, the knowledge and skills of the evaluation team, the resources and methodologies used, and the findings to improve future evaluations.
Whether for large-scale professional learning programs, such as those implementing new curricula or pedagogy, or for small-scale efforts, such as those occurring within professional learning communities, evaluation holds stakeholders accountable for results, provides information for making ongoing adjustments, and measures the quality and impact of professional learning.
Further reading from Joellen Killion:
8 Smooth Steps, JSD, Fall 2003
Why Evaluations Fail, JSD, Fall 2017
Assessing Impact: Evaluating Professional Learning, 3rd Edition. This third edition guides administrators, professional learning leaders, school improvement teams, and evaluators step by step through the rigors of producing an effective, in-depth, results-based analysis of professional learning programs.