
FOCUS

Evaluating professional learning doesn’t have to be complicated

By Elizabeth Foster
Categories: Evaluation & impact
June 2025

We know evaluation data helps inform decision-makers about the time, human capital, and funding required for professional learning to be effective. Evaluation data also guides program improvement and sets leaders’ expectations for ongoing monitoring and accountability. Yet understanding and documenting how professional learning is leading to improved outcomes for educators and students is challenging for many reasons (Research Partnership for Professional Learning, 2023).

The complexities of the educational systems in which professional learning happens mean there are multiple factors that could potentially contribute to changes in teacher and student outcomes, making it difficult to isolate the effects of the adult learning. Chronic challenges such as educator turnover, shifting strategic priorities, and participant attrition can derail even the most carefully designed evaluation plan and make the results difficult to interpret. And sometimes there is a mismatch between the way researchers or evaluators approach a rigorous study and what professional learning facilitators need to know to inform decisions and improvements. 

But regardless of these challenges, we need to study the professional learning we design and facilitate so we know what’s working, what’s not, what to continue or scale up, and what to change. If we don’t know, for example, whether professional learning about implementing a new math curriculum results in changes in instruction and student learning, we don’t know whether to continue it as is or make changes. And if we don’t know how educators are experiencing the learning, we won’t know if our expectations for their learning are being met, why or why not, and what to do about it. No professional learning facilitator, whether internal or external to the district, should get to the end of an engagement without information about how many hours were spent, how they were spent, and what the participants think they learned that is relevant to their work and applicable in their contexts.

We need to be strategic about methodologies and resources to continue building local knowledge about professional learning efforts and the field’s larger evidence base about professional learning. Fortunately, there is a range of strategies for collecting evidence about the impact of professional learning. This journal’s research column regularly looks at studies that employ rigorous methods including randomized controlled trials, quasi-experimental studies, formal evaluations, and large-scale surveys. But the power of less formal data collection and analysis methods shouldn’t be overlooked. When we don’t have the resources to do a full formal evaluation, we shouldn’t throw in the towel and give up on measuring our efforts. We can’t let the perfect be the enemy of the good.

Data sources for professional learning evaluation

As I wrote in the December 2024 issue of The Learning Professional, “everyone can conduct some type of impact evaluation in a way that makes sense for their context, research questions, and needs.” This article adds to that recommendation with more detailed options for conducting professional learning research regardless of your resources. I must note that I have great respect for rigorous independent evaluations and expert evaluators. In fact, Learning Forward is currently seeking support for more formal evaluations of our contracts and services. But where we do not have those resources, we still gather and analyze data — qualitative and quantitative, formative and summative — that provide information about the impact of professional learning.

Following is a range of data sources that allow educators to collect evidence about the impact of professional learning, even in the face of funding constraints and other systemic challenges.

Existing data. Key first steps in evaluation are to get very clear about the purpose and intended outcomes of the professional learning and then thoughtfully identify the measure or measures that will best represent the change. It’s worth checking if your district or school is already collecting data that can be used as an indicator of the intended change. For instance, many districts collect student-level math achievement data by school and grade level. Comparing that data for a coached teacher’s class against a non-coached teacher’s class yields information about whether the coaching is having an impact.
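The comparison described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the scores and class labels are invented for the example, not real district data, and a real analysis would need comparable classes and an appropriate statistical test.

```python
# Hypothetical sketch: comparing end-of-unit math scores for a coached
# teacher's class versus a comparable non-coached teacher's class.
# All scores below are invented for illustration.
from statistics import mean, stdev

coached = [72, 81, 68, 90, 77, 85, 74, 79]      # coached teacher's class
non_coached = [70, 65, 74, 68, 72, 66, 71, 69]  # non-coached teacher's class

def summarize(scores):
    """Return basic descriptive statistics for a list of scores."""
    return {"n": len(scores), "mean": round(mean(scores), 1), "sd": round(stdev(scores), 1)}

print("Coached:    ", summarize(coached))
print("Non-coached:", summarize(non_coached))
print("Mean difference:", round(mean(coached) - mean(non_coached), 1))
```

A simple difference in means like this is only a starting point; whether it reflects the coaching (rather than differences between classes) depends on how comparable the two groups are.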

Having regular conversations with the district’s data office can help avoid duplication of effort. Often duplication happens because of siloed departments and lack of cross-department conversation. Other times it happens out of a well-intentioned but unnecessary desire to be comprehensive. Evaluators should consult professional learning leaders, facilitators, and (if possible) participants about which measures to select, because those designing and supporting the professional learning are most knowledgeable about what information is relevant and useful to all stakeholders.

Self-report by participants. Participants can provide a great deal of information about what they experience during professional learning and whether they intend to, and later do, apply new knowledge afterward. These self-reports vary depending on the professional learning’s intended outcomes. For example, principals could complete a short reflection noting whether they attained new knowledge and skills needed to improve their school’s professional learning implementation or whether they feel comfortable using a classroom walk-through tool. Teachers might report that the professional learning they experienced increased their understanding of a particular instructional strategy and how to incorporate it into their classroom instruction — information that can be analyzed alongside principals’ observations to form a broader understanding of impact.

Self-report data can often be combined with other data and documentation to paint a more complete picture. For instance, instructional coaches can systematically record whether and for how long they meet with the educators they coach (while maintaining confidentiality) and describe meeting content in general terms. This data can be analyzed alongside information from coaching logs that coaches and participants complete post-meeting, which can provide rich data about the content of the conversation, the relevance of the coaching to the participant, and any actions or strategies that will be undertaken based on the coaching conversation. Together, this information can provide a detailed picture of the coaching.

Survey data. Surveys administered by program or district leaders can surface participants’ input, which can drive improvement in learning designs, content, and more. A one-time survey, for example, can collect data about the content of a coaching conversation without compromising confidentiality. If the question is “What was the topic of the coaching conversation?” with a dropdown of responses to select from, the data will reveal whether the conversations are addressing the project’s priorities. If the charge for the coaches was to address feedback conversations but only 15% select that option, that provides guidance about the next coaches’ training. If 50% of respondents choose “systemic challenges,” that is cause for a very different kind of conversation. End-of-session surveys are common and useful, but they are most useful if they ask specific questions about what aspects of the learning were most effective and how and when the learning will be used.
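A tally like the one described above is straightforward to compute once the dropdown responses are exported. The topic labels and responses below are invented for illustration; the point is simply that counting responses by category immediately shows whether conversations match the project’s priorities.

```python
# Hypothetical sketch: tallying dropdown responses to "What was the topic
# of the coaching conversation?" The response values are invented examples.
from collections import Counter

responses = [
    "feedback conversations", "systemic challenges", "lesson planning",
    "systemic challenges", "feedback conversations", "systemic challenges",
    "student engagement", "systemic challenges", "lesson planning",
    "systemic challenges",
]

counts = Counter(responses)
total = len(responses)

# Print each topic with its count and share of all responses.
for topic, n in counts.most_common():
    print(f"{topic}: {n} ({n / total:.0%})")
```

In this invented sample, “systemic challenges” accounts for half the responses, the situation the article flags as cause for a different kind of conversation.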

Survey responses to the same question over an extended period of time provide critical information as well. For instance, asking about barriers to implementation of learning in the first of several professional learning sessions provides information about what to address in subsequent sessions. Asking that same question over two years of professional learning can demonstrate a meaningful decrease in perceived barriers to implementation of new learning.

Pre- and postlearning assessments. Assessing educators’ knowledge or skills before and after a professional learning session or series of sessions can provide helpful information about whether the design, content, and facilitation of the professional learning were effective. When taken together, multiple skills and knowledge assessments can provide a picture of how well the professional learning is working overall, help identify blind spots in the professional learning, and inform the learning going forward.

For example, Learning Forward’s professional learning sessions designed to strengthen educators’ understanding of the Standards for Professional Learning (Learning Forward, 2022) include a preassessment about what participants know about the standards followed by a postassessment to determine if the learning has taken place as intended. The preassessment information is used to inform and guide the professional learning in real time as it is happening, while the postassessment data is used to determine whether the professional learning achieved its intended outcomes. Postassessment data is also used to determine what follow-up might be needed to address any missed learning opportunities, as well as how the professional learning might be improved before it is repeated.
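The pre/post logic described above amounts to pairing each participant’s two scores and looking at the gains. The participant IDs, scores, and follow-up threshold below are invented for illustration and are not drawn from Learning Forward’s actual assessments.

```python
# Hypothetical sketch: pairing preassessment and postassessment scores by
# participant to estimate learning gains. IDs and scores are invented.
pre = {"p01": 4, "p02": 6, "p03": 3, "p04": 5}    # items correct (of 10), before
post = {"p01": 8, "p02": 9, "p03": 6, "p04": 7}   # items correct (of 10), after

# Gain per participant, matched by ID.
gains = {pid: post[pid] - pre[pid] for pid in pre}
avg_gain = sum(gains.values()) / len(gains)

print("Per-participant gains:", gains)
print(f"Average gain: {avg_gain:.1f} items")

# Flag participants with small gains for possible follow-up support
# (threshold of 3 is an arbitrary illustrative cutoff).
needs_follow_up = [pid for pid, g in gains.items() if g < 3]
print("Flag for follow-up:", needs_follow_up)
```

The same pairing also supports the uses the article describes: the pre scores can shape the session in real time, and the flagged list suggests where follow-up might be needed.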

Interviews with participants. Structured interviews and focus groups provide an opportunity for detailed feedback and insight. Open-ended questions allow participants to provide more detailed feedback than a survey or a question embedded in another conversation, such as a coaching conversation. Structured focus groups, grouped by site or by role, provide rich data as participants build off one another’s responses. Using a standardized protocol can help leaders or researchers yield comparable data from a large number of participants that can be coded so themes can be analyzed and illustrative examples can be pulled out and shared. Valid, field-tested instruments are available to support these efforts, and many of them are free.

One-on-one empathy interviews (Nelsestuen & Smith, 2020) are a valuable way to gather information from students because they use open-ended questions to learn more about what a student experienced in a classroom or with a teacher, and to identify needs that are not being addressed. This data can be aggregated to provide valuable information for teachers and professional learning designers and facilitators.

It’s important to note that the person conducting the interview may have an impact on the respondent’s answers. One of the cited benefits of an external evaluator conducting interviews with participants of professional learning is that they do not have existing relationships with the educators they are interviewing. Interviews are still valuable when conducted by a member of the program staff, however, and the interview transcripts can be coded and analyzed by theme while maintaining confidentiality.

Practical measures. For ongoing efforts to improve professional learning, data about day-to-day instruction can be quite meaningful. This is where practical measures can bring valuable information to the assessment of professional learning quality and relevance (Walston & Conley, 2022). For instance, as teachers shift their instruction to implement a new strategy learned from professional learning, they can collect a tally of whether students are responding to the instructional shift in the way teachers expected, such as whether students complete the task or talk about the content as intended. This kind of tally is direct evidence of whether the instruction that has been informed by the professional learning has changed in a way that is impacting students.

Simple measures can track both quantity and quality — for example, tracking whether coaching meetings are happening, for how long, and what the content of those conversations is can yield basic but valuable information. Once it is established that there is a change in quantity, measuring quality is the next step. What is the content of those coaching conversations, and are they leading to the expected next steps or application? As always, it’s important to consider the burden on the data collector (especially if it is a teacher) — will the collecting of data add to an already full plate? It’s also important to consider feasibility and the ability of the measure to reveal variance across the professional learning that is being experienced.

Take the time to analyze and share evaluation results

How to collect data is not the only important factor professional learning leaders should consider. It’s equally important to determine how you will share the results with participants and other key stakeholders in a timely and contextualized way. Educators want to see that their feedback and input were useful and valuable, and that they informed the facilitators’ choices about design and follow-up. A subsequent professional learning session is a great opportunity to share these results, but if no more sessions are planned, leaders can share the information during a staff meeting or in a letter.

Regardless of the methods you choose, all of these evaluation steps require time for data collection, analysis, and sharing results. Time is precious, especially in schools where it often seems like there are too many urgent needs and not enough hours in a school day or year. Ensuring collaboration among all stakeholders is key for developing a shared understanding about what tools will be used and when, what they will measure, and how the results will be analyzed and shared. This helps set school and districtwide expectations that data will be collected, analyzed, and used consistently for the betterment of students.



References

Learning Forward. (2022). Standards for Professional Learning.

Nelsestuen, K., & Smith, J. (2020). Empathy interviews. The Learning Professional, 41(5), 59-62.

Research Partnership for Professional Learning. (2023). Measuring teacher professional learning: Why it’s hard and what we can do about it. tinyurl.com/5fumdbbk

Walston, J., & Conley, M. (2022). Practical measurement for continuous improvement in the classroom: A toolkit for educators (REL 2023-139). U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Southwest.


Elizabeth Foster
Senior Vice President, Research and Strategy

Elizabeth Foster is the senior vice president of research and strategy at Learning Forward. She leads the organization’s research efforts for partnerships, programs, and fundraising. Elizabeth co-wrote the Standards for Professional Learning (2022) with Tracy Crow and now facilitates learning sessions about the standards and develops resources that support their use and implementation.

