RESEARCH

Study examines teachers' perceptions of student achievement data

By Elizabeth Foster
June 2019
Vol. 40, No. 3

Conversations about effective professional learning communities often point to a focus on data as a way to get specific about differentiated instruction and to keep attention on student progress.

Policymakers have recently paid considerable attention to the potential of data-driven decision-making to illuminate and improve instruction and to address achievement gaps. Yet interpretations vary widely about which data are most valuable, what teachers are expected to do with or think about student data, and how exactly the connections between data and instruction are supposed to be made.

This line of thinking is especially interesting in light of several ongoing Learning Forward projects. We are particularly interested in how conversations about student learning can be specific about instructional strategies and continued educator learning about what strategies work for which students under which circumstances.

In one project, we’re collaborating with the American Institutes for Research, the Danielson Group, and TeachForward to explore how the teacher team continuous learning cycle described in Becoming a Learning Team, 2nd edition (Hirsh & Crow, 2018) can support teachers in identifying teaching strategies that align to the teaching clusters described in Talk About Teaching, 2nd edition (Danielson, 2016).

In another, teams of educators in Learning Forward’s What Matters Now Network are working to name specific evidence-based practices and instructional strategies that will address the student learning gaps identified in their data analysis.

Learning Forward’s perspective on this issue aligns with the broader policy and practice context, in which teachers are increasingly expected to examine data about student performance and then make instructional decisions based on that data.

ABOUT THE STUDY

A recent qualitative study by a team of researchers examined how grade-level teams of teachers think about causes and strategies when they review student performance data. What is interesting in the findings is how infrequently teachers attribute student results to instruction — just 15% of the time. Teachers in this study were much more likely to point to student characteristics such as behavior or effort or to an external factor such as a mismatch between the student and the assessment.

The research team observed six well-established grade-level teams (grades 3-5) over the course of an academic year as they examined student assessment data. The observations took place in three schools where the district mandated that teams meet biweekly for a 30-minute block and quarterly for 90 minutes to examine student performance data.

The three schools used these blocks for different purposes: one primarily to make Response to Intervention decisions by looking at large-scale assessments; one to make Response to Intervention decisions and inform broader discussions by using district and classroom-level assessments; and one to hold collaborative conversations about students using a wide range of data, including teachers’ own student performance data.

The researchers observed more than 44 hours of meetings and conducted 16 individual interviews and six group interviews with participating teachers. They were particularly interested in what teachers really talk about when they talk about student data and how what they know about students (and their context) influences their conversation and their next steps.

The researchers noted when teachers focused on a particular data point and also the process of “sense making” among the team — when they related quantitative data to what they know about instruction, students, context, or other influencing factors. Part of what makes this study compelling is that it explores whether the hypothesis that teachers look at student data as a way to connect and adapt their instruction is borne out in practice, especially given a body of research that indicates that student performance is more often attributed to student or contextual factors.

The authors provide a useful review of the literature on the attributions teachers make when they talk about student results. In brief, the extant literature on data-driven decision-making describes two main relationships between teachers’ instruction and students’ academic performance: Teachers can use student data to inform future instructional decisions, and student data can be seen as representing the effectiveness of past instruction.

For example, teachers could use assessment data to identify a group of students who have not understood a major concept and then plan a lesson revision or a way to reteach the concept to that group of students. This is often referred to as data driving instruction.

In contrast, student data could be used to better understand how effectively a teacher taught a particular concept. The research suggests that teachers do not consistently relate student performance to their own instruction, but rather are more likely to point to student characteristics such as effort or level of attention.

There are instances where teachers attribute low student performance to students’ race, gender, or economic status. The research on what teachers attribute student performance to, then, runs contrary to the assumption that simply adding data will spark a conversation about instruction.

In addition, research suggests that teachers perceive the impact of their own instruction on student achievement data in different ways and that how they view that relationship shapes their choices about next instructional actions.

“When teachers do not see their work as a contributing factor to students’ performance, then they may not consider student data as informative for their instruction,” the researchers note.

If teachers feel that students are unwilling to learn or exert effort, they may be less likely to try a new strategy. And yet data-driven decision-making rests on the assumption that teachers will look at student data and readily make inferences about their own teaching. This study looked at six data teams to surface educators’ real-time conversations about data.

FINDINGS

This study found that the observed teachers did not analyze student performance through the lens of instruction but rather were fairly quick to attribute the data to student characteristics or, in some cases, to a mismatch of student abilities to the type of assessment given (especially regarding multilingual learners).

The researchers coded some of the discussions as explanations rather than attributions because the teachers offered little analysis of their reasoning. “We did not observe teachers attempting to systematically identify root causes of student performance,” they state in their findings.

The researchers also did not observe teachers discussing whether student performance was fixed or able to be influenced by different instructional strategies.

The researchers found that the teachers’ explanations of student performance fell into the following categories:

  • 32% related to behavioral characteristics of the student, such as attitudes or attention.
  • 21% related to a mismatch between the students and the assessment (often related to English fluency).
  • 18% related to suspected or established underlying student conditions, such as dyslexia or emotional issues.
  • 13% related to students’ home lives or family circumstances.
  • 15% related to teachers’ instruction.

The researchers also noted that teachers were more likely to discuss student data when performance was low or declining rather than when there was growth or a strong showing. While the researchers expected the teams to discuss what made scores increase when they did, this did not occur.

This may be a lost opportunity because a more explicit discussion of both instructional choices and improvement in student scores would allow teacher teams to map these changes and infer some causal connections to explore further.

The study also raises the question of why teachers don’t have productive discussions about instructional strategies for students who have disengaged or for groups of students who appear not to care about the content.

The schools in the study approached the data conversations with different purposes and priorities. Leadership decisions about how to use these data meetings, how they were set up, and what data were examined and prioritized likely impacted the study’s findings.

The norms and expectations for how educators contributed to the data conversations also varied: some teachers commented only on the data at hand, while others regularly engaged in broader conversation.

It is also worth noting that, despite the differences across schools in the makeup and operation of the data discussions, most teachers across the sample did not perceive their practice of data-driven decision-making as particularly meaningful.

Keeping in mind that this is a relatively small sample, this study’s findings raise interesting questions about what teachers are talking about with regard to student data. It reminds us to be clear from the outset about what we expect when we ask teachers to look at student data: Is the conversation best focused on student characteristics and the classroom supports needed, or is it better pitched as a conversation about teachers’ previous instruction and therefore about concepts that need to be retaught? Or is the conversation designed to provide just-in-time feedback about current teaching strategies?

These questions are important for school and district leaders as well as for those who design and lead professional learning. Leaders and designers could explore some of the following points and questions in working with data-driven decision-making initiatives:

  • What is the intent of the data-focused discussions? Is that intent clear among the team?
  • Does the type of data you are looking at correspond to the purpose of the discussion?
  • The composition of teams, the agenda for meetings, and the norms for discussion affect how well individuals and collaborative teams learn and process information. Is this factored into the work?
  • Are there prompts for teams to consider both learning abilities and language abilities?
  • Is there a process for following up on perceived learning challenges (e.g., referrals)?
  • What would it look like for teams to focus on improvements rather than deficits?
  • Which is the greatest need in terms of professional learning — professional learning focused on instructional strategies or on cultural context?
  • What is the sequence of supports needed for the collaborative team, and which educators need to be included?

This study highlights the need to check professional learning related to data-driven decision-making against several of Learning Forward’s Standards for Professional Learning, primarily the Leadership, Learning Designs, and Data standards.

The Learning Designs standard asks us to think about what we are trying to achieve with our professional learning. If we are interested in focusing on instructional changes, that needs to be a clear expectation of the data conversations and the data team at the outset. The supporting professional learning needs to have a clear focus on instructional choices related to the concepts at hand and an explicit effort to connect student data to instruction.

If we are interested in what student characteristics need to be better understood or addressed as a result of the data conversations, that is a different expectation and requires a different design and set of resources.

The corresponding professional learning needs to be developed to identify strategies that address specific student challenges, such as learning disabilities or perceived conditions. In addition, the research base that informs this study is a reminder that, at times, educators would benefit from professional learning related to biases about gender, race, family background, and socioeconomic status.


Elizabeth Foster (elizabeth.foster@learningforward.org) is associate director of standards, research, and strategy at Learning Forward. In each issue of The Learning Professional, Foster explores recent research to help practitioners understand the impact of particular professional learning practices on student outcomes.

THE STUDY

Evans, M., Teasdale, R.M., Gannon-Slater, N., La Londe, P.G., Crenshaw, H.L., Greene, J.C., & Schwandt, T.A. (2019). How did that happen? Teachers’ explanations for low test scores. Teachers College Record, 121(2).

References

Danielson, C. (2016). Talk about teaching (2nd ed.). Thousand Oaks, CA: Corwin.

Hirsh, S., & Crow, T. (2018). Becoming a learning team (2nd ed.). Oxford, OH: Learning Forward.

