Written by the HGSE team
In the last post, Caitlin Farrell outlined an observation protocol designed to measure how district leaders use research in interactive and social settings. Here, we describe our survey of research use, another instrument that is part of NCRPP’s effort to create a suite of measures.
The NCRPP survey has two primary goals. First, we aim to measure critical features of research use: practitioners’ attitudes toward research, their understanding of research methods and concepts, and specifics about how they use research in their contexts. Second, we want the survey to gather data representative of the largest 1,000 school districts in the country.
Data from the survey will allow us to gain a broader understanding of research use than we have currently. For one, the survey is comprehensive, asking questions that will allow us to get a fuller picture of research use among educational leaders. The representativeness of the sample will also allow us to describe both the average level of and variation in research use in our nation’s largest 1,000 school districts. We plan to sample a wide range of educational leaders in districts, too – from curriculum coordinators to supervisors of principals to building principals.
These two goals—to meaningfully measure research use and obtain a representative sample—have presented us with several dilemmas. Below, we detail a few of these challenges and how we’re addressing them.
Design Challenge: Measuring individuals’ skills in “reading” research
One of the survey constructs is district leaders’ skill in interpreting and analyzing research studies. Since few researchers have attempted to measure this topic, there are only a limited number of existing surveys from which to build.
To generate ideas for survey items, we conducted interviews with current district and school leaders and asked them to critique the methods, findings, and conclusions of studies we described to them. Through this process, we learned that these professionals interpret and critique studies in many different ways. While one district leader might focus on the validity of measures used in a study, another might find problems with the sampling procedure. In many cases, it is difficult—even impossible—to determine whether a specific interpretation is “wrong” or “right,” because that judgment depends upon a leader’s assumptions about the context of the study.
To address these challenges, we have created items, each focused on a specific feature of a research study, with response choices that have right or wrong answers. Below is an example of an item designed to get at the interpretation of correlations between two variables in a research study. What do you think the answer is? Scroll to the bottom of this post for the correct choice.
Of course, there are trade-offs with using closed-response questions to measure skill. For instance, there are questions about a study’s external validity that we might have wanted to ask but could not in this format. Concerns about whether findings would generalize to “districts like mine” could not be included in our item pool because the right answer would vary across respondents. We know from other studies of district leaders that concerns like this really matter for research use.
Sampling Challenge: Drawing a sample and accommodating varied job titles and responsibilities
Two additional challenges arose as we attempted to obtain a nationally representative sample of school and district leaders. First, while we wanted to survey individuals who may use research to improve teaching and learning (such as administrators in charge of mathematics curriculum and instruction), there was no definitive list of such leaders from which to construct a sampling frame. Having an accurate sampling frame, or list of all members of the population eligible to participate in the study, helps ensure that we can repeat this survey in the future to answer new research questions or to track change over time.
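The idea of drawing from a frame can be sketched in a few lines. This is a toy illustration with invented district and role names, not the NCRPP design, which involves a far larger frame and a more elaborate sampling plan.

```python
import random

# Toy sampling frame: (district, role) records standing in for the real
# roster of eligible leaders across the 1,000 largest districts.
frame = [
    ("District A", "principal"),
    ("District A", "curriculum coordinator"),
    ("District B", "principal"),
    ("District B", "principal supervisor"),
    ("District C", "curriculum coordinator"),
    ("District C", "principal"),
]

rng = random.Random(42)  # fixed seed so the draw is reproducible
sample = rng.sample(frame, k=3)  # simple random sample, no replacement
```

Because every eligible leader appears in the frame exactly once, each has a known chance of selection, which is what lets survey results generalize to the full population.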
To solve this problem, we turned to a firm that maintains an updated database of school and district leaders’ names and emails using a technique called “web scraping.” Web scraping is a process of automatically gathering information from websites using specialized computer software. We used initial lists generated by web scraping, confirmed their accuracy, and manually updated district rosters via telephone. This sampling approach is relatively novel. While telephone rostering has long been used in survey research, to our knowledge, ours is the first sample of potential respondents to be constructed via “big data” techniques such as web scraping.
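To make "web scraping" concrete, here is a minimal sketch using Python's standard-library HTML parser. The page markup, names, and email addresses are all invented; real district websites vary widely, and the firm's actual tooling is not described in this post.

```python
from html.parser import HTMLParser

# A toy staff-directory page standing in for a real district website.
PAGE = """
<ul class="staff">
  <li><span class="name">Pat Jones</span>
      <a href="mailto:pjones@district.example">email</a></li>
  <li><span class="name">Sam Lee</span>
      <a href="mailto:slee@district.example">email</a></li>
</ul>
"""

class RosterParser(HTMLParser):
    """Collect (name, email) pairs from the toy directory markup."""

    def __init__(self):
        super().__init__()
        self.in_name = False
        self.names = []
        self.emails = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "span" and attrs.get("class") == "name":
            self.in_name = True  # next text chunk is a staff name
        if tag == "a" and attrs.get("href", "").startswith("mailto:"):
            self.emails.append(attrs["href"][len("mailto:"):])

    def handle_data(self, data):
        if self.in_name:
            self.names.append(data.strip())
            self.in_name = False

parser = RosterParser()
parser.feed(PAGE)
roster = list(zip(parser.names, parser.emails))
```

Scaled up across a thousand district websites, the same idea yields candidate rosters, which, as described above, still need verification by telephone before they can serve as a sampling frame.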
The second sampling challenge is that people’s actual job responsibilities did not always match the information collected via the web scrape or even telephone rostering. For instance, individuals may have shifted positions between the time of our call and receiving the survey. Individuals may also hold two or more positions within a district. To overcome this uncertainty, respondents report their job responsibilities in the survey itself, enabling us to verify our sampling accuracy using information provided directly by respondents.
Through our survey administration, we hope to generate a nationwide portrait of research use in the largest U.S. school districts. The survey is part of NCRPP’s larger goal to increase understanding of research use and to ultimately make educational research more useful and relevant to practitioners. We look forward to hearing district and school leaders’ thoughts as we move forward with this work.
Thanks to the entire staff of NCRPP for the development of the survey and its continued administration and analysis.
Answer to the survey question: C