Glossary
Average Test Score
Learning Rate
Trend in Test Scores
Educational Opportunity
SES
Grade Level
Geographic School District
Most traditional public-school districts in the U.S. are defined by a geographic catchment area; the schools that fall within this boundary make up the geographically defined school district. The physical boundaries of geographic catchment areas have changed over time; we use the 2019 EDGE Elementary and Unified School District Boundaries to define geographic districts included in SEDA. Note that geographically defined school districts exclude special education schools (as identified by the Common Core of Data’s school type flag), regardless of the school’s physical location.
Data in the Opportunity Explorer are reported for geographic school districts. Data for administrative school districts can be downloaded on the Get the Data page.
Administrative School District
Administrative school districts are defined by the National Center for Education Statistics (NCES), and the schools operated by each administrative district are identified using the NCES leaid (local education agency identification number).
Data in the Opportunity Explorer are reported for geographic school districts. Data for administrative school districts can be downloaded on the Get the Data page.
Gap in School Poverty
Gap in Percent Minority Students in Schools
Cohort
Average
Standard Deviation
Standardized
Achievement Test
Test Score
Achievement Level (or Proficiency Level or Proficiency Category)
BIE
Cut Score/Threshold
Proficiency Data
Proficiency Rate
Linking
NAEP
EDFacts
HETOP
ECD
FRPL
Understanding the Data
Where do the test score data come from? What years, grades, and subjects are used?
The data are based on the math and Reading/Language Arts (RLA) achievement tests administered annually by each state to all public-school students in grades 3–8 from 2008–09 through 2018–19. In these years, 3rd through 8th graders in U.S. public schools took roughly 500 million standardized math and RLA tests. Their scores—provided to us in aggregated form by the U.S. Department of Education—are the basis of the data reported here.
We combine information on the test scores in each school, geographic district, county, or state with information from the National Assessment of Educational Progress (NAEP; see https://nces.ed.gov/nationsreportcard/about/) to compare scores from state tests on a common national scale (see the Methods page).
We never see or use individual test scores in this process. The raw data we receive include only counts of students scoring at different test-score levels, not individual test scores. No individual or individually identifiable information is included in the raw or public data.
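As a simplified, hypothetical illustration of what placing scores on a common scale involves (this is not SEDA's actual linking procedure, which is described on the Methods page), the sketch below re-expresses a district's mean score from its state test scale on a NAEP-like scale by assuming the state's test-score distribution maps onto that state's NAEP distribution. All names and numbers are made up.

```python
# Simplified sketch only; not SEDA's actual estimation or linking method.

def link_to_common_scale(district_mean, state_mean, state_sd,
                         naep_state_mean, naep_state_sd):
    """Map a district mean from its state test scale to a NAEP-like scale."""
    # Standardize the district mean within its state's test distribution...
    z = (district_mean - state_mean) / state_sd
    # ...then re-express it using the state's NAEP mean and standard deviation.
    return naep_state_mean + z * naep_state_sd

# Hypothetical example: a district scoring 0.5 SD above its state's mean.
print(link_to_common_scale(district_mean=520, state_mean=500, state_sd=40,
                           naep_state_mean=240, naep_state_sd=35))
```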
What are “educational opportunities”?
Educational opportunities include all experiences that help a child learn the skills assessed on achievement tests. These include opportunities both in early childhood and during the schooling years, and experiences in homes, in neighborhoods, in child-care and preschool programs, with peers, and in schools.
For more information, see our interactive discovery, “Affluent Schools Are Not Always the Best Schools.”
What do average test scores tell us?
To understand the role of educational opportunities in shaping average test-score patterns, it is necessary to distinguish between individual scores and average scores in a given school, geographic district, county, or state.
Differences in two students’ individual test scores at a given age reflect both differences in their individual characteristics and abilities and differences in the educational opportunities they have had. However, because the average innate abilities of children born in one community do not differ from those of children born in another, any difference in average test scores must reflect differences in the educational opportunities available in the two communities.
Why are there three different summaries of test scores (average scores, learning rates, and trends in scores) in each place? What can we learn from each of these?
The three scores tell different stories.
- Average test score: The average test score indicates how well the average student in a school, district, or county performs on standardized tests. Importantly, many factors—both early in life and when children are in school—affect test performance. As a result, the average test scores in a school, district, county, or state reflect the total set of educational opportunities children have had from birth through middle school, including opportunities at home, in child-care and preschool programs, and among peers. Average test scores therefore reflect a mix of school quality and out-of-school educational opportunities.
- Learning rate: The learning rate indicates approximately how much students learn in each grade in a school, district, county, or state. Because most educational opportunities in grades 3–8 are provided by schools, the average learning rate largely reflects school quality.
- Trend in test scores: The trend in scores indicates how rapidly average test scores within a school, district, county, or state have changed over time. It reflects changes over time in the total set of educational opportunities (in and out of schools) available to children. For example, average scores might improve over time because the schools are improving and/or because more high-income families have moved into the community.
For more information on how the average, learning rate, and trend in test scores are computed, see the Methods page.
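As a rough illustration only (not SEDA's actual estimation model, which pools data across grades, years, and subjects), the sketch below computes the three summaries from hypothetical grade-by-year average scores that are already expressed on a common, grade-level scale. All numbers are made up.

```python
import numpy as np

# Hypothetical grade-by-year average scores for one district, in grade-level units.
# Rows = grades 3..8, columns = years 2016..2019 (made-up data).
scores = np.array([
    [3.1, 3.2, 3.3, 3.3],   # grade 3
    [4.0, 4.2, 4.3, 4.4],   # grade 4
    [5.0, 5.1, 5.3, 5.4],   # grade 5
    [5.9, 6.0, 6.2, 6.3],   # grade 6
    [6.8, 7.0, 7.1, 7.3],   # grade 7
    [7.7, 7.9, 8.0, 8.2],   # grade 8
])
grades = np.arange(3, 9)
years = np.arange(2016, 2020)

# Average test score: the overall mean across grades and years.
average_score = scores.mean()

# Learning rate: how much scores rise per grade, here the slope of a
# least-squares fit of the grade-level means on grade.
learning_rate = np.polyfit(grades, scores.mean(axis=1), 1)[0]

# Trend: how much scores change per year, here the slope of the
# year-level means on year.
trend = np.polyfit(years, scores.mean(axis=0), 1)[0]

print(average_score, learning_rate, trend)
```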
What estimates are shown on the website?
What schools are included in the data?
All public schools—traditional, charter, and magnet schools—that serve students in any grade from 3 through 8 are included in the data. Schools that enroll both high school students and students in some of grades 3–8 are included, but the reported test-score measures are based only on the scores of students in grades 3–8. For example, schools serving grades 7–12 are included in the data, but the test scores we use are only those from students in grades 7–8.
Schools run by the Bureau of Indian Education (BIE) are included in the data. Specifically, we include estimates based on reading and math tests taken by students in BIE schools in the years 2008-09 through 2011-12, and in 2015-16 and 2016-17. The BIE did not report test score data for other years to EDFacts.
In most states, students in private schools do not take the annual state accountability tests that we use to build SEDA. Therefore, no test scores from private schools are included in the raw or public data.
We have some special inclusion rules for the following school types:
- Charter schools: Estimates for charter schools are included in the school-level data files and on the website. Geographic district and county estimates include data for charter schools; however, charter schools are assigned to the geographic district or county in which they are physically located. In other words, a charter school may be counted as part of a geographic district even if that district does not operate the school.
- Special education schools: Estimates for special education schools are included in the school-level data files and on the website; however, data for special education schools are not included in the geographic district or county estimates.
- Virtual schools: Estimates for virtual schools are not shown on the website but are available for download in the school-level data files. Virtual school data are not included in geographic district or county estimates.
A data file indicating which schools are counted as part of geographic districts, administrative districts, and counties is available on the Get the Data page.
How accurate are the data? What should I do if I find an error in the data?
We have taken several steps to ensure the accuracy of the data reported here. The statistical and psychometric methods underlying the data we report are published in peer-reviewed journals and in the technical documentation described on the Methods page.
Along with each estimate, we report a margin of error corresponding to a 95% confidence interval. For example, a margin of error of +/- 0.1 indicates that we are 95% confident that the true value is within 0.1 of the estimated value. Large margins of error signal that we are not confident in the estimate reported on the site.
Nonetheless, there may still be errors in the data files that we have not yet identified. If you believe you have found an error in the data, we would appreciate knowing about it. Please contact our SEDA support team at sedasupport@stanford.edu. (For messages about the Segregation Explorer, please email segxsupport@stanford.edu.)
Are these data available for researchers or others to use?
Are the test-score measures (average scores, learning rates, and trends) adjusted to take into account differences in student demographic characteristics or any other student or school variables?
No. The measures of average test scores, average learning rates, and average test-score trends are based solely on test-score data.
On the website, we flag schools that serve exceptional student populations (large proportions of students with disabilities, students enrolled in gifted/talented programs, or students with limited English proficiency). These students’ characteristics should be taken into consideration when interpreting the test-score data and comparing performance to that of public schools serving more general populations.
- Special education schools or schools with a high percentage of students with disabilities: We flag schools that are explicitly identified as special education schools by the Common Core of Data (CCD) or the Civil Rights Data Collection (CRDC). We also flag schools where more than 40% of students are identified as having a disability in the CRDC data. Students with disabilities are identified per the Individuals with Disabilities Education Act (IDEA).
- Schools with a high percentage of students in gifted/talented programs or with selective admissions: We flag schools where more than 40% of the students are enrolled in a gifted/talented program according to the CRDC data. We also flag some schools with selective-admission policies (schools where students must pass a test to be admitted), but we do not have a comprehensive list of such schools, so not all selective-admissions schools are yet identified in our data.
- Schools with a high percentage of limited English-proficient students: We flag schools where more than 50% of the students are identified as limited English proficient (LEP) in the CRDC data. LEP students are classified by state definitions based on Title IX of the Elementary and Secondary Education Act (ESEA).
The downloadable SEDA data files include student, community, school, and district characteristics that researchers and others can use.
How is socioeconomic status measured?
For each geographic district or county, we use data from the Census Bureau’s American Community Survey (ACS) to create estimates of the average socioeconomic status (SES) of families. Every year, the ACS surveys families in each community in the U.S. We use six community characteristics reported in 5-year rolling surveys from 2005-2009 through 2015-2019 to construct a composite measure of SES in each community:
- Median income
- Percentage of adults age 25 and older with a bachelor’s degree or higher
- Poverty rate among households with children age 5–17
- Percentage of households receiving benefits from the Supplemental Nutrition Assistance Program (SNAP)
- Percentage of households headed by single mothers
- Employment rate for adults age 25–64
The composite SES measure is standardized so that a value of 0 represents the SES of the average school district in the U.S. Approximately two-thirds of districts have SES values between -1 and +1, and approximately 95% have SES values between -2 and +2 (so values larger than 2 or smaller than -2 represent communities with very high or very low average socioeconomic status, respectively). In some places we cannot calculate a reliable measure of socioeconomic status, because the ACS samples are too small; in these cases, no value for SES is reported. For more detailed information, please see the technical documentation.
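As a rough, hypothetical illustration of how a composite like this can be formed, the sketch below standardizes six made-up district-level indicators, reverse-codes those where higher values indicate lower SES, and averages them. The column names are illustrative, not SEDA variable names, and SEDA's actual composite is constructed as described in the technical documentation.

```python
import numpy as np
import pandas as pd

# Hypothetical district-level ACS indicators (made-up values).
df = pd.DataFrame({
    "median_income":     [45_000, 72_000, 58_000, 95_000],
    "pct_ba_or_higher":  [0.18, 0.35, 0.27, 0.52],
    "child_poverty":     [0.28, 0.10, 0.17, 0.05],
    "pct_snap":          [0.22, 0.08, 0.14, 0.03],
    "pct_single_mother": [0.30, 0.18, 0.24, 0.12],
    "employment_rate":   [0.68, 0.81, 0.75, 0.88],
})

# Reverse-code indicators where higher values mean lower SES, so every
# component points in the same direction.
for col in ["child_poverty", "pct_snap", "pct_single_mother"]:
    df[col] = -df[col]
# Income is often logged before standardizing so extreme values matter less.
df["median_income"] = np.log(df["median_income"])

# Standardize each indicator (mean 0, SD 1), average into a composite, and
# re-standardize so that 0 represents the average district.
z = (df - df.mean()) / df.std()
ses = z.mean(axis=1)
ses = (ses - ses.mean()) / ses.std()
print(ses.round(2))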
What does the gap in school poverty measure?
What does the gap in percent minority students in schools measure?
The gap in percent minority students in schools is a measure of school racial segregation. We calculate the Black-White, Hispanic-White, Native American-White, and Asian-White gaps in percent minority students in schools as (1) the percent of minority students in the average Black, Hispanic, Native American, or Asian student’s school minus (2) the percent of minority students in the average White student’s school within a given district, county, or state. When this gap is zero, students in the two racial/ethnic groups attend the same schools or attend schools with equal proportions of minority students, on average (no racial segregation). A positive gap indicates that there are more minority students in the average Black, Hispanic, Native American, or Asian student’s school than in the average White student’s school. A negative gap indicates the opposite.
For these calculations, we define the percent of minority students as the percent of students in a school who are Black, Hispanic, or Native American. On average, these racial/ethnic groups have had limited educational opportunities due to low socioeconomic resources, historical societal discrimination, and the structure of American schooling; these limited educational opportunities have led to low average achievement on standardized tests. We do not include Asian students in the definition of this measure because Asian students tend to have higher socioeconomic status than White and other racial/ethnic groups; that is, Asian families are not, on average, socioeconomically disadvantaged relative to White families in the U.S.
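The sketch below illustrates this exposure-gap calculation with made-up school enrollment counts for a single hypothetical district; it follows the definition above but is not code from SEDA itself.

```python
import pandas as pd

# Hypothetical school-level enrollment counts for one district (made-up data).
schools = pd.DataFrame({
    "school":   ["A", "B", "C"],
    "black":    [150, 30, 10],
    "hispanic": [100, 40, 20],
    "native":   [5, 5, 0],
    "asian":    [10, 60, 30],
    "white":    [40, 260, 340],
})
groups = ["black", "hispanic", "native", "asian", "white"]
schools["total"] = schools[groups].sum(axis=1)

# Percent minority in each school, with "minority" defined as Black, Hispanic,
# or Native American students.
schools["pct_minority"] = (
    schools[["black", "hispanic", "native"]].sum(axis=1) / schools["total"]
)

def avg_exposure(group):
    """Percent minority in the average `group` student's school
    (enrollment-weighted average of school percent minority)."""
    return (schools[group] * schools["pct_minority"]).sum() / schools[group].sum()

# Black-White gap in percent minority students in schools.
black_white_gap = avg_exposure("black") - avg_exposure("white")
print(round(black_white_gap, 3))
```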
There are multiple limitations to this definition of minority. First, we recognize that by grouping Black, Hispanic, and Native American students, we are not able to observe differences in average educational experiences of students belonging to the different racial/ethnic groups. Second, the exclusion of Asian students from this measure based on their average performance does not account for the diversity of the Asian subpopulations. Moreover, it is important to acknowledge that Asian families have been discriminated against socially and economically within the U.S., and this discrimination likely affects their educational experiences.
For SEDA, we report measures of differences in minority composition because this has long been the conventional way of thinking about segregation. However, our data show that this measure is not predictive of unequal opportunity once we account for differences in school poverty composition.
Why are the data here different from the results reported by my state?
SEDA results may differ from publicly reported state test scores.
States typically report test scores in terms of the percentage of students who are proficient. They may also report data only for a single year and grade, or may average the percentages of proficient students across grades. Measures based on the percentage of proficient students are generally not comparable across states, grades, or test subjects, and are often not comparable across years because of differences in the tests administered and differences in states’ definitions of “proficiency.” See the Methods page for more details.
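To see why, consider a simplified, hypothetical example: two states whose students have identical score distributions but who set different proficiency cut scores will report very different proficiency rates. The snippet below illustrates this with standard-normal score distributions and made-up cut scores.

```python
from scipy.stats import norm

# Two hypothetical states with identical score distributions (mean 0, SD 1)
# but different proficiency cut scores (made-up values).
cut_state_a = -0.2   # lenient cut score
cut_state_b = 0.4    # strict cut score

# Percent of students scoring above each cut score.
prof_rate_a = 1 - norm.cdf(cut_state_a)   # about 58% "proficient"
prof_rate_b = 1 - norm.cdf(cut_state_b)   # about 34% "proficient"

# Same underlying achievement, very different proficiency rates; this is why
# proficiency rates alone are not comparable across states.
print(round(prof_rate_a, 2), round(prof_rate_b, 2))
```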
States often rank their schools or provide summary ratings for schools. These may take a number of factors into account, not just test scores. This makes them very difficult to compare across states and grades and over time.
In contrast, the test-score measures we report here are based on more detailed information about students’ test scores. They are adjusted to account for differences in tests and proficiency standards across states, years, and grades.
Why are there no data for my school, district, or county in the Explorer?
There are several reasons the Explorer may not display data for a school, geographic district, or county:
- The unit is too small and/or has too few grades to allow for an accurate estimate.
- More than 20% of students in the unit took alternative assessments rather than the regular tests.
- Data for the unit were not reported to the National Center for Education Statistics.
For more details, please see our Methods page.
Why is my school or district flagged as missing data for recent years (2016-2019)?
If a school or geographic district is flagged as missing more than 50% of its test score data for recent years (2016–2019), its estimates are based on older data (from before 2016). Why? Less data were reported between 2016 and 2019 because of state test opt-outs, which began in the spring of 2015. Reasons for opting out vary, and the number of parents opting their students out of state assessments also varies from year to year. To learn more about opt-out policies in your area, we recommend visiting your local school, district, or state website.
Whom should I contact with additional questions?
How can I learn more?
Using the Opportunity Explorer
How can I get help using the Educational Opportunity Explorer?
The Educational Opportunity Explorer offers an FAQ within the Explorer itself; open it for answers to questions about using the Explorer and interpreting its maps and charts.
If your questions are not answered there, please contact help@edopportunity.org with questions about how to use the Educational Opportunity Explorer and website.