KASB’s Report Card 2016 Methodology Notes

We have had some folks asking us about the methods and data used for the 2016 KASB State Education Report Card, so I wanted to provide some additional information on why we used the approach we did.

The Report Card represents an update to both the Kansas Educational Achievement Report Card 2015 and the Comparing Kansas series of reports produced from August 2015 through January 2016 (all the reports can be found at kasbresearch.org/publications). Along with providing the most recent years available for statistics used in earlier reports, the 2016 Report Card utilized additional measures and used a different method for determining Aspiration States.

In the earlier work, we used fourteen measures of student achievement and attainment:
  • Student Attainment
    • Freshman Graduation Rate
    • Cohort Graduation Rate
      • All Students
      • Economically Disadvantaged Students
      • Limited English Proficiency Students
      • Students with Disabilities
    • Percent of 18- to 24-year-olds with a high school diploma
  • Student Achievement
    • Percent performing at or above “Basic” on the NAEP assessment
      • All Students
      • Students Eligible for the National School Lunch Program
      • Students Not Eligible for the National School Lunch Program
    • Percent performing at or above “Proficient” on the NAEP assessment
      • All Students
      • Students Eligible for the National School Lunch Program
      • Students Not Eligible for the National School Lunch Program
    • Percent meeting all four ACT benchmarks (adjusted for percent participation)
    • Average composite SAT score (adjusted for percent participation)

We used as many variables as we could because we knew a multiple-measure approach (similar to a portfolio assessment approach) would yield better information about student outcomes than any one measure (subject to potential bias and measurement error) could. We included measures of both student achievement and attainment because we felt it was important to define student success not only in terms of test scores, but also in terms of how many students actually graduate from high school.

In addition, we included the overall statistics as well as any available subpopulation statistics because we felt it was important to look both at how the students in the state were doing as a whole and at how specific subgroups were doing. Some might say this means we “double-counted students”; however, it is important to clarify that this is state-level aggregate data analysis, which is very different from student-level data analysis, where including the same student’s results in multiple groups would be a concern.

In terms of the assessments used, we chose NAEP, ACT, and SAT because they are the only three K-12 measures collected and reported for every state. The NAEP has limitations, both in the size of the samples used and in light of recent research questioning how comparable its results are across states, but it is nonetheless the only measure available that provides comparisons in reading and math at the fourth and eighth grades.

Both ACT and SAT results were included because each state typically has a much higher participation rate on one exam than the other; however, all states have results for both exams. Research shows that a state’s participation rate has a large effect on its overall ACT and SAT results, with higher participation predicting lower average results statewide. Because of this, we devised a method of ranking states’ ACT and SAT outcomes by the amount each state deviated (above or below) from the outcome predicted by its percent participation, using the same linear regression model described in that research.
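For readers curious about the mechanics of that adjustment, here is a minimal sketch. The state names, participation rates, and scores below are made up for illustration; they are not our actual dataset.

```python
# Sketch: rank states by how far their average score deviates from the
# value predicted by percent participation (hypothetical data).

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# (state, percent participation, average composite score) -- made-up values
states = [("A", 100, 21.0), ("B", 75, 21.5), ("C", 30, 24.0), ("D", 20, 24.2)]

slope, intercept = fit_line([s[1] for s in states], [s[2] for s in states])

# Residual = actual minus predicted; a higher residual means the state did
# better than its participation rate alone would predict.
residuals = {name: score - (slope * pct + intercept)
             for name, pct, score in states}
ranked = sorted(residuals, key=residuals.get, reverse=True)
print(ranked)
```

Note that the fitted slope comes out negative, reflecting the pattern described above: higher participation predicts lower average scores, so ranking on the raw averages alone would penalize high-participation states.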

Taking all of these measures together, we looked for states that had better outcomes than Kansas on at least 8 of the 14 measures. Initially, in August 2015, we found only five states met this criterion (New Hampshire, New Jersey, Massachusetts, Vermont, and Minnesota), meaning Kansas ranked sixth in the nation on the combination of these measures. We ran the analysis again with updated statistics in January 2016 and found that seven states outperformed us on the same criterion (Indiana, Iowa, Massachusetts, Nebraska, New Hampshire, New Jersey, and Vermont), meaning Kansas had moved from sixth to eighth. We ran it again in May 2016 and found the same seven states outperforming Kansas.
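The threshold test itself is simple enough to sketch in a few lines. The values below are hypothetical placeholders (one number per measure, higher is better), not real state data.

```python
# Sketch of the "at least 8 of 14 measures" test (hypothetical values).

def outperforms(other, kansas, threshold=8):
    """True if `other` beats `kansas` on at least `threshold` measures."""
    wins = sum(1 for o, k in zip(other, kansas) if o > k)
    return wins >= threshold

# Fourteen made-up measure values for Kansas and one comparison state.
kansas_vals = [85, 90, 78, 88, 70, 92, 60, 75, 80, 83, 20, 21, 500, 510]
state_vals  = [86, 91, 80, 89, 72, 93, 62, 76, 79, 82, 19, 20, 495, 505]

result = outperforms(state_vals, kansas_vals)
print(result)  # this state beats Kansas on exactly 8 of 14 measures -> True
```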

By August 2016, new data was available for many of the statistics we were using, but we also decided it was time to evaluate those statistics and decide if they were the right ones to be using. We decided to drop the Freshman Graduation Rate because it was no longer being reported by states. In addition, based on the goals outlined in the Rose Capacities and KSDE’s Kansans CAN initiative, we felt we needed to include outcomes related to postsecondary success. We chose the other two measures of educational attainment for 18 to 24-year-olds available from the U.S. Census Bureau: the percent with some college or higher and the percent with a four-year degree or higher.

The postsecondary measures chosen are not perfect, by any means. We would like to be able to report on the number of Kansas high school graduates enrolling in postsecondary institutions in any state, but this data is simply not available. We would also like to discuss postsecondary remediation rates, but these are likewise not available in a format comparable from state to state. So we are using the 18 to 24-year-old measures knowing there are issues: they include people who moved to Kansas after obtaining high school diplomas in other states, exclude Kansas high school graduates who enroll in out-of-state institutions, and so forth.

With the elimination of one measure and the addition of two, we were now up to 15 student outcome measures. These measures were organized into three categories: postsecondary, graduation, and assessments. Rather than looking for the number of states that outperformed Kansas on a majority of measures, our Associate Executive Director of Advocacy Mark Tallman wanted to be able to produce an overall rank so we could also see to what extent each of those states outperformed Kansas.

This is where the researcher and statistician in me had a bit of difficulty, as calculating an average of ranks and then ranking that average felt a bit like writing a book report on a bunch of other book reports tied together. But then I reminded myself of a key mantra I have been repeating to myself since college – research provides indicators, not facts. We acknowledge that these state-level statistics represent aggregate measures that can mask a lot of the things going on at the lower levels, such as at the county, district, school, classroom, and especially student levels. They are not perfect measures and they are not designed to produce perfect conclusions. We are simply trying to get an idea of where Kansas stands based on a bunch of aggregate measures considered together.

We averaged the ranks and then calculated an overall rank based on those averages. Kansas landed at number 10, with the following states in the top nine positions:
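The steps in the list below can be illustrated with a toy example. These ranks are invented for three measures and three states, just to show the "rank the average of the ranks" mechanic:

```python
# Sketch of "average the ranks, then rank the averages" (hypothetical ranks).
ranks = {
    "New Hampshire": [1, 2, 1],
    "Massachusetts": [2, 1, 3],
    "Kansas":        [10, 9, 11],
}

# Unweighted mean rank per state (lower is better).
avg = {state: sum(r) / len(r) for state, r in ranks.items()}

# A state's position in this ordering is its overall rank.
overall = sorted(avg, key=avg.get)
print(overall)  # best to worst: ['New Hampshire', 'Massachusetts', 'Kansas']
```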
  1. New Hampshire
  2. Massachusetts
  3. Vermont
  4. New Jersey
  5. Nebraska
  6. Iowa
  7. Minnesota
  8. Indiana
  9. North Dakota

However, after additional discussion, we decided we should be using a weighted average, largely because of the statistics where subpopulations and multiple measures were used, such as the NAEP statistics and the Cohort Graduation Rate measures. Because we had six statistics for NAEP, NAEP overall would have had six times the impact of, for example, the ACT results. In addition, we wanted each of the three types of indicators (postsecondary, graduation, and assessments) to get equal weight.

In the end, we came up with a ranking that weighted each of the measures as follows:

As you can see, the postsecondary measures in blue, the assessment measures in green, and the graduation measures in orange each make up one-third of the total. In addition, the six NAEP measures taken together have the same weight as the ACT measure and as the SAT measure.
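That scheme can be written out in code. To be clear, the measure names and the assignment of measures to categories below are our reading of the description above (two postsecondary measures, five graduation measures, eight assessment measures), not values copied from the chart, so treat the specific fractions as an illustration of the scheme rather than the chart itself:

```python
# Illustrative reconstruction of the weighting scheme: each of the three
# categories gets 1/3 of the total, and within assessments the six NAEP
# measures together carry the same weight as ACT and as SAT.
from fractions import Fraction

third = Fraction(1, 3)
weights = {}

# Postsecondary (assumed: 2 measures sharing 1/3).
for m in ["some_college", "four_year_degree"]:
    weights[m] = third / 2

# Graduation (assumed: 4 cohort subgroups + HS diploma sharing 1/3).
grad = ["cohort_all", "cohort_econ", "cohort_lep", "cohort_swd", "hs_diploma"]
for m in grad:
    weights[m] = third / len(grad)

# Assessments: NAEP (all six measures combined), ACT, and SAT each get 1/9.
ninth = third / 3
for m in ["naep_%d" % i for i in range(6)]:
    weights[m] = ninth / 6
weights["act"] = ninth
weights["sat"] = ninth

# Sanity checks: 15 measures, weights sum to exactly 1.
assert len(weights) == 15 and sum(weights.values()) == 1
```

The weighted overall score for a state is then simply the sum of each measure's rank times its weight, and states are ranked on that sum.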

As it turns out, the weighting did not impact Kansas’ rank at all; we were still in tenth place. As for the states ahead of us, two dropped off the list, two more were added, and the remaining states moved around a little bit:

  1. New Hampshire (no change)
  2. Massachusetts (no change)
  3. New Jersey (up from #4)
  4. Iowa (up from #6)
  5. Nebraska (no change)
  6. Vermont (down from #3)
  7. Illinois (new – replaced either Indiana or Minnesota)
  8. North Dakota (up from #9)
  9. Connecticut (new – replaced either Indiana or Minnesota)

So, to summarize: KASB initially created a method for identifying states that perform better than we do in terms of student outcomes, and found that only five states met this criterion, putting us at number 6. Later we ran the same analysis with updated data and found Kansas had fallen to number 8. Then we revised our methods to take new factors into consideration, to produce an overall ranking, and to base that ranking on a weighted average of the fifteen factors included, and we found Kansas to be at number 10.

Seeing the general downward trend in the earlier rankings, KASB also did something this time around we hadn’t done before: we looked at change over time. Though we didn’t use this information as part of the Aspiration States identification, we noted how many states moved ahead of or fell behind Kansas on the individual measures, using the earliest data available, going back to 2005. Unfortunately, in most cases there were more states moving ahead than there were falling behind.
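The tally on a single measure works like this sketch (state names and values are hypothetical, and higher is assumed to be better):

```python
# Sketch of the change-over-time tally on one measure (hypothetical data).
# A state "moved ahead" if it trailed Kansas in the earliest year but led
# in the latest year; "fell behind" is the reverse.

earliest = {"Kansas": 85, "A": 83, "B": 88, "C": 84}   # earliest-year values
latest   = {"Kansas": 86, "A": 88, "B": 84, "C": 85}   # latest-year values

moved_ahead = [s for s in earliest if s != "Kansas"
               and earliest[s] < earliest["Kansas"]
               and latest[s] > latest["Kansas"]]
fell_behind = [s for s in earliest if s != "Kansas"
               and earliest[s] > earliest["Kansas"]
               and latest[s] < latest["Kansas"]]

print(moved_ahead, fell_behind)  # ['A'] ['B'] -- C trailed in both years
```

Repeating this tally for each of the fifteen measures gives the counts of states moving ahead of, and falling behind, Kansas.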

The trend data allowed us to expand the conclusions we could draw from the comparisons. Previously, we could assert that the data suggested Kansas student outcomes were high despite a funding rank somewhere in the middle. By looking at the historical data and determining how many states moved ahead of Kansas and how many fell behind, we could also say the data suggests Kansas is losing the lead it has on other states. To put it another way: if the trends suggested by this analysis continue, it is likely that Kansas’ ranking in terms of education outcomes will continue to fall.

KASB feels the new method for determining which states we should be looking to for ideas on improving educational outcomes in Kansas is sound, and is an improvement on what we have used in the past.