Locating the sweet spot of dumbness: the rationale for NECAPs in the fall

A RATIONALE FOR FALL TESTING

Michael Hock
Vermont Department of Education

There really is no perfect time during a school year to schedule state assessments. Spring and fall testing both have distinct advantages and disadvantages. No matter when the testing window is scheduled, it will cause some level of disruption to a school’s instructional routine and inconvenience for staff and students. It is also nearly impossible to find an entire month for testing when there are no potential conflicts with school events, in-service days or holidays. On balance, however, there are several factors favoring a fall testing window. This paper outlines regulatory, test design and instructional factors that provide a rationale for fall testing. 

Regulatory Factors

The federal No Child Left Behind Act requires states to use assessment results to make accountability decisions prior to the beginning of the school year that follows testing. This allows parents of children attending identified schools to access school choice and/or supplemental service options on a timely basis. Unfortunately, even when everything goes as planned, the process and procedures necessary to turn test scores into AYP reports typically take months, particularly when tests include constructed-response items that must be hand scored. In addition, to ensure the accuracy of assessment results, the Vermont Department of Education gives local administrators an opportunity to verify student roster reports prior to public release of state-level results, adding weeks to the interval between testing and accountability reports. Meeting NCLB deadlines with a spring assessment window is daunting at best and allows little flexibility for identifying and correcting unforeseen problems. Fall assessment provides sufficient time to administer tests, produce and verify reports, and make AYP decisions well within the NCLB time frame.

Test Design Factors

Vermont’s Grade Expectations (GEs) define the skills and concepts students should have learned after completing a specific grade level. All of Vermont’s current assessment development is focused on creating tests that are linked directly to the GEs. Assessing students’ proficiency on one year’s GEs at the beginning of the next school year results in measurement of learning that is deep and enduring, that is, the skills and knowledge a student remembers after being given an opportunity to forget. Some administrators and teachers have expressed concern that the delay between learning and assessment that occurs with fall testing will reduce the probability that schools will meet annual measurable objectives, the assumption being that over the summer students will forget most of what they learned the previous school year. This concern is unwarranted for two reasons: (1) the standard-setting process, which determines the test scores needed to meet the standard, will be based on proficiency expectations at the time of testing, not at the end of the previous school year; and (2) if a student really does forget everything over the summer, there is a reasonable argument that the student hasn’t really achieved the GEs.

Instructional Factors

There are a number of factors supporting fall assessment that relate to planning and evaluating instructional programs. They include:

  • Spring testing, typically scheduled for late March and early April, occurs before students have completed a full year’s course of study. As a result, a test designed to assess GEs would very likely contain skills and concepts that had not yet been covered by the date of a spring administration. Fall testing has the advantage that it occurs after students have completed the entire instructional sequence for a particular grade level;
  • According to the University of Iowa Center for Assessment, fall testing is more “actionable” than spring testing, meaning that it comes at a point in the school year when results are timely and can be used to best advantage. The next several bullets address this advantage;
  • With fall testing we anticipate having results returned to schools near the mid-point of the school year. This will allow the teacher who gave the test to receive results within the same school year, in time to evaluate and change programs for individual students or groups of students;
  • Schools will receive results in time for spring action planning, for targeting summer professional development activities, and for making any related budgetary adjustments;
  • Fall test scores can also be used for planning summer services and identifying prospective students. This is rarely possible with spring testing. According to a 2003 study conducted by the Center for Evaluation and Education Policy at Indiana University, 82% of the states that administered spring assessments (14 out of 17) did not get results back from their assessment contractors in time for use in planning summer school;
  • The typical school year, particularly in the elementary and middle grades, begins with orientation and review of skills covered during the previous school year. As a result, fall testing can be scheduled at a natural transition point between review and the introduction of new units and materials, minimizing disruption to the instructional cycle. Conversely, spring testing generally comes at a point when units are in progress and must be suspended until testing is completed;
  • Because the teacher who administers a fall test is not the teacher who taught the skills and concepts that are being measured, fall testing can help dispel the misperception that the current teacher is responsible for how well the students perform. Vermont’s state assessments are designed to reflect the cumulative effects of a school’s curriculum, instruction and student support system, not the contributions of a single teacher. Fall testing can help reinforce this system-wide approach to data-driven program improvement;
  • Related to the previous point, in the weeks prior to spring testing teachers often feel obligated to take time away from standard instruction to do intensive, “catch-up” tutoring (particularly if they believe they will be held personally accountable for their students’ performance). This practice is problematic on several levels. First of all, it rarely makes a difference (reviewing practice tests and teaching students test-taking strategies would probably be more helpful). It also represents an inappropriate and unintended influence of assessment on curriculum and instruction. Fall testing relieves pressure on the current year’s teachers (since they did not teach the skills being assessed) and, from a logistical perspective, provides only enough time for prescribed test preparation. Any catching up is part of the review that occurs naturally at the beginning of every school year.

References 

Wilhelms, M. T. (2003). INSTEP with ISTEP+: Practical and Consistent Reasons to Maintain Fall Testing. Indianapolis, IN: Indiana Department of Education.

University of Iowa (n.d.). Iowa Testing Program website.

Spradlin, T. E. (2005). ISTEPing in the Right Direction? An Analysis of Fall versus Spring Testing. Bloomington, IN: Indiana University Center for Evaluation and Education Policy.

NECAP timing challenged

I had an interesting experience the other night when I was meeting with a school board and the issue of teacher effectiveness arose. When I said that the NECAP scores are intended to measure student achievement/growth during the previous year, another participant told me that wasn't really true, and that because the kids get six weeks or so of additional teaching in the new year before testing, I couldn't legitimately credit a previous year's work. Later in the discussion, a battle of the test scores arose when the school's NWEA tests allegedly showed one thing and the NECAP another. Add in DIBELS and STAR Math, and there was a recipe for confusion. Linking student achievement with teacher evaluations is significantly more complicated than it appears!