School Beat: More on Standardized Testing—Timing is Everything

by Susan Gold on May 10, 2007

In response to Lisa Schiff’s recent article, ‘Why I Hate Standardized Testing,’ I would like to add one more important element to the argument that standardized tests are not adequate measures of student achievement: timing.

What if you were taking a class and the final exam was scheduled before the end of the semester? What if it were scheduled after you had attended only 80% of the course? What if the professor told you the reason for this was to enable you to get your results two months later, in the middle of your semester break? What if the professors and the college were evaluated on your and other students’ scores on the final exam? What if an administrator could be fired if the scores were low?

Bizarre? Yet that’s the way we do things in California’s public schools. The battery of standardized tests is given within a window of 21 days, no more than 10 days before and 10 days after 85% of the school year has passed. In San Francisco we normally begin testing as soon as the window opens.

Testing itself takes up two weeks of instructional time, but that is minimal compared to the amount of time that goes into test preparation. Since the test has a special format, students need to be taught how to demonstrate their knowledge within that particular framework. Therefore, for a minimum of two weeks prior to the test, most teachers must focus on the tricks of the trade: how to differentiate a good answer from the “best” answer, how to filter information, and how to pace oneself. Schools, especially those whose test scores in past years have been “below basic,” let subject matter go by the wayside as early as March. Many schools devote at least a month to test preparation.

The tests themselves take four to seven days depending on the grade level. In secondary schools the entire schedule is impacted. Testing currently takes up a minimum of five weeks. Would this time be better spent helping students meet rigorous state standards rather than assessing whether they already have? After all, only 70% of the school year has passed when this process begins.

In addition, schools are currently responsible for showing improvement in test scores for each ethnic category of their student population. While this seems like a reasonable idea, the way it plays out looks like the southern states after Reconstruction. Boys and girls who are in danger of scoring low on the test, a large percentage of whom happen to be African American and Latino, are excluded from electives and placed in supplementary reading classes where they learn to decode and comprehend random passages that have little connection to each other, to say nothing of their lives. What if your reading time every day were 40 minutes of college entrance exam (SAT or ACT) texts and their accompanying questions? Would you care much about reading?

Why can’t testing be done with minimal impact on instruction? Why can’t testing occur in the last days of the school year? I asked that question of an administrator in the San Francisco Unified School District’s Office of Assessment, Evaluation, and Research, who cited several factors, including turnaround time for results, high school graduation activities, AP exams, and falling attendance rates at the end of the semester. However, instructional time seems more important than the other factors cited. Since test scores are now so high-stakes, it would seem that districts would want the tests to be given after the maximum amount of instruction had taken place.

A few of the considerations are moot. For example, the school district has no records of attendance rates falling off in the last weeks of school. In my experience as a teacher, I lose children at random times during the year to family vacations and emergencies. When I have done special activities on the last days of school I have had high attendance rates. Is it possible that children, parents, and teachers consider the year over because the tests have been given? Is it possible students would apply themselves up to testing time no matter when it happens?

Using AP exams and high school graduation activities to determine testing time for elementary and middle school students seems a rather unfair reason to test so early. And is it actually relevant at all? I went to school in New York State a million years ago, when computers were in their infancy. Regents Exams, the equivalent of our high school exit exam (CAHSEE), were given at the very end of the school year. If you didn’t pass, you didn’t graduate. Somehow the scores got back to students in time to make that determination. In an age where for an extra $15 you can get back your SAT scores within a week, why do school districts have to wait almost four months to get results?

While it can be beneficial to learn how to take multiple-choice tests, should it be at the expense of learning skills and content that can actually be applied to situations outside the education system? More and more professional exams are incorporating case studies and performance-based demonstrations of knowledge to ensure that students have the necessary education to be competent in real situations. In the midst of an operation, are doctors going to try to figure out the next step from a multiple-choice test? I hope not.

I was gratified to learn that next year San Francisco will be testing at the end instead of the beginning of the window. While that still seems too early for many of us, other educators I’ve mentioned this to are relieved to know we will have a little more time to teach the standards before our students are expected to demonstrate what they have learned.

Susan Gold is a teacher at Presidio Middle School and a San Francisco Education Fund Teacher Leadership Institute fellow.
