
Classroom Data Collection: A Farce

The Truth About Collecting Unverified Data as a Measure of Teaching Performance.

Valuing growth over achievement: the idea that if we analyze students' standardized test scores from one year to the next and see an increase, then we have won the battle in getting kids to retain knowledge.

 

I remember when the concept was introduced in a faculty meeting around 13 years ago. At first, it was simply an unofficial way for us teachers to analyze where our students needed more help. Within a few years, it became a crucial element in judging our merits as educators, affecting both our employment status and our salaries. In other words, measuring growth in students became an official part of teacher evaluation. But that concept opened a huge can of worms. How can growth in student learning be measured equitably? More on that later.

But first, consider this:

Why does a child's growth in a cherry-picked skill, such as identifying the main idea of a passage, positively affect a teacher's career?

    This is a question that can only be adequately addressed if one understands the tremendous backlash over the supposed poor standing of American schools compared with the rest of the industrialized world. I say "supposed" because when comparing our standing with the rest of the world, we are not comparing apples to apples. Whereas America reports the scores of all of its public school students, regardless of demographics or intellect, many other countries report scores selectively. China is one of those countries.

   This backlash led to a national movement to hold teachers accountable, in a serious way, for their job performance. But how do we do that? Using standardized tests as a measure of teacher effectiveness might seem the obvious choice, but not every teacher gives those tests and not every student takes them. Think art, music, PE, and kindergarten, for example. (And not every student takes those tests seriously, either. The teachers' unions had a bit to say on this matter.)

    In came a system for measuring teacher effectiveness that was supposed to be doable by every teacher, regardless of grade level or subject matter: monitor the growth of one's students on selected skills within any given school year.

       Here's the requirement:

  • Teachers set a skill-oriented goal for students and give a baseline assessment to create a starting point for measuring growth. Obviously, the worse the baseline scores, the more room there is for growth to occur. Not only that, but the teacher creates the performance scale that awards her the points for her evaluation. The only obstacle is getting her principal to approve the goals and performance scale. If she is in favor with her principal, this is no real obstacle. If she is out of favor, she goes through the vetting process for weeks on end.

  • At the end of the year, the teacher gives a final assessment and reports the percentage increase in skill level among her students to her supervisor.

  • The teacher is rewarded on her final evaluation for the increase in her students' achievement.

        

Here's the kicker:

No one but the teacher verifies the growth in her students.

Yet this data, combined with the principal's subjective review of her teaching, forms the bulk of her performance-based evaluation at the end of the school year. How is this practice not ripe for fraud? Not once in my four years of officially reporting scores did I ever hear a colleague lament that her students had not achieved at least 80% growth over the baseline given at the beginning of the year. (80% was the standard we were told to strive for, under something called the Universal Level of Achievement.) And not once were any of us asked to provide the tests that backed up the scores we reported. I know; I asked my colleagues about it at the end of each school year.

 

    In my final year of teaching in that district, I ran an experiment. Whereas I had previously done my best to implement the goals and assessments in an honorable and honest way, that year I gave only a baseline assessment and fabricated both the progress monitoring and the final assessment. I did put thought into the "scores" I recorded on my grid, not wanting to assign a rating that did not match what I had witnessed in regular classwork. When I reported my final growth measurement, the principal accepted it without question. She never asked to see any of the assessments I had given.

      The time I saved by simply entering scores that looked plausible gave me more time to plan for my students and more time in the classroom to actually instruct them. I have to wonder how many teachers across my former school district have discovered this as well and are merely going through the motions to satisfy a pointless requirement.

 

    Collecting growth data is only one part of a pay-for-performance scheme, but it's the part most ripe for fraud. Imagine being required to fill out some bit of paperwork that no one but you would ever check for authenticity, yet whose results could net you a raise in pay or, at the very least, keep your boss off your back.

     Would you be entirely earnest about it? Or would you find a way to make the odiousness of it just go away?
 

[Video: Performance-based teacher eval]
