I’ve been wondering how to address this, both in my head, and in how I present it to the world.
We got our test scores back – the raw numbers, at least.
Number of kids that I had who tested at Proficient or Advanced: Zero.
That was kind of a shock. I’m used to my kids doing about 5-10% better than other groups that teach the same demographic1.
When I brought this up with my non-teaching friends, they immediately pointed out that I’d been set up to fail. I’m not ready to buy that. I can’t just look at that zero, blame it on someone else, and walk away. I need to find something worth chasing in there.
I’m still struggling with this. I know that I got some kids from Below Basic to Basic, and some from FBB to BB. Maybe that’ll be enough.
Thanks to Chris, who had a very timely post. For all our talk of data, these raw clumpings tell me very little – so little, in fact, that I can’t even align them with my own assessments. That’s something I need to be able to do – both on my side, in how I develop those assessments, and in the form that the testing companies give me the data in.
On the flip side of this, I’m also looking at our school’s scores overall – we’re doing dismally. The worst of it is, we have an advanced math program where we push 7th graders into algebra, and the 8th graders into geometry. In both of those courses, more than half are testing below Basic. We’re not doing those kids any favors, and this coming year those are the kids I’m going to be teaching.
Maybe it’s time to print this out and go in for a visit with my admins.
1 I realize 5-10% isn’t huge, but it’s an indication that I’m doing something right.