In comments to Friday's snarky post, I was chided for not engaging with the critique of standardized testing offered by Washington Post education blogger Valerie Strauss. I had intended to say more about the general topic, as there have been a bunch of much-cited articles in a similar vein crossing my RSS reader recently, but I sprained my ankle playing basketball at lunch, which kind of blew a hole in my afternoon...
Looking at her posts, though, it's hard to really engage with her critique, because there's next to nothing there to engage with. In the most recent post, the closest thing to a critique of standardized testing is the repeated assertion that the skills tested on the Florida exams in question are not relevant. Which is great, if proof by repetition works for you, but I disagree with that view. If you enjoy testing yourself against online versions of these things, the Post has made selected math and reading questions available as online quizzes; they're not substantially harder than the example tests I looked at back in December, and the math skills tested seem pretty useful and relevant to me.
There's also a link to an older post about Strauss's bad test experience:
Second grade. Everglades Elementary School in Miami. Mrs. Hirsch, my classroom teacher, passed out the first standardized test I had ever taken. I took the exam and thought I had done well.
I hadn't.
In fact, I got every answer wrong.
How does a good student from a highly educated middle-class background do that in second grade?
I was supposed to circle only one answer, as most multiple choice problems require. I picked more than one and then annotated the answers in the margins.
This, to my mind, is not an interesting critique of standardized testing. In fact, it fails on both the adjective and the noun. It's not useful as a critique because it hangs on an analogy between a badly designed second-grade test, probably circa 1970, and the very different tests in use today; and it's not interesting because this sort of "I was too smart for the test" story is as tediously common as it is annoyingly precious.
I find this really annoying, because it's not like I'm a hard-core proponent of high-stakes standardized testing. Quite the contrary-- I think there are ample reasons to be wary of the whole notion, especially because pinning financial incentives to test scores encourages "teaching to the test" at best, and blatant fraud at worst. As has been demonstrated over and over again.
And yet, this kind of "standardized tests are bad because they fail to capture my essence as a beautiful unique snowflake" crap pushes me to take the pro-test side. Because, really, this is not a significant source of problems. The vast majority of people who fail math tests do so because they can't do the math, not because the system is stifling their inner genius. There are problems with the idea of high-stakes testing, but this is so far down the list that it's not even worth discussing.
Sadly, this sort of nonsense is inevitable any time the subject comes up. It's just another variant of "My kid is a genius but gets bad grades because he's bored by normal school." Which is an argument I have very little sympathy for.
I have a similar sort of reaction to the anti-Teach for America article that got a lot of buzz a couple of weeks ago. The author, Andrew Hartman, makes a good case that Teach for America is not the inspirational success story it's sometimes made out to be, but he ends in a way that puts me off the whole thing, with sneering insinuations that wealthy people are deliberately funding Teach for America as a way of keeping poor people down, and a weird rant against the entire idea of public schooling. Which is delivered with the best of intentions-- the implied content of his ideal education is all good stuff-- but in a manner that makes me want to take the opposite side.
The third much-linked recent piece on education that really bugs me is this post about the irrelevance of high school education:
High school students know that they will almost certainly be using computers in any desirable job that they manage to get after high school. They know that a computer is a requirement for success in today's higher education environments. They know that, in the "real world", college students don't write papers in longhand on loose-leaf notebook paper; they know that, in the "real world", people don't create business presentations with markers and paste on poster board or tri-fold displays; they know that, "in the real world", people who engage in any type of research may still occasionally use books, but they conduct the majority of their research using online tools. They know that, "in the real world", bankers do not keep their accounts in paper ledger books, or do their financial forecasting only with the aid of a calculator. Yet high school students are regularly asked to write in longhand on notebook paper, make presentations using kindergarten tools, research mostly using books, and do their calculations on paper. Why should anyone be surprised that they don't find their high school experiences "relevant"?
The problem here is that this is presented as mostly a failure of curriculum-- particularly a few links down the chain-- as if this is being done solely out of hidebound pedagogy. In fact, I suspect that the worst of these irrelevancies are less a matter of program design than resource constraints. If you want every student to do their final report on a computer, you need to provide computer access to every student, and that costs money. Which is in short supply everywhere these days, but particularly in education. Go take a look at the teacher requests at DonorsChoose-- some of them are asking for money to buy basic furniture for their classrooms. I'm sure they would be thrilled to have every student typing their papers in the latest version of Word and printing slick posters, but when you've got teachers begging for money to buy chairs, poster printers aren't really on the menu. (To be fair, the original author seems to be aware of this, but doesn't make it a point of emphasis. By the second- or third-order links, though, it's being presented as a failure of imagination, with nary a word about economics.)
(There's also the problem that some of these "irrelevant" items actually serve a pedagogical purpose. We often make students do problems on paper that they could do with a computer because that's how you learn to understand what the computer is doing. And if you don't have a solid understanding of how the computer gets the answer, sooner or later you're going to run into trouble.)
So, anyway, there's a collection of recent writing about education that has made me grumpy.
We often make students do problems on paper that they could do with a computer because that's how you learn to understand what the computer is doing. And if you don't have a solid understanding of how the computer gets the answer, sooner or later you're going to run into trouble.
Thank you for this.
In fact, it goes all the way to the highest levels. Although I am a theoretical physicist whose papers include computer-generated graphs of results from computer simulations, do you know how those simulations started? That's right, I wrote out some equations by hand. On a dead tree. Sometimes we even plugged in numbers by hand for simple versions of the simulation (i.e. 2 iterations on a tiny system) to verify that we knew what we were doing before we wrote code.
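The sanity check described above -- run a couple of iterations of a tiny system by hand, then make sure the code reproduces them -- can be sketched in a few lines. The logistic map here is just an illustrative stand-in for whatever the real simulation computes, with parameters chosen so the hand arithmetic is easy:

```python
# Sketch of the "check the code against a dead-tree calculation" habit:
# iterate a tiny system a couple of steps and compare to hand-worked values.
# The logistic map x -> r*x*(1 - x) is an illustrative stand-in, not any
# particular physics simulation.

def simulate(x0, r, steps):
    """Iterate the logistic map `steps` times, returning the trajectory."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        trajectory.append(x)
    return trajectory

# Hand calculation, for r = 3 and x0 = 0.5:
#   step 1: 3 * 0.5  * 0.5  = 0.75
#   step 2: 3 * 0.75 * 0.25 = 0.5625
hand_values = [0.5, 0.75, 0.5625]
assert simulate(0.5, 3.0, 2) == hand_values  # the code matches the paper
```

Only after the two-step trajectory matches the margin scribbles do you trust the code on a run too long to do by hand.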
I suppose that we could have done this on a tablet instead of a piece of dead tree, but even if a tablet were as convenient as paper (I maintain that for a creative style of theoretical physics, sitting at a table with papers strewn around next to a board full of equations is still a useful way to do certain types of theoretical work), it would still be using fairly unsophisticated aspects of the computer's power, not the real power of computation.
Me too. I'm a statistician and use pieces of dead tree often. That's typically when I am doing the most important things. The rest is just getting it done.
The part that irks me is the "I don't have to understand". Folks say our kids don't need to know how to compute a best-fitting line, for example, because the computer will do that. I use a computer too, but at least I know what the hell the "best" means, and from that can reinvent how to compute that line.
There's something like a celebration of "I don't have to know" stuff going on that is completely alien to me.
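Knowing what "best" means is all it takes to reinvent the fit: "best" is the line minimizing the sum of squared vertical residuals, which has a simple closed form. A minimal sketch, with illustrative function and variable names:

```python
# "Best" fitting line = the (slope, intercept) minimizing
# sum((y - (m*x + b))**2). Setting the derivatives to zero gives the
# closed form below: slope = cov(x, y) / var(x), intercept from the means.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    m = cov / var          # slope
    b = mean_y - m * mean_x  # the line passes through the point of means
    return m, b

# Points lying exactly on y = 2x + 1 should recover slope 2, intercept 1.
print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))
```

That derivation is exactly the kind of "I do have to understand" knowledge the comment is defending: once you know it, the spreadsheet button is a convenience, not a mystery.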
Most writing on education makes me grumpy, because most writing on education is done by people who haven't spent any time in classrooms recently. Go volunteer in a classroom for a year, spend time actually helping children learn, but don't generalize from your one experience, and then maybe I'll be less grumpy reading you.
BTW, in my daughter's high school, she does all her papers on the computer, her presentations are PowerPoint or video, research is done mostly on-line, and a graphing calculator is required for her math class. Most of that's been true since about 3rd grade, except for the graphing calculator. In fact, for her papers, she's required to do them on the computer, since she has to submit them to one of the anti-plagiarism sites.
This post reminded me of Asimov's http://en.wikipedia.org/wiki/The_Feeling_of_Power.
He was a visionary.
"I'm sure they would be thrilled to have every student typing their papers in the latest version of Word and printing slick posters, but when you've got teachers begging for money to buy chairs, poster printers aren't really on the menu. "
2011 was probably the last year for textbooks being cheaper than computers. Add Neo-office ($0) instead of the MS office suite and you're done.
FYI, it looks like the math questions are straight off the 2006 exam.
I remember walking out of the GRE exam, and overhearing a guy in front of me grousing about "who cares whether the third sailboat is blue" (in reference to a logic problem on the then-brand-new logic section of the exam). I quickly (although not instantly, sad to say) realized that this was exactly what made me good at what I did for a living -- as opposed to the guy saying it, who (based on his wearing a letterman's jacket from the campus we were on) probably had not earned a living yet at all.
On (an) other hand, the last decade's moves towards high-stakes standardized testing have not, in my opinion, taken into account just how hard (read: expensive) it is to create and maintain a good test.
Most writing on education makes me grumpy, because most writing on education is done by people who haven't spent any time in classrooms recently. Go volunteer in a classroom for a year, spend time actually helping children learn, but don't generalize from your one experience, and then maybe I'll be less grumpy reading you.
I have some sympathy for people who want to offer an outside perspective on what the rest of the world is wanting the education system to do. As long as they concentrate their critique on outcomes and methods, I see some value in it.
However, what's fascinating is that even among educators, people who teach fewer classes, or who teach at different levels, have very different views than people in the trenches teaching a lot of classes at whatever level the other guy is bloviating about.
Professors of education have opinions that don't always reflect what's happening in k-12 classrooms.
Professors who have light teaching loads have very strong opinions on how those of us with heavy teaching loads should teach our classes. Fascinatingly, one of the best ways to get a light teaching load is to be an alleged expert on education and/or pipeline issues, at which point you are given ample free time to tell me what to do.
Alex @8:
Bingo. And the reason that professors of education have a poor understanding of what takes place in K-12 classrooms is that most of them have NEVER been there. It is quite possible to teach how to teach 5th grade math without ever having taught math to 5th graders.
People who teach physics at a university will be actually doing physics when they do their research. Imagine if physicists teaching graduate quantum mechanics only did some sort of meta studies of how other physicists did research.
Would we be better off if the people developing new teachers actually taught grade school for a semester every year?
I sort of get why there's a disconnect between k-12 and the College of Education. It's a big problem, but I sort of get why we have this problem.
What I don't quite get is the sort of evangelical zeal in people who haven't taught 12 units in a long time, but sit on the same hallway as people teaching 12 units and have strong opinions on how those of us teaching 12 units should organize our classes. Sometimes the reason they don't teach 12 units is because they are heavily involved in curriculum, pipeline, and pedagogical issues. These people are in the same building as us, and have no excuse for a disconnect.
Last part reminds me of this Eugene Wigner quote: "It is nice to know that the computer understands the problem. But I would like to understand it too."
I've noticed with my state's grade-level tests that only the overall scores are published, with no mention of something as simple as the standard errors for the tests. My school traditionally does very well on these sorts of tests, but I was absolutely floored a couple of years ago when the math results for two different grades came out backwards. They were two grades I had taught, so I knew all of the students from science class, and I knew which math classes they had been placed into. There was a huge gap between the two groups (close to a full grade level in actual math placement, on average), yet the lower group scored better on the math test.

As someone else hinted, the big-name tests generally do what they are supposed to: achieve some level of interpretability. I have significant doubts that most of the state tests come anywhere near that level. The goal has become reporting numbers for the sake of numbers rather than meaningful numbers, a hallmark of a pseudoscientific endeavor if ever there was one.
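The standard error the comment asks for is cheap to compute once you have individual scores, and it's what tells you whether a gap between two grades is bigger than the test's noise. A minimal sketch; the score lists are made up for illustration:

```python
# Sketch of the missing error bar: the standard error of the mean score.
# If two groups' means differ by less than a couple of standard errors,
# the "gap" may be noise. The scores below are invented for illustration.
import math

def mean_and_se(scores):
    """Return (mean, standard error of the mean) for a list of scores."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)

grade_a = [70, 75, 72, 78, 74, 71]
grade_b = [73, 77, 74, 80, 76, 73]
for label, scores in (("grade A", grade_a), ("grade B", grade_b)):
    m, se = mean_and_se(scores)
    print(f"{label}: mean {m:.1f} +/- {se:.1f}")
```

Publishing a number like "73.3 +/- 1.2" instead of a bare "73.3" is the difference between a measurement and numbers for the sake of numbers.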
Here is what I take from the anecdote about Strauss: "Standardized tests miss knowledge that is worthwhile".
And the reason for that is basically the same reason they are used- standardized tests are high-throughput. We don't have the person-hours to put into reading every student's comments in the margins, and it is harder to design a scoring system such that comments in the margins are assessed fairly.
Standardized tests are a lot like BMI- they are used because they are easy to administer, and more informative than nothing. They are not used because they are a very close estimate of what we actually care about.
Standardized tests and BMIs both work ok as population-level methods to analyze where we need to improve. If you have two moderately similar groups, and one has awesome test scores and the other lousy, it's worth looking at the difference.
That said, for individual level analysis, the situation is more complex.
I will grant your point that many students fail the math tests because they don't know the math being tested.
However, there are enough students that don't do optimally because they don't know how to do the math *quickly*, or because they aren't thinking the way the test makers assume they should, or because they psych themselves out, or because they screw up where they're bubbling things in, that we must be careful what role we allow standardized tests to play in determining an individual student's educational options.
As a side note, I think your "geniuses don't fail out" claim is coming from your (extremely understandable) personal opinions of those who don't want to work and do want to be admired for being smart... but it is not backed up by hard data. Gifted students probably drop out of high school at the same (or higher) rates as other students (and, of course, minority and low-SES gifted students do so at higher rates than other gifted students).
Of course, it may well be different at the college or grad school level, and it may well be that a much larger segment of gifted people end up successfully going back to school later, but I don't think there is data that supports the idea that 'geniuses are immune to failing out'.
I read the Hartman article and was puzzled by the fact that despite the valid points that he raises (and that I agree with), Hartman seems to have missed the big picture himself. My background: my spouse is a TFA alum, taught at the KIPP "Mothership" in Houston, and still teaches at a KIPP school in NorCal.
He said:
Well, those particular big-picture questions are actually an integral part of KIPP schools (tagline "Work hard. Be nice."), which try to instill character, and concepts such as social justice, in their curricula. Most KIPP schools also try to give their students a well-rounded education by having year-end camping trips or offering extracurricular options during Saturday school (soccer, chess, etc).
I think that the more important big-picture questions that TFA could and should ask are things like: Why is it still acceptable to have students entering middle school not being able to read? Why is nothing being done to address the obvious link between poverty and students' academic performance?
The gap between the theoretical and practical sides of teaching is fairly old. My mother got a masters in education in the early 40s, but she always remembered the shock of her first classroom assignment. She had to unlearn and relearn.
In K-12 there is a big problem with motivation that does allow relatively smart kids, even kids from middle and upper middle class families, to fail out. John Scalzi at Whatever had a post on being poor: "Being poor is having to live with choices you didn't know you made when you were 14 years old." A lot of kids don't understand what they're playing for. We've been seeing some of this first hand.