This isn’t a satirical piece. (The last two posts were.)
I meant to say something about the comment I reposted the other day, the one that appeared on Eduwonkette’s blog in January 2009. The commenter warns against that sly sort of conformity that can take over a person in academia—the temptation to say those things that will lead to publications, speaking engagements, and grants—and urges Eduwonkette (and all of us, I suppose) to remain willing to admit to error.
It struck me as a wise, thoughtful letter from someone who knew a thing or two. Two sentences, though, gave me pause, not because I disagreed with them, but because I saw room for qualification: “But you didn’t want to play the policy game. All you cared about was data.”
Now, the person was addressing the comment to Eduwonkette, but I want to consider it more generally. It is easy to see how the first sentence could be true for someone. Many of us don’t want to play the policy game. The second sentence—is it possible? Does anyone with a serious interest in education care only about the data? Would that even be a good thing?
I have noticed a common assumption on both (or multiple) sides of the education debates: that a person should “follow the data,” and that a reform or policy is viable only if it has “evidence” to support it. (Evidence and data are not identical—the word “data” is often misused—but that’s another matter.)
It makes sense that one should look at the evidence behind any initiative. Why should anyone pursue a reckless, unproven reform on a large scale? Why subject schools and districts to experiments that have little evidence in their support? Granted, one has to experiment now and then, but one should do so with caution.
But “evidence” has meaning only in relation to your goals and values. Evidence supports or does not support a conclusion—but that conclusion must have some bearing on your aims. For instance, a certain kind of instruction could be correlated with higher artichoke consumption, but unless you’re trying to get the kids to eat more (or fewer) artichokes, this won’t mean much. Often policymakers speak in terms of “achievement” and “success”—but achievement of what? Success at what? They are usually referring to test scores—a limiting goal, given the nature of the tests.
If your goal (or one of the goals) is to give students background in literature, mathematics, science, languages, history, and arts, then the evidence of the curriculum’s worth can be found in its content and lessons. If students are reading Chaucer, if the teacher is leading lively discussions and directing students’ attention to Chaucer’s word-play and satire, then the good is self-evident (if you think learning Chaucer is a good thing). But this goal is fairly unpopular with those who want to see quick, concrete results.
Another goal might be to prepare students for the challenges of adulthood, such as college or the workplace. Hence “college and career readiness.” Policymakers supporting this goal might ask employers and college admissions officers what they seek in their applicants. They would then judge curricula and instruction by these criteria. Such an approach makes sense on the surface but has flaws; employers and admissions officers may take certain kinds of knowledge for granted and not think to mention them. In addition, the point of getting into college or getting a job is to start building one’s life (not just to get in). Thus, one needs qualifications that transcend the immediate ones.
If the goal is to prepare people for civic life, then the evidence-gathering becomes trickier still. One might ask: who are the people whose editorials get published, who hold political office, or who speak at town hall meetings? What kind of rhetoric and knowledge do they show in their writing and speaking? What kind of education did they have? This is faulty, though, because there are many ways of participating in political and cultural life besides writing editorials, holding political office, or speaking at town hall meetings. If civic participation means engagement in constructive discourse on matters of public concern, then we’re in better shape, as we can identify the attributes of such discourse and the knowledge required for it. Even so, we might disagree over the priorities and emphases.
One could stick with the goals that carry the clearest evidence—graduating from high school, getting a job, etc.—but such goals justify only the most basic and pragmatic sort of education. Some would say that’s plenty; if students want more than that, they should find it on their own. But that would put vast numbers of students at a disadvantage. Those who could afford a fuller education would study literature, history, languages, and so forth (in private schools or in wealthy public schools), while others would have little exposure to such things.
Thus it makes no sense to speak of “evidence” except in relation to what one is trying to do. Once one has established the goals, there are still more challenges, since the appropriate evidence can be difficult to define and gather. Much of it shows itself only over a long period of time. Much of it comes from individual experience. Many people dismiss individual experience as limited and biased, but it also has strengths, if we treat it responsibly.
My point here is not that we should disregard evidence or data. That would be irresponsible. Rather, we should ask, evidence of what? What are we trying to do, and why? What sort of information would help us see whether or not we are accomplishing it? How should we go about interpreting such information? How can we stay true to our goals but admit to mistakes and misunderstandings? How can we speak and work with those whose goals are different?