Action Research and Its Misconceits

When I was in education school and teaching at the same time, we were all required to complete an “action research” project. I objected to this on ethical grounds but ended up doing it anyway. (I made my project as harmless and unobtrusive as possible.) At the time, I ransacked the Web for critiques of action research. The only substantial critique I found was an intriguing article by Allan Feldman, professor of science education at the University of South Florida, who suggested that action research should be more like teaching practice and less like traditional research. (Feldman has written extensively on this subject; he supports action research vigorously but recognizes its pitfalls.)

Action research is research conducted by a practitioner (say, a teacher, nurse, or counselor) in action—in his or her normal setting. It is often informal and does not have to follow standard research protocol. The researcher poses a question, conducts a study or experiment to investigate it, and arrives at conclusions. The action research study may or may not ever be published.

To some degree, teachers conduct action research every day. They frequently try things out and make adjustments. The difference is that (a) they don’t call this research and (b) they don’t have to stick with a research plan (if something isn’t working out, they can drop it immediately). Action research, by contrast, calls itself research and requires more sustained experimentation (if it is to have meaning at all). Therein lie its problems.

First of all, action research (that I have seen) adheres neither to traditional nor to alternate standards. To call it research is to muddy up the very concept, unless one clearly states what it is. What can action research be? It clearly cannot follow traditional research design. First, it is difficult, if not impossible, for a practitioner to conduct a scientifically valid experiment while performing his or her regular duties. Second, most practitioners bring confounding influences: their prior experience in that setting, their knowledge of the individuals, and their preferences and tendencies. These almost inevitably shape the findings. If action research is to follow an alternate protocol, then this must be defined with care.

Now for the second problem. Although most action research projects probably do little or no harm, it is ethically problematic to require them. First of all, the teacher may distort her work more than she otherwise would; she may find the project more distracting than helpful. Second, because of the sheer number of required action research projects, there is rarely much supervision. Teachers conducting such “research” are not required to obtain permission from parents or even to notify the students. If education schools were to institute a protocol for action research, they’d double or triple their own work and that of the teachers. That would be impractical. Thus many novice teachers conduct their experiments without even the schools’ knowledge.

I originally thought that the ethical problem was the primary one. I am no longer sure. Most teachers conducting these projects are just getting their bearings in the classroom. An action research project usually amounts to a mild distraction at worst and an interesting investigation at best. However, there should be a standard protocol for such experiments, and they should be voluntary.

The greater problem, in my view, is the intellectual one, with all its implications for policy. We already have enough trouble with the phrase “research has shown.” Again and again we find out that research hasn’t quite shown what people claim it has shown. Because few people take time to read the actual research (which can be cumbersome), researchers and others get away with distorted interpretations of it. Add to this a huge body of quasi-research, and anyone can say that “research has shown” just about anything.

Proponents of educational fads can almost always find “research” to support what they do. Some of it is action research of dubious quality. For instance, the Whole Brain Teaching website cites, on its “Research” page, a “study” titled “Integrating Whole Brain Teaching Strategies to Create a More Engaged Learning Environment.” (I am linking to Google’s cached version of the “Research” page, since the original “Research” page is now blank.) As it turns out, the study took place over the course of a week. The author was testing the effect of “Whole Brain Teaching” on student engagement. She made a list of nine student “behaviors” and observed how they changed between Monday, October 19, 2009, and Friday, October 23, 2009.

One could write off Whole Brain Teaching as some fringe initiative, yet it made its way into an elementary school where I previously taught. It touts itself as “one of the fastest growing education reform movements in the United States.” (I have written more about it in my book and in my blog “Research Has Shown—Just What, Exactly?”) Important or not, it cites shaky research in support of itself—and so do many initiatives. One way to combat this is to insist on basic research standards.

Now, I recognize Dr. Feldman’s argument that action research should try to be less like traditional research and more like actual teaching practice. But in that case, its claims should be different. Its purpose should be to inform the practitioner, not to produce findings that can be generalized. Even in that case, it should have some sort of quality standard. In addition, those conducting the research should exercise caution in drawing conclusions from it, even for themselves. Any action research paper should begin with such cautionary statements.

I am not suggesting that action research be abolished; it has plenty of useful aspects. Of course, teachers should test things out and question their own practice—but voluntarily and perspicaciously. Should such investigation be called research? I’d say no—but the name isn’t really the problem. The challenge here—and for education research overall—is to dare to have a modest and uncertain finding.

Teacher Ratings and Rubric Reverence

Some seven years ago, when I was taking education courses as a New York City Teaching Fellow, we had to hand in “double-entry journals”—that is, two-column pages with a quotation or situation on the left and our response on the right. On one occasion, I needed far more room for my response than for the quotations, so I adjusted the format: instead of using columns, I simply provided the quotations and my comments below each one.

The instructor chided me in front of the class. She said that this was a master’s program and that I should learn to produce master’s-level work. (She wasn’t aware that I already had a Ph.D. from Yale.) If the instructions specified a double-entry journal, well, then I was supposed to provide a double-entry journal. She had no quibbles with my commentary itself, which she found insightful. She just took issue with my flouting of the instructions. I have no grudges against the instructor, who meant well and knew her stuff. But it was an eye-opener.

Up to this point, I had not encountered such rigidity regarding instructions. In high school, college, and graduate school, we were expected to use certain formats for term papers, publishable work, and dissertations. But on everyday assignments, it was substance and clarity that mattered most. The teacher or professor even appreciated it when I departed from the usual format for a good reason. I did so judiciously and rarely.

The double-entry-journal incident was part of my induction into New York City public schools. There, the rubric (which usually emphasized appearance and format) reigned supreme; if you did everything just so, you could get a good score, while if you diverged from the instructions but had a compelling idea, you could be penalized. I saw rubrics applied to student work, teachers’ lessons, bulletin boards, classroom layout, group activities, and standardized tests. I will comment on the last of these—rubrics on standardized tests—and their bearing on the recent publication of New York City teachers’ value-added ratings (their rankings based on student test score growth).

A New York Daily News editorial asserts that teachers with consistently high value-added ratings are clearly doing something right. (This is the argument put forth by many value-added proponents.) But that’s not necessarily so; all we really know is that their students are making test score gains.

In New York State, on the written portion of the English Language Arts examinations, it matters little what the students actually say or how well they argue it. What matters is that they address the question in the prompt and follow the instructions to the letter. A student may make erroneous or illogical statements and still receive a high score; a student may make subtle observations and lose points for failing to do everything exactly as specified.

Here’s an essay prompt from the 2009 grade 8 ELA exam. (For an example at the high school level, see my blog “A Critical Look at the Critical Lens Essay.”)

Bill Watterson in “Drawing Calvin and Hobbes” and Roald Dahl in “Lucky Break” discuss their approaches to their work. Write an essay in which you describe the similarities and differences between the work habits of Watterson and Dahl. Explain how their work habits contribute to their success. Use details from both passages to support your answer.  In your essay, be sure to include

  • a description of the similarities between the work habits of Watterson and Dahl
  • a description of the differences between the work habits of Watterson and Dahl
  • an explanation of how their work habits contribute to their success
  • details from both passages to support your answer 

To get a good score, a student would only have to write one paragraph about similarities, one paragraph about differences, and one paragraph about how their work habits led to their success. By contrast, a student who began by considering definitions of “success” (as G. K. Chesterton does) would not fare so well, even though that might be the more thoughtful essay. Likewise, a student who questioned the direct link between work habits and success (as Mark Twain does) would be at a disadvantage. Students are better off if they write a predictable essay, even a bland one, that meets the criteria. Their teachers are better off, too; every point counts when it comes to value-added scores.

I have scored ELA exams. Human judgment has little place in those scoring rooms. To maintain consistency, everyone is supposed to follow the rubric, and, if there’s any doubt, the state’s own interpretation of the rubric. It comes down, in the end, to following instructions rather than judgment. On the one hand, this is fair and justified. If teachers were to use their own judgment when scoring, two essays of similar quality could receive wildly different scores. On the other, it means that there’s no way to acknowledge the student who struggles with the question because the question is tricky or problematic—that is, the student who pushes beyond the obvious response.

Now let’s consider the consequences in the classroom. Teachers A and B teach at a relatively high-performing school. Teacher A tells students that to write well, you should have something to say and should take care with words. Her students read G. K. Chesterton, Ralph Waldo Emerson, Mark Twain, Jonathan Swift, and others. They discuss these essays, look at their structures, respond to favorite passages in them, and write essays inspired by them. Teacher B, within the same school, has a different approach. She brings in reading passages like those on the tests. She teaches students how to read essay prompts and produce the expected responses. She has them do this every day. Now, arguably, one can teach students to write thoughtfully and follow directions precisely. But the latter has the greater test score payoff.

So, teacher B’s students make more test score gains than teacher A’s students. Teacher B gets rated “high”; teacher A, “below average.” (This is a plausible scenario in an unusually high- or low-performing school, where a slight difference in points can account for a large difference in ratings.) Then the ratings appear in the New York Times and elsewhere. Many readers will assume, even with caveats galore, that teacher B does better work than teacher A. Teacher A then finds herself under pressure to do what teacher B is doing. That means ensuring that her students follow directions.

How do you get teachers to teach in this manner? Train them in education school. Impress upon them the sacrosanctity of instructions. Teach them that if the assignment is a double-entry journal, then that is what they must produce, period.


    Diana Senechal is the author of Republic of Noise: The Loss of Solitude in Schools and Culture and the 2011 winner of the Hiett Prize in the Humanities, awarded by the Dallas Institute of Humanities and Culture. Her second book, Mind over Memes: Passive Listening, Toxic Talk, and Other Modern Language Follies, was published by Rowman & Littlefield in October 2018. In February 2022, Deep Vellum will publish her translation of Gyula Jenei's 2018 poetry collection Mindig Más.

    Since November 2017, she has been teaching English, American civilization, and British civilization at the Varga Katalin Gimnázium in Szolnok, Hungary. From 2011 to 2016, she helped shape and teach the philosophy program at Columbia Secondary School for Math, Science & Engineering in New York City. In 2014, she and her students founded the philosophy journal CONTRARIWISE, which now has international participation and readership. In 2020, at the Varga Katalin Gimnázium, she and her students released the first issue of the online literary journal Folyosó.


