Action Research and Its Misconceits

When I was in education school and teaching at the same time, we were all required to complete an “action research” project. I objected to this on ethical grounds but ended up doing it anyway. (I made my project as harmless and unobtrusive as possible.) At the time, I ransacked the Web for critiques of action research. The only substantial critique I found was an intriguing article by Allan Feldman, professor of science education at the University of South Florida, who suggested that action research should be more like teaching practice and less like traditional research. (Feldman has written extensively on this subject; he supports action research vigorously but recognizes its pitfalls.)

Action research is research conducted by a practitioner (say, a teacher, nurse, or counselor) in action—in his or her normal setting. It is often informal and does not have to follow standard research protocol. The researcher poses a question, conducts a study or experiment to investigate it, and arrives at conclusions. The action research study may or may not ever be published.

To some degree, teachers conduct action research every day. They frequently try things out and make adjustments. The difference is that (a) they don’t call this research and (b) they don’t have to stick with a research plan (if something isn’t working out, they can drop it immediately). Action research, by contrast, calls itself research and requires more sustained experimentation (if it is to have meaning at all). Therein lie its problems.

First of all, action research (that I have seen) adheres neither to traditional nor to alternative standards. To call it research is to muddy up the very concept, unless one clearly states what it is. What can action research be? It clearly cannot follow traditional research design. First, it is difficult, if not impossible, for a practitioner to conduct a scientifically valid experiment while performing his or her regular duties. Second, most practitioners bring confounding influences: their prior experience in that setting, their knowledge of the individuals, and their preferences and tendencies. These almost inevitably shape the findings. If action research is to follow an alternative protocol, then this protocol must be defined with care.

Now for the second problem. Although most action research projects probably do little or no harm, it is ethically problematic to require them. First of all, the teacher may distort her work more than she otherwise would; she may find the project more distracting than helpful. Second, because of the sheer number of required action research projects, there is rarely much supervision. Teachers conducting such “research” are not required to obtain permission from parents or even notify the students. If education schools were to institute a protocol for action research, they’d double or triple their own work and that of the teachers. That would be impractical. Thus many novice teachers conduct their experiments without even the schools’ knowledge.

I originally thought that the ethical problem was the primary one. I am no longer sure. Most teachers conducting these projects are just getting their bearings in the classroom. An action research project usually amounts to a mild distraction at worst and an interesting investigation at best. However, there should be a standard protocol for such experiments, and they should be voluntary.

The greater problem, in my view, is the intellectual one, with all its implications for policy. We already have enough trouble with the phrase “research has shown.” Again and again we find out that research hasn’t quite shown what people claim it has shown. Because few people take time to read the actual research (which can be cumbersome), researchers and others get away with distorted interpretations of it. Add to this a huge body of quasi-research, and anyone can say that “research has shown” just about anything.

Proponents of educational fads can almost always find “research” to support what they do. Some of it is action research of dubious quality. For instance, the Whole Brain Teaching website cites, on its “Research” page, a “study” titled “Integrating Whole Brain Teaching Strategies to Create a More Engaged Learning Environment.” (I am linking to Google’s cached version of the “Research” page, since the original is now blank.) As it turns out, the study took place over the course of a week. The author was testing the effect of “Whole Brain Teaching” on student engagement. She made a list of nine student “behaviors” and observed how they changed between Monday, October 19, 2009, and Friday, October 23, 2009.

One could write off Whole Brain Teaching as some fringe initiative, yet it made its way into an elementary school where I previously taught. It touts itself as “one of the fastest growing education reform movements in the United States.” (I have written more about it in my book and in my blog post “Research Has Shown—Just What, Exactly?”) Important or not, it cites shaky research in support of itself—and so do many initiatives. One way to combat this is to insist on basic research standards.

Now, I recognize Dr. Feldman’s argument that action research should try to be less like traditional research and more like actual teaching practice. But in that case, its claims should be different. Its purpose should be to inform the practitioner, not to produce findings that can be generalized. Even in that case, it should have some sort of quality standard. In addition, those conducting the research should exercise caution in drawing conclusions from it, even for themselves. Any action research paper should begin with such cautionary statements.

I am not suggesting that action research be abolished; it has plenty of useful aspects. Of course, teachers should test things out and question their own practice—but voluntarily and perspicaciously. Should such investigation be called research? I’d say no, but the name isn’t really the problem. The challenge here, and for education research overall, is to dare to have a modest and uncertain finding.

Research Has Shown—Just What, Exactly?

In popular writing on psychology, science, and education, we often encounter the phrase “research has shown.” Beware of it. Even its milder cousin, “research suggests,” may sneak up and put magic juice in your eyes, so that, upon opening them, you fall in love with the first findings you see (until you catch on to the trick).

Research rarely “shows” much, for starters—especially research on that “giddy thing” known as humanity.* Users of the phrase “research has shown” often commit one or more of these distortions: (a) disregarding the flaws of the research; (b) misinterpreting it; (c) exaggerating its implications; or (d) cloaking it in vague language. Sometimes they do this without intending to distort, but the distortions remain.

Let’s take an example that shows all these distortions. Certain teaching methodologies emphasize a combination of gesture, speech, and listening. While such a combination makes sense, it is taken to extremes by Whole Brain Teaching, a rapid call-and-response pedagogical method that employs teacher-class and student-student dialogue in alternation. At the teacher’s command, students turn to their partners and “teach” a concept, speaking about it and making gestures that the partner mimics exactly. Watch the lesson on Aristotle’s “Four Causes,” and you may end up dizzy and bewildered; why would anyone choose to teach Aristotle in such a loud and frenzied manner?

The research page of the Whole Brain Teaching website had little research to offer a few months ago. Now it points to a few sources, including a Scientific American article that, according to the WBT website, describes “research supporting Whole Brain Teaching’s view that gestures are central to learning.” Here’s an instance of vague language (distortion d). Few would deny that gestures are helpful in teaching and learning. This does not mean that we should embrace compulsory, frenetic gesturing in the classroom, or that research supports it.

What does the Scientific American article say, in fact? There’s too much to take apart here, but this passage caught my eye: “Previous research has shown”—eek, research has shown!—“that students who are asked to gesture while talking about math problems are better at learning how to do them. This is true whether the students are told what gestures to make, or whether the gestures are spontaneous.” This looks like an instance of exaggerating the implications of research (distortion c); let’s take a look.

The word “told” in that passage links to the article “Making Children Gesture Brings Out Implicit Knowledge and Leads to Learning” by Sara C. Broaders, Susan Wagner Cook, Zachary Mitchell, and Susan Goldin-Meadow, published in the Journal of Experimental Psychology: General, vol. 136, no. 4 (2007), pp. 539–550. The abstract states that children become better at solving math problems when told to make gestures (relevant to the problems) during the process. Specifically, “children who were unable to solve the math problems often added new and correct problem-solving strategies, expressed only in gesture, to their repertoires.” Apparently, this progress persisted: “when these children were given instruction on the math problems later, they were more likely to succeed on the problems than children told not to gesture.” So, wait a second here. They didn’t have a control group? Let’s look at the article itself.

The experimenters conducted two studies. The first one involved 106 children in late third and early fourth grade, whom the experimenters tested individually. For the baseline set, children were asked to solve six problems of the type 6 + 3 + 7 = ___ + 7, without being given any instructions on gesturing. Children who solved any of the problems correctly were eliminated from the study at the outset. (Doesn’t this bias the study considerably? Shouldn’t this be mentioned in the abstract?)
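
To make the problem type concrete (a worked step of my own, not taken from the paper): writing x for the blank, the left side sums to 16, so

$$6 + 3 + 7 = x + 7 \quad\Longrightarrow\quad 16 = x + 7 \quad\Longrightarrow\quad x = 9.$$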

From there, the students were assigned to groups for the “manipulation phase” of the study. Thirty-three students were told to gesture; 35 were told to keep their hands still; and 38 were told to explain how they solved the problems. The students who were told to gesture added significantly more “strategies” during the manipulation phase than did the students in the other two groups; however, nearly all of these strategies were expressed in gesture only and not in speech. Across the groups, students added a mean of 0.34 strategies to their repertoires, of which a mean of 0.25 were correct (the strategies, that is, not the solutions).

It is not clear how many students actually gave correct answers to the problems during the manipulation phase. The study does not provide this information.

The second study involved 70 students in late third and early fourth grade; none had participated in the first study. After conducting the baseline experiment (where no students solved the problems correctly), the researchers divided the students into two groups for the manipulation phase. Children in one group were told to gesture; children in the other group were told not to gesture. The researchers chose these two groups because they were “maximally distinct in terms of strategies added.” (How did they know this in advance? This is not clear.)

Again, the students who had been told to gesture added more strategies to their repertoire; those told not to gesture added none. The researchers state later, in the “discussion” section of the paper: “Note that producing a correct strategy in gesture did not mean that the child solved the problems correctly. In fact, the children who expressed correct problem-solving strategies uniquely in gesture were, at that moment, not solving the problems correctly. But producing a correct strategy in gesture did seem to make the children more receptive to the later math lesson.”

After the children had solved and explained the problems in the manipulation phase, they were given a lesson on mathematical equivalence. (There was no such lesson in the first study.) The experimenter used a consistent gesture (moving a flat palm under the left side of the equation and then under the right side) for each of the problems presented. Then the students were given a post-test.

On the post-test, the students told not to gesture solved a mean of 2.2 problems correctly (out of six); those told to gesture solved a mean of 3.5 correctly. (I am estimating these figures from the bar graph.)
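
For scale, converting those estimated means to percentages (my own arithmetic, based on the figures above):

$$2.2/6 \approx 37\% \qquad\text{versus}\qquad 3.5/6 \approx 58\%.$$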

Why would anyone be impressed by the results? For some reason the researchers did not mention actual performance in the first study. In the second, it isn’t surprising that the students told not to gesture would fare worse on the test. A prohibition against gesturing could be highly distracting, as people tend to gesture naturally in one way or another. Again, there was no control group in the second study. Moreover, neither the overall mean performance on the test nor the performance difference between the groups is particularly impressive, given that the problems all followed the same pattern and should have been easy for students who grasped the concept, provided they had their basic arithmetic down.

The researchers do not draw adequate attention to the two studies’ caveats or consider how these caveats might influence the conclusion (distortions a and b). In the “discussion” section of the paper, they state with confidence that “Children told to gesture were more likely to learn from instruction than were children told not to gesture.”

This is just one of myriad examples of research not showing what it claims to show or what others claim it shows. I have read research studies that gloss over their own gaps and weaknesses; popular articles that exaggerate the implications of this research; and practitioners who cite the popular articles in support of their particular methods. When I hear the phrase “research has shown,” I immediately suspect that it isn’t so.

*From Shakespeare’s Much Ado About Nothing; thanks to Jamie Lorentzen for reminding me of the phrase.