The Toxicity of “Toxic”

We gain much of our strength, versatility, and wisdom from difficulties and challenges. Yet today a cult of convenience squats in each field of life. Often, when people refer to others as “toxic,” they are not just using words carelessly; they are suggesting that the people they don’t like (or don’t immediately understand) are bad for their lives and deserving of expulsion.

Would the scene in the photo exist if no one could be bothered with difficulty? It took some adventurous sculpting and grappling with stone and plants (and that’s an understatement). What about a great friendship, also a mixture of nature and sculpture? If people dropped friendships as soon as they became difficult in any way, what would be left?

Again and again, I see advice about how to eliminate “toxic” people from your life. The criterion for “toxicity” is basically inconvenience or unpleasantness. Those who speak of “toxicity” rarely distinguish between people who pose difficulties for you and people who really hurt you.

On her website Science of People, Vanessa Van Edwards, author of the forthcoming Captivate: The Science of Succeeding with People (Portfolio, April 25, 2017), declares that you “deserve to have people in your life who you enjoy spending time with, who support you and who you LOVE hanging out with.” The site has been discussed in comments on Andrew Gelman’s blog; while there’s plenty to say about the references to “science,” I’ll focus on “toxic” instead, since that’s the topic of this blog post.

In her short article “How to Spot a Toxic Person,” after describing seven toxic types, Van Edwards lists some tell-tale symptoms that you’re in the presence of someone toxic. She then assures her readers that they don’t need these toxic people–that they deserve the company of wonderful people, with whom they can be their best selves. Here is the list:

  • You have to constantly save this person and fix their problems
  • You are covering up or hiding for them
  • You dread seeing them
  • You feel drained after being with them
  • You get angry, sad or depressed when you are around them
  • They cause you to gossip or be mean
  • You feel you have to impress them
  • You’re affected by their drama or problems
  • They ignore your needs and don’t hear ‘no’

Now, of the nine symptoms listed here, only one clearly has to do with the other person’s actions: “They ignore your needs and don’t hear ‘no.’” The others have to do with the sufferer’s own reactions and assumptions. Of course those reactions also matter, but they do not necessarily reflect meanness, selfishness, or obtuseness in the other person.

So what? someone might ask. If someone’s company leaves you miserable, don’t you have a right to detach yourself? Well, maybe, up to a point (or completely, in some cases), but it makes a difference how you frame it, even in your own mind. It is possible to keep (or work toward) some humility.

If your explanation is, “This person wants more time and energy from me than I can give,” then it makes sense to try to set an appropriate limit. If that fails, either because you weren’t clear enough or because the other person does not accept the terms, then a more drastic resolution may be needed–but even then, it doesn’t mean that the person is “toxic.” It just means that you have incompatible needs. Perhaps you were like that other person once upon a time; many of us go through times when we particularly need support or seek it from someone who cannot give it.

If the explanation is, “I don’t like the kind of conversation I end up having around this person,” then one option is to change the topic or tenor of conversation. Another is to limit its length (or try to do something together instead of mainly talking). If neither one works, there may be a basic incompatibility at stake. Even then, it doesn’t mean the other person is “toxic.” It just means that you have different interests.

Now, of course there are people who use, harm, and control others. There are those who gossip aggressively and meanly, promote themselves at every possible opportunity, or treat others as their servants. When describing such people, one still doesn’t have to use the word “toxic”; a clearer description will lead to a clearer solution.

Why does this matter? The concept of “toxicity,” as applied to humans, has become a fad; people use it to justify writing off (and blaming) anyone who poses an inconvenience or whose presence doesn’t give constant pleasure. Philosophers, theologians, poets, and others, from Aristotle to Buber to Shakespeare to Saunders, have pointed to the moral vacuity of this practice. Yet the “toxic” banner continues to fly high in our hyper-personalized, hyper-fortified society (and always over the other people).

There are ways to be around people and still hold your ground, draw provisional lines, and take breaks. It’s possible to limit a relationship without deeming the other person awful. It is not only possible, but essential to public discussion, substantial friendship, and solitude. Who am I, if I must dismiss and disparage someone just to go off on my own or be with others? Doesn’t that cheapen the subsequent aloneness or company?

As for whether we deserve to be around people we love, people whose company we enjoy–yes, of course. But we also deserve to be around those whose presence is not so easy for us. When appropriately bounded, such a relationship can have meaning and beauty. Some of my best friendships had an awkward start; they grew strong when we let each other know what we did and didn’t want.

I hope never to call a person “toxic”; if it’s my reactions that trouble me, I can address them appropriately; if it’s the person’s actions, I can find a more specific term.

Image credit: I took this photo in Fort Tryon Park.

Update: Here’s an article by Marcel Schwantes (published in Inc.) advising people to cut “toxic” co-workers from their lives as a way of keeping “good boundaries.” Here’s a quote:

5. Cut ties with people who kiss up to management.

They will go out of their way to befriend and manipulate management in order to negotiate preferential treatment–undue pay raises, training, time off, or special perks that nobody else knows about or gets. Keep an eye out for colleagues who spend way more face time with their managers than usual. The wheels of favoritism may be in motion. Time to cut ties.

What? You don’t even know why the person is spending “face time” with management. Why conclude that it’s “time to cut ties”?

The anti-“toxic” stance of this article (and others like it) is much too self-satisfied and self-assured.

Are College Professors Responsible for Student Learning?

I learn a heck of a lot from Andrew Gelman’s blog–not only his own posts, but the many interesting and substantial comments. It’s one of my favorite places on the internet right now (granted, I have low tolerance for “surfing” and tend to focus on a few sites). That said, I find myself questioning some of his arguments and views, particularly about measurement in education. Now, I am not about to say “learning can’t be measured” or “tests are unfair” or anything like that. My points are a bit different.

In an article for Chance, vol. 25 (2012), Gelman and Eric Loken observe that, as statisticians, they give out advice that they themselves do not apply to their classrooms; this contradiction, in their view, has ethical consequences:

Medicine is important, but so is education. To the extent that we believe the general advice we give to researchers, the unsystematic nature of our educational efforts indicates a serious ethical lapse on our part, and we can hardly claim ignorance as a defense. Conversely, if we don’t really believe all that stuff about sampling, experimentation, and measurement—it’s just wisdom we offer to others—then we’re nothing but cheeseburger-snarfing diet gurus who are unethical in charging for advice we wouldn’t ourselves follow.

They acknowledge the messiness and complexity of education but maintain, all the same, that they could improve their practice by measuring student learning more systematically and adjusting their instruction accordingly. “Even if variation is high enough and sample sizes low enough that not much could be concluded,” they write, “we suspect that the very acts of measurement, sampling, and experimentation would ultimately be a time-efficient way of improving our classes.”

I agree with the spirit of their argument; yes, it makes sense to practice what you proclaim, especially when this can improve your teaching. Of course assessment and instruction should inform and strengthen each other.  Still, any measurement must come with adequate doubt and qualification. I think they would agree with this; I don’t know, though, whether we would raise the same doubts. I see reason to consider the following (at the college level, which differs substantially from K-12):

While still moving toward independence, students are more in charge of their own learning than before. Ideally they should start figuring out the material for themselves. What is the class for, then? To introduce topics, organize the subject matter, illuminate certain points, and work through problems … but perhaps not to “produce” learning gains, at least not primarily. On the other hand, the course should have adequate challenge for those at the top and support for those at the bottom (within reason). Introductory courses may include additional supports.

Also, a student might deliberately choose a course that’s too difficult at the outset (but still feasible). Some people thrive on difficulty and are willing to let their grade drop a little for the sake of it. The learning gains may not show right away, but this does not mean that the teacher should necessarily adjust instruction. If the student puts in the necessary work and thought, he or she will show improvement in good time. Students should not be discouraged from the kind of challenge that temporarily slows their external progress.

In addition, there are inevitable mismatches, at the college level, between instruction and assessment. (This may be especially true of the humanities.) If you are teaching a literature, history, or philosophy class, your students will probably write essays for a grade, but your teaching will address only certain components of the writing. Students have to learn the rest through practice. Thus you will grade things that you haven’t explicitly taught. (Your course may not deal explicitly with grammar, but if a paper is full of ungrammatical and incoherent sentences, you still can’t give it an A.) This may seem unfair–but over time, through extensive practice and reading, students will come to write strong essays.

Since September 2015 I have been taking classes part-time, as a non-matriculated student, at the H. L. Miller Cantorial School at JTS. In my first class, I was far below the levels of my classmates. That was what I wanted. I studied on the train, in my spare moments, and at night. (I was teaching as well.) I flubbed the final presentation, relatively speaking, not because I was underprepared, but because I prepared in the wrong way. I ended up with a B+ in the course. The next semester, my Hebrew had risen to a new level; the course (on the Psalms) enthralled me, and I did well. This year, I have been holding my own in the course I longed to take all along: a year-long course in advanced cantillation. If the professors had worried too closely about my learning gains, I wouldn’t have learned as much.

On the other hand, in the best classes I have taken over the years, the professors did great things for my learning. I wouldn’t have learned nearly as much, or gained the same insights, without the courses. The paradox is this: to help me understand, the professors also let me not understand. To help me progress, they sometimes took me to the steepest steps–and then pointed out all the interesting engravings in them. It wasn’t just fascination that took me from step to step–I had to work hard–but they trusted that I could do it and left it largely in my hands.

Granted, not all students are alike, nor are all courses. In an introductory course, students may be testing out the field. If they are completely lost, or if the course takes extraordinary effort and time, they may conclude that it’s not for them. A professor may need to respond diligently to their needs. There are many ways of looking at a course; one should work to become alert to its different angles.

In short, college should be where students learn how to teach themselves and how to gain insights from a professor. While helping students learn, one can also hope, over time, to echo Virgil’s last words to Dante in Purgatorio, “I crown and miter you over yourself” (or to accompany them to the point where, like Alice, they find a crown atop their heads).

Image: Sir John Tenniel, illustration for the eighth chapter of Lewis Carroll’s Through the Looking-Glass (1871).

Note: I revised the fourth paragraph for clarity and made a minor edit to the last sentence.

A Lesson from the Power Pose Debacle

Amy Cuddy’s TED talk on power posing has thirty-seven million views. Its main idea is simple: if you adopt an expansive, authoritative pose, your actual power will increase. For evidence, Cuddy refers to a study she conducted in 2010 with Dana Carney and Andy Yap. Holes and flaws in the study have since been revealed, but Cuddy continues to defend it. Doubt fuels scientific inquiry, but in an era of TED-style glamor and two-minute “life hacks” (Cuddy’s own term for the power pose), we find a shortage of such doubt on stage. It is time to tap the reserves.

Recently TED and Cuddy appended a note to the summary of the talk: “Some of the findings presented in this talk have been referenced in an ongoing debate among social scientists about robustness and reproducibility.” In other (and clearer) words: The power pose study has not held up under scrutiny. At least two replications failed; Andrew Gelman, Uri Simonsohn, and others have critiqued it robustly; and Carney, the lead researcher, detailed the study’s flaws—and disavowed all belief in the effect of power poses—in a statement posted on her website. Jesse Singal (New York Magazine) and Tom Bartlett (The Chronicle of Higher Education) have weighed in with analyses of the controversy.

Very well, one might shrug aloud, but what should we, irregular members of the regular public, do? Should we distrust every TED talk? Or should we wait until the experts weigh in? Neither approach is satisfactory. When faced with fantastic scientific claims, one can wield good skepticism and follow one’s doubts and questions.

Before learning of any of this uproar, I found Cuddy’s talk unstable. Instead of making a coherent argument, it bounces between informal observations, personal experiences, and scientific references. In addition, it seems to make an error early on. Two minutes into her talk, Cuddy states that “Nalini Ambady, a researcher at Tufts University, shows that when people watch 30-second soundless clips of real physician-patient interactions, their judgments of the physician’s niceness predict whether or not that physician will be sued.” Which study is this? I have perused the Ambady Lab website, conducted searches, and consulted bibliographies—and I see no sign that the study exists. (If I find that the study does exist, I will post a correction here. Ambady died in 2013, so I cannot ask her directly. I have written to the lab but do not know whether anyone is checking the email.)

In separate studies, Ambady studied surgeons’ tone of voice (by analyzing subjects’ ratings of sound clips where the actual words were muffled) and teachers’ body language (by analyzing subjects’ ratings of soundless video clips). As far as I know, she did not conduct a study with soundless videos of physician-patient interactions. Even her overview articles do not mention such research. Nor did her study of surgeons’ tone of voice make inferences about the likelihood of future lawsuits. It only related tone of voice to existing lawsuit histories.

Anyone can make a mistake. On the TED stage, delivering your talk from memory before an enormous audience, you have a million opportunities to be fallible. This is understandable and forgivable. It is possible that Cuddy conflated the study of surgeons’ tone of voice with the study of teachers’ body language. Why make a fuss over this? Well, if a newspaper article were to make such an error, and were anyone to point it out, the editors would subsequently issue a correction. No correction appears on the TED website. Moreover, many people have quoted Cuddy’s own mention of that study without looking into it. It has been taken as fact.

Why did I sense that something was off? First, I doubted that subjects’ responses to a surgeon’s body language predicted whether the doctor would be sued in the future. A lawsuit takes money, time, and energy; I would not sue even the gruffest surgeon unless I had good reason. In other words, the doctor’s personality would only have a secondary or tertiary influence on my decision to sue. On the other hand, it is plausible that doctors with existing lawsuit histories might appear less personable than others—if only because it’s stressful to be sued. Insofar as existing lawsuit histories predict future lawsuits, there might be a weak relation between a physician’s body language and his or her likelihood of being sued in the future. I suspect, though, that the data would be noisy (in a soundless kind of way).

Second, I doubted that there was any study involving videos of physician-patient interactions. Logistical and legal difficulties would stand in the way. With sound recordings—especially where the words are muffled—you can preserve anonymity and privacy; with videos you cannot. As it turns out, I was flat-out wrong; video recording of the doctor’s office has become commonplace, not only for research but for doctors’ own self-assessment.

It matters whether or not this study exists—not only because it has been taken as fact, but because it influences public gullibility. If you believe that a doctor’s body language actually predicts future lawsuits, then you might also believe that power pose effects are real. You might believe that “the vast majority of teachers reports believing that the ideal student is an extrovert as opposed to an introvert” (Susan Cain) or that “the whole purpose of public education throughout the world is to produce university professors” (Ken Robinson). The whole point of a TED talk is to put forth a big idea; alas, an idea’s size has little to do with its quality.

What to do? Questioning Cuddy’s statement, and statements like it, takes no special expertise, only willingness to follow a doubt. If TED were to open itself to doubt, uncertainty, and error—posting corrections, acknowledging errors, and inviting discussion—it could become a genuine intellectual forum. To help bring this about, people must do more than assume a doubting stance. Poses are just poses. Insight requires motion—from questions to investigations to hypotheses to more questions. This is what makes science interesting and strong. Science, with all its branches and disciplines, offers not a two-minute “life hack,” but rather the hike of a lifetime. With a mind full of doubt and vigor, one can make it.

Update: TED has changed the title of Cuddy’s talk from “Your Body Language Shapes Who You Are” to “Your Body Language May Shape How You Are.” In addition, the talk’s page on the TED website has a “Criticism & updates” section, last updated in August 2017. Both are steps in the right direction.

Note: I originally had the phrase “two-minute life hack” in quotes, but Cuddy’s actual phrase is “free no-tech life hack.” She goes on to say that it requires changing your posture for two minutes. So I removed “two-minute” from the quotes.

Interesting Studies with Hasty Conclusions

I recognize that I was a bit harsh on the calendar synaesthesia study–that is, dismissive in tone. What bothered me was the claim (right there in the paper itself) that this constituted the first “clear unambiguous proof for the veracity and true perceptual nature” of calendar synaesthesia. I sincerely thought, for a little while, that this might be a hoax.

The experiments of the study do not prove anything, nor do they have to. It would be far more interesting (to me) if the authors explored the uncertainties a bit more.

How, for instance, do you tell synaesthesia apart from a conditioned response? Say that you have learned to read sheet music from a young age. Suppose, now, that when you hear music, you see musical notes before you. Is this synaesthesia, or is this a learned association between musical notation and sounds?

The calendar situation seems similar. We have all seen calendars (many times). They take different shapes but always arrange the dates in some pattern. In addition, we have seen clocks, season wheels, and other representations of time. If I can picture the months in an atypical (fixed) shape, is this synaesthesia, or is it a modification of learned associations between time and images?

I do not doubt the existence of synaesthesia (of certain kinds). I just see reason to try to delineate what it is and isn’t–and to refine the surrounding questions, including questions of methodology.

The other day, Andrew Gelman posted an (exploratory) manifesto calling for more emphasis on–and better guidelines for–exploratory studies. The piece begins with a quote from Ed Hagen:

Exploratory studies need to become a “thing.” Right now, they play almost no formal role in social science, yet they are essential to good social science. That means we need to put as much effort in developing standards, procedures, and techniques for exploratory studies as we have for confirmatory studies. And we need academic norms that reward good exploratory studies so there is less incentive to disguise them as confirmatory.

The problem is twofold: (1) Exploratory studies don’t get enough respect or attention, so people disguise them (intentionally or not) as confirmatory studies; (2) Exploratory studies can be good, bad, and anything in between, so there should be clearer standards, procedures, and techniques for them. “Exploratory” does not (and should not) mean “anything goes.”

But then comes the question: What constitutes a good exploratory study? Gelman offered a few criteria, to which others added in the prolific comment section.

It would be encouraging if the social sciences (and other fields, including literature) worked carefully with uncertainties (and got published because they did so). Instead of iffy studies, we’d have studies that wielded the “if” with skill and care.
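One concrete standard sometimes proposed for exploratory work is sample splitting: explore freely on one half of the data, then re-estimate the chosen finding on the held-out half. The toy simulation below (pure noise, entirely made-up numbers; no real study is being modeled) shows why the split matters: the selection step alone inflates the apparent effect, and the held-out half deflates it again.

```python
import random
import statistics

def effect_size(a, b):
    """Standardized mean difference (Cohen's d) between two samples."""
    sd = ((statistics.variance(a) + statistics.variance(b)) / 2) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / sd

def split_study(rng, n=40, n_outcomes=5):
    """Explore five pure-noise outcomes on half the data, pick the most
    striking one, then re-estimate that same effect on the held-out half."""
    outcomes = [([rng.gauss(0, 1) for _ in range(n)],
                 [rng.gauss(0, 1) for _ in range(n)])
                for _ in range(n_outcomes)]
    halves = [(a[:n // 2], b[:n // 2]) for a, b in outcomes]
    best = max(range(n_outcomes), key=lambda i: abs(effect_size(*halves[i])))
    a, b = outcomes[best]
    return (abs(effect_size(*halves[best])),           # exploratory estimate
            abs(effect_size(a[n // 2:], b[n // 2:])))  # confirmatory estimate

rng = random.Random(7)
results = [split_study(rng) for _ in range(500)]
explore_avg = sum(e for e, _ in results) / len(results)
confirm_avg = sum(c for _, c in results) / len(results)
print(f"average selected effect, exploration half: {explore_avg:.2f}")
print(f"same effect re-estimated on held-out half: {confirm_avg:.2f}")
```

Since the data are pure noise, the held-out estimate hovers near what chance alone produces, while the selected exploratory estimate runs noticeably higher; that gap is exactly what disguising exploration as confirmation hides.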

P.S. Speaking of Andrew Gelman’s blog, I had some fun responding to Rolf Zwaan’s “How to Cook Up Your Own Social Priming Article.”


Can Happiness Be Rated?

First, I’ll upend a possible misunderstanding: My point here is not that “so many things in life cannot be measured.” I agree with that statement but not with the abdication surrounding it. It is exquisitely difficult to measure certain things, such as happiness, but I see reason to peer into the difficulty. Through trying and failing to measure happiness, we can learn more about what it is.

Lately I have seen quite a few studies that include a happiness rating: the study I discussed here, a study that Drake Baer discussed just the other day, and a study that Andrew Gelman mentioned briefly. In all three, the respondents were asked to rate their happiness; in none of them was happiness defined.

Some people may equate happiness with pleasure, others with contentment, others with meaning. Some, when asked about their happiness level, will think of the moment; others, of the week; still others, of the longer term. The complexities continue; most of us are happier in some ways than in others, so how do we weigh the different parts? The weights could change even over the course of a day, depending on what comes into focus. Happiness changes in retrospect, too.

In addition, two people with similar “happiness levels” (that is, who would describe their pleasure, contentment, and meaningful pursuits similarly) might choose different happiness ratings. A person with an exuberant personality might choose a higher rating than someone more subdued, or vice versa.
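To make the response-style point concrete, here is a toy simulation (all numbers are hypothetical, chosen only for illustration): two respondents share the same latent happiness, but one answers with an exuberant shift and the other with a subdued one, and their average survey ratings diverge widely.

```python
import random

def survey_rating(latent, style_shift, rng):
    """Map a latent happiness level (roughly a 0-10 scale) to a 1-10
    survey answer, shifted by a personality-driven response style."""
    answer = round(latent + style_shift + rng.gauss(0, 0.5))
    return max(1, min(10, answer))  # clamp to the survey's 1-10 range

rng = random.Random(0)
latent = 6.0  # the same underlying happiness for both respondents
exuberant = [survey_rating(latent, +1.5, rng) for _ in range(1000)]
subdued = [survey_rating(latent, -1.5, rng) for _ in range(1000)]
print(sum(exuberant) / 1000, sum(subdued) / 1000)
```

A study that treats the two averages as different “happiness levels” is really measuring response style as much as happiness, which is one reason an undefined rating invites misreading.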

Given the extraordinary complexity of measuring happiness, I distrust any study that measures it crudely and does not try to define it. I doubt that it can be defined or measured exactly; but a little more precision would be both helpful and interesting.

Incidentally, the search for precision can bridge the humanities and the sciences; while they will always have different methodologies (and even different questions), they have a common quest for the right words.

Formal and Informal Research

I have been thinking a lot about formal and informal research: how both have a place, but how they shouldn’t be confused with each other. One of my longstanding objections to “action research” is that it confuses the informal and formal.

Andrew Gelman discusses this problem (from a statistician’s perspective) in an illuminating interview with Maryna Raskin on her blog Life After Baby. It’s well worth reading; Gelman explains, among other things, the concept of “forking paths,” and acknowledges the place of informal experimentation in daily life (for instance, when trying to help one’s children get to sleep). Here’s what I commented:

[Beginning of comment]

Yes, good interview. This part is important too [regarding formal and informal experimentation]:

So, sure, if the two alternatives are: (a) Try nothing until you have definitive proof, or (b) Try lots of things and see what works for you, then I’d go with option b. But, again, be open about your evidence, or lack thereof. If power pose is worth a shot, then I think people might just as well try contractive anti-power-poses as well. And then if the recommendation is to just try different things and see what works for you, that’s fine but then don’t claim you have scientific evidence [for] one particular intervention when you don’t.

One of the biggest problems is that people take intuitive/experiential findings and then try to present them as “science.” This is especially prevalent in “action research” (in education, for instance), where, with the sanction of education departments, school districts, etc., teachers try new things in the classroom and then write up the results as “research” (which often gets published).

It’s great to try new things in the classroom. It’s often good (and possibly great) to write up your findings for the benefit of others. But there’s no need to call it science or “action research” (or the preferred phrase in education, “data-driven inquiry,” which really just means that you’re looking into what you see before you, but which sounds official and definitive). Good education research exists, but it’s rather rare; in the meantime, there’s plenty of room for informal investigation, as long as it’s presented as such.

[End of comment]

Not everything has to be research. There’s plenty of wisdom derived from experience, insight, and good thinking. But because research is glamorized and deputized in the press and numerous professions, because the phrase “research has shown” can put an end to conversation, it’s important to distinguish clearly between formal and informal (and good and bad). There are also different kinds of research for different fields; each one has its rigors and rules. Granted, research norms can also change; but overall, good research delineates clearly between the known and unknown and articulates appropriate uncertainty.
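Gelman’s “forking paths” idea can itself be illustrated with a small simulation. Nothing here models any particular study; the cuts, sample sizes, and thresholds are made up for the sketch. An analyst who tries several arbitrary data cuts on pure noise, and reports whichever looks most significant, sees far more “significant” results than the nominal 5% rate promises.

```python
import random
import statistics

def t_stat(a, b):
    """Welch-style t statistic for two independent samples."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

def forked_analysis(rng, n=30, n_paths=8):
    """An 'analyst' tries n_paths arbitrary data cuts on pure noise
    and keeps whichever cut looks most significant."""
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    best = max(abs(t_stat(rng.sample(a, n // 2), rng.sample(b, n // 2)))
               for _ in range(n_paths))
    return best > 1.96  # nominal 5% threshold

rng = random.Random(42)
trials = 2000
one_path = sum(
    abs(t_stat([rng.gauss(0, 1) for _ in range(30)],
               [rng.gauss(0, 1) for _ in range(30)])) > 1.96
    for _ in range(trials)) / trials
many_paths = sum(forked_analysis(rng) for _ in range(trials)) / trials
print(f"false-positive rate, one fixed analysis: {one_path:.3f}")
print(f"false-positive rate, best of 8 paths:    {many_paths:.3f}")
```

The point is not that informal exploration is bad, but that its findings carry a very different uncertainty than a single pre-specified analysis, and should be presented accordingly.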

Update: See Dan Kahan’s paper on a related topic. I will write about this paper in a future post. Thanks to Andrew Gelman for bringing it up on his blog.

Lectures, Teams, and the Pursuit of Truth

One of these days, soon, I’ll post something about teaching. Since I’m not teaching this year, I have had a chance to pull together some thoughts about it.

In the meantime, here are a few comments I posted elsewhere. First, I discovered, to my great surprise, that Andrew Gelman seeks to “change everything at once” about statistics instruction—that is, make the instruction student-centered (with as little lecturing as possible), have interactive software that tests and matches students’ levels, measure students’ progress, and redesign the syllabus. While each of these ideas has merit and a proper place, the “change everything” approach seems unnecessary. Why not look for a good combination of old and new? Why abandon the lecture (and Gelman’s wonderful lectures in particular)?

But I listened to the keynote address (that the blog post announced) and heard a much subtler story. Instead of trumpeting the “change everything” mantra into our poor buzzword-ringing heads, Gelman asked questions and examined complexities and difficulties. Only in the area of syllabus did he seem sure of an approach. In the other areas, he was uncertain but looking for answers. I found the uncertainty refreshing but kept on wondering, “Why assume that you need to change everything? Isn’t there something worth keeping right here, in this very keynote address about uncertainties?”

Actually, the comment I posted says less than what I have said here, so I won’t repeat it. I have made similar points elsewhere (about the value of lectures, for instance).

Next, I responded to Drake Baer’s piece (in New York Magazine’s Science of Us section), “Feeling Like You’re on a Team at Work Is So Deeply Good for You.” Apparently a research team (ironic, eh?) led by Niklas Steffens at the University of Queensland found that, in Baer’s words, “the more you connect with the group you work with—regardless of the industry you’re in—the better off you’ll be.”

In my comment, I pointed out that such associations do not have to take the form of a team—that there are other structures and collegial relations. The differences do matter; they affect the relation of the individual to the group. Not everything is a team. Again, no need to repeat. I haven’t yet read the meta-study, but I intend to do so.

Finally, I responded to Jesse Singal’s superb analysis of psychology’s “methodological terrorism” debate. Singal points to an underlying conflict between Susan Fiske’s wish to protect certain individuals and others’ call for frank, unbureaucratic discussion and criticism. To pursue truth, one must at times disregard etiquette. (Tal Yarkoni, whom Singal quotes, puts it vividly.) There’s much more to Singal’s article; it’s one of the most enlightening new pieces I have read online all year. (In this case, by “year” I mean 2016, not the past twelve days since Rosh Hashanah.)

That’s all for now. Next up: a piece on teaching (probably in a week or so). If my TEDx talk gets uploaded in the meantime (it should be up any day now), I’ll post a link to it.

Beyond a Dream of Uncertainty

A few years ago, I wrote of a dream of uncertainty. Today I second this dream but also want something beyond it.

We live in a culture of takeaways. The quick “apply it right now” answer takes precedence over complications and open questions. So-called “scientific findings” (as presented on TED and elsewhere) are often tenuous, as the power pose example suggests. Science here is not at fault; the problem lies in the market for quick solutions (and everything feeding that market, from a gullible audience to an overhyped study).

Most of the time, both science and life take time to figure out. Most of the time, any understanding, any progress, requires grappling with errors over many years.

On Andrew Gelman’s blog, Shravan Vasishth posted a terrific comment (worth reading in full) that concludes:

So, when I give my Ted talk, which I guess is imminent, I will deliver the following life-hacks:

1. If you want big changes in your life, you have to work really, really hard to make them happen, and remember you may fail so always have a plan B.
2. It’s all about the content, and it’s all about the preparation. Presence and charisma are nice to have, and by all means cultivate them, but remember that without content and real knowledge and understanding, these are just empty envelopes that may fool some people but won’t make you any better than you are now.

There was a reason that Zed Shaw wrote the books Learn Python the Hard Way and Learn C the Hard Way. There is no easy way.

In this spirit, I continue to dream but do not only dream. I want a society that recognizes substance, that does not fall so easily for bad science. Along with that, I want more kindness, more willingness to see the good in others (while also engaging with them in vigorous debate). But to help bring that about, I need to continue my own studies, pushing up against my own challenges and errors. So let this be a year of study, challenge, substance, and goodwill.

Research Has Shown … Just What, Exactly? (Reprise)

A few years ago, I wrote a piece with this title, minus the “(Reprise).” (And here’s a piece from 2011.)

It seems apt today (literally today) in light of Dana Carney’s statement, released late last night, that she no longer believes “power pose” effects are real. She explains her reasons in detail. I learned about this from a comment on Andrew Gelman’s blog; an hour and a half later, an article by Jesse Singal appeared in Science of Us (New York Magazine).

Dana Carney was one of three authors of the 2010 study, popularized in Amy Cuddy’s TED Talk, that supposedly found that “power poses” can change your hormones and life. (Andy Yap was the third.)

The “power pose” study has been under criticism for some time; a replication failed, and an analysis of both the study and the replication turned up still more problems. (For history and insight, see Andrew Gelman and Kaiser Fung’s Slate article.) Of the three researchers involved, Carney is the first to acknowledge the problems with the original study and to state that the effects are not real.

Carney not only acknowledges her errors but explains them clearly. The statement is an act of courage and edification. This is how research should work; people should not hold fast to flawed methods and flimsy conclusions but should instead illuminate the flaws.

Update: Jesse Singal wrote an important follow-up article, “There’s an Interesting House-of-Cards Element to the Fall of Power Poses.” He discusses, among other things, the ripple effect (or house-of-cards effect) of flawed studies.

Science ≠ Community

I enjoy Andrew Gelman’s blog; it’s a great place if you have heard too many false and flashy “research has shown” and “science tells us” statements and want to know (a) where such research goes wrong and (b) why these errors don’t get attention. The blog attracts readers and commenters from many fields and perspectives; some of the comments could be pieces on their own.

Recently there was an enlightening post followed by lively discussion of Susan Fiske’s call for an end to the reign of “destructo-critics” and “self-employed data police” in social media—that is, those who employ “methodological terrorism” in criticizing others’ research. She doesn’t name names or give any concrete examples, so it isn’t clear who the “destructo-critics” are. She does suggest, though, that the legitimate channels for criticism are peer review or “curated” discussion. That is, she opposes not just nasty tweets, vicious personal attacks, and so forth, but (possibly) any unsanctioned critical commentary on research. In her conclusion, she says, “Ultimately, science is a community, and we are all in it together.”

A community, eh? That rings a bell….

A commenter (“Plucky”) seized this sentence and gave it a good shaking:

The key problem with Fiske is this sentence: “Ultimately, Science is a community, and we are all in it together.”

That is just flat-out wrong, and wrong in the ways that result in all that you have criticized. Science is not a community, it is a method.

That’s just the beginning; “Plucky” goes on to explain the dangers of the “community” metaphor. I recommend reading the whole comment. Here’s another choice quote:

My main criticism of this post is the stages of the metaphor—you’re nowhere near six feet of water in Evangeline. If Science devolves into merely a community, then it’s just another political interest group which will be treated as such.

I then remembered the many times I had heard the phrase “the consensus of the scientific community,” along with references to Thomas Kuhn, who supposedly coined it. Kuhn actually said, “What better criterion could there be … than the decision of the scientific group?” He explained what he meant by this, but I consider the statement problematic at best, even in context.

Kuhn aside, I use the word “community” sparingly and cautiously. Many entities that call themselves communities are not communities or should not be. As “Plucky” notes, “communities do not generally value the truth over their members’ well-being.” They exist to support their members.

In fact, someone who wishes to challenge a prevailing idea must often speak independently, without waiting for “community” approval. Dana Carney has just done this in relation to “power poses.” (This is big news, by the way.)

I do not disparage communities overall. Communities of various kinds have a place in the world, and I belong to a few. Still, even the best communities can ask themselves, “To what extent is this a community, and whom do we leave out?” and “What goals does this community fail to serve, or even work against?” The point is not to make the community all-encompassing but rather to recognize its limitations.

When it comes to science, I’ll take an open forum over a community any day.

Note: I added the paragraph about Carney after posting this piece.
Update: Writing for Science of Us (New York Magazine), Jesse Singal reported this morning on Carney’s statement and explanation.