Michael Keenan


tl;dr: this latest academic journal hoax is over-hyped and the reporting on it is terrible.

A trio of academics submitted 20 ridiculous papers to various feminist/gender/related-studies journals in an effort to show the journals to be ridiculous. Seven papers were accepted. The coverage has been gloating and the Twitter response has been gleeful. But the more I look into it, the less there is to it. This is troubling, because smart people like Paul Graham and Patrick Collison have retweeted about it.

WSJ article

The Chronicle of Higher Education article

Google Drive link with all the papers and the review comments

Here’s the trio’s essay on it. At times, I think they’re deliberately vague about which ridiculous papers were accepted and which weren’t. Here’s a paragraph of theirs:

We used other methods too, like, “I wonder if that ‘progressive stack’ in the news could be written into a paper that says white males in college shouldn’t be allowed to speak in class (or have their emails answered by the instructor), and, for good measure, be asked to sit in the floor in chains so they can ‘experience reparations.’” That was our “Progressive Stack” paper. The answer seems to be yes, and feminist philosophy titan Hypatia has been surprisingly warm to it. Another tough one for us was, “I wonder if they’d publish a feminist rewrite of a chapter from Adolf Hitler’s Mein Kampf.” The answer to that question also turns out to be “yes,” given that the feminist social work journal Affilia has just accepted it.

The parallel structure of the paragraph, with ‘The answer to that question also turns out to be “yes”’, elides the very different fates of the two papers. Hypatia didn’t publish the Progressive Stack paper; in fact, they rejected it three times. But phrased this way, the rejected paper can sit in the same paragraph as an accepted one, and many people won’t remember the difference. (Here’s a Harvard lecturer’s thread, with 10,000 Twitter Likes, describing the Progressive Stack paper as accepted.)

The coverage has been even worse. Here’s a Quillette piece on it, including a passage that a Facebook friend quoted:

[Hypatia] invited resubmission of a paper arguing that “privileged students shouldn’t be allowed to speak in class at all and should just listen and learn in silence,” and that they would benefit from “experiential reparations” that include “sitting on the floor, wearing chains, or intentionally being spoken over.” The reviewers complained that this hoax paper took an overly compassionate stance toward the “privileged” students who would be subjected to this humiliation, and recommended that they be subjected to harsher treatment.

This isn’t just wrong; it’s backwards. If anything, the reviewers opposed the shaming technique. Here are the full review comments for all three rejections of the paper. I see no concern about an overly compassionate stance, and no recommendation of harsher treatment. When a reviewer does mention the technique, their concern is that it might be ineffective and that it makes them uncomfortable. Here’s a quote from the second rejection:

What are experiential reparations? Say more about this. Also, some of your suggestions strike me as “shaming.” I’ve never had much success with shaming pedagogies, they seem to foment more resistance by members of dominant groups.

And from the same reviewer in the third rejection:

Find a place for the experiential reparations. This still makes me feel uncomfortable, because it’s shame-y and I’m not sure that student can see it otherwise.

After reading the reviewer comments, I’m very sympathetic to the reviewers, and I update toward thinking that their field is not a made-up illegible jargon-fest. They say things like:

“There are dozens of claims that are asserted and never argued for.”

“The author promises to explore key terms and explain why they are applicable to the classroom. They introduce: epistemic violence, epistemic oppression, epistemic violence, testimonial smothering, privilege-evasive epistemic pushback, epistemic exploitation, testimonial injustice, hermeneutical injustice, willful ignorance, virtuous listening, and strategic ignorance. This is too much ground to cover!!”

“The scholarship is not as sound as it could be; that is, the basic structure of the argument is plausible and interesting, but the submission has far too many issues that get in the way of a clear and sound presentation of the author’s argument.”

“I think these are basically good insights, they need to be argued for more clearly and not just asserted as true. They are interesting claims, say more, say how, say why, and don’t just assert…Explain.”

Comments like these couldn’t come from a field full of fashionable nonsense that doesn’t mean anything. I’m sad to contemplate the reviewers trying to help someone fix the mistakes in their paper while the authors’ intention is to slip through as many mistakes as possible. As the editor wrote in an encouraging cover letter:

At the same time ref #1 is encouraging about your revisions. You’ll note that ref #1 says, for example, that it’s your earlier improvements that have generated some of the new problems that need attention!

See also this Twitter thread by one of the reviewers of the “Masturbation is Rape” paper (which was rejected). It’s sad: he rejected the paper but wrote some encouraging things, and the hoaxers quoted the positive parts in their essay.

I haven’t looked at all the papers in detail; this isn’t a thorough investigation. Maybe I happened across the least-bad papers and the most-misleading coverage first. I think the “fat bodybuilding” paper is just as bad as it sounds: the proposed sport would be unhealthy and unpopular, and no sport has ever been started by proposing it in a paper to an obscure journal.

But other accepted papers, I think, use a trick: invent some fake data of interest to the journal, and include a discussion section with some silly digressions. The journal accepts the paper because its core is the interesting data, and then the hoax coverage claims the paper is about the silly digressions. For example, the core of the dog park paper is a fake observational study showing that humans, especially males, are faster to stop male-on-male dog sexual encounters than male-on-female ones. I think that’s fine; it is actually indicative of heteronormativity or homophobia or whatever. The paper also has an angle about canine rape culture, and that is indeed silly, but the paper is not best described, as The Chronicle of Higher Education described it, as being “about canine rape culture in dog parks in Portland”.

There are things to learn from this episode. I have a lower opinion of fat studies than I did before. But I have a higher opinion of the various fields that correctly objected to ideology-pleasing, buzzword-filled digressions, and I wish the coverage had noted that in equal measure. I get the impression you have to fake some interesting data to get much Sokal-style fashionable nonsense through, and even then, they’ll catch most of it.

(Maybe I’m minimizing the ridiculousness of what did get past the reviewers. I think a younger, more idealistic version of me would have been more shocked by it, like the commenters at Hacker News who think that peer review should be able to detect fabricated data. My mild reaction is partly due to not expecting Idealized Science-level rigor from these fields in the first place.)

And no one should be saying anything about the rejected papers, except to praise the journals for rejecting them. If you ask someone out and they say they’re flattered but only like you as a friend, don’t gloat that they said they like you. It’s a rejection.