Being a Smart Consumer of the Academic Literature: Gender Differences and the Comp/Egal Debate

The CBMW blog has a post up highlighting research that supposedly agrees with the timeless truth that men and women are different. Jeff Robinson writes:

That this research and story confirms the obvious aside, this represents something of a landmark admission by a secular science journal. Since the advent of feminism in the 1960s, secular academics and researchers have been hard at the task of seeking to prove that gender differences are negligible, circumstantial and not a part of design.
This research once again confirms God’s good design: He has created men and women in His image to play equally valuable, but complementary roles. To accomplish this, it has pleased God to equip us with different gifts, different strengths and different weaknesses, all perfectly congruent with those of the opposite gender. It will be interesting to see how much play this article gets in the media and how (or if) the secular academy responds.

So what does this mean? Is Mr. Robinson being a smart consumer of the research literature, or is this an example of selecting and spinning for the purpose of upholding an agenda?

To examine this, I turn to Dr. Charles Hackney for assistance. As a psychology professor, Chuck has the huge task of teaching his students how to be smart readers and how to properly use their “B.S.ometers”. Here are some tips on how to critically evaluate the research being presented:

Find the actual study:

Don’t just go to the media write-up. The actual study can be found here.

Evaluate the Journal:

PLoS ONE is an open-access journal that charges authors money to publish their papers, does not assess the quality of the study beyond the technical aspects, and accepts 70% of papers submitted (by contrast, the journals published by the APA have an average 71% rejection rate, and the Journal for the Scientific Study of Religion, which published Chuck’s meta-analysis, has an 80% rejection rate). So it’s not a great journal.

Evaluate the research methodology and conclusions of the study:

The authors of the study deliberately (and explicitly) chose methodological and analytic approaches that they believed would maximize observed gender differences, then claimed that this made their approach “better.” This is a dubious move that has been criticized by a number of other researchers.
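
To make the kind of analytic choice at issue concrete, here is a minimal Python sketch on simulated data. Everything in it is hypothetical (16 uncorrelated scales, each with a small difference of d = 0.25), and it is not a reproduction of the study’s actual analysis; it only illustrates how the same data can be summarized either as many small scale-by-scale differences or as a single large multivariate distance, depending on which effect-size metric one decides is “better”:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5000, 16        # people per group, number of personality scales (hypothetical)
true_d = 0.25          # a "small" standardized difference on every scale (hypothetical)

# Simulate two groups that differ slightly on each of k uncorrelated scales.
group_a = rng.normal(0.0, 1.0, size=(n, k))
group_b = rng.normal(true_d, 1.0, size=(n, k))

# Univariate view: Cohen's d computed scale by scale -- each difference looks small.
pooled_sd = np.sqrt((group_a.var(axis=0, ddof=1) + group_b.var(axis=0, ddof=1)) / 2)
d_per_scale = (group_b.mean(axis=0) - group_a.mean(axis=0)) / pooled_sd
print("average univariate d:", round(float(d_per_scale.mean()), 2))   # around 0.25

# Multivariate view: Mahalanobis D across all scales at once -- the same data now
# yields one "large" distance (roughly sqrt(k) times the per-scale d in this setup).
mean_diff = group_b.mean(axis=0) - group_a.mean(axis=0)
pooled_cov = (np.cov(group_a, rowvar=False) + np.cov(group_b, rowvar=False)) / 2
D = float(np.sqrt(mean_diff @ np.linalg.inv(pooled_cov) @ mean_diff))
print("multivariate D:", round(D, 2))                                 # around 1.0
```

Neither number is wrong; they answer different questions. The point is that treating the aggregated figure as the definitive measure of how different men and women are is itself a judgment call, not a neutral improvement.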

Part of the authors’ argument is that previous research has relied on broad-brush personality theories like the Five Factor Model, and that their use of 16 factors provides a more detailed analysis. Fine, except for the fact that FFM personality tests further break down the five factors into smaller subfactors. The most well-known FFM measure, for example, provides five scores representing overall personality traits, but also provides a more detailed breakdown involving 30 more specific facets. One study, published in 2001, involved administering an earlier version of this measure to over 23,000 participants in 26 cultures. The researchers did find gender differences in 28 of the 30 facets, but these differences were small to moderate in size. The size of the differences also varied from one culture to another.

This points to another limitation of the study, and of the conclusions that might be drawn from it. The study published in PLoS ONE drew its participants only from the US, which limits our ability to generalize the results to all humans. (The 2001 study, by the way, was cited in the PLoS ONE article, but the authors only discussed it in terms of the five broad traits, and then claimed that their more specific 16-factor approach was better.)
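
To give a rough sense of what “small to moderate” differences like those in the 2001 facet study mean in practice, the overlap between two equal-variance normal distributions separated by Cohen’s d can be computed directly (OVL = 2 * Phi(-|d|/2), where Phi is the standard normal CDF). The d values below are Cohen’s conventional benchmarks, not figures taken from either study:

```python
from scipy.stats import norm

# Overlap coefficient for two equal-variance normal distributions whose
# means differ by Cohen's d:  OVL = 2 * Phi(-|d| / 2)
for d in (0.2, 0.5, 0.8):   # Cohen's conventional small / medium / large benchmarks
    overlap = 2 * norm.cdf(-abs(d) / 2)
    print(f"d = {d}: the two distributions overlap by about {overlap:.0%}")
# roughly 92%, 80%, and 69% overlap, respectively
```

In other words, even a “moderate” average difference leaves most men and most women looking a lot like each other, which is why the size of an effect, and not just its existence, matters when reading studies like these.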

Evaluate the Blog Post:

First, there is a difference between “gender differences” and “inherent gender differences.” Gender differences (and that includes personality differences) are often substantial, but are the product of both biological and social factors. So finding larger differences than previous studies found does not lock us into the interpretation that these differences are all about God’s design. Also, the CBMW author rails against secular academics who are trying to prove that gender differences are “negligible, circumstantial and not a part of design,” but ignores the fact that the study (which I’m guessing he didn’t read) is about a conflict between academics who expect gender differences to be small and other academics (mostly evolutionary psychologists) who expect them to be large.

In addition, the CBMW author finishes off with a snarky expectation that the story will be ignored by the mainstream media and the academy. First problem: his source for the story IS the mainstream media (The Telegraph). Second, a quick Google search shows that the story is being run by all sorts of mainstream news-media sources.

Pointing to a poorly written study in a poor-quality journal and using it to “prove” an organization’s position actually serves to undercut the credibility of said organization.

  • Thank you for this analysis. To me the most blatant thing about the CBMW article was the clear implication: “If a study agrees with our conclusions, it’s accurate; if not, it’s obviously biased and based on an agenda.”

  • I appreciate the way that you clearly laid out your analytical strategy. It drives me crazy how often the debates in public Christian forums use evidence (either the Bible or other sources) in a way that makes no sense or that relies on false understandings of the text. I think there need to be more writers like you to help develop Christians’ critical thinking skills. Well done!

  • Trust but verify! And when the organization has an agenda, recognize that their choices may be agenda-driven!

  • Wow. Good analysis, Chuck! It’s very easy to make bad arguments . . .

  • Micah

    You’re being a little hard on PLoS. It’s a younger journal family by a good bit, and a different business model means different expected article submissions. Comparing to APA isn’t apples-to-apples.

    Good job otherwise, though.

    • Having a different business model is fine; I’m not against online journals or proposals to address the admitted shortcomings of the current peer-review system. But the shortcomings of their approach seem to me to outweigh the benefits. It’s the same problem with a lot of online stuff: greater openness of information is awesome, but it requires more discernment, since the internet’s poop-to-gold ratio is heavy on the poop.

      And calling it apples-to-something-other-than-apples to compare APA to PLoS only reinforces my concerns, since the mainstream media (and blogs) are treating this study as equal to one published in a top-tier journal (the Journal of Personality and Social Psychology would never have published this study).

  • Amanda B.

    Or we could mention that the write-up didn’t bother to explain how men and women were found to be vastly different. Or the fact that, even if we assume the study is perfectly valid and unbiased, 18% of men and women still responded in ways that were surprising given their sex. That’s nearly 1 out of 5 people who don’t fit the mold. That’s a minority, but a significant one: plenty significant enough to conclude that sweeping stereotypes just don’t wash.