Science Utopia: Some thoughts about ethics and publication bias

This week’s edition of Psych Wednesdays was written by Michael Kraus and was originally published on Psych Your Mind on September 24, 2012.

Psychology’s integrity in the public eye has been rocked by recent high-profile discoveries of data fabrication (here, here, and here) and several independent realizations that psychologists (this is not unique to our field) tend to engage in data analytic practices that allow researchers to find positive results (here, here, and here). While it can be argued that these are not really new realizations (here), the net effect has turned psychologists toward an important question: How do we reform our science?

It’s a hard question to answer in one empirical article or one blog post, so that’s not the focus here. Instead, what I’d like to do is simply point out what I think are the most promising changes that we, as a science, can adopt right now to help prevent future data fabrication and the use of biased hypothesis tests. These are not my ideas, mind you; rather, they are ideas brought up in the many discussions of research reform (online and in person) that I have had, formally and informally, with my colleagues. Where possible, I link to the relevant sources for additional information.

(1) PUBLISH REPLICATIONS
Researchers have long highlighted the importance of replication, but in practice, empirical journals haven’t exactly been supportive. For instance, the Journal of Personality and Social Psychology, the flagship journal of our science, actually has a policy against publishing replication research. That’s criminal.

I think making replications a higher priority, with higher visibility, remains the best way to improve the integrity of our science. First, fabricated data surely won’t replicate, so the publication of replication studies can go a long way toward ferreting out findings that were never based on real data. Second, if the personality psychologist David Funder has his way, journals that publish results that do not replicate would be responsible for publishing the non-replication results. In Funder’s words, the journals would have to “clean up their own mess.” This would increase the visibility of replication studies and would also decrease the chance that unreliable results would continue to influence the literature.

In particular, I’m intrigued by an idea that was alluded to in Nosek & Bar-Anan’s (2012) paper on Science Utopia (though admittedly, they go much farther than I am suggesting): that original studies should be published alongside online replication reports. These replication reports could be easily linked to the original studies online, with minimal cost. In short, not only is replication a major positive, but the costs of implementing this change are minimal.

(2) MAKE DATA PUBLIC
Researchers are sometimes reluctant to make their data available to other researchers (and sometimes they place gag orders on data sharing). There are various reasons for this reluctance: Some researchers do not want others to poke around in their data, not because they have something to hide, but because they are planning to poke around in the data themselves. Thus, making data available to others raises the possibility that someone else will find interesting results in the data that you painstakingly collected.

I think this is a valid concern, but I believe that data should be made available, at the very least, so that others can replicate the original analyses reported in published manuscripts. Uri Simonsohn has recently highlighted the importance of publishing raw data: having raw data allows for the ferreting out of extremely questionable research practices, such as the fabrication of whole data sets. Most federal granting agencies already require some form of data sharing. It’s time that our science did as well.

(3) ACT MORE LIKE SOCIAL PSYCHOLOGISTS
One of the primary reactions I see when researchers find out about results that don’t replicate, or about someone who has faked data, is to treat that person like an outlier: an unethical and immoral windbag of a “researcher” who is nothing like the rest of the research community. This is a mistake because it minimizes the problem of unethical research practices by pinning it on a few bad apples. The reality might be much more sobering: Some form of unethical data practice is actually quite common in our field.

Two lines of reasoning lead me to this conclusion: First, most researchers will tell you that they have a file drawer where they keep all of their failed studies. This is common practice, but it’s also a major problem for the research community because the literature then reflects only the studies that worked. Second, we tend to engage in data analytic practices that bias statistical tests. According to a study by John and colleagues (2012), the majority of researchers in psychology engage in these practices.
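To make the second point concrete, here is a minimal simulation sketch (my illustration, not from the original post or from John et al.) of one such practice: "optional stopping," or peeking at the data and running a test every few participants, then stopping as soon as p < .05. Even when there is no true effect, this kind of flexibility inflates the false-positive rate well beyond the nominal 5% (the kind of problem documented by Simmons, Nelson, & Simonsohn, 2011). The sample sizes and peeking schedule below are arbitrary choices for illustration.

```python
# Illustrative simulation of optional stopping (assumed parameters, not from the post).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_simulations = 2000
max_n = 100        # maximum participants per condition
peek_every = 10    # run a t-test after every 10 participants per condition
false_positives = 0

for _ in range(n_simulations):
    # Both groups are drawn from the SAME distribution: any "effect" is pure noise.
    a = rng.normal(0, 1, max_n)
    b = rng.normal(0, 1, max_n)
    for n in range(peek_every, max_n + 1, peek_every):
        t, p = stats.ttest_ind(a[:n], b[:n])
        if p < .05:            # stop and "publish" as soon as the test is significant
            false_positives += 1
            break

print(f"False-positive rate with peeking: {false_positives / n_simulations:.2%}")
# Typically lands in the high teens -- roughly three times the nominal 5% rate.
```

The exact inflation depends on how often one peeks and when one stops, but the direction of the bias is the point: flexible stopping rules alone can make null effects look publishable.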

So it seems that practices that bias the scientific literature are actually quite widespread in our field. Instead of focusing on the outliers that give our field a black eye, as social psychologists we should be acting more like social psychologists. Specifically, we should be asking: What are the situational factors that make these unethical data practices more likely? Just to name a few: (1) replication research is not published, (2) researchers are not required to make their data available, (3) research fame is too important, (4) whistleblowing incentives are low, and (5) publication pressure is at an all-time high. A few small changes in our field (see #1 and #2 above) could reduce the press of these situational factors on researchers.

(4) STOP THE PEARL-CLUTCHING, COUCH-FAINTING ROUTINE
So how do psychologists typically react when other researchers fail to replicate their findings or question their data analytic practices? The reaction typically moves in one of two directions. First, some psychologists go on the offensive, claiming that the failed replication attempts are flawed, that the researchers who conducted the replications are stupid, and that the journal that published the replications is unethical (here). This reaction is obviously bad for science because it damages the reputations of the journal and of researchers who are doing what amounts to a great service to the scientific community: investigating whether a finding is real or a fluke. In politics, where the truth seems to have many faces, this might be okay, but we’re scientists, and the truth is out there to be discovered. Replication studies (and yes, even failures to replicate) are about truth finding, so we should encourage, not disparage, attempts at replication.

The second reaction is the pearl-clutching, couch-fainting reaction I referred to above. When researchers are accused of biases in data analysis, they typically claim ignorance; they tend to say things like “Oops, I’m not sure where the bias came from!”

I have two responses to that reaction: First, if you honestly don’t know where bias comes from in data analysis, you should very carefully read one of several blog posts (here and here) or empirical articles (here and here) on the subject. Get your head out from under that rock and start taking responsibility for running studies with better research designs and data analytic approaches. Second, if you are aware of the bias in your own data, it’s time to start admitting it and making some important changes to the way we conduct analyses and run studies. We can reform the way we do research, and we don’t need a witch hunt to do it. If responsible researchers start admitting to themselves that they need to clean up their act, new graduate students will follow. I, for one, am ready to be a part of the solution.

Here are some helpful links to a number of other blog entries related to this topic:

Random Assignment - Dave Nussbaum
Fraud Detection
Reforming Social Psychology

Personality Interest Group – Espresso - Brent Roberts
More Bem Fallout
Cycling and Social Psychology

Hardest Science - Sanjay Srivastava
Bargh-Doyen Debate
Replication, Period.

SPSP’s Official Stance

Psych-Your-Mind posts on this topic
Looking ahead to the future of research
P-curves
Reactions from SPSP 2012

References:

John, L., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524-532. DOI: 10.1177/0956797611430953

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359-1366. PMID: 22006061
