In many scientific fields, and especially in psychology, there’s currently a lot of concern about the integrity of published work. Many of these concerns stem from “sloppy science” – practices like massaging data, for example by excluding participants who detract from a significant effect, or by peeking at the data over the course of running participants and ending data collection once the results confirm the hypothesis. Then there’s the “file drawer problem,” which results from widespread acceptance that a p-value of less than 0.05 indicates significant results. What this p-value actually means is that if there is truly no effect, a test at this threshold will produce a false positive about 5% of the time – basically a false alarm, detecting an effect that doesn’t exist. So in theory, if the same (wrong) study is run 20 times, it should yield null findings about 19 times and a false alarm once. If the false alarm gets published (as truth) and scientists base future research on it, it could result in a lot of wasted time and money.
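The 1-in-20 intuition is easy to check with a quick simulation – here is a minimal sketch (the study sizes and test are my own illustrative choices, not from any of the studies discussed here) that runs thousands of “studies” where there is genuinely no effect and counts how often p < 0.05 anyway:

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(0)

def null_study(n=100):
    """One simulated 'study' with no real effect:
    both groups are drawn from the same distribution."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Two-sample z-test (a reasonable approximation at this sample size)
    se = (stdev(a) ** 2 / n + stdev(b) ** 2 / n) ** 0.5
    z = (mean(a) - mean(b)) / se
    # Two-sided p-value: probability of a difference this extreme by chance
    return 2 * (1 - NormalDist().cdf(abs(z)))

n_studies = 10_000
false_alarms = sum(null_study() < 0.05 for _ in range(n_studies))
print(f"False-positive rate: {false_alarms / n_studies:.3f}")  # close to 0.05
```

Even though no effect exists, roughly 5% of these simulated studies come out “significant” – and with the file drawer problem, those are the ones most likely to see print.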
Most cases of questionable scientific integrity lie in a blurry area between honest mistake and outright misconduct, but this isn’t always the case. I recently discovered a captivating and lengthy article in the New York Times called “The Mind of a Con Man,” spotlighting Diederik Stapel, a prominent Dutch social psychologist whose fraudulent scientific practices were discovered in 2011. Fabricating any data is a serious enough offense, but investigations have revealed that Stapel fabricated studies for at least 55 papers. Stapel also allowed the fabricated data to be used in 10 different students’ PhD dissertations, but none of these students are being held liable because they had no idea that any of the data had been faked.
In most cases, Stapel completely made up the data. He didn’t even run many of the experiments he claimed to have run, and in some cases the experiments wouldn’t even have been possible to run. For example, he claimed to have conducted one study in the Utrecht train station, but the layout of the station would have made the arrangement that Stapel described impossible. In another, he was supposed to measure whether participants who were given M&Ms in a mug that read “Capitalism” consumed more than those whose M&Ms were in a different mug. Instead of running the experiment, he brought the M&Ms home and became the sole participant in his own study.
Why did he go to such elaborate lengths to fabricate data over and over? His advisor claims that he was a brilliant researcher, so it seems likely that he could have produced actual high-quality work. Yet in many cases he never ran the experiment at all, so he didn’t even try to genuinely investigate his hypothesis. Stapel claims that a “lifelong obsession with clarity and order… led him to concoct sexy results that journals found attractive.” According to the NYT, “he described his behavior as an addiction that drove him to carry out acts of increasingly daring fraud, like a junkie seeking a bigger and better high.”
There are serious consequences, undoubtedly for Stapel, whose academic career is over, but also for science more broadly. For one, science is a collaborative process. Although Stapel was technically collaborating with many other researchers, he was able to report that he ran many experiments and analyzed a lot of data without actually proving this to anyone. Increased transparency would eliminate the ability to conceal so much. Additionally, we should evaluate whether current practices may be contributing to the motivation for fraud like Stapel’s. In an earlier post, I wrote about mounting concerns that prestigious journals are actually encouraging bad science by accepting only extremely clean and flashy results… Stapel may be a case in point for this argument. What he did was indisputably wrong, but there may be steps we can take to avoid recurrences.