The Economist calls it "ironic." In the midst of working on a book called Evilicious: Why We Evolved a Taste for Being Bad, evolutionary psychologist Marc Hauser has been accused of some wrongdoing of his own: specifically, scientific misconduct and cheating. What exactly did he do? The Chronicle of Higher Education provides an overview:
It was one experiment in particular that led members of Mr. Hauser's lab to become suspicious of his research and, in the end, to report their concerns about the professor to Harvard administrators. The experiment tested the ability of rhesus monkeys to recognize sound patterns. Researchers played a series of three tones (in a pattern like A-B-A) over a sound system. After establishing the pattern, they would vary it (for instance, A-B-B) and see whether the monkeys were aware of the change. If a monkey looked at the speaker, this was taken as an indication that a difference was noticed. The method has been used in experiments on primates and human infants. Mr. Hauser has long worked on studies that seemed to show that primates, like rhesus monkeys or cotton-top tamarins, can recognize patterns as well as human infants do. Such pattern recognition is thought to be a component of language acquisition.
Researchers watched videotapes of the experiments and "coded" the results, meaning that they wrote down how the monkeys reacted. As was common practice, two researchers independently coded the results so that their findings could later be compared to eliminate errors or bias. According to the document that was provided to The Chronicle, the experiment in question was coded by Mr. Hauser and a research assistant in his laboratory. A second research assistant was asked by Mr. Hauser to analyze the results. When the second research assistant analyzed the first research assistant's codes, he found that the monkeys didn't seem to notice the change in pattern. In fact, they looked at the speaker more often when the pattern was the same. In other words, the experiment was a bust. But Mr. Hauser's coding showed something else entirely: He found that the monkeys did notice the change in pattern—and, according to his numbers, the results were statistically significant. If his coding was right, the experiment was a big success.
The second research assistant was bothered by the discrepancy. How could two researchers watching the same videotapes arrive at such different conclusions? He suggested to Mr. Hauser that a third researcher should code the results. In an e-mail message to Mr. Hauser, a copy of which was provided to The Chronicle, the research assistant who analyzed the numbers explained his concern. "I don't feel comfortable analyzing results/publishing data with that kind of skew until we can verify that with a third coder," he wrote. A graduate student agreed with the research assistant and joined him in pressing Mr. Hauser to allow the results to be checked, the document given to The Chronicle indicates. But Mr. Hauser resisted, repeatedly arguing against having a third researcher code the videotapes and writing that they should simply go with the data as he had already coded it. After several back-and-forths, it became plain that the professor was annoyed.
"i am getting a bit pissed here," Mr. Hauser wrote in an e-mail to one research assistant. "there were no inconsistencies! let me repeat what happened. i coded everything. then [a research assistant] coded all the trials highlighted in yellow. we only had one trial that didn't agree. i then mistakenly told [another research assistant] to look at column B when he should have looked at column D. ... we need to resolve this because i am not sure why we are going in circles."
The research assistant who analyzed the data and the graduate student decided to review the tapes themselves, without Mr. Hauser's permission, the document says. They each coded the results independently. Their findings concurred with the conclusion that the experiment had failed: The monkeys didn't appear to react to the change in patterns. They then reviewed Mr. Hauser's coding and, according to the research assistant's statement, discovered that what he had written down bore little relation to what they had actually observed on the videotapes. He would, for instance, mark that a monkey had turned its head when the monkey didn't so much as flinch. It wasn't simply a case of differing interpretations, they believed: His data were just completely wrong.
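The safeguard at the center of this story, having two people code the same tapes and then checking how well their judgments line up, is routinely quantified with an inter-rater reliability statistic such as Cohen's kappa, which corrects raw agreement for the agreement two coders would reach by chance. As a rough sketch of that check (the trial codings below are invented for illustration, not anyone's actual data):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' trial-by-trial codes."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Raw proportion of trials on which the two coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's marginal frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codings: 1 = "monkey oriented toward the speaker", 0 = no reaction.
coder_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
coder_2 = [0, 0, 0, 1, 0, 0, 1, 0, 0, 1]

print(round(cohens_kappa(coder_1, coder_2), 2))  # → 0.31
```

A kappa near 1 means the coders are seeing the same thing; a value this low (about 0.31 here, where 0 is chance-level agreement) is exactly the sort of "skew" that would justify the assistant's request for a third coder rather than publishing either set of codes.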
The reaction to the Hauser investigation has been a mix of outcry against academic dishonesty, frustration at a lack of research standards, and distress over the terrible blow to the scientific community. But a refreshingly optimistic angle comes from JL Vernon, who sees "Hausergate" as an opportunity to demonstrate the integrity of the scientific process. Here are some of his ideas:
My reaction to this story may surprise readers of my blog, because I believe there is a silver lining to this story. If handled properly, this tragedy can do great things for science. What we have here is a ripe opportunity to showcase the integrity of the scientific process. As I mentioned in my recent article on creating science brand loyalists, I think scientists need to be more transparent about the scientific process from experimental design through peer-reviewed publication. By emphasizing the mechanisms built into the scientific process that brought this deception to an end, science communicators and journalists can make the public aware that science is a self-regulating system in which fraud will not endure. While there were failures in the system, science ultimately prevailed.
In this particular case, the misconduct that led to the investigation of Dr. Hauser occurred at the earliest stage of the scientific process, the experimental design. David Dobbs does a great job describing the weaknesses of Hauser's experimental protocols. The experiments involved observation of video recordings of monkeys responding to certain stimuli that were varied over time in order to induce a response from the monkeys. The monkeys' reactions to the stimuli were recorded by the observer. Based on a letter written by the whistleblower researchers, Professor Hauser's observations conflicted with those of his lab assistants. After the researchers realized that Dr. Hauser was trying to force them to accept and publish shoddy data, they acted properly by approaching the Harvard University administration to address these issues of scientific misconduct.
For their bravery, the whistleblowers should be recognized as “loyal defenders” of science. Not only did they end Dr. Hauser’s dangerous practices, they also fulfilled the unofficial oath for science.
Thankfully, once these individuals brought this issue to the attention of the Harvard University ombudsman and the Dean of Arts and Sciences, the appropriate investigation was undertaken. As far as we know, Dean Smith did not delay the investigation, and, subsequent to its completion, Dr. Hauser was properly sanctioned.
Related Links: A letter from the Dean of the Faculty of Arts and Sciences describes the findings against Hauser in more detail.