Solutions to scientific fraud: Should research malpractice be illegal?

You’ve probably never heard of cardiologist Don Poldermans, but experts who study scientific misconduct believe thousands of people may have died because of him.

Poldermans was a prolific medical researcher at the Erasmus Medical Center in the Netherlands, where he analyzed the standards of care for heart surgery, publishing a series of definitive studies from 1999 to the early 2010s.

One crucial question he studied: Should patients be given a beta-blocker, which lowers blood pressure, before certain heart operations? Poldermans’ research answered in the affirmative. European (and to a lesser extent American) medical guidelines recommended it accordingly.

The problem? Poldermans’ data was allegedly fake. A 2012 investigation by the Erasmus School of Medicine, his employer, into allegations of misconduct found that he “used patient data without written permission, used fictitious data, and … submitted conference papers that included knowingly unreliable data.” Poldermans admitted the allegations and apologized, while stressing that the use of fictitious data was accidental.

Following these revelations, a new meta-analysis assessing the advisability of giving beta-blockers before heart surgery was published in 2014. It found that beta-blocker treatment increased the likelihood of death within 30 days of surgery by 27%. In other words, the treatment Poldermans had endorsed on the basis of falsified data, and that European guidelines adopted on the strength of his research, actually significantly increased the risk of death from heart surgery.

Tens of millions of heart surgeries were performed in the United States and Europe between 2009 and 2013, when these flawed guidelines were in effect. A provocative analysis by cardiologists Graham Cole and Darrel Francis estimated that the death toll may have been as high as 800,000, deaths that might have been avoided had best practices been adopted five years earlier. The exact figure is hotly contested, but a 27% increase in mortality for a common procedure, sustained over several years, adds up to an extraordinary number of deaths.

I heard about the Poldermans case when I contacted researchers specializing in scientific fraud and asked them a provocative question: Should scientific fraud be prosecuted?

Unfortunately, fraud and misconduct are not as rare in the scientific community as we might hope. We also know that the consequences for committing such fraud are often disappointingly light. It can take years for a flawed paper to be retracted, even when the flaws are obvious. Sometimes, scientists accused of falsifying their data file frivolous lawsuits against the peers who flagged the problems, further discouraging anyone who might come forward about bad data. And we know the stakes can be high: fraudulent results can have far-reaching consequences for how patients are treated.

In cases where research dishonesty is literally killing people, wouldn’t it be appropriate to turn to the criminal justice system?

The question of whether research fraud should be a crime

In some cases, it can be difficult to distinguish deliberate research misconduct from mere negligence.

If a researcher fails to apply the appropriate statistical correction for multiple hypothesis testing, they are likely to end up with spurious results. In some cases, researchers are strongly incentivized to be careless on this point by an academic culture that prizes positive results above all else: it rewards researchers who find an effect, even if the methods behind it are shaky, while showing little interest in publishing rigorous work that finds no effect.
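
To make the pitfall concrete, here is a minimal simulation, my own sketch rather than anything from the researchers discussed here, assuming Python with numpy and scipy. It runs 100 tests on pure noise and counts how many come back “significant” with and without a Bonferroni correction:

```python
# Illustrative sketch only: 100 two-sample t-tests on pure noise.
# Without a multiple-testing correction, roughly 5% come back "significant"
# even though every true effect is zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, alpha = 100, 0.05

p_values = []
for _ in range(n_tests):
    group_a = rng.normal(size=30)  # both groups drawn from the same distribution
    group_b = rng.normal(size=30)
    p_values.append(stats.ttest_ind(group_a, group_b).pvalue)
p_values = np.array(p_values)

naive_hits = (p_values < alpha).sum()                # no correction
corrected_hits = (p_values < alpha / n_tests).sum()  # Bonferroni correction

print(f"Uncorrected 'significant' results: {naive_hits} / {n_tests}")
print(f"Bonferroni-corrected results:      {corrected_hits} / {n_tests}")
```

Every uncorrected “hit” in that simulation is a false positive, while the corrected threshold almost always reports none. A researcher who skips the correction is not necessarily committing fraud, which is exactly why the line between sloppiness and dishonesty can be hard to draw.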

But I think it would be a bad idea to prosecute that kind of behavior. It would have a serious chilling effect on research and would probably make the scientific process slower and more legalistic, which would itself cost lives that could be saved if science were able to advance more freely.

Thus, the debate over criminalizing scientific fraud tends to focus on the most clear-cut cases: the intentional falsification of data. Elisabeth Bik, a research scientist who studies fraud, made a name for herself by demonstrating that images of test results published in many medical journals had clearly been altered. That kind of alteration is not an innocent mistake, which makes it a useful benchmark for how often manipulated data makes it into print.

Although some scientific fraud may technically fall under existing laws (those prohibiting lying on a grant application, for example), in practice scientific fraud is almost never prosecuted. Poldermans ended up losing his job in 2011, but most of his papers were never even retracted, and he faced no other consequences.

But with growing awareness of the prevalence and harm of scientific fraud, some scientists and scientific fraud watchdogs have proposed a change. A new law, specifically tailored to scientific fraud, could help to better distinguish negligence from fraud.

The question is whether legal consequences could actually help solve our fraud problem. I asked Bik what she thought of the proposals to criminalize the wrongdoing she has studied.

Her response: While she doesn’t think criminalization is the right approach, people should understand that right now there are virtually no consequences for offenders. “It’s infuriating to see people cheat,” she told me. “And even when it’s NIH grants, the penalties are very light. Even for people who are caught cheating, the punishment is mild. You’re not eligible for new grants for the next year, or maybe the next three years. It’s very rare for people to lose their jobs over this.”

Why? Basically, it’s an incentive problem. Institutions are embarrassed when one of their researchers does something wrong, so they prefer to hand down a light punishment and quietly drop the investigation. No one has much interest in getting to the bottom of the misconduct. “If the worst consequence of speeding was a police officer saying, ‘Don’t do it again,’ everyone would speed,” Bik told me. “That’s the situation we have in science. Do what you want. If you get caught, it will take years to investigate.”

In some ways, a law is not the ideal solution. Courts, too, tend to take years to deliver justice in complex cases, and they are not well positioned to answer detailed scientific questions; they would almost certainly rely on scientific institutions to conduct the actual investigations. What really matters is the quality of those institutions, not whether they answer to a court, a nonprofit, or the NIH.

But in cases of sufficiently serious misconduct, it seems to me that there would be great benefit in bringing in an institution outside of academia to shed light on these cases. If well designed, a law allowing prosecutions for scientific fraud could remove the overwhelming incentives to let misconduct go unpunished and move on.

If investigations were conducted by an outside body, it would no longer be so easy for institutions to protect their reputations by covering up fraud. That body would not even have to be a prosecutor; an independent scientific review board would probably suffice, Bik said.

Ultimately, prosecutions are a blunt tool. They might help establish accountability in cases where no one has an incentive to do so—and I think in cases of misconduct that have led to thousands of deaths, that would be a matter of justice. But they are neither the only way to solve our fraud problem, nor necessarily the best way.

So far, efforts to create institutions within the scientific community to police misconduct have had only limited success. At this point, I would count it as progress if external institutions were empowered to police it as well.