The million-dollar question in nutrition science is this: What should we eat to live a long and healthy life?
Researchers’ answers to this question have often been contradictory and confusing. But in recent decades, one diet has attracted the lion’s share of research dollars and public attention: the Mediterranean way of eating. And in 2013, its scientific cred was secured with PREDIMED, one of the most important recent diet studies published.
The study’s delicious conclusion was that eating as the Spanish, Italians, and Greeks do — dousing food in olive oil and loading up on fish, nuts, and fresh produce — cuts cardiovascular disease risk by a third. As Stanford University health researcher (and nutrition science critic) John Ioannidis put it: “It was the best. The best of the best.”
Not anymore. Last week, the prestigious New England Journal of Medicine pulled the original paper from the record, issuing a rare retraction. The journal also republished a new version of PREDIMED, based on a reanalysis of the data that accounted for the missteps.
PREDIMED was supposed to be an example of scientific excellence in a field filled with conflicted and flawed studies. Yet PREDIMED now appears to be horribly flawed.
At first, I thought this could be the beginning of the end of nutrition science. There have been too many poorly executed and disappointing studies over the years, too many research dollars wasted. (We’ve also just learned about industry influence in the National Institutes of Health’s alcohol studies, and in-fighting that brought down what was supposed to be the “Manhattan project for nutrition.”)
But after spending the past few days talking with some of the brightest minds in nutrition research and epidemiology, I now feel the PREDIMED retraction is actually cause for hope. Maybe even a new beginning for the field.
Yes, studies with big flaws pass peer review and make it into high-impact journals, but the record can eventually be corrected when skeptical researchers question things. It’s science working as it should, and the PREDIMED takedown is a wonderful example of that. This process should bring us a step closer to what really matters: informing people who want to know how to eat for a healthy life.
Why the PREDIMED trial was such a big deal
As I’ve reported, nutrition science has done a great job of finding ways to address diseases of nutrient deficiencies, like scurvy. But today, our greatest health problems relate to overeating. People are consuming too many calories and too much low-quality food, bringing on chronic diseases like cancer, obesity, diabetes, and cardiovascular disease.
These illnesses are much harder to get a handle on. They don’t appear overnight; they develop over years. They’re not usually related to one cause; they’re caused by many lifestyle and genetic factors in concert. And fixing them isn’t just a question of adding an occasional orange to someone’s diet. It involves looking holistically at lifestyle behaviors like diet, along with genetics, and trying to tease out the risk factors that lead to illness.
The trouble is, most of what we know about nutrition’s effects on chronic disease comes from observational data. Researchers track what large numbers of people eat over time and then look at their rates of disease, trying to tease out relationships in the data. Do people who drink more red wine have lower rates of heart disease? Is meat associated with an early death?
Because these studies aren’t controlled like experiments, they can’t tell us whether one thing caused another thing to happen. Researchers try to use statistics to control for some of these “confounding factors,” but it’s impossible to catch all of them.
That’s why the randomized controlled trial, or RCT, is considered the gold standard for evidence in health research. In these trials, scientists randomly assign their study participants to one of two groups (though sometimes more). One group gets a treatment or intervention; a control group gets a placebo. Because the assignments are made at random, each participant has an equal chance of landing in each group. And if there’s a difference in health outcomes at the end of the study, it’s fair to say that the intervention was the cause.
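The mechanics of that random-assignment step are simple enough to sketch in a few lines of Python. (The participant names, group labels, and fixed seed here are purely illustrative, not anything from a real trial.)

```python
import random

def randomize(participants, groups=("intervention", "control"), seed=42):
    """Assign each participant to a group uniformly at random.

    Because every participant has an equal chance of landing in each
    group, known and unknown confounders balance out on average.
    """
    rng = random.Random(seed)  # fixed seed only so the example is reproducible
    return {person: rng.choice(groups) for person in participants}

assignments = randomize([f"participant_{i}" for i in range(10)])
```

In a real trial, the assignment sequence is typically generated centrally and concealed from investigators, precisely so that no study site can deviate from it.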
But there’s a catch with RCTs: It’s extremely difficult and expensive to run this kind of study for long enough and in numbers that are large enough to yield meaningful answers.
This is exactly why PREDIMED stood out. “It was a randomized control trial, it was long term, and it had clinical outcomes on things that mattered,” said Ioannidis. “It was the prototype of the best that had been done.”
What PREDIMED found
The study was conducted in Spain and tracked more than 7,400 people at high risk for cardiovascular disease recruited through 11 study sites.
The study participants were randomly assigned to one of three groups: advice about following a Mediterranean diet plus free extra-virgin olive oil delivered to their homes; the same advice plus free nuts delivered to their doorsteps; or, for the control group, advice about following a low-fat diet.
The main endpoints the researchers tracked were major cardiovascular events — strokes, heart attacks, death from cardiovascular causes. They stopped the trial early, after a median follow-up time of nearly five years, because the effects of the diet seemed to be so dramatic. The Mediterranean diet, when supplemented with lots of olive oil or nuts, could cut a person’s risk of cardiovascular disease by a third.
It’s rare to see an effect size that big, even in studies on medications. What’s more, the study’s early halting seemed to exaggerate the results in the minds of the public.
“The magnitude of the diet’s benefits startled experts,” the Times reported when the trial first came out. “The study ended early, after almost five years, because the results were so clear it was considered unethical to continue.”
What went wrong with the study
The findings helped vault olive oil and nuts into the realm of the sacred. They were catnip for journalists (“Mediterranean Diet Shown to Ward Off Heart Attack and Stroke,” read the New York Times headline; “Spanish Test: Mediterranean Diet Shines In Clinical Study,” read NPR’s). And they spawned a cottage industry of studies by other scientists, who used the PREDIMED data to run hundreds of different analyses.
But, it turns out, the trial wasn’t properly run.
John Carlisle, a British anesthesiologist (and legend in medical statistics), first called attention to something fishy about PREDIMED in a June 2017 paper. In it, he applied a statistical test to 5,087 randomized trials, including PREDIMED, to check whether the baseline characteristics of the randomized groups were more similar than chance would allow. He found that PREDIMED’s were, suggesting problems with the trial’s randomization process.
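The intuition behind Carlisle’s test can be illustrated with a toy version (his actual method is more sophisticated, using Monte Carlo simulation over the baseline means and standard deviations reported in each paper). For any baseline variable, a simple z-test gives a p-value for the difference between two groups. Under genuine randomization, those p-values should be spread roughly uniformly between 0 and 1 across many independent variables, so a pile-up near 1, meaning groups that look almost identical, is itself a red flag:

```python
import math

def baseline_p_value(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sided p-value (normal approximation) for a difference in
    baseline means between two randomized groups."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    z = abs(mean1 - mean2) / se
    # Survival function of the standard normal, doubled for two sides.
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Identical group means give a p-value of 1.0: "perfectly balanced."
# One such result is unremarkable; dozens of them across a trial's
# baseline table would be suspiciously good luck.
p_same = baseline_p_value(100, 15, 500, 100, 15, 500)
p_diff = baseline_p_value(100, 15, 500, 110, 15, 500)
```

The numbers fed in here are made up; the point is only that Carlisle could flag trials whose baseline tables were collectively too well balanced to be believable.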
One of PREDIMED’s authors, nutrition researcher Miguel Ángel Martínez González, told Vox that, upon seeing the Carlisle paper, he reached out to his colleagues on PREDIMED’s steering committee about reexamining the data. To their credit, they also wrote to the New England Journal about the potential errors, which led to a plan to recover the old data and root out any mistakes.
Martínez González, his colleagues, and editors at NEJM spent months poring over the data. They discovered several major problems:
- The researchers at one of the study sites, overseeing 11 clinics, failed to randomly assign individual people to the different diets, instead assigning everybody in the same clinic to the same diet.
- In cases where more than one person in a family was participating in the study, they were all assigned to the same diet, again instead of being randomized. (And while the authors knew about this deviation, it wasn’t reported in their original study as it should have been.)
- Researchers at another study site failed to properly use the randomization table, which is supposed to guide how the randomization is done.
Altogether, these errors meant that some 1,588 of the study’s participants — or a little more than 20 percent — weren’t properly randomized in one way or another.
Needless to say, this isn’t how a randomized trial should work. The researchers should have made sure each study site and clinic involved in the trial was following strict procedures on how to randomize, continually following up with and checking in on them, Indiana University School of Public Health Dean David Allison told me.
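The difference between what happened at that site and proper procedure is easy to see in code. In this sketch (the clinic and patient names are made up), the flawed approach draws one diet per clinic instead of one per person, so everyone in a clinic, who may share a doctor, a neighborhood, and eating habits, ends up in the same arm:

```python
import random

DIETS = ("olive_oil", "nuts", "low_fat")

def individual_randomization(clinics, rng):
    """What should happen: each person is randomized independently."""
    return {person: rng.choice(DIETS)
            for members in clinics.values()
            for person in members}

def clinic_level_assignment(clinics, rng):
    """The deviation: one random draw per clinic, applied to everyone in it."""
    assignments = {}
    for members in clinics.values():
        diet = rng.choice(DIETS)  # a single draw for the whole clinic
        for person in members:
            assignments[person] = diet
    return assignments

clinics = {f"clinic_{i}": [f"patient_{i}_{j}" for j in range(3)]
           for i in range(4)}
flawed = clinic_level_assignment(clinics, random.Random(0))
```

With whole clinics moving together, differences in outcomes can reflect clinic-level quirks rather than the diets themselves, which is part of what the reanalysis had to account for.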
Martínez González insists that the problems wit