There’s been a lot written about the so-called Stanford organic study, that peer-reviewed journal article that came out around Labor Day. The study generated a flurry of headlines to the effect that “Organic food is no healthier than conventional food” (to cite an example from U.S. News and World Report). I finally got my hands on a copy of the journal article and read it. There’s a lot to say about why the study is bad, and much has already been written (for example, Mark Bittman’s analysis here and another good one from the New York Times here). However, nothing I’ve read so far has mentioned one glaring problem that explains why headlines such as the one cited above are complete bunk and a disgrace to science. Coming at this from a background as a professional scientist, I’m going to address this particular point.
The report tries to cover a lot of ground (too much so for a paper of its length, I’d argue), but I’m going to focus on the bit that captured the headlines, the bit that relates to nutrients in organic compared to conventional crops (produce & grain). The authors approached this question by performing a meta-analysis. That is, they didn’t collect any of their own data; they scoured the published literature for data. Then—and this is key—they applied statistical hypothesis testing to draw conclusions from the data.
Let’s revisit Statistics 101 and review the nature of statistical hypothesis testing, a common tool of science (though arguably not an especially good one). A key concept is the null hypothesis, which states, for example, that there’s no difference between two groups of something, population A and population B. So, you collect some data (which needs to be done intelligently, otherwise the whole thing is garbage). Then you use statistics to see if you can reject the null hypothesis at some pre-determined (& kind of arbitrary) level of significance. If the statistics say so, then you reject the null hypothesis, thus providing support for the alternate hypothesis: that there is a difference between population A and population B. However, failure to reject the null hypothesis does not equate to proof of it. To repeat:
“Failing to reject a null hypothesis does not mean that it is true.” (Johnson, 1999, The Insignificance of Statistical Significance Testing)
Or from Wikipedia:
“It is important to understand that the null hypothesis can never be proven. A set of data can only reject a null hypothesis or fail to reject it. For example, if comparison of two groups (e.g.: treatment, no treatment) reveals no statistically significant difference between the two, it does not mean that there is no difference in reality. It only means that there is not enough evidence to reject the null hypothesis (in other words, the experiment fails to reject the null hypothesis).”
Keep this in mind.
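To make the point concrete, here’s a minimal simulation (my own illustration, with made-up numbers, using Welch’s t-test at roughly the 5% level). Two populations really do differ in their means, yet with small, noisy samples the test fails to reject the null hypothesis most of the time. Failing to reject clearly doesn’t mean the difference isn’t there:

```python
import random
import statistics

random.seed(42)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

# Two populations that REALLY differ: means 100 vs. 103, sd 10 each.
# Small samples (n = 10 per group); |t| > ~2.1 rejects at alpha = 0.05 (two-sided).
trials = 2000
rejections = 0
for _ in range(trials):
    a = [random.gauss(100, 10) for _ in range(10)]
    b = [random.gauss(103, 10) for _ in range(10)]
    if abs(welch_t(a, b)) > 2.1:
        rejections += 1

power = rejections / trials
print(f"A real difference exists, yet H0 is rejected in only {power:.0%} of trials")
```

With an effect this modest and samples this small, the test has low power: we “fail to reject” in the large majority of trials even though the null hypothesis is flatly false.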
Back to the Stanford organic study. The paper never explicitly states the null and alternate hypotheses, but they run something like this:
Null hypothesis: There’s no difference in nutritional content between organic and conventional food.
Alternate hypothesis: Organic food has different (higher?) nutritional content than conventional food.
The data sets they use come from published studies. They filtered out a lot of data to meet various criteria, then ran the numbers through the statistics machine. I think there are some dubious aspects to what they did here, but let’s just go with it for now. They looked at their results and the levels of significance that they had defined, and found that they couldn’t reject the null hypothesis (except in the case of phosphorus). Thus, they state:
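For readers unfamiliar with how a meta-analysis combines published results, here is a sketch of the standard fixed-effect, inverse-variance pooling approach, using entirely made-up study numbers (not the Stanford data). Note what the end product is: a pooled estimate and confidence interval, from which one either rejects the null hypothesis or fails to:

```python
import math

# Hypothetical standardized mean differences (organic minus conventional)
# from several invented "studies" -- illustrative only.
effects = [0.25, -0.10, 0.40, 0.05, -0.20]  # study effect sizes
ses     = [0.20,  0.25, 0.30, 0.15,  0.35]  # their standard errors

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2.
weights = [1 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
# The CI contains 0, so we fail to reject H0 -- which is NOT the same
# as demonstrating that the true effect is 0.
```

With these invented numbers the pooled estimate is positive but its confidence interval straddles zero, so the analysis “fails to reject” — exactly the kind of result that should never be reported as proof of no difference.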
“Despite the widespread perception that organically produced foods are more nutritious than conventional alternatives, we did not find robust evidence to support this perception.”
Technically, that is an acceptable thing to say. They didn’t reject the null hypothesis based on the data they found and used. But then comes the PR. In the Stanford press release, the senior author waves the banner of the null hypothesis as if it had been proven:
“There isn’t much difference between organic and conventional foods, if you’re an adult and making a decision based solely on your health.”
Sound familiar? Look at the null hypothesis again: “There’s no difference in nutritional content between organic and conventional food”. This amounts to a claim that they proved the null hypothesis. The headlines, of course, echo this message; here are a few examples (from Mark Bittman’s compilation):
- Why Organic Food May Not Be Healthier for You (NPR)
- Organic food no more nutritious than non-organic, study finds (MSNBC)
- Organic Food Is No Healthier Than Conventional Food (U.S. News and World Report)
- Save Your Cash? Organic Food Is Not Healthier: Stanford U. (New York Daily News)
The press release and many headlines imply that the null hypothesis has been proven. And that’s scientific baloney. The distinction may seem subtle, but it is very important. The first statement acknowledges that there may or may not be a difference between organic and conventional; the study simply couldn’t detect one from the compiled and filtered data with the chosen statistical methods. The second makes an unproven claim: that there is no difference between organic and conventional. Worse, this claim misleads people into believing that how their food is grown makes no difference to its nutritional content, period.
Let’s forget certified organic vs. conventional for a moment, and address a more fundamental question: Do agricultural conditions and practices make a difference in the nutritional content of crops? Based on my reading and understanding, the answer is a resounding yes, though there are a lot of confounding factors. Soil fertility, soil life, geography, growing methods, weather, crop genetics, harvest conditions, post-harvest handling, and more certainly affect crop health and nutrition, even if we don’t fully understand all of the details. And this isn’t news; during the mid-1900s, the University of Missouri’s own William A. Albrecht was a pioneer in research on the effects of soil fertility on crop nutrition and livestock/human health.
The reference list in the Stanford organic study contains a multitude of studies that did discern differences in nutrient content between organic and conventional production (as well as quite a few that didn’t). The extensive body of existing research could be used to generate a well-thought-out set of multiple working hypotheses worthy of further testing, to better understand the ways in which various factors affect the nutritional quality of food. Instead, the authors used statistical abracadabra in a field outside their area of expertise to try to discern a difference between agricultural practices that in reality represent much more of a complicated continuum than a dichotomy. And then they made the scientifically untenable claim that their null hypothesis is true, abetted by a media feeding frenzy on the latest study claiming something new.
I think all those who know us know our long-term commitment to organic practices. Well, at least the study makes one thing clear: If we ever decide to drop our organic certification due to expense and hassle, we know we can do so without compromising the nutritional value of our vegetables.