
January 27, 2007

Dare we believe what a study says if it was industry-funded?

Examples are being volleyed from all sides of this debate. We can find some of the best, most meticulous, carefully-conducted research with responsible conclusions and interpretations of the results that has been funded by industry. Really! We can find equally good studies or perfectly dreadful ones that don’t appear to have corporate interests. We also can find the best analyses that alert us to unsound studies coming from those outside the fray altogether. By making blanket assumptions because of the source, we can shoot ourselves in the foot and never learn what the best science shows. What’s most needed is critical thinking...

A recent commentary in a food technology publication brought an industry perspective to this issue that isn’t often considered amid the popular rush to distrust anything coming from industry. In an article called “Death to the industry conspiracy theories,” reporter Stephen Daniells, Ph.D., wrote: “Let’s just think about this for a moment....when industry plans to fund a study, it is natural that it would select a product with a potentially favorable nutritional profile...if industry gives you a project then it’s more likely to succeed than not.... proposing paying for a clinical trial into unfavorable effects won’t make you too many friends at a company.” And concerning the phenomenon of publication bias, in which favorable results are more likely to be published than negative or null ones, he said: “these may go unpublished for a variety of reasons – a null or negative result may mean the company goes back to the drawing board (no need to draw attention to something that doesn’t work); or a null or negative result, while no use to the company, may alert competitors to sensitive areas of research.”

The simple fact of the matter, he said, is that research has to be paid for by someone and transparency is imperative, but that “should not be used against [authors] because they report positive or favorable results.”

It’s important to take this a step further. When we do encounter mischief in a study’s methodology, or conclusions and abstracts that don’t match what the study data show, or the importance of a study being exaggerated and its results overstated, then more often than not, something is motivating that. If close examination of a study makes you wonder: “What were they thinking?” or “How on earth did this get published?” that’s when understanding the “why” and potential influences can be helpful. The challenge for consumers, healthcare professionals and policy makers is that those interests are often not evident, nor are they always found in financial disclosures. This latter point is often overlooked.

Industry isn’t the only source of conflicts of interest, either, although it’s the one getting the most attention. Nonprofits, advocacy organizations and foundations that may sound well-meaning and beyond reproach are shaping things more than most people realize. They typically have extensive financial and political interests behind them, which can be challenging to discern. More than 90% of foundations in the United States award grants in the health field, about $4.6 billion in 2000, according to the Center on Philanthropy and Public Policy.

Believing that financial interests are the only ones that might bias a study also means we could be lulled into a false sense of confidence and miss some of the most flawed studies, those with agendas behind them that aren’t always easy to spot. Non-transparent motivations that could affect objectivity can even be unintentional and unconscious. Researchers are not evil and corrupt just because they might conduct poor studies. Remember that recent Nobel prize winner story? Recognition and acceptance by one’s peers, prestige and notoriety, academic promotion, continued funding for a life’s work, and even the personal beliefs and agendas of a researcher can cloud the picture and influence even the most well-intentioned studies and their interpretations.

A study announced to the public and medical community through PR firms and media marketing campaigns is often a good clue that it may be motivated by something other than objective science. But not always! And few of us fully grasp the extent of the marketing techniques used today, so we can miss a lot of marketing passed off as science. The lines between marketing and education or advocacy are becoming increasingly blurred. While studies published in peer-reviewed journals tend to be stronger, and have certainly proven sounder than those generally presented at meetings, publication does not guarantee a study’s soundness, either. Peer reviewers and publishers can have their own biases. And as we see daily, a lot of poor research gets published, funded and supported!

The bottom line is that the source of a study is not what is important; it is the quality of the research itself. Not all studies are created equal. That’s why, to protect ourselves, it is imperative to learn basic skills in reading studies and media reports with a critical eye. Another valuable aspect of the scientific process is that it builds a body of credible, replicable evidence over time, meaning we are best served by not letting ourselves become frightened by something a single study might suggest. Please, let’s be careful out there!

© 2007 Sandy Szwarc
