
Need for a nutrition-specific scientific paradigm for research quality improvement
Alan Flanagan1,2, James Bradfield2,3, Martin Kohlmeier2,4 and Sumantra Ray2,3,5,6

1 Department of Nutritional Sciences, University of Surrey, Guildford, Surrey, UK
2 NNEdPro Global Centre for Nutrition and Health, St John’s Innovation Centre, NNEdPro, Cambridge, UK
3 Department of Nutrition and Dietetics, King's College Hospital NHS Foundation Trust, London, UK
4 School of Medicine, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Kannapolis, North Carolina, USA
5 School of Biomedical Sciences, Ulster University at Coleraine, Coleraine, UK
6 Fitzwilliam College, University of Cambridge, Cambridge, UK

Correspondence to Dr Alan Flanagan, Department of Nutritional Sciences, University of Surrey, Guildford, Surrey, UK; alan.flanagan@surrey.ac.uk

Abstract

Nutrition science has been criticised for its methodology, apparently contradictory findings and generating controversy rather than consensus. However, while certain critiques of the field are valid and informative for developing a more cogent science, there are also unique considerations for the study of diet and nutrition that are either overlooked or omitted in these discourses. The ongoing critical discourse on the utility of nutrition science occurs at a time when the burden of non-communicable cardiometabolic disease continues to rise in the population. Nutrition science, along with other disciplinary fields, is tasked with producing a translational evidence-base fit for the purpose of improving population and individual health and reducing disease risk. Thus, an exploration of the unique methodological and epistemic considerations for nutrition research is important for nutrition researchers, students and practitioners, to further develop an improved scientific discipline for nutrition. This paper will expand on some of the challenges facing nutrition research, discussing methodological facets of nutritional epidemiology, randomised controlled trials and meta-analysis, and how these considerations may be applied to improve research methodology. A pragmatic research paradigm for nutrition science is also proposed, which places methodology at its centre, allowing questions of how we obtain knowledge to be connected with research design as the method of producing that knowledge, and providing the field of nutrition research with a framework within which to capture the full complexity of nutrition and diet.

  • nutrition assessment
  • precision nutrition

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Introduction

The year 2023 marks the 15th anniversary of NNEdPro, the acronym reflecting the original Need for Nutrition Education Project that evolved over subsequent years to its reincorporation in 2022 as the NNEdPro Global Institute for Food, Nutrition and Health; a multidisciplinary and international think-tank promoting the advancement and implementation of nutrition research and knowledge for individual and societal health. The commitment to the generation of evidence and its application into practice was further expanded with the launch of BMJ Nutrition, Prevention & Health (BMJ NPH) in 2018, which is published in association with the NNEdPro Global Institute for Food, Nutrition and Health. March 2020 saw the establishment of the NNEdPro Nutrition and COVID-19 Taskforce in partnership with BMJ NPH, to generate research and synthesise evidence in relation to nutritional factors in the risk and management of COVID-19, in addition to the challenges to food systems and food security arising from the impacts of the global pandemic. In this regard, the NNEdPro Taskforce acted to generate research, while BMJ NPH acted as the curator of evidence, paying attention to scientific integrity and the quality of emerging evidence during COVID-19, and whether it was translation-ready. The 15th and 5th anniversaries of NNEdPro and BMJ NPH, respectively, now provide an opportunity for reflection on the unique characteristics of the study of human diet and nutrition, and the challenge of generating a cogent and actionable evidence-base.

It is also arguably a crucial and pivotal time for the field of nutrition to state its case. In July 2019, a sensationalist article published in the New Scientist ran with the provocative headline, “Why everything you know about nutrition is wrong”.1 Such criticisms of nutrition research have not been confined to the lay press, as criticisms of the reliability of nutrition science, and related defences, have been disseminated in the published literature.2–5 The focal point of criticism against the reliability of nutrition as a field of scientific inquiry is the role of nutritional epidemiology as a mainstay research design of the field, together with an emphasis on small, inadequately powered and short-duration randomised trials.3 This discourse becomes mired in two oppositional standpoints: one in which nutrition research is considered suitable only for the scrapheap, and another in which the status quo is upheld as sufficient.6 Neither is necessarily accurate, and both arguments usually fail to proceed to a more systematic approach to the question of whether research on diet and health is reliable, and what steps may be taken to improve the research and its translation into practice. BMJ NPH began with a commitment to nutrition as a hard science, acknowledging that nutrition is a difficult science to do and requires significant resources to produce better research.7 However, greater resources alone may not be sufficient to produce improved nutrition research unless concomitant consideration is also given to the methodological and epistemic issues facing nutrition science. This paper will expand on some of these issues, challenging some of the assumptions underpinning the purported lack of reliability, and discuss how nutrition research may improve within its own conceptual and epistemic framework.

Rebuttable presumptions against nutritional epidemiology

In the context of evidential assessment, reliability of observational research is primarily evaluated vis-à-vis the results of randomised controlled trials (RCTs).8 This evaluation provides the most logical point of departure, given that apparent discordance between findings from observational nutrition research and intervention trials on the same nominal exposure is central to arguments against nutritional epidemiology.3 9

However, closer scrutiny of this apparent discordance reveals a different picture. Moorthy et al investigated concordance between nutritional epidemiological studies and RCTs for 34 nutrient-outcome relationships from data published between 1996 and 2009, defining concordance as no statistically significant difference in z-scores calculated from the summary effect estimates of both research designs.10 Overall, 65% (22 of 34) of associations were not significantly discordant. However, the analysis by Moorthy et al was confined to micronutrients and specific energy-yielding nutrients, for example, omega-3 fatty acids: of the 34 associations, 20 related to vitamins, 7 to minerals and trace elements and 7 to fibre or fatty acids. In particular, the analysis confined the criteria for matching observational and RCT evidence to examination of the same nutrient on the same outcome, that is, without further specifying the source of that nutrient (eg, dietary intake vs supplements). For example, one nutrient-outcome pairing compared food intake of the omega-3 alpha-linolenic acid (ALA) with supplemental ALA.10 The implicit assumption of such an analysis is that the exposure of interest for nutrition studies is a given nutrient per se, independent of source and delivery method. It makes the further assumption that dietary intake in epidemiology and supplemental intake in an RCT are exchangeable exposures. This assumption does not hold: where micronutrients are the exposure of interest and dietary intake from observational research is compared with supplement intake in RCTs, there is less agreement between the respective research designs than when population, exposure, comparator and outcome are more closely matched.11
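
To make the concordance definition concrete, the comparison reduces to a two-sample z-test on the log scale. The following is a minimal illustrative sketch, not code from Moorthy et al; it assumes each design reports a risk ratio with a 95% CI, and the function names and example numbers are hypothetical:

```python
import math

def se_from_ci(rr, lo, hi, z=1.96):
    """Back-calculate the standard error of ln(RR) from a 95% CI."""
    return (math.log(hi) - math.log(lo)) / (2 * z)

def concordance_z(rr_a, ci_a, rr_b, ci_b):
    """z-statistic for the difference between two summary log risk ratios.

    Under the concordance definition described above, |z| < 1.96 reads as
    'no statistically significant discordance' between the two designs.
    """
    diff = math.log(rr_a) - math.log(rr_b)
    se = math.hypot(se_from_ci(rr_a, *ci_a), se_from_ci(rr_b, *ci_b))
    return diff / se

# Hypothetical example: cohort RR 0.85 (0.75 to 0.96) vs RCT RR 0.95 (0.85 to 1.06)
print(round(concordance_z(0.85, (0.75, 0.96), 0.95, (0.85, 1.06)), 2))  # -1.32
```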

A more refined analysis of the concordance between nutritional epidemiological findings, specifically from prospective cohort studies, and RCTs was undertaken by Schwingshackl et al,11 which evaluated agreement on 71 diet-disease outcome pairs based on 950 RCTs and 750 cohort studies. Based on population, intervention/exposure, comparator, outcome (PI/ECO) matching to determine degree of similarity, and using cohort studies as the reference group, the ratio of risk ratios (RRR) was calculated to determine the level of agreement in effect size estimates and direction of effect between research designs (if the risk ratio (RR) from RCTs is lower than the RR from cohorts, RRR <1.0; if higher, RRR >1.0). Where studies from both designs were considered ‘similar but not identical’ (ie, closely matched on PI/ECO), the RRR was 1.05 (95% CI 1.00 to 1.10), compared with an RRR of 1.20 (95% CI 1.10 to 1.30) when the respective designs were only ‘broadly similar’ (ie, less closely matched on PI/ECO). Thus, as the level of similarity in design characteristics increased, concordance in the bodies of evidence derived from both research designs increased. While the main analysis did not further consider source or type of intervention/exposure, the results of a priori planned subgroup analyses stratified by type of intervention/exposure are more instructive for the alleged discordance between nutritional epidemiology and intervention trials. The strongest level of agreement was found where the type/source of intake was diet in cohort studies compared with dietary intake in RCTs (RRR 0.98, 95% CI 0.93 to 1.04). However, where dietary intake in epidemiology was compared with supplemental nutrient intake in RCTs, there was less agreement between the respective study designs (RRR 1.07, 95% CI 0.95 to 1.21), and where micronutrients were the intervention/exposure of interest, and without matching the type of intake/source of exposure, there was even less agreement (RRR 1.14, 95% CI 1.06 to 1.22). In particular, the analysis indicated that most disagreement was driven by a lack of similarity in the intervention/exposure between research designs.
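
The RRR itself is a simple ratio computed on the log scale; Schwingshackl et al pooled such ratios across many diet-disease pairs, but a single-pair calculation illustrates the metric. A minimal sketch under that simplification, with purely illustrative effect estimates and standard errors:

```python
import math

def rrr(rr_rct, se_rct, rr_cohort, se_cohort, z=1.96):
    """Ratio of risk ratios (RCT vs cohort) with an approximate 95% CI.

    Computed on the log scale; RRR > 1 means the RCT estimate sits
    further from protection than the cohort estimate, RRR < 1 the reverse.
    """
    log_rrr = math.log(rr_rct) - math.log(rr_cohort)
    se = math.sqrt(se_rct**2 + se_cohort**2)
    return (math.exp(log_rrr),
            (math.exp(log_rrr - z * se), math.exp(log_rrr + z * se)))

# Illustrative values only (not taken from the paper):
est, ci = rrr(rr_rct=0.95, se_rct=0.04, rr_cohort=0.88, se_cohort=0.03)
print(f"RRR={est:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")  # RRR=1.08, 0.98 to 1.19
```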

The close agreement when epidemiological and RCT evidence are more closely matched for the exposure of interest has important implications for the perceived unreliability of nutritional epidemiology. Commonly cited examples of RCTs that apparently showed observational findings to be ‘wrong’ uniformly involve trials of isolated nutrient supplementation set against epidemiological research on dietary intake.3 9 Examples include the Heart Protection Study (a mixed intervention of 600 mg synthetic vitamin E, 250 mg vitamin C and 20 mg β-carotene per day),12 the Heart Outcomes Prevention Evaluation (HOPE) intervention (400 IU supplemental ‘natural source’ α-tocopherol)13 and the Alpha-Tocopherol Beta-Carotene study (50 mg α-tocopherol and 20 mg β-carotene, alone or in combination, per day).14 These trials were each conducted in participants already replete with the nutrients of interest, compared with placebo groups that also had adequate levels of those nutrients at baseline12–14 (further discussion on this point can be found in the next section). Epidemiological research, by contrast, compared high with low levels of intake across a broader range of the distribution of nutritional status.15 16 These are fundamentally distinct conceptual exposures, and consequently the respective designs in fact asked entirely different research questions.

The implication is that many intervention trials which purport to contradict results derived from observational research were either not designed to test the epidemiological findings or proceeded with a misconceived research question and tested the wrong hypothesis. In either case, given that RCTs typically proceed from epidemiological findings, the fundamental point is that the RCTs were testing different hypotheses than stated (the actual hypothesis tested was whether more than enough is superior to enough), and thus cannot be taken to provide a rebuttal of the observational findings. Criticisms of nutritional epidemiology make the sweeping assumption that the findings from the RCT are ‘right’ by default simply because they contradict the epidemiological associations. However, for this assumption to hold, the exposure in the RCT would have to be the same exposure observed in epidemiology, which is not the case for the oft-cited examples of purported contradiction of findings from a cohort study by a subsequent RCT. This issue is not unique to nutrition research. In clinical medicine there is the infamous example of hormone replacement therapy (HRT) and coronary heart disease (CHD) risk in postmenopausal women, where observational research suggested a lower risk of CHD in postmenopausal women on HRT, a finding contradicted by a subsequent RCT showing an increase in cardiovascular disease (CVD) risk.17 18 Later re-analysis reconciled the discrepancy, demonstrating that when the timing of HRT initiation relative to the onset of menopause was factored into both designs, observational and RCT studies yielded similar results.19 20

These are issues of research design and analysis, and they call into question the prevailing epistemic assumption that where observational and RCT evidence do not accord, the discord must arise from the lack of random allocation in the observational study. Given that observational and intervention trial evidence agree increasingly as similarity in important characteristics of study design increases, notably the type of intervention/exposure, the question changes: it is not whether nutritional epidemiology is unreliable, but to what extent there has been translational failure between research designs. In turn, this has important implications for other common criticisms of nutritional epidemiology, including error in measurement of dietary intake, small and/or unreliable effect sizes and potential confounding in the results. In the first instance, where small differences between designs and similar estimates of effect are observed with greater similarity in design,11 it indicates that these issues are not necessarily fatal to epidemiological research (unless we hold that both designs are yielding unreliable/incorrect findings, in which case there has been a total failure of epistemology). More particularly, the same effort to reconcile apparent flaws may be undertaken to establish epistemic consistency. For example, the correlation coefficients for major nutrients derived from dietary assessment measures compared with reference measurement instruments are often in the range of 0.5–0.7; the correlation coefficient for the homeostatic model assessment of insulin resistance (HOMA-IR) compared with the euglycaemic clamp is 0.67.21 22 Yet the prognostic utility of HOMA-IR does not appear to have been questioned, while measurement error in dietary assessments generates strong advocacy against their use.23 24 There appears to be little justification for this obvious epistemic inconsistency.
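
For context on that comparison, HOMA-IR is itself an indirect index computed from two fasting measurements rather than a direct measure of insulin resistance. The standard formulation is shown in the short sketch below; the example values are arbitrary:

```python
def homa_ir(glucose_mmol_l: float, insulin_uU_ml: float) -> float:
    """Homeostatic model assessment of insulin resistance (HOMA-IR).

    Standard formulation: (fasting glucose [mmol/L] x fasting insulin
    [uU/mL]) / 22.5. Glucose in mg/dL can be converted by dividing by 18.
    """
    return glucose_mmol_l * insulin_uU_ml / 22.5

# Example: fasting glucose 5.0 mmol/L and insulin 10 uU/mL give ~2.2
print(round(homa_ir(5.0, 10.0), 2))
```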

None of this is to argue that measurement error is not a major challenge for observational research, but the relative straw-manning of nutritional epidemiology as a field blind to these issues24 belies the ongoing efforts to improve dietary assessment methodology, quantify and adjust for error and improve robustness in the findings. Nor is it to argue that nutrition research would not benefit from the resources to conduct larger, adequately powered RCTs. However, viewing the need for improved quality of nutrition research solely through the dichotomised lens of non-randomised versus randomised trials may overlook other opportunities to strengthen findings from observational data, such as target-trial emulation.25 26 Nutrition researchers can exercise greater rigour by being explicit about the causal question being addressed by epidemiological data in the absence of an available RCT, and transparent about the assumptions underlying analysis of the data. Furthermore, rather than viewing epidemiological and interventional research as distinct silos, nutrition science should treat these designs as complementary, communicating across research fields to carefully consider the relevant population, exposure and comparator in research designs.

In sum, it is prudent to acknowledge that neither the broad dismissal of nutritional epidemiology nor defences of the status quo are accurate or helpful in guiding the trajectory of this important branch of nutrition science.6 Where the critical discourse focuses on these respective positions, it misdirects the dialogue towards superficial considerations of evidence hierarchy, and trades due diligence in reconciling apparent inconsistencies for an assumption that, merely because a trial was randomised, it yielded ‘correct’ results, particularly where those results purport to contradict an observational finding. This further highlights the relevance of a pragmatic research paradigm for nutrition science that places methodology at its centre, as more congruent findings may be facilitated through reciprocal, translational design approaches between epidemiology and RCTs. There may be more room for improvement than either the broad dismissal or the status quo defence allows for.

When ‘gold standard’ assumptions do not hold

The assumption that a greater emphasis on RCTs in nutrition science would improve the rigour of the field requires some detailed consideration. On the one hand, there is a valid question over whether, if nutrition research were sufficiently funded and resourced, the ability to conduct larger, higher-powered trials might generate more reliable answers. Nutrition has, at least historically, been a chronically underfunded field, and while there is increasing recognition, particularly following the pandemic, of the global malnutrition crisis, the majority of efforts worldwide, ranging from the White House initiative on Hunger, Nutrition and Health to multilateral initiatives, are focused on urgent action rather than a strategic approach to long-term data and insights from research. There is a degree of truth to the statement that the current nutrition research landscape, with an emphasis on small trials, produces ‘endless nominal answers but hardly any credible ones’.3 And there is a degree of truth to the utility of large, high-powered RCTs in providing more credible, robust answers. In the recent Salt Substitute and Stroke Study (SSaSS), an impressive intervention in 20 995 participants conducted over 5 years in which 4172 deaths occurred, participants using an added-potassium salt substitute showed a significant 14% (HR 0.86, 95% CI 0.77 to 0.96) decreased risk of stroke and 13% (HR 0.87, 95% CI 0.80 to 0.94) decreased risk of major CVD events, compared with a usual salt intake control group.27 Prior to the publication of SSaSS, the best available evidence for the effects of salt reduction on CVD event end points was derived from a meta-analysis of RCTs with a total of 1615 and 1610 participants in the intervention and control groups, respectively, which demonstrated a 20% (RR 0.80, 95% CI 0.64 to 0.99) decrease in CVD event risk.28 Thus, the SSaSS trial provided robust corroboration of previous evidence on salt reduction and CVD from smaller, although well-designed and executed, intervention trials.
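
The scale required for effects of this size can be made concrete with Schoenfeld's approximation for event-driven power in a two-arm survival trial. The sketch below is an illustrative back-of-envelope calculation, not the SSaSS investigators' actual power calculation; it assumes 1:1 allocation, a 5% two-sided alpha and 80% power:

```python
import math
from statistics import NormalDist

def events_needed(hr, alpha=0.05, power=0.80, allocation=0.5):
    """Schoenfeld's approximation for the number of outcome events
    required to detect a hazard ratio `hr` in a two-arm trial."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) ** 2 / (
        allocation * (1 - allocation) * math.log(hr) ** 2
    )

# Detecting HR 0.87 needs on the order of 1600 events, which is why a
# trial accruing thousands of events can resolve effects of this size:
print(round(events_needed(0.87)))  # ~1619
```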

On the other hand, however, simply boosting funding and resources to conduct ‘megatrials’ without considering the underlying methodological challenges for nutrition RCTs would be short-sighted. The example of sodium should not necessarily be taken to represent all nutritional exposures: sodium appears to exhibit linear relationships with CVD outcomes,29 30 and consequently the effects of increases or decreases in intake may be more detectable against a control. Most nutrients, however, are characterised by non-linear relationships,31 and nutrient status, and consequently nutrient responses, are influenced by categorical factors, including sex and/or genetics.32 That nutritional responses may be stratified by categorical factors, or influenced by baseline nutritional status and the magnitude of the achieved exposure contrast, is seldom considered in the design of nutrition RCTs.31 Repeating the same design flaws in larger trials is unlikely to produce more credible or reliable answers merely because the study was bigger. Any such expectation disregards the fact that RCTs are not an assumption-free design; the primary assumptions for internal validity, namely exchangeability of the participant sample, a clearly defined intervention and placebo control, and independence of treatment and control arms and of outcome effects,33 34 require careful consideration in the design of nutrition RCTs.

The assumption of exchangeability poses difficulties for free-living nutrition interventions, particularly as RCTs examine bivariate cause-effect relationships by testing the change in the outcome measure based on the contrast in effect between treatment and control.33 In drug trials, exchangeability of the population sample before randomisation assumes that the addition of the treatment is then the only difference between groups. Technically, it is not possible for a group to be both treated and untreated at the same time,34 yet that is precisely the state in which a control group in a nutrient intervention may find itself, given that it is both not assigned to the additional intervention nutrient (‘untreated’) and has at least adequate levels of the exposure nutrient (‘treated’) for physiological function. By way of analogy, imagine an intervention trial in which the treatment arm is allocated to a high-intensity statin, while the placebo group is also provided with the minimum effective dose of that same statin. This violates an assumption within Rubin’s causal model, specifically that a control group does not include any factors intended to be unique to the treatment group, and it reduces the magnitude of the planned treatment contrast.33 The function of the placebo in a drug trial is to isolate the independent effects of the drug; however, this function is rendered redundant in many nutrition RCTs where the planned treatment contrast falls entirely within a range of adequacy for an exposure nutrient in both intervention and control arms. For example, the ‘null’ findings in antioxidant trials of α-tocopherol, ascorbic acid and/or β-carotene were all generated by comparing additional provision of isolated nutrients in the intervention group with control groups already exhibiting adequate measured levels of the treatment nutrients at baseline.12–14 That a potential negative effect on risk of heart failure was found in the HOPE trial also highlights the difference between a dietary exposure consumed in the context of foods and a whole dietary pattern, compared with high-dose supplementation and the biphasic dose-response which defines antioxidant activity, with higher levels potentially associated with adverse outcomes.35 36

These methodological challenges for nutrition RCTs reflect the fact that there is no true ‘zero exposure’ in human nutrition. The assumption of independence of treatment and control arms of an RCT is complicated by the fact that there is no ‘nutrient-free state’.37 Although a placebo is technically possible in the context of a nutrient supplement trial, ‘placebo’ in this context is a misnomer given that the control group will never be entirely devoid of the intervention nutrient exposure. The net effect is that most RCTs have control groups with intakes at or around the recommended daily intake of the nutrient of interest.37 While a drug trial tests the independence of effect of an intervention by contrasting a treatment exposure against a zero exposure, the absence of a true placebo for nutrients means that a nutrition trial is often testing whether more of a given nutrient is better than enough of the same nutrient.37 38 Importantly, as previously stated, nutrient status and responses are also influenced by categorical differences in participant characteristics. For example, the effect of folic acid supplementation on stroke risk is mediated by whether a policy of folic acid food fortification is in place; regions without folic acid fortification policies exhibit greater magnitudes of stroke risk reduction from folic acid supplementation than regions with fortification.39 This likely reflects a greater effect of supplementation in individuals with low folate status: in a recent meta-analysis of RCTs, folic acid supplementation lowered stroke risk by 12% (RR 0.88, 95% CI 0.80 to 0.98) overall; however, the effect was more robust in individuals with low baseline folate levels, who showed a 21% (RR 0.79, 95% CI 0.69 to 0.89) decreased stroke risk.40 Folate requirements and responses to folic acid supplementation or dietary folate intake are also influenced by the methylenetetrahydrofolate reductase (MTHFR) gene, in particular the C677T polymorphism, for which carriers of the TT genotype require higher folate intakes to lower homocysteine levels.41 42 Folate serves as a case in point for the importance of categorical stratification of nutritional exposures,32 and the need to consider these factors in the design of nutrition RCTs. Drug trials test whether the addition of the drug prevents a disease process, while in nutrition the question is primarily whether insufficient or excessive levels of a nutrient result in a disease process, and whether remedying the insufficient/excessive exposure alters disease risk.37 38 These are fundamentally distinct questions. This distinction is not merely academic, as it underpins the failed assumption of a clearly defined intervention and placebo, and the discord between research designs when RCTs are compared with epidemiological findings.

Furthermore, cause-effect relationships for nutrition are unlikely to be bivariate. This potentially violates the assumptions of independence of effects and unbiased effect estimates, which require orthogonality, that is, the treatment is orthogonal to other causes of the outcome, and the outcome should be the direct downstream effect of the treatment.43 This may be a logistical impossibility for free-living dietary interventions of either nutrient supplements or food-based exposures, given the interactive effects of diet, baseline nutrient repletion in both comparison groups and the dietary and behaviour changes that may occur in both intervention and control groups. In an RCT of the effects of 500 mg calcium+700 IU vitamin D supplementation on rate of change in bone mineral density (BMD), participants with the highest dietary protein intake in the intervention group exhibited greater preservation of BMD, an effect not observed in the placebo group despite the same protein intake.44 In the Homocysteine and B Vitamins in Cognitive Impairment (VITACOG) trial of the effects of supplemental vitamins B6, B9 and B12 on rate of brain atrophy and cognitive decline, the effect of the B-vitamin intervention was only observed in participants with the highest baseline plasma levels of long-chain omega-3 fatty acids.45 These examples further illustrate the mediating effects of categorical stratification of nutritional status,32 and the need to consider these factors in trial design in order to produce more informative evidence. These considerations extend to food-based interventions. In the Women’s Health Initiative, an RCT in 48 835 postmenopausal women investigating the effects of a low-fat dietary pattern on CHD, lowering saturated fat intake from 12.7% to 9.5% of energy did not result in any significant decrease in CHD risk; however, saturated fat was not replaced with polyunsaturated fats, intakes of which, together with vegetables, fruits and fibre, were all below recommended levels.46 Given that the effect of replacing saturated fat on CHD risk is mediated by the replacement nutrition,47 with the greatest magnitude of effect for polyunsaturated fat followed by complex carbohydrate,48 it is perhaps unsurprising that an isolated reduction in saturated fat without substitution of higher intakes of protective dietary characteristics yielded little effect on CHD risk. Thus, it may not be optimal for a given outcome to alter only one variable in isolation; for nutrition exposures, it may be desirable to manipulate several variables in a diet for specific outcomes.38 49
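
In analytic terms, the BMD and VITACOG examples argue for pre-specifying plausible effect modifiers rather than estimating a single bivariate treatment effect. As a hypothetical sketch of what that might look like (the dataset, file name and column names are invented for illustration):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial dataset: one row per participant, with the
# randomised assignment and a pre-specified baseline effect modifier.
df = pd.read_csv("trial_data.csv")  # columns: bmd_change, treatment, protein_tertile

# Pre-specified treatment x baseline-diet interaction, rather than
# assuming a single bivariate treatment effect:
model = smf.ols("bmd_change ~ treatment * C(protein_tertile)", data=df).fit()
print(model.summary())
```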

In sum, to state that RCTs are automatically more reliable is to presume that the assumptions for validity and causal inference are met,33 43 yet little justification is ever provided for this beyond a rudimentary mention of the study design. Nutrition science would be bolstered by a level of funding that more appropriately reflects the burden of cardiometabolic diseases in the population, which would allow intervention trials of greater scale to be conducted. However, it would be imperceptive to assume that more money and bigger trials provide nutrition science with solutions in the absence of any consideration of the unique nature of nutrition as a subject of scientific inquiry. There are numerous examples of large nutrition RCTs conducted on the assumption that the intervention and ‘placebo’ or control groups represented a true bivariate ‘exposed versus unexposed’ comparison, and many of these trials produced null findings, potentially due to the inherent design flaws outlined above. Thus, in addition to greater resources, it is crucial that nutrition-specific factors are considered in the design of RCTs.

Meta-analysis and methodological mishaps

The primary conceptual basis for meta-analysis is quantitative precision, obtained through a statistical summary estimate of effect size synthesised from the evidence for a given exposure and outcome.50 As such, meta-analysis sits atop the hierarchy of evidence, with the implication that if RCTs are to be considered ‘gold standard’ evidence, meta-analysis may be considered the ‘platinum standard’.50 The proliferation of meta-analysis for nutritional exposures may reflect an assumption that the lower magnitudes of relative risk observed are aided by a quantitative synthesis of evidence into a summary point estimate of effect, which would not, for example, be required for an exposure like smoking, where the strength of association is obvious from individual studies.51 Nonetheless, the conceptual basis and underlying assumptions of meta-analysis favour high internal validity RCTs. As a hypothetical example, a meta-analysis of antihypertensives may include trials of a drug designed specifically for a particular patient population (individuals with high blood pressure), conducted with randomisation, double-blinding and placebo control, with a clearly defined intervention (the drug) and strong independent effect sizes demonstrated against a zero-exposure placebo. Thus, the treatment, outcomes, effect measures and study population are all relatively homogeneous. However, applying this methodology to nutrition research, whether observational studies or randomised experiments, without consideration of the underlying assumptions and of the nutrition-specific issues outlined in the foregoing sections, has resulted in meta-analysis generating misleading conclusions for nutrition science.52
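
The mechanics matter here: in a conventional inverse-variance synthesis, each study's contribution to the summary estimate is determined by the precision of its estimate, not by the appropriateness of its exposure definition or analytic model. A minimal fixed-effect sketch with invented numbers, which also illustrates how a few precise studies can carry most of the statistical weight (as in the 42% example below):

```python
import math

def pool_fixed_effect(rrs, ses):
    """Inverse-variance fixed-effect pooling of log risk ratios.

    Each study's weight is 1/SE^2, so the most precise studies
    dominate the summary estimate regardless of their design choices.
    """
    logs = [math.log(rr) for rr in rrs]
    weights = [1 / se**2 for se in ses]
    total = sum(weights)
    pooled = sum(w * l for w, l in zip(weights, logs)) / total
    shares = [w / total for w in weights]
    return math.exp(pooled), shares

# Illustrative numbers only: three studies with differing precision.
summary, shares = pool_fixed_effect([0.90, 1.05, 0.85], [0.05, 0.10, 0.20])
print(round(summary, 2), [round(s, 2) for s in shares])  # 0.92 [0.76, 0.19, 0.05]
```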

One purported benefit of meta-analysis is that the statistical approach constrains subjective assessments by the authors.50 However, this presupposition is predicated entirely on the quantitative synthesis of the included studies, not on the inclusion of the studies themselves. To illustrate this point, in a 2010 meta-analysis of prospective cohort studies that spawned controversy over public health recommendations for saturated fat,53 42% of the statistical weight was derived from studies that controlled for blood cholesterol levels, with the expected effect of such overadjustment for the causal mediator between saturated fat and coronary heart disease being an obscuring of any association.54 A 2017 meta-analysis of the effects of red meat on CVD risk factors included wildly divergent exposures and comparisons in the primary studies, with ‘red meat’ encompassing all forms of beef, pork, lamb, veal, goat and non-bird game (eg, venison, bison, elk), and control arms including fatty fish, lean white fish, soy, tofu, chicken and plant protein.55 Given that the effect size and variation in the primary studies would be influenced by the type of exposure and comparison, the purported exposure of interest of >0.5 servings vs <0.5 servings was a crude definition of the exposure contrast and too unspecific to detect meaningful differences between comparisons.55 56

The issue of exposure contrasts and absolute levels of an exposure of interest is often overlooked in conducting nutrition meta-analyses, particularly of prospective cohort studies. The standard methodology of comparing high versus low quantiles of an exposure is influenced by the actual quantile division and by the magnitude of contrast between highest and lowest intakes, both of which relate to absolute levels of intake. A recent meta-analysis concluded that higher saturated fat intake was associated with lower risk of stroke.57 However, this association was primarily driven by included cohorts in East Asian populations, with narrow contrasts in exposure and a median daily saturated fat intake of 20.6 g/day in the ‘highest’ category (often compared with levels as low as 7 g/day).57 This level of intake, and indeed the entire exposure contrast between ‘high’ and ‘low’, fell within the range of public health recommendations for saturated fat intake.58 The authors’ conclusion that the study provided evidence for ‘the protective effects of diets high in SFA (saturated fat) on the reduction of stroke risk’, while attractive for sensationalist headlines, was erroneous; it should more appropriately have stated that the study upheld current targets for a threshold of 10% of energy and was consistent with thresholds of intake at which lower risk of CVD events would be expected.59 In a recent meta-analysis which found no association between processed meat intake and CHD mortality, the lack of association was driven by Japanese cohorts with a ‘highest’ category of 13.9 g/day in men and 11.7 g/day in women.60 These ‘high’ categories are far below the ~50 g/day threshold at which processed meat appears to significantly increase risk.61 62
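
The underlying check is simple arithmetic: does the ‘high versus low’ contrast actually cross the intake level at which risk is expected to change? A toy sketch of such a check, assuming a 2000 kcal reference diet so that the 10% energy target for saturated fat corresponds to roughly 22 g/day:

```python
def classify_contrast(low_median, high_median, threshold):
    """Classify a cohort's 'high vs low' exposure contrast against an
    intake threshold above which risk is expected to rise."""
    if high_median < threshold:
        return "null expected: even the 'high' category is below the threshold"
    if low_median >= threshold:
        return "attenuated: even the 'low' category exceeds the threshold"
    return "informative: the contrast spans the threshold"

# East Asian cohorts' saturated fat contrast quoted above (~7 vs
# ~20.6 g/day) against ~22 g/day (10% of energy on a 2000 kcal diet):
print(classify_contrast(7.0, 20.6, 22.0))
```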

Thus, while the intended outcome of a meta-analysis may be quantitative precision, the outcome for nutrition studies is often muddied waters, and the foregoing examples illustrate the issues that arise with ‘mindless agglomeration of study results into a single summary estimate’.63 The pride of place of meta-analysis in the evidential hierarchy generates assumptions of reliability in the results, which is problematic when inaccurate findings are added to the evidence-base. Meta-analysis may be a useful statistical technique for nutrition science, but barriers relating to the grading of intervention studies, the appropriate stratification of exposures and the analytical methods used in the primary studies require careful consideration in nutrition meta-analytic methodology. Quantitative precision would be greatly aided if meta-analyses used prior knowledge, set out a clearly defined exposure, quantified the absolute levels of the ‘high versus low’ comparison, ensured a similar exposure contrast and incorporated studies with similar comparator group characteristics. Without more rigour in execution, the distortive lumping of studies in meta-analysis will not change the fact that no statistical technique can overcome the limitations of the input data.63

An ontological and epistemic paradigm shift for nutrition science

Kuhn described a research paradigm as ‘the set of common beliefs and agreements shared between scientists about how problems should be understood and addressed’.64 The four constructs of axiology (‘what do we value/what is ethical?’), ontology (‘what is the nature of reality/what is there to be known?’), epistemology (‘what do we know/how do we know it?’) and methodology (‘what approaches can we take to obtain knowledge?’) together comprise the essential core of a research paradigm. What is broadly termed ‘the scientific method’ is grounded in the ontology of objectivism, that there is a reality that is observable and discoverable, and the epistemology of positivism, that observable phenomena are empirically testable. While this objectivist and positivist approach may be sufficiently broad to accommodate different methodologies, for biomedical sciences the guiding assumption underpinning methodology has been that of reductionism: the distillation of disease down to its molecular and cellular biological origins and the development of interventions at a targeted level.65

This principal assumption has, by extension, been applied to nutrition research, initially not without reason, given the early successes of nutrition science in identifying and eradicating single-nutrient deficiency conditions in the population.49 However, the prevailing approach to moving nutritional exposures from observational findings to RCTs has primarily emphasised testing the effects of isolated supplemental nutrients, irrespective of whether the epidemiological associations were derived from dietary intakes. This is a direct reflection of the principal assumption of reductionism underpinning the biomedical model, the application of which to nutrition research has a long history of critique.37 38 49 66 67

From the ontological perspective, it is important to consider the distinction between diet and nutrition. Nutrition may be operationally defined as objective: the process by which a living organism takes in food and uses the nutrients provided by food for growth and repair, emphasising the nutrient categories of proteins, carbohydrates, fats, fibre, vitamins, minerals and water. Diet, however, may be defined as subjective: the sum total of foods consumed by an individual or community, influenced by cultural, traditional, regional, religious, ethical and environmental factors, in addition to wider socioeconomic factors and, indeed, personal preference. The epistemic implications of this reality extend to the heart of the methodological friction between the biomedical paradigm and the field of nutrition research: viewed through the objectivist lens, nutrition may appear amenable to investigation by reductionism. However, as diet constitutes the totality of the conceptual exposure of interest, food is the fundamental unit in human nutrition, and dietary intakes are influenced by a range of wider behavioural and environmental factors, this assumption does not hold.38 49 67

Thus, the inherent characteristics of the subject of inquiry necessitate the development of a specific research paradigm for human diet and nutrition, encompassing a wider ontological and epistemic framework within which to fully elucidate the complexity of the exposure, from the cellular to the social. Given this broader ontological and epistemic paradigm within which the field of nutrition science must necessarily operate, pragmatism may provide a unifying approach to surmounting the duality of objectivist/subjectivist ontology and constructivist/empiricist epistemology (figure 1).68 While there is value in understanding these distinctions, a pragmatic framework places methodology at the centre, allowing considerations of research design (which form the basis of the epistemic conflict between biomedicine and nutrition) to be connected back to a guiding epistemology, rather than separating questions regarding the nature of knowledge from the processes of producing that knowledge.68

Figure 1

Flow chart illustrating a pragmatic research paradigm for nutrition science. The scientific method, within which a range of disciplines operate, is grounded in the ontology of objectivism and the epistemology of empiricism. The guiding methodology (‘what approaches can we take to obtain knowledge?’) in the biomedical model has centred on reductionism. While this methodological approach was useful for identifying and eradicating single-nutrient deficiency diseases, such conditions are no longer the primary concern for nutrition research, which is instead focused on chronic lifestyle diseases as a public health priority. The illustration of the conceptual framework for a pragmatic research paradigm is adapted from Morgan.68 Within a pragmatic approach, methodology itself is centred as the domain that connects abstract epistemic constructs and concrete research methods.68 In a pragmatic framework, considerations of the methodology used to obtain knowledge feed back to considerations of how we go about acquiring knowledge, which in turn inform approaches to research design. With methodology at the centre, questions over both how we obtain knowledge and research design as the method of producing that knowledge are connected.68 This provides the field of nutrition research with a framework within which to capture the full complexity of nutrition and diet.

One implication of a research paradigm that places methodology at its centre is the need to develop more nutrition-specific criteria for evidence assessment. An over-reliance on the traditional hierarchy of evidence may undermine evidence evaluation by overstating or understating the results of a given study merely on account of its design.69 70 The use of evidence assessment tools such as Grading of Recommendations Assessment, Development and Evaluation (GRADE) carries default implications for nutrition research, often leading to a downgrading of the actual findings of the research and their congruence with the wider evidence.52 If we consider evidence as the body of data that supports a given conclusion, what constitutes sufficient evidence for a given standard of proof will differ relative to the question being addressed. It is important to acknowledge that evidential assessment criteria will inevitably include arbitrary standards and some element of subjectivity. Nevertheless, considering evidence evaluation as a process, rather than as a canonical hierarchy, may encourage nutrition scientists to undertake a more systematic approach: investigating concordance between findings from different lines of evidence, comparing similar exposures and other study characteristics, and analysing the convergence of multiple lines of evidence. We echo Tobias et al52 on the need for the nutrition science community to forge its own consensus for appraising nutrition evidence, reflecting the unique methodological challenges inherent in investigating an exposure as complex as human nutrition.

Conclusions

It will be important for stakeholders in nutrition science to come together in advancing the next phase of nutrition research, working collaboratively to improve the methodologies driving the field forward and to produce scientific evidence of sufficient quality and integrity to translate and apply for the improvement of human health. NNEdPro remains committed to advancing nutrition science, education and implementation in order to link research, policy and practice. The 15th anniversary of NNEdPro, during the 9th Annual International Summit, will examine sustainable resourcing for all, including innovative ways to finance nutrition research, which would benefit from both the added funding and the nutrition-specific scientific paradigm proposed in this paper. These aims will require the network of nutrition journals to uphold evidential standards and promote robust research; BMJ NPH remains committed to this task. The task is complicated by widespread conjecture and beliefs regarding the viability of nutrition science to produce reliable answers. To paraphrase Concato and Horwitz,71 rather than ‘evidence-based nutrition’, a ‘nutrition-based evidence’ framework is warranted. Greater rigour in the conduct of nutrition research will produce better nutrition science; more refined evidence assessment will aid in the translation of that research for the betterment of human health.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.


Footnotes

  • Contributors Conceptualisation: AF. Writing—original draft: AF. Writing—subsequent drafts, review and editing: AF, JB, MK and SR.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.