One size does not fit all: on the need for categorical stratification in nutrition science, practice and policy
Martin Kohlmeier
UNC Nutrition Research Institute, UNC, Chapel Hill, North Carolina, USA
Correspondence to Dr Martin Kohlmeier, UNC Nutrition Research Institute, UNC, Chapel Hill, North Carolina, USA; MKOHLMEIER@UNC.EDU

We can claim with good reason that nutrition is a hard science.1 This claim does not rest on intrinsic inerrancy but on the potential for self-correcting, evidence-based principles, just as in physics, chemistry and other classical natural sciences. The claim does not deny the numerous controversies and uncertainties about important specific aspects of nutrition science. It is particularly important to reassess the key foundations of the science constantly. In this respect, we need to examine critically a problem common to most areas of nutrition science: categorical differences are either not respected or not known. The usual assumption is that unless a categorical difference is strongly evident or otherwise proven, it does not exist. Thus, formal tests for heterogeneity are commonly omitted or ignored. This often means that the relevance of existing categorical differences is misunderstood, that vulnerable groups are overlooked and that actionable opportunities for subgroups are missed. These issues are not new in clinical medicine2 but call for urgent attention given the rapid advances in understanding genetic variants and other categorical variables in the life sciences.
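
To make the omission concrete, the sketch below shows one way a formal heterogeneity test can be run on subgroup effect estimates, using Cochran's Q and the I² statistic as in routine meta-analysis. The estimates, standard errors and group count are hypothetical placeholders, not data from any study cited here.

```python
# Minimal sketch of a formal heterogeneity test across subgroup estimates.
# All numbers are hypothetical placeholders for illustration only.
import numpy as np
from scipy import stats

effects = np.array([0.42, 0.45, 0.11])  # per-subgroup response estimates
se = np.array([0.08, 0.09, 0.07])       # their standard errors

weights = 1.0 / se**2                               # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
q_stat = np.sum(weights * (effects - pooled)**2)    # Cochran's Q
df = len(effects) - 1
p_value = stats.chi2.sf(q_stat, df)                 # small p: differences unlikely to be chance
i_squared = max(0.0, (q_stat - df) / q_stat) * 100  # share of variation beyond chance

print(f"Q = {q_stat:.2f}, p = {p_value:.4f}, I^2 = {i_squared:.0f}%")
```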

A particularly common assumption in nutrition science is that most relationships are of a continuous nature. Manifestations of this assumption, such as Bertrand’s rule of optimal nutrition3 and the promiscuous use of normal distributions to predict nutrition responses, for example in the dietary reference intakes (DRIs),4 may be a carryover from the ancient health framework based on the mixing of fundamental humours: blood, phlegm, yellow bile and black bile; the quintessence or fifth element was eventually added as a fudge factor. In modern biology, concentrations of defined molecular compounds, such as specific proteins, lipids, carbohydrates, minerals, vitamins, bioactives and numerous others, have taken the place of the original humours and have served us well in advancing a better understanding of mechanisms and outcomes.
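
As an aside on how the normality assumption operates in practice, DRI-style recommendations are typically set at the estimated average requirement plus two standard deviations, often with an assumed 10% coefficient of variation. The sketch below illustrates that arithmetic with made-up numbers; it is not taken from any specific DRI report, and its point is simply that the nominal coverage holds only if a single normal curve fairly describes the population.

```python
# Sketch of the normal-distribution arithmetic behind a typical DRI-style
# recommendation. The EAR value is illustrative, not from a DRI report.
from scipy import stats

ear = 320.0          # hypothetical estimated average requirement (µg/day)
cv = 0.10            # commonly assumed coefficient of variation of requirements
sd = cv * ear

rda = ear + 2 * sd   # intake meant to cover ~97.5% of a normal population
coverage = stats.norm.cdf(rda, loc=ear, scale=sd)
print(f"RDA = {rda:.0f} µg/day, nominal coverage = {coverage:.1%}")

# If requirements are really a mixture of distinct categorical groups, this
# single curve can substantially misstate coverage for the more demanding group.
```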

With the growth of knowledge has come the realisation that we are missing critical aspects to effectively model and predict the way nutrition works in different individuals and populations. One of them is that many biological features are categorical in nature and do not neatly fit into the current framework of continuous variables. This is especially true for genetic categories including the most common one, sex.

One might say that categories affecting nutrition responses are of two distinct kinds: the known and the unknown. Of the first kind, biological sex is a good example, because responses to many nutrition exposures are reasonably well understood. Thus, with equal iron intake per body weight, the population distribution of steady-state haemoglobin concentrations in blood will skew to higher levels in young healthy men than in menstruating women of comparable age.4 Therefore, we must insist that all related conclusions, such as estimates of dietary iron requirements, are considered separately by sex. No matter how often we measure the response of men to iron intake, the results cannot inform us meaningfully about the corresponding response of women. Even if it becomes apparent in a particular case after thorough investigation that the numbers are similar in males and females, the results must remain separate because we learn of their similarity only in hindsight.
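
A minimal illustration of keeping estimates separate by sex rather than pooling them is sketched below; the haemoglobin values are invented placeholders, not measurements from the cited literature.

```python
# Sketch: sex-stratified summaries versus a pooled average.
# The haemoglobin values are hypothetical placeholders.
import pandas as pd

records = pd.DataFrame({
    "sex": ["M", "M", "M", "F", "F", "F"],
    "haemoglobin_g_dl": [15.1, 14.8, 15.4, 12.9, 13.2, 12.6],
})

by_sex = records.groupby("sex")["haemoglobin_g_dl"].agg(["mean", "std"])
pooled_mean = records["haemoglobin_g_dl"].mean()

print(by_sex)
print(f"Pooled mean = {pooled_mean:.1f} g/dL  # obscures the sex difference")
```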

Conclusions about the second kind of category, the unknown ones, do not come so easily. As a reminder and to set the stage, there are millions of common genetic variants with minor allele frequencies of several percent in some populations of the world.5 For most nutritional relationships with a significant genetic component,6 impactful categorical variables remain hidden in the jungle of other common genetic variants, often involving multiple loci or genes and typically hundreds of variants across each of them. One application of genetically modulated nutrition status is as an instrumental variable for Mendelian randomisation, which examines whether nutrition status affects particular disease outcomes.7–9
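
For readers unfamiliar with the mechanics, a single-instrument Mendelian randomisation estimate can be as simple as the Wald ratio sketched below, in which a variant's effect on the disease outcome is divided by its effect on the nutrition biomarker. The effect sizes and standard errors are hypothetical, and real analyses use many variants with more elaborate estimators.

```python
# Minimal sketch of a single-instrument Mendelian randomisation (Wald ratio).
# All effect sizes and standard errors are hypothetical.
import numpy as np

beta_gx, se_gx = 0.15, 0.02  # variant -> nutrition biomarker (per allele)
beta_gy, se_gy = 0.03, 0.01  # variant -> disease outcome (per allele)

wald_ratio = beta_gy / beta_gx   # estimated causal effect of biomarker on outcome
# Delta-method standard error, ignoring the covariance term
se_wald = np.sqrt(se_gy**2 / beta_gx**2 + beta_gy**2 * se_gx**2 / beta_gx**4)

ci_low, ci_high = wald_ratio - 1.96 * se_wald, wald_ratio + 1.96 * se_wald
print(f"Wald ratio = {wald_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```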

As soon as substantial and corroborated evidence emerges for a particular variable that indicates the stratification of a nutritional response, there is no going back to the assumption of uniform behaviour. Whether the candidate variant ends up being causal itself or merely linked to another, ultimately causal variant is of lesser importance in this context. The relevant question is whether the response predictably differs by carrier status. This may be illustrated with the stratification of dietary folate requirements by the common MTHFR rs1801133 TT genotype. Multiple feeding studies have demonstrated that healthy adult TT carriers need a much higher folate intake to achieve the same homocysteine concentration in blood as CC carriers.10 11 The MTHFR genotype constitutes a categorical difference. Observations and conclusions about people with the CC genotype, which encodes a high-activity enzyme version, cannot and must not be applied to individuals with the TT genotype, which encodes a low-activity enzyme version. The differences in protein patterns resulting from non-identical DNA sequences lead to different enzyme characteristics, such as specific activity and thermostability. Further, it is robustly established that nutrition responses also differ. That means that averaging the individual responses of different people is not informative unless they carry the same genotype. Since we already know about the underlying categorical difference, there is no going back to assuming a uniform response across otherwise similar individuals in folate-related studies. The same applies to riboflavin intake, since the MTHFR enzyme needs FAD as a coenzyme.12 If someone wants to know the response to a given folate intake, or how much folate or riboflavin should be consumed to achieve a particular outcome, averages cannot guide us because they differ too much between carriers of the CC and TT genotypes. People with the CC genotype (figure 1, blue bars) appear to have adequate folate status, as indicated by average homocysteine concentrations of less than 9 µmol/L, with daily intakes under 300 µg, whereas TT genotype carriers (grey bars) achieve such low concentrations only with daily intakes more than twice as high, well over 600 µg.

Figure 1

Effect of folate dose on plasma homocysteine concentration by genotype in healthy adults. Data based on Ashfield-Watt et al.10 DFE, dietary folate equivalent.
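
The genotype-by-intake contrast summarised in figure 1 is the kind of pattern that can be tested formally with an interaction term rather than averaged away. The sketch below fits such a model on invented values that merely echo the direction of the published contrast; it is not a reanalysis of the cited data.

```python
# Sketch of testing a genotype-by-intake interaction instead of pooling.
# Values are invented for illustration; they are not the published data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "folate_dfe": [200, 400, 600, 800] * 2,   # µg/day dietary folate equivalents
    "genotype": ["CC"] * 4 + ["TT"] * 4,      # MTHFR rs1801133
    "homocysteine": [9.5, 8.6, 8.2, 8.0,      # hypothetical µmol/L means
                     13.0, 11.2, 9.6, 8.8],
})

model = smf.ols("homocysteine ~ folate_dfe * C(genotype)", data=df).fit()
print(model.summary().tables[1])
# A clearly non-zero folate_dfe:C(genotype)[T.TT] coefficient indicates that the
# dose-response differs by genotype, so genotype-specific estimates are needed.
```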

While people differ in innumerable ways and considering each difference is impossible, acknowledging the existence of both known and unknown categorical differences is important and can help advance research, practice and policies. This does not mean that we must measure them all in all situations. In research, it will often be desirable to capture extensive genomic, metabolomic and other -omic information. In medical and nutritional practice, it can be helpful, even without genetic or other testing, to know that some patients respond differently to an intervention because they are intrinsically less responsive and not just because they are non-compliant. Policy makers, food producers and other stakeholders should want to know what is most likely to work for different regions and populations, and which vulnerable subgroups need attention because they differ in predictable ways. The practical consequences of categorical differences depend on the challenge we want to solve, the urgency of an answer, the availability of resources and many other circumstances. It remains the responsibility of each nutrition scientist to search carefully for prior information on possible heterogeneity of molecular features or nutrition responses before setting out on a particular research question or interpreting existing data. Readers, editors and other stakeholders must also know how gender, genetic variants and other common categorical variables affect individual responses to nutrition.

It should not need saying that women are not smaller versions of men. Similarly, people with other genetic variants, which is all of us, are not meaningfully described by a general average and a fictional normal distribution. All of us should get the kind of effective nutrition advice, treatment and policies that become reasonably available when who we are is taken into account.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

References

Footnotes

  • Contributors This editorial was conceived and written solely by MK.

  • Funding The author has not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.