What can sugar teach us about evidence-based chemical regulation?

April 14, 2016 at 6:07 pm | Posted in Comment

This month, we recommend reading “The Sugar Conspiracy”, a Guardian Long Read by science writer and journalist Ian Leslie. The article is interesting because the challenges of developing evidence-based chemical regulations are mirrored in this account of the lessons we should be learning from a 60+ year-old argument about the place of sugar and fat in dietary guidance for public health.

[Image: Jelly Sugar Candy, by Renatto Vect, via Flickr (edited)]

Sugar, chemicals, and the role of science in policy-making

Chemical regulation and public health policy both sit at the interface of science and decision-making: both must make sense of accumulating scientific evidence about the health risks posed by chemicals, use that ever-shifting research landscape to agree on desired outcomes, and shape the policies that stand the best chance of achieving them.

The first lesson about the relationship between science and policy, which “The Sugar Conspiracy” gets right, is that scientific research as well as policy-making is embedded in human social practices: there is no magic cordon which automatically ensures a separation of scientist from society, or of scientific behaviours from regular ones.

In many circumstances, these social determinants may be at least as important for explaining why scientists have a particular set of beliefs as what a putative body of evidence might be saying. These social determinants operate at the personal level and include deference to the charismatic, herding towards majority opinion, punishment for deviance, and intense discomfort with admitting to error.

They also operate at the societal level, with the article touching on how the spread and eventual mainstreaming of an idea can sometimes be explained without apparent recourse to an evidence base at all: academics accumulate power and appoint like-minded thinkers to influential positions; this increases their funding and their ability to determine the research agenda, the methods used, and the admissible evidence.

As the elite spreads and homogenises, any canvassing of expert opinion reaches only a demographically uniform group, and any dissenting opinions are either missed altogether or dismissed as outliers. So by shaping the evidence and the surrounding opinion, ideas can spread through the research community without needing to be right.

The second lesson from the piece is its first misstep: the article misunderstands the role the scientific method can play in providing constraints on the social steers under which scientists operate. Of the above psychological and social pressures, the article states: “Of course, such tendencies are precisely what the scientific method was invented to correct for, and over the long run, it does a good job of it.”

In fact, it is a mistake to hold that the scientific method somehow automatically keeps in check the worst social excesses of human researchers; really, the scientific method cannot do anything automatically because it has to be deliberately applied by researchers in order to have any effect.

Most of the time, this deliberate application is made in the context of the single experiment, whereby the controlled set-up required by the scientific method makes it possible for the researcher to be more confident that the effect they are seeing is a consequence of the changes they are introducing, rather than a consequence of something else happening in the experiment of which they are unaware.

But in “The Sugar Conspiracy”, the author is interested in how scientific research is aggregated: here, the research activity moves from limiting the effect of psychosocial pressures on producing new evidence at the lab bench, to limiting the effect of these pressures on the process of gathering and appraising existing evidence.

Why, in making this transition, should we assume the scientific method is still being applied? Even if scientists are good at conducting controlled experiments in the lab, there is no reason to assume they are equally effective at controlling the variables which affect the process of synthesising all the evidence which those lab experiments are producing.

The third lesson is that we can question another assumption implied by the article: “The Sugar Conspiracy” seems to buy into the idea that science produces a canon of fact, to which some people (like John Yudkin) are aligned all along and some (like Ancel Keys) are not.

In fact, science produces a body of evidence which is sufficiently confusing, messy and open to interpretation that at any given time it might not be possible to tell who is right. In these instances (which may be the vast majority of the time) there is just opinion: some of it better founded on the available evidence, some of it formed by social determinants, and some of it ultimately turning out to represent the best guess as to the facts of the matter regardless of how it was arrived at.

If it really were a matter of science determining the facts and researchers agreeing with those facts or not, it is unclear how scientific debate could ever get started: if scientists either know the facts or they do not, then anyone arguing against the facts is either doing so out of ignorance or bad faith. It doesn’t allow for the possibility of uncertainty stemming from the difficulty of interpreting a limited and/or conflicting evidence base.

This is perhaps why the article focuses on Keys’ rather uncivilised behaviour to explain how he won the argument with Yudkin; however, it is not clear that the debate would have been resolved differently even if Keys had been more of a quiet man, as Yudkin was. In a situation in which nobody knows because the evidence is weak, a decision still has to be made, and it is down to luck whether it is the right one. (It is also worth noting that Yudkin never disappeared from view quite as completely as the article would have the reader believe, as a Guardian piece from 1999 attests.)

This is one of the reasons why developing policy from an evidence base is so difficult: except in very restricted decision-making contexts, the evidence base is always going to be too underpowered to determine the right decision among the multitude of policy choices and their attendant consequences. This is for two reasons: the number of possible choices vastly outstrips our capacity to gather enough empirical data to determine which choice is best; and many of the choices are not determinable by research anyway, deriving as they do from our value systems (i.e. what we want in the world).

Where evidence is lacking, opinion fills the space. Where outcomes can be legitimately informed or determined by evidence, the trick will be in determining which opinions are sufficiently based in what is currently known, where there is opinion instead of evidence, and what to do in terms of research to meet the information requirements of the policy-makers. (Where outcomes cannot be legitimately determined by evidence, the trick is ensuring the political process is capable of producing fair and equitable outcomes.)

The final lesson concerns what to do to ensure that we are making the best use of evidence in policy-making. At this point, “The Sugar Conspiracy” rather peters out, remaining ambivalent between information democracies and information oligarchies, as if the prize of science were clarity of purpose rather than (as the article itself seems to imply throughout) using the evidence to give oneself the best possible chance of making the right decision.

There is in fact a route to a better way of doing things which means we can be much more optimistic about the prospects for the scientific method in hastening resolution of scientific disputes, whether they are about appropriate sugar intake in dietary guidelines or the risks to health posed by chemicals and other pollutants.

The solution involves revisiting how the scientific method can be applied to the aggregation of evidence. The premise of the story, that scientists are bad at developing evidence-based policies, only comes as a surprise because people (scientists included) tend to assume that because scientists are scientific when producing evidence, they must be just as scientific when accumulating it.

As the article shows, they are not. But the situation is by no means insoluble: the reason scientists are not very good at accumulating evidence is that it is only relatively recently that the scientific method has been deliberately applied not only to the process of generating evidence, but also to aggregating it.

These lessons have been most painfully learned in medicine, historically an eminence-led profession where, by the 1990s, experts were found to be making one error after another in their understanding of what they thought the evidence said. This cost lives through the administration of ineffective interventions, and it resulted in clinical trials being conducted to answer questions whose answers should already have been known.

The lesson was that as much methodological care needs to be taken in aggregating research as in producing it. For this purpose, systematic review methods were invented. In essence, they are simple: take the principles of control, of transparency and repeatability of methods, and of minimising bias, so familiar in lab work, and apply them to how evidence is synthesised.
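As a toy illustration (not drawn from the article), one of the simplest quantitative synthesis steps a systematic review might feed into is fixed-effect, inverse-variance pooling of study results: each study's estimate is weighted by the inverse of its variance, so more precise studies count for more. The study numbers and the function name below are hypothetical.

```python
# Toy sketch of fixed-effect, inverse-variance meta-analysis.
# Each study contributes an effect estimate weighted by the
# inverse of its variance, so precise studies count for more.

def pool_fixed_effect(effects, variances):
    """Return the pooled effect estimate and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical studies of the same intervention
effects = [0.42, 0.35, 0.50]    # e.g. log odds ratios
variances = [0.04, 0.09, 0.02]  # smaller variance = more precise study

pooled, var = pool_fixed_effect(effects, variances)
print(pooled, var)
```

The point of the sketch is the discipline, not the arithmetic: which studies enter the calculation, and with what weights, is decided by explicit, repeatable rules rather than by whichever expert shouts loudest.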

This has been very successful in medicine, making groups of experts consistently much better at adopting effective healthcare treatments and rejecting ineffective ones. As the large volume of positive responses to the Teicholz article in the BMJ suggests, a culture shift towards challenging eminence with evidence, facilitated by an accessible evidence base, could go a long way towards preventing the likes of Ancel Keys from getting their way by throwing their weight around rather than demonstrating the evidence for their position.

So while we can’t make scientists asocial, we can start imposing controls on the aggregation of evidence, to minimise (or at least help us identify) the effect which uncontrolled social influences have on what we think the best evidence is saying. This won’t solve all the problems with ensuring policy makes best use of the best evidence, but it helps with at least one of them.

Further reading

  • Testing Treatments. Evans et al. (2011). A short, free and very accessible book about how randomised controlled trials, systematic review methods, patient involvement in research decisions and other hallmarks of the modern approach to healthcare research have transformed medicine.
  • “How science makes environmental controversies worse”. Dan Sarewitz (2004). Offers a compelling explanation of why the processes of conducting research and developing policy should not be conflated.
  • The Honest Broker. Roger Pielke Jr (2007). Explains how science can become politicised, politics can become scientised, and how science advice, if sought in the right way, can navigate between these two unappealing alternatives.
