Who can you believe when it comes to chemical safety?

November 20, 2013 at 11:45 am | Posted in Feature Articles | 1 Comment


You do not have to travel very far to find controversy in chemical safety: if you eat canned food there is probably some in your refrigerator. In this case, we would most likely be talking about bisphenol A, a polymerising agent used for manufacturing durable plastics and resins and a poster-child for controversy in chemicals policy.

The concern with BPA is that, while it is widely used in food contact materials, it might act in the body as an oestrogen. Researchers are looking at whether it increases the risk of breast cancer, reduced fertility, abnormalities of the reproductive tract, obesity and accelerated puberty.

A number of EU Member States have restricted the use of BPA in food contact materials intended for infant use. The French Government intends to ban BPA from food contact materials altogether, and the Swedish Government is even planning to ban its use in receipt papers. Such measures are supported by environmental health NGOs and a number of scientists have spoken out against the use of the substance.

Such moves against BPA are not, however, uncontested. The current opinion of the European Food Safety Authority (EFSA) is that BPA at current exposure levels poses no threat to people’s health – a view echoed by the UK Food Standards Agency. Professor Richard Sharpe of the MRC, no stranger to endocrine disruptors, thinks BPA is the wrong target for regulatory control and that further research into the compound’s toxicity should cease (Sharpe 2010).

Whose opinion, then, are you supposed to believe? With over 6000 published studies, it is unlikely that you have the personal time or resources to become sufficiently familiar with the research into BPA to decide for yourself. So how do you pick the opinion which is most likely to be right?

Further reading: Systematic Review and the Future of Evidence in Chemicals Policy. A new report from the Policy from Science Project, looking at how systematic review techniques used in evidence-based medicine can advance the credibility and utility of chemical risk assessments.

Sign up here for project updates, including alerts for upcoming webinars discussing the report’s findings.

Reading what the experts think

All of these experts have at some point justified what they think, publishing in some shape or form reviews of the evidence which underpin their opinions. You might expect that reading these and deciding which one is the best presentation of the evidence should be a decent short-cut to doing all that research yourself – after all, that is the whole point of the academic literature review.

Some of these literature reviews will be better than others. Since you would not want to be misled by a review which was either inadvertently or deliberately biased, you would want a literature review to fulfil at least the following six criteria:

1. Stating a clear objective. The objective of the review should address a clear question, such that there is no room for misunderstanding what the review is trying to find out.

2. Applying a consistent method. The method used in the conduct of the piece of research shouldn’t change as the research is being conducted, otherwise (deliberately or not) the researchers risk finding what they think the evidence should be saying, rather than what the evidence actually says.

3. Including all relevant evidence. The process for finding evidence and selecting studies for analysis should deliver a representative sample of the evidence base, otherwise the review risks finding only what a particular segment of the overall evidence says, not what all of the evidence says (sampling and selection bias in finding evidence).

4. Assessing the quality of included evidence. Because the individual pieces of evidence in a review will be of variable quality and should therefore have varying weight in the overall conclusions, each piece of evidence should be subjected to the same fair test of quality, so the findings of weaker studies are appropriately downgraded and stronger studies not unduly rejected.

5. Stating the interests of the reviewers. Because the interests of the reviewers can have a detrimental effect on the objectivity of the review findings, or at least an important role in shaping the course of the review, the interests and contributions of each author and contributor to the review should be stated.

6. Accurately reporting findings. The findings reported in the review document should be an accurate representation of what the reviewers actually found.
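The six criteria above amount to a simple checklist that any review can be scored against. As a purely illustrative sketch (the criterion labels and the example review are hypothetical, not drawn from any formal appraisal instrument), the idea could be expressed as:

```python
# Illustrative sketch: appraising a literature review against the six
# criteria above. The criterion labels and example data are hypothetical;
# real appraisal tools in evidence-based medicine are far more detailed.

CRITERIA = [
    "clear objective",
    "consistent (pre-specified) method",
    "all relevant evidence included",
    "quality of included evidence assessed",
    "reviewer interests stated",
    "findings accurately reported",
]

def appraise(review: dict) -> list:
    """Return the list of criteria a review fails to satisfy."""
    return [c for c in CRITERIA if not review.get(c, False)]

# Hypothetical review that never published a protocol or declared interests:
example = {
    "clear objective": True,
    "consistent (pre-specified) method": False,
    "all relevant evidence included": True,
    "quality of included evidence assessed": True,
    "reviewer interests stated": False,
    "findings accurately reported": True,
}

print(appraise(example))
# → ['consistent (pre-specified) method', 'reviewer interests stated']
```

The point of the sketch is only that each criterion is a yes/no question that can be asked of any review in the same way – which is precisely what a transparent, reproducible appraisal requires.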

All this probably sounds like a no-brainer. But there is in fact a problem here: analysis of EFSA’s two most recent Scientific Opinions on BPA by this author reveals serious shortcomings in methodology and documentation of the review process (Whaley 2013, PDF – see recommended reading above), on precisely these points.

Are expert opinions sufficiently scientifically robust?

The two Opinions in question are the 2010 Opinion on BPA and the draft version of the next Opinion on BPA, the exposure part, published in 2013. Neither Opinion performed well when compared to what is expected from a scientifically-robust literature review process, in an evaluation based around the criteria listed above.

For example, the objective of the 2010 Opinion was ambiguous: it was unclear whether the Opinion was concerned with evidence which would permit a recalculation of the safe level of exposure for BPA, or with evidence which undermined the current calculation. The Opinion addressed the former – but whether new studies permit the safe intake level to be recalculated is a very different question from whether new studies mean the safe intake level is still credible.

Neither the 2010 nor 2013 Opinions used pre-published protocols, which have an important role in preventing the sort of ad-hoc decision-making which can bias a review’s findings. Nor did either Opinion include a statement of interests and contributions; instead, users have to approach EFSA for a full statement of interests for the Opinion Working Group and Panel members, before having to work out for themselves which interests might have been relevant to the writing of the Opinion.

There were weaknesses in the processes for searching for and selecting evidence for analysis, which suggested that EFSA analysed only a partial selection of the overall evidence. Nor was there a clear and consistent method for evaluating the quality of included studies, so it was unclear whether studies were being correctly weighted in accordance with their methodological quality.

All this doesn’t mean the EFSA Opinions are wrong. What it does mean, however, is that they provide insufficient assurance that they are correct: in the absence of a transparent, reproducible methodology it is an overwhelmingly arduous task to evaluate the likelihood that EFSA’s Opinions represent the best possible synthesis of all the available evidence on BPA – because if you don’t know what they are doing, how can you judge if they are likely to be correct?

This has all been seen before

A study published in 1987 found that, of 50 literature reviews published in four top medical journals, only one clearly specified its methods for identifying, selecting and validating the information included in the review (Mulrow 1987).

This helped precipitate a step-change in how evidence was reviewed in medicine, catalysing the development of systematic review techniques designed to integrate the fundamental scientific concept of reproducibility of method into the literature review process. These include the publication of protocols prior to conducting reviews, advanced search techniques for assembling all the available evidence relevant to a question, and reproducible methods for evaluating the quality of evidence.

It seems natural, then, that these techniques could also be applied to toxicological research in the course of assessing the safety of chemicals.

Moves are already being made in this direction, with the US Environmental Protection Agency’s IRIS project; the University of California San Francisco’s Navigation Guide; the US National Toxicology Program’s draft protocols for systematic review; the Evidence-Based Toxicology Collaboration (arguably the first to articulate the concept of systematic review for toxicological data); the Policy from Science Project (the author’s own group); work at Stockholm University and the Swiss Ecotoxicological Institute; and others.

This work is important not only for finally getting clear and transparent statements of what is known about chemical toxicity (rather than statements of merely what is thought to be known). It has important consequences for everyone’s engagement in chemical regulation – because without access to a systematic presentation of the evidence base, how can any of us expect to engage in an informed manner in the debate about chemical safety?
