Could “Relevance” be a Source of Bias in Risk Assessment?

May 26, 2011 | Posted in Feature Articles

Bias, the inclination to hold a partial perspective at the expense of possibly equally valid alternatives, has plagued the analysis of information since time immemorial: like lodestone to an explorer’s compass, bias invisibly steers analysis off course, away from fact and into falsehood. Although often associated with research malpractice born of funders’ interests, bias covers any systematic distortion that separates conclusions from what the underlying data actually show. Given the obvious importance of factual accuracy to scientific research, a great deal of effort has been expended on identifying potential biases and devising means of nullifying them.

In public health and chemicals regulation, the cost to health of bias resulting in incorrect chemical safety assessments could be substantial, and some of these assessments are certainly controversial. In 2009, a group of 30 researchers heavily criticised a European Food Safety Authority (EFSA) assessment which concluded that BPA, a substance widely used in food packaging, poses no risk to human health at current exposure levels (Myers et al. 2009). The researchers argued that the assessment overlooked a coherent, corroborating body of evidence that BPA is harmful to health, instead preferring the conclusions of a tiny minority of studies carried out according to Good Laboratory Practice guidelines.

Figure 1: How demand for relevance can obscure studies from the RA process.

The researchers are disputing the findings of the majority of the nine significant recent regulatory evaluations of the potential risk BPA poses to health. Six of these agree with the EFSA evaluation, finding no cause for concern; two express some cause for concern but see no need to change exposure limits; only one, by more or less the same researchers who critiqued the EFSA assessment, finds “a great cause for concern” (Beronius et al. 2010).

One could conclude that the dissenting view is wrong: after all, the consistency with which regulators find BPA to be safe is cited often enough when consumers are being reassured about the safety of the chemical. However, consistency of findings can also be generated by bias in the method for assessing evidence. The fact that the dissenting review was the only one not carried out under risk assessment requirements increases the likelihood that the consistency of regulatory opinion is generated by the method used for assessing the data, not by the data itself.

So is it possible that the reviews of the safety of BPA are subject to a systematic bias which renders each of them unreliable in its conclusion that BPA, at current exposure levels, is safe? Analysing the full review process in detail goes far beyond what can be covered here, so we will discuss just one way in which this could happen.

The first thing to note is that reviews carried out for risk assessment are not necessarily asking the question: “Is there a coherent, corroborating body of evidence which suggests BPA may be harmful?” Rather, the reviews tend to ask whether there is any evidence which warrants changing the current tolerable daily intake (TDI) for BPA. Here, risk assessors introduce the concept of “relevance to risk assessment”.

“Relevance” is a muddy notion, less widely discussed than the reliability of industry or peer-reviewed studies, but one which plays a central role in how regulators handle scientific information for risk assessment: it appears, for example, in both EFSA guidance on evaluating the safety of pesticides (EFSA 2011) and in the EU Scientific Committee on Emerging and Newly-Identified Health Risks (SCENIHR) explanation of how its evidence review methodology is executed (Health and Consumer DG 2011).

EFSA describes relevance as a combination of two criteria: a study must “provid[e] data for establishing or refining risk assessment parameters” and be “reliable”, reliability being “the extent to which a study is free from bias and its findings reflect true facts”. The relevance assessment is carried out in two steps: data which does not meet the first criterion never reaches the second phase, where its reliability is assessed.
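In procedural terms, the two-step screening might be sketched as follows. This is a minimal illustration in Python; the study fields and pass/fail flags are hypothetical stand-ins for the purposes of this article, not EFSA’s actual criteria or data structures:

    from dataclasses import dataclass

    @dataclass
    class Study:
        name: str
        informs_ra_parameters: bool  # step 1: does it help establish or refine a parameter such as a TDI?
        free_from_bias: bool         # step 2: reliability, in EFSA's sense

    def screen(studies):
        """Return the studies surviving both phases of the assessment."""
        # Phase 1: relevance. Studies failing here are discarded outright.
        relevant = [s for s in studies if s.informs_ra_parameters]
        # Phase 2: reliability, assessed only for the relevant subset.
        return [s for s in relevant if s.free_from_bias]

The structural point is that reliability is never even evaluated for studies screened out in the first phase, however sound their findings may be.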

SCENIHR also has a concept of relevance. Confusingly, its terminology is not consistent with EFSA’s: it uses “utility” where EFSA uses “relevance” (for SCENIHR, “relevance” refers to whether the stressor and outcome model used in a study measure an effect relevant to the one in question). Utility assessments determine whether or not a study is included in the risk assessment data review; if a study does not help with risk assessment, its weight in the evaluation of the safety of a substance is greatly diminished.

This filtering could introduce what might be called a “relevance bias”. If the studies judged not relevant for risk assessment do contain factual information, then eliminating them from the review on those grounds risks reducing the capacity of the review to determine what the facts are. To put it another way: a relevance filter could introduce bias by depriving the review of the opportunity to compare studies which may corroborate each other’s findings. If the discarded data reflects the facts better than the included data, then the risk assessment will present a skewed evaluation of the potential harm to health which the chemical may pose.

It would work like this (see Figure 1 for an illustration): imagine there are ten studies of the neurotoxicity of PBDE flame retardants. Five produce corroborating findings of harm, but only two of the ten are relevant to risk assessment; the other eight are not. The two relevant studies are assessed for consistency and are found to contradict each other. The review concludes there is no consistent evidence of harm to health and therefore no possibility of risk assessment. It misses the five corroborating studies and the three negative studies, so the selection process potentially obscures a consistent finding, leaving no guarantee that the risk assessment process has produced an outcome which represents the facts.
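A toy simulation makes the arithmetic concrete. This is a minimal Python sketch in which the findings and relevance flags are invented to match the hypothetical scenario above; it illustrates the filtering effect, not any regulator’s actual procedure:

    from collections import Counter

    # Ten hypothetical studies as (finding, relevant_to_RA) pairs:
    # five corroborating positives and three negatives, none deemed relevant,
    # plus the two "relevant" studies, which contradict each other.
    studies = (
        [("positive", False)] * 5
        + [("negative", False)] * 3
        + [("positive", True), ("negative", True)]
    )

    # The review only ever sees the studies that pass the relevance filter.
    reviewed = [finding for finding, relevant in studies if relevant]

    print("Reviewed:", Counter(reviewed))                  # 1 positive vs 1 negative: "inconsistent"
    print("All studies:", Counter(f for f, _ in studies))  # 6 positive vs 4 negative overall

The filtered review sees one positive and one negative result and declares the evidence inconsistent, while the full set of ten studies contains a five-study corroborating cluster the review never examined.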

Bias needs to be addressed in systematic review and weight-of-evidence evaluations because agreement in outcome between reviews using the same methodology is consistent with either (a) the reviews reflecting the facts, or (b) the reviews being false, distorted by a shared bias. It may be that filtering for relevance has no effect on the truth-finding capacity of risk assessment; however, without evaluating whether the bias is present, we cannot determine whether (a) or (b) is the case, and so we do not know if we can trust the results of the reviews, no matter how many there are.

In the case of BPA, given that there is one dissenting review (vom Saal et al. 2007) and that it was carried out according to a substantially different methodology from the risk assessments, it is worth asking whether the consistency of the risk assessments is generated by a bias in the assessment methodology. In this circumstance it might be wise to follow the recommendations of David Sackett, one of the pioneers of evidence-based medicine: in any study where biases are possible, they should be defined, referenced examples of their magnitude and direction of effect should be given, and measures to prevent them distorting the research findings should be described (Sackett 1979).

Apart from cursory attempts in EFSA’s guidance on evaluating the safety of pesticides to prevent, for example, publication bias from affecting the results of a review, measures to prevent bias in review methodology are all but invisible in EU risk assessments. Given the possibility that requiring data to be relevant to risk assessment may reduce the truth-finding capability of those reviews, an evaluation of this potential bias ought perhaps to be given a place.

Editor’s note: Next month we will present a more detailed model to describe how using relevance as an inclusion criterion for risk assessment may distort evaluations of evidence. Discussion of the ideas presented on this site is always welcome – readers are encouraged to take advantage of the comment feature to do this.
