How to progress in endocrine disruption debate; new methods for safer chemicals; and more // Jan 2015 science digest #2

January 14, 2015 at 12:46 pm | Posted in News and Science Bulletins

January 2015 Science Digest #2:
Non-Human and Policy Research

Endocrine disruptors, scientific debate | A path forward in the debate over health impacts of endocrine disrupting chemicals. There are four areas fundamental to the debate about the human health impacts of EDCs. The first concerns definitions of terms such as “endocrine disrupting chemical”, “adverse effects”, and “endocrine system”. The second focuses on elements of hormone action, including “potency”, “endpoints”, “timing”, “dose” and “thresholds”. The third addresses the information needed to establish sufficient evidence of harm. The fourth focuses on the need to develop transparent, systematic methods for reviewing the EDC literature, and on what those methods should look like.

Safer chemicals | Advancing Safer Alternatives Through Functional Substitution. This article describes a functional approach to chemicals management, called “functional substitution”, which encourages decision-makers to look beyond chemical-by-chemical substitution to a range of alternatives that meet product performance requirements. The authors define functional substitution, outline a rationale for greater use of the concept when considering risks posed by uses of chemicals, and provide examples of how functional approaches have been applied to the identification of alternatives.

Breast cancer, prevention | Environmental exposures, breast development and cancer risk: Through the looking glass of breast cancer prevention. This review summarizes the report Breast Cancer and the Environment: Prioritizing Prevention, highlighting research gaps and the importance of focusing on early-life exposures in breast development and breast cancer risk.

Endocrine disruptors, breast cancer research | Advancing Research on Endocrine Disrupting Chemicals in Breast Cancer: Expert Panel Recommendations. The daunting tasks of identifying, characterizing, and elucidating the mechanisms of endocrine disrupting chemicals in breast cancer need to be addressed in order to produce a comprehensive model that will support preventive strategies and public policy. An expert panel met to identify, and draw attention to, research needs linking common environmental exposures, critical windows of exposure, and optimal times of assessment in investigating breast cancer risk.

Endocrine disruptors, reproduction | Effect of maternal exposure to endocrine disrupting chemicals on reproduction and mammary gland development in female Sprague-Dawley rats. Significant morphological and histological changes were observed at the end of lactation in the mammary glands (MGs) of EDC-treated dams. The total transcriptome profile, as well as lactation-related genes in the MGs, corroborates the morphological findings, with the more profound gene expression changes present only at weaning. The study highlights the heightened sensitivity of the MGs during critical windows of exposure, particularly pregnancy and lactation, with an impact on pups’ survival.

Are moves toward threshold-based chemicals risk assessment premature? (TTCs part 3)

March 17, 2012 at 2:37 pm | Posted in Feature Articles

Because of its possible far-reaching consequences, the adoption of thresholds of toxicological concern (TTCs) in regulatory risk assessment needs to be discussed in a broad, democratic environment where all affected parties are involved in the final decision. Moves to implement TTCs should not be made in isolation by small committees in single European authorities.

Introduction

It may be necessary to speed up risk assessment and use fewer animals in toxicity testing, but if proposals to do so could have far-reaching implications, they need to be agreed on by all affected parties before they are implemented.

Thresholds of toxicological concern (TTCs) are a proposal to reduce the amount of data needed for the risk assessment of chemicals. The theory is that, for any given class of substances, there is an exposure level (a TTC) below which the chance of harm from any substance in that class is very small. Toxicological testing of substances present below the TTC is then virtually worthless, as it is so unlikely to find an effect.
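
To make the idea concrete: in the TTC literature, class thresholds are commonly described as being derived from the distribution of no-observed-effect levels (NOELs) measured in animal studies of substances in the class, roughly as follows (the 5th percentile and the 100-fold safety factor are the conventionally cited choices; treat this as an illustrative sketch rather than the exact method behind any particular proposal):

$$\mathrm{TTC}_{\text{class}} = \frac{\mathrm{NOEL}_{5\%}}{100}$$

In words: an exposure below the threshold is presumed safe because it sits a hundred-fold below the NOELs of all but the most potent few percent of tested substances in the class.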

This greatly simplifies the risk assessment process for substances present below a TTC, as it merely becomes a matter of making sure the threshold for the substance in question is not exceeded. TTCs therefore remove all specific toxicological data requirements for below-threshold substances, speeding up the decision-making process and eliminating any need for animal testing.
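
As a minimal sketch of the decision logic this implies (hypothetical code: the function, class labels and threshold values, loosely modelled on the commonly cited Cramer-class figures, are our own illustration, not any regulator’s actual procedure):

```python
# Illustrative TTC-style screening decision. All names and numbers are
# hypothetical placeholders (thresholds loosely modelled on the commonly
# cited Cramer-class values, in micrograms per person per day); real TTC
# schemes include exclusion categories and expert judgement.

TTC_UG_PER_DAY = {
    "cramer_I": 1800.0,   # structures suggesting low toxicity
    "cramer_II": 540.0,   # intermediate concern
    "cramer_III": 90.0,   # structural alerts for significant toxicity
}

def ttc_screen(structural_class: str, exposure_ug_per_day: float) -> str:
    """Screen a substance known only by structural class and estimated exposure."""
    threshold = TTC_UG_PER_DAY[structural_class]
    if exposure_ug_per_day < threshold:
        # Below the class threshold: no substance-specific toxicity data
        # are requested. This is the step the article questions.
        return "below TTC: no toxicity data required"
    # Above the threshold: reduce exposure, or assess the substance itself.
    return "above TTC: substance-specific assessment triggered"

print(ttc_screen("cramer_III", 25.0))   # below TTC: no toxicity data required
print(ttc_screen("cramer_III", 250.0))  # above TTC: substance-specific assessment triggered
```

The point of the sketch is the first branch: once a substance’s structural class and estimated exposure are known, the scheme asks for nothing more.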

Last month we looked at whether or not the proposed thresholds are likely to be adequately protective of health, and concluded this may not be the case: the age of the studies on which the TTCs are based, and the methods those studies used, mean the safety of below-threshold exposures could easily be overestimated.

This month, our concern is with the regulatory implications of setting a threshold below which toxicity data are not required for a chemical risk assessment. We pose four questions on TTCs and conclude they may be profoundly at odds with the driving principles of the on-going modernisation of chemicals policy in the EU.

  • Could TTCs create an uneven regulatory playing-field?
  • What happens to low-dose testing under TTCs?
  • Could TTCs confuse liability in the event a substance is toxic below a threshold?
  • Do TTCs undermine the principle of the producer being responsible for proving a product is safe?

Could TTCs create an uneven regulatory playing-field?

TTCs are supposed to be applied only in very specific regulatory contexts: when a metabolite, breakdown product or unexpected contaminant is detected in a foodstuff, an item of packaging, and so forth. The argument is that, because these substances are too numerous and unpredictable to risk-assess individually, a threshold of concern has to be set in order to make the problem manageable.

There do not appear, however, to be any formal attempts (at least at the time of writing) to specify when TTCs are applicable and when they are not. In its draft opinion on the use of TTCs (EFSA 2011, p2 PDF), the European Food Safety Authority simply states that they “would not normally be applied when there is a legislative requirement for submission of toxicity data.”

The problem is, it is not clear why this should not turn into a slippery slope: if controlling a substance’s concentration to keep it below a threshold is deemed sufficient for risk management in one context, why should it not be so in all contexts?

A manufacturer is unlikely to be happy with a situation in which it has to produce detailed toxicological data on a product but a competitor does not, simply because of some quirk of circumstance unrelated to the product’s design or intended use. It seems more likely manufacturers will argue that, regardless of whether a substance in a product is deliberately added or is an unexpected contaminant, everyone should have identical data obligations when proving a product is safe; in the extreme case, TTCs would have to be used either universally or not at all.

At this stage, we are not in a position to predict how the adoption of TTCs might work in practice. However, if there genuinely is a slippery slope, then the scope of application of TTCs ought to be carefully defined before they are introduced, lest a decision made by one part of the EU regulatory apparatus have consequences more far-reaching than initially anticipated.

What happens to low-dose testing under TTCs?

One would have to be very confident that one is correct in setting a threshold, if it is to be written into law that testing below this threshold is unnecessary. This is not merely a matter of deciding that some data is not relevant to making a risk assessment decision; it is deciding that the data does not need to be generated.

Do we really know enough about the low-dose effects of substances to no longer need to generate the data? The consensus view of EFSA’s TTC panel seems to be in the affirmative, but the fact is that low-dose effects are the subject of intensive, on-going research and controversy.

Although studies based on globally-harmonised test protocols (incidentally, the only studies to which EFSA referred in validating its proposed TTCs (EFSA 2011, p3 PDF)) have failed to reproduce the effects under examination, their failure to do so could reflect the inadequacy of the tests as much as a genuine absence of low-dose effects.

Indeed, the most recent state-of-the-art report on endocrine disruptors (EDCs) concluded the globally-harmonised studies are unlikely to be as clearly definitive as EFSA would have us believe (Kortenkamp et al. 2011, p7 PDF), precisely because they capture only a limited range of potential EDC effects, while a new review citing 845 studies has found that “low-dose effects are remarkably common in studies of natural hormones and EDCs” (Vandenberg et al. 2012).

Even if EFSA were correct and little of the low-dose data turned out to be sound, surely it would still be inappropriate to legislate against the relevance of new toxicological research to the risk assessment of a substance? Ordinarily one would give the science the chance to speak for itself; even if the data is not expected to alter the risk assessment, it would be rather irregular to exclude it from the assessment before examining it.

Are we overstating the problem? It could of course be argued that these concerns are exaggerated, because under TTCs toxicity testing is triggered in the event that threshold levels are exceeded.

It is difficult, however, to foresee how this could result specifically in low-dose testing: why, in a system which assumes a threshold is safe, would exceeding the threshold trigger testing at doses below the threshold? If harm is unlikely enough that low-dose testing is not worth doing before the threshold is exceeded, why would exceeding the threshold make it any more likely that the low-dose testing is worthwhile?

What seems more likely is that exceeding a TTC will be dealt with in one of two ways: either by reducing exposure to below the threshold level, or by testing the substance to ensure it is safe at the level at which it is present. Neither alternative will generate low-dose data.

Indeed, if the low-dose testing did take place, it would have to be on the assumption that substances can be toxic below TTCs – but it is this assumption which the use of TTCs explicitly rejects.

The overall concern, then, is that the use of TTCs may result in a legal prohibition on generating precisely the data which might falsify the assumptions on which the use of TTCs is based. This situation might be prevented by including a safety-net mechanism in risk assessment regulation which guarantees generation of the low-dose data that would allow the basic assumptions of TTCs to be reviewed, but as things stand such proposals are absent, at least from EFSA’s draft opinion on TTCs.

Could TTCs confuse liability in the event a substance is toxic below a threshold?

Because TTCs determine safety based not on the intrinsic properties of a substance but on its structural similarity to other substances, they create a situation in which a manufacturer can claim to have taken sufficient steps to ensure a substance is safe simply by determining its structure and ensuring its exposure threshold is not exceeded.

This is not a universally compelling line of argument, however: it would hardly be offered as proof of the safety of a car, for example. An individual model has to demonstrate it is safe by passing the minimum required safety tests, and it would never be assumed a car is safe simply because it is similar in design to other cars which have passed those tests.

Nor, if a fault in a car resulted in an accident, would a manufacturer be likely to argue that it was not liable because nearly all similar cars are safe: it is the specific car and its specific failure to be safe which is the problem. For chemicals one might think the same rationale should apply, but under TTCs it is not clear how this can be the case.

Admittedly, duty of care is a complex matter in chemicals regulation, and analogies drawn from vehicle manufacture do not necessarily apply to chemical manufacture. However, there are good reasons for regulating a substance according to its specific properties rather than the properties of its class; as things stand, toxicity tests are carried out to determine the specific hazards a substance may present.

Dropping toxicity tests may therefore have implications for liability in the event of harm. Exactly how may be unclear, but it surely needs examination before TTCs are introduced, especially if TTCs represent a slippery slope when it comes to data requirements and risk assessment of chemicals in general.

Do TTCs undermine the principle of the producer being responsible for proving safety?

For our final concern, we will return to the question of where the low-dose data will come from: in the event that a substance is suspected of being harmful, who will be responsible for generating the data to prove it is safe?

Since the use of TTCs has already absolved the producer of this responsibility, it seems it would be up to society, reversing a basic principle of modern chemicals regulation: that producers are responsible for proving a substance is safe. That is, unless simply being below the threshold is adequate proof of safety, but we have already argued this may not be satisfactory.

Conclusion

Substance-specific risk assessment will always have a failure rate in setting a safe tolerable daily intake (TDI) for a substance, regardless of the quality and quantity of data involved in the assessment. Probabilistic risk assessment under TTCs, with its acknowledgement of the inevitability of failure, looks like a pragmatic alternative. It may even be possible to implement a probabilistic assessment process with a failure rate lower than that of a substance-specific system.
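
To see why the failure rate under TTCs is “acknowledged” rather than unknown: if a class threshold is set at the 5th percentile of the class’s NOEL distribution divided by a safety factor, as in the sketch earlier, then by construction roughly

$$P\!\left(\frac{\mathrm{NOEL}_i}{100} < \mathrm{TTC}_{\text{class}}\right) \approx 5\%$$

of substances in the class would have a conservative safe dose below the threshold, on our earlier illustrative assumptions. It is in this sense that failure is built in and acknowledged from the start.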

A similar, or even lower, failure rate does not, however, mean the two approaches are legislatively equivalent. The fundamental differences between probabilistic and substance-specific risk assessment, and the consequences of a move from the latter to the former for liability, the principle of no-data-no-market, the generation of low-dose data and the maintenance of a level playing field for all producers, are notably under-discussed.

It might therefore be worth asking: is it just simpler to avoid these problems by basing regulatory approval on substance-specific toxicity data?

Chemicals risk assessment is supposed to be moving forward. Is it appropriate to set in stone a threshold-based approach with such complex consequences at a time when the relevance of thresholds for risk assessment is being questioned (NRC 2009) and there is evidence that, for example, endocrine disruptors may be having effects at low doses? Or should we be reflecting on the effectiveness of chemicals risk assessment and concentrate on developing effective reforms in line with the latest science and political intent?

We have to draw the line on toxicity testing somewhere, and TTCs may even have a limited role in prioritising chemicals for rigorous testing. However, because of the possible far-reaching consequences of adopting TTCs, any substantial role for them in regulatory risk assessment needs to be discussed in a far broader, democratic environment where all affected parties are involved in the final decision. Moves to implement TTCs should certainly not be made in isolation by a small committee in a single European authority.
