Sense About Scientific Advisors Part 3: Science, Politics, Innovation and Controversy

February 28, 2015 at 6:14 pm | Posted in Feature Articles | 2 Comments

Part 3 of our series on scientific advice in the European Union. Here we leave behind specific problems with the current delivery of science advice in Europe to examine how scientific research can be better employed for resolving disagreement about what constitutes sustainable environmental policies in the EU. We caution against artificially reductive and risk-driven approaches to analysing the relative merits of different innovation pathways which may be on offer at any given time.



Innovation is great. We solve problems with it. By solving problems with it, we make money with it. By innovating faster than our competitors, we become more competitive than they are and make more money than they do. More money going around means more taxes and more jobs. When we all engage in an innovation race, we get better at everything, bootstrapping our way to improved technology: from sled to cart to steam engine to automobile to aeroplane to spaceship. Everybody wins.

There is, however, a wrinkle in the bedclothes: innovations themselves are rarely, if ever, unconditionally or self-evidently positive. With hindsight there have been many unfortunate innovations such as: leaded petrol for engine knock; PCBs for electrical insulation; thalidomide for morning sickness; asbestos in thermal insulation; halocarbons for refrigeration; tributyltin as an antifoulant; and so on.

Even those innovations which appear (so far and depending on whom you speak to) to have on balance been beneficial could have been introduced with less harm, such as mobile phone technology (what with the problems of electronic waste and mining of rare earth metals) and the automobile (which causes so many accidents, generates a large carbon footprint and causes air pollution). And there are the controversial innovations of today, such as fracking, neonicotinoid pesticides, the use of possible endocrine disruptors in consumer goods, and the genetic engineering of foodstuffs, which seem to further lock in previous bad choices about how to feed ourselves and keep ourselves warm. The merits of nuclear power, renewable energy innovations such as wind farming, and organic food production are also hotly contested.

There is a question, then, as to how we make sure as a society that we choose better innovation pathways over worse and, in the near-certain event of controversy about which path is preferable, how best to resolve disagreements about the best way forward. Here, we will look at the roles which science and politics can play in this process.


We are here because of controversy. There are multitudes of ideas about how society should solve its various challenges: renewables or fossil fuels for energy? Conventional or organic farming methods for food security? Genetic modification or conventional breeding for desirable plant traits?

We can start with a premise and two observations. The premise is that controversies are driven by uncertainty about which choices society should make. The first observation is that, as tempting as it is to present them as such, controversies are rarely if ever about dichotomous alternatives, in which there is a choice only of one option or the other. There are almost invariably other alternatives as well (in food production, this would include permaculture, hydroponics and simply giving up on meat production). The second observation is that individual innovation pathways are not necessarily homogeneous options themselves: genetic engineering covers a range of techniques from gene editing to marker-assisted back-crossing. So regardless of how they are badged, there are innovations within innovations, each carrying their own set of consequences.

Decisions about which innovations to support are therefore not about a narrow choice between obvious alternatives; rather, they are about selecting among a proliferation of pathways towards some desired outcome. Each path entails a different set of consequences, some good and some bad; some innovation pathways carry a better overall profile of consequences than others. The objective is to try to figure out which one is best, which in turn requires attempting to anticipate the consequences of a particular innovation path.

The ideal way to analyse the pros and cons of each possible decision would be to develop a structured list of actions and their possible consequences; then assign each consequence a numerical measure of its utility (economic, social, environmental etc.); then estimate the probability of each consequence being realised; and finally select the option which maximises the expected benefit of the decision.
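The calculus described above can be sketched in a few lines. This is a toy illustration only: the options, consequences, utilities and probabilities are all invented, and in real policy analysis none of these numbers come for free.

```python
# A minimal sketch of the utilitarian calculus described above.
# Every figure here is hypothetical, invented purely for illustration.

# Each option maps to a list of (probability, utility) pairs,
# one pair per anticipated consequence of choosing that option.
options = {
    "pathway_A": [(0.7, 10.0), (0.3, -5.0)],   # likely modest benefit, some downside
    "pathway_B": [(0.5, 20.0), (0.5, -15.0)],  # bigger upside, bigger risk
}

def expected_utility(consequences):
    """Sum the probability-weighted utilities over all consequences."""
    return sum(p * u for p, u in consequences)

# Under this framing, the "rational" decision is simply the option
# with the highest expected utility.
best = max(options, key=lambda name: expected_utility(options[name]))
```

The simplicity of the final `max` call is exactly the appeal of the approach, and, as the following paragraphs argue, exactly its weakness: everything contentious has already been compressed into the numbers before the calculation starts.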

This highly quantified, utilitarian calculus is the task taken on by risk assessment and socioeconomic analysis, and it is here that scientific research and evidence-based approaches to anticipating and evaluating consequences should be extremely helpful, delivering a measure of the risk entailed by a pathway and of the confidence in the accuracy of the assessment. Of course, such analyses are only as good as our knowledge of the system, and the temptation is to overstate our confidence in what we know; but insofar as risk can be meaningfully quantified within a sufficiently well-defined system, scientific research can play its part.

There is, however, a major challenge to the usefulness of these risk-calculus approaches. The problem is that, quite apart from questions of their validity, socioeconomic analyses and risk assessments are fundamentally reductive, analysing only what is directly in front of them (be it profitability, health risks, or whatever else is thought to be entailed by any putative innovation) as if the options under analysis were the only games in town.

So while they may be useful for resolving debate about degrees of risk, by providing scrutable analyses of one type of uncertainty which drives controversy (uncertainty about the extent of our knowledge of risk within defined limits), they are processes with no ear for discussions of the desirability of a given innovation which encompass two other types of uncertainty: ambiguity, where what is factually known is consistent with two or more choices; and ignorance, the area out of which surprise (good or bad) can spring.

Risk and socioeconomic assessment struggle to deal, for example, with whether or not it is a good idea to base food supply on a system in which companies primarily seek profit from realising rents on intellectual property and global supply and value chains; arguably, this is more the defining feature of GM innovations than any particular GM technology itself. Firstly, this is because the process of risk assessment itself does not generate this question: the people doing the risk assessment have to. Secondly, the question as to whether this approach is desirable is born of society’s desired outcomes, and is therefore a question which can be informed with analysis but resolved only by politics. Thirdly, the question of whether GM development is the sort of thing which should be encouraged is so complex and entwined with values discussions, that it resists reduction to any single question of whether or not GM technology is safe to eat, harmless to the environment, economically beneficial, etc.

Why? Because the problem is ambiguous: at any decision-point there are a number of available options, and any merit ranking of these options can be inverted by changing the assumptions about what is considered valuable. For example, at its most oversimplified, the appeal of a decision can be inverted simply by changing the beneficiary: suppose there is some inner-city park land of high monetary value to investors, but high social value to the local community. Should it be built on? It depends very much on whether you use the park or stand to profit from the property. A decision needs to be made, but a risk assessment of the safety of any property built on the park will not on its own tell you what that decision should be; it certainly won’t tell you if there is an option besides the park or the property – perhaps a school, or a way of developing the area without taking the park away. (In reality, urban planning is obviously more sophisticated than this – we will return to this point later.)
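The park example can be made concrete with a small sketch. The scores and weightings below are hypothetical; the point is only that the factual inputs never change, yet the ranking of the options flips when the value weights do.

```python
# Hypothetical illustration of ambiguity: the same facts, scored under
# two different value weightings, rank the options in opposite orders.

# Invented scores for the inner-city park example.
outcomes = {
    "build_property": {"monetary": 9.0, "social": 2.0},
    "keep_park":      {"monetary": 1.0, "social": 8.0},
}

def rank(weights):
    """Order the options by weighted score, best first."""
    def score(name):
        return sum(weights[k] * v for k, v in outcomes[name].items())
    return sorted(outcomes, key=score, reverse=True)

# Two stakeholders, two weightings of the same underlying facts.
investor_view  = rank({"monetary": 1.0, "social": 0.1})
community_view = rank({"monetary": 0.1, "social": 1.0})
# The evidence base is identical in both calls; only the weights differ,
# yet the "best" option inverts between the two rankings.
```

No amount of additional measurement of the `outcomes` table resolves the disagreement, because the dispute is lodged in the weights, which is to say, in values rather than facts.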

Reductive thinking in chemicals policy

One could argue that it is this sort of narrow, reductive view of risks and alternatives which is undermining the substitution component of the REACH regulation. Regardless of whether one considers bisphenol-A to present any human health risk, in some EU states it is pretty widely unpopular and no doubt representative of the sort of compound a lot of people don’t want in their food and drink. It is also pretty obvious that, by removing it and replacing it with broadly similar compounds such as BPS and BPF, there is a high risk that the replacements will pose the same problems (physical or perceptual) as BPA.

You could possibly argue that there was no reason why manufacturers should have foreseen a problem with BPA (although it was initially developed as a synthetic oestrogen). However, they have since become overcommitted to a persona non grata and now need to get out. The question is, could there have been a faster exit strategy? It could maybe have benefitted manufacturers to have more irons in the fire than BPA and some dubious substitutes which were likely to be similarly rejected (although that makes a lot of assumptions about tooling up costs, capital investment in plant, etc.) It would certainly benefit society if there were faster exit options for problem technologies (if you don’t think BPA is an example of a harmful chemical, think about asbestos or leaded petrol instead).

So rather than determining a viable substitute simply by risk assessing an individual chemical and its immediate neighbours, surely there is a question to be asked about whether the innovation pathway we are currently committed to ought to be re-evaluated as a whole? This is the broader sort of question green chemists are asking and programmes such as the GreenScreen are encouraging chemical users to consider.

The point is, we cannot lean too heavily on risk assessment alone: we need a process for managing ambiguity (where much disagreement arises) and for ignorance (from whence nasty surprises like asbestos may spring). This is also why exclusively “risk based approaches” take us only part way down the road we need to travel when we analyse the desirability of an innovation: they hard-wire in a reductive approach and force narrow assessments of what is immediately in front of us rather than a broad assessment of whether the innovation trajectory is appropriate. By being aimed at granting permission to existing innovations rather than a strategic approach to nurturing more of the kinds of innovations we want, they are of limited help in resolving controversy which is driven by uncertainty from ambiguity and ignorance.


Part 1: Getting the answers right

We do well in policy-making when we use scientific research to identify novel policy options, foresee their consequences, and evaluate their pros and cons in a manner which reflects the broad range of concerns people may have about them, while identifying areas of ignorance and ambiguity which need to be anticipated by policy-makers.

Transparency and methodological quality of approach are at a premium; there should be a quality assurance process by which best practices are developed and followed by the relevant institutions. However, we should be cautious that we do not hand too much power to whoever polices the quality of scientific approach, lest we make politicians answerable to their scientific advisers rather than vice versa. Nor should we overstate the extent of our knowledge of a system: it is one thing to risk assess a relatively closed system such as building a bridge or a tower block, and quite another to predict the consequences of intervening in e.g. complex food webs. Accurate assessment of what we don’t know is as critical as crunching the numbers on what we do know.

We should also be very wary of allowing claims of the type that “the science says” such-and-such is the right choice; it cannot, because while scientific research is a process which can help us anticipate the consequences of a given option, research alone cannot decide for the electorate and their representatives what they consider important, what the problem is which needs to be solved, and what choices they would rather make.

Part 2: Getting the questions right

How do we resolve ambiguity? By getting better at deciding collectively what we want. This is a political problem. Part of this is to make sure the ambiguities are aired, through discussion and research which supports asking the right questions, follows a process in which as many options as possible are considered, and ensures that the motivational drivers behind preferences are understood, so that compromises can be articulated.

Possibly the real process of town planning (at least in its ideal form), with public involvement, is the sort of model we should be following instead of the reductive, “evidence-based” approaches which arguably predominate in so much discussion of technology.

It might be annoying for companies who have invested heavily in a particular innovation pathway and are therefore locked into it that their pet technological solution is opened up for such broad discussion rather than permitted on the basis of a clear, predictable set of rules pertaining to safety and social value directed only at the innovation itself.

However, the broader approach is necessary: a society which values sustainability cannot afford simply to back innovations just because someone happens to have invested in them, or because they happen to have caught on before others. To assume these innovations are optimal is to make the Panglossian assumption that the long-term technological solutions to the challenges of sustaining life on Earth are granted by short-term, profit-driven decision-making.

This should not be understood as an argument for authoritarian control of innovation (remembering that options in innovation policy are no more dichotomous than their counterpart innovations in technology), as there is no more reason to suppose that a cabal of civil servants is any better at picking winners than the invisible hand of the market. Rather, it is to encourage policy approaches which deliberately support innovative pluralism so that, when the time comes, course can be changed more easily than has been achieved historically, where bad ideas have lingered long past their “best-before” dates.

It sounds obvious that people should be asked what they think and decisions made based on a broad understanding of what society wants to achieve rather than narrowly-constrained assessments of whether or not a particular technology or chemical can be considered safe, or if little enough economic damage is anticipated that it can reasonably be banned. But if we look at chemical regulation or the debate around GM technology, can we convince ourselves that it ever proceeds along these terms?

Part 3: Getting the unknown unknowns right

How do we resolve ignorance? In a way, we don’t have to: we can’t disagree about what we cannot anticipate. But we should have get-outs built into innovation choices because we don’t want to be locked in to bad ideas. For example, while it might be fair to say that as a society we could not have anticipated the problems which were later caused by asbestos (we didn’t have the methods to do it) we could have prevented a lot of harm if as a society we had been able to get out earlier than we did. Strategies which allow faster reaction to surprises would be beneficial.

Conclusion: using science in policy-making

Divergent views and opinions on science and technology can be reconciled by improving the transparency and scientific quality of the methods by which evidence relating to policy questions is generated, gathered, appraised and communicated – but only when the disagreement is well-defined and factual rather than values-driven.

Since divergent views are at least as often the result of disagreement about desired outcomes as they are about facts, resolution is likely only to be achievable when we recognise that political debate cannot be resolved by appeal purely to “science”, and we instead focus on mechanisms by which differences in preferred outcomes can be acknowledged and incorporated into the decision-making process. This would include strategies for mitigating over-commitment and lock-in to undesirable innovation pathways.

To make better use of science in policy-making in general, we need to put more effort into science governance: we need a body which can determine a set of best practices by which evidence is generated and reviewed such that scientific robustness is secured; that the processes for setting research priorities, questions and agendas are sufficient to secure democratic robustness; and that best practices are enforced such that neither can science ride rough-shod over policy, nor can policy ride rough-shod over science.

