Last week I referred to a paper by Max More, in which he both set out the faults of the precautionary principle, and offered in its place a proactionary principle. I thought his demolition of the former was well crafted, and I feel much the same about his alternative, about which more in a moment.
It is plain that humanity’s numbers have increased from about one billion at the beginning of the 19th century to around 7 billion today, and that increase has resulted from greater knowledge of ourselves and our environment, and improved technology. Optimists like me see no reason to think that our capacity to increase knowledge and to innovate has somehow reached an end, and I have considerable confidence that human life will be easier for more people in fifty years’ time, and that they will live longer and fuller lives, too.
Max More argues that ‘we need to develop and deploy new technologies to feed billions more people over the coming decades, to counter natural threats from pathogens to environmental changes, and to alleviate human suffering from disease, damage, and the ravages of ageing’. He proposes, in place of the precautionary principle, a ‘more sophisticated principle that incorporates more extensive and accurate assessment of options while protecting our fundamental responsibility and liberty to experiment and innovate’.
Why? Because, he says, relying on the precautionary principle ‘inherently biases decision-making institutions toward the status quo, and reflects a reactive, excessively pessimistic view of technological progress’. He wants us to focus on the real threats we face now, not the hypothetical ones we might face in fifty or a hundred years. In doing so we should recognise that technological innovation offers us advances in dealing with those threats. Yes, all technology (I would say all change) comes with costs as well as benefits, but we have the capacity to deal with technology’s undesirable effects as well.
His proactionary principle is built around the following themes, or principles. I’ve edited and adapted what he has written in this summary.
- Freedom to innovate: Because our freedom to innovate technologically is valuable to humanity, the burden of proof should belong to those who propose restrictive measures.
- Objectivity: When deliberating we should use a decision process that is objective, structured, and explicit, and employ available science, not emotionally shaped perceptions. We should use explicit forecasting processes, fully disclose the forecasting procedure, ensure that our information and our decision procedures are objective, and rigorously structure the inputs to the forecasting procedure.
- Comprehensiveness: We should consider all reasonable alternative actions, including taking no action. We should estimate the opportunities lost by abandoning a technology, and take into account the costs and risks of substituting other credible options. When making such estimates, we need carefully to consider not only immediate effects, but also the widely distributed and follow-on effects.
- Openness/Transparency: We need to take into account the interests of all potentially affected parties, and keep the process open to input from those parties.
- Simplicity: Our best course is to use methods that are no more complex than necessary.
- Triage: We should give precedence to reducing known and proven threats to human health and environmental quality over acting against hypothetical risks.
- Symmetrical treatment: We should treat technological risks on the same basis as natural risks, and (especially) avoid underweighting natural risks and overweighting human-technological risks.
- Proportionality: We should employ restrictive measures only if the potential impact of an activity has both significant probability and severity. If the activity also generates benefits, we should take those benefits properly into account. And don’t overdo it! If measures to limit technological advance appear justified, we should ensure that the limits we choose are proportionate to the extent of the risk.
- Prioritization: We should employ an obvious hierarchy when we choose among measures. It could go like this: (a) give priority to risks to human and other intelligent life over risks to other species; (b) give non-lethal threats to human health priority over threats limited to the environment (within reasonable limits); (c) give priority to immediate threats over distant threats; and (d) give priority to more certain rather than less certain threats, and to irreversible or persistent impacts rather than to transient impacts.
- Renew and Refresh: The decisions that we make in this domain are not for all time, and we need to be able to revisit any decision when conditions have changed significantly.
You can see that More’s alternative is a process, rather than a ‘principle’, but he does offer a crisp summary.
‘The Proactionary Principle stands for the proactive pursuit of progress. Being proactive involves not only anticipating before acting, but learning by acting. When technological progress is halted, people lose an essential freedom and the accompanying opportunities to learn through diverse experiments. We already suffer from an undeveloped capacity for rational decision-making. Prohibiting technological change will only stunt that capacity further. Continuing needs to alleviate global human suffering and desires to achieve human flourishing should make obvious the folly of stifling our freedom to learn.’
It’s great stuff. But it assumes that no one involved has been scared, and it assumes the possibility of governments that make rational decisions and don’t face elections. Still, it would be nice if our political leaders, not to mention the AGW doomsayers, could take a few minutes to read and think about it.