Peter Saunders, King’s College London

In Policy Responses to Societal Concerns in Food and Agriculture: Proceedings of an OECD Workshop. OECD, Paris, 2010, pp. 47-58.

There is no definitive statement of the precautionary principle, but there is a reasonable consensus about what it says, at least among its proponents. The 1998 Wingspread Declaration [1], which takes its name from the place where it was formulated, is typical:
When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context the proponent of an activity, rather than the public, should bear the burden of proof. The process of applying the precautionary principle must be open, informed and democratic and must include potentially affected parties. It must also involve an examination of the full range of alternatives, including no action.
The European Commission, in 2000, expressed it less succinctly, but its statement begins with the key phrase [2]:
The Precautionary Principle applies “where preliminary objective scientific evaluation indicates that there are reasonable grounds for concern …”
The statements of the principle by those who are advocating it or proposing to use it are essentially similar. The principle is to be applied when (a) there is scientific evidence for a threat to the environment or to health, but (b) the evidence, while sound, is not conclusive. What is crucial is that there must be a prima facie scientific case for a threat. If there is not, then nothing happens. If there is, then we do not have to wait until we are certain about the hazard before we can take measures to mitigate or avoid it. The precautionary principle states that we are permitted to act on the basis of evidence that is not conclusive. It does not, however, say that we are obliged to. What, if anything, we actually do is a matter for judgment on the basis of the evidence in front of us.

The precautionary principle works in much the same way as the burden of proof. In a civil court the playing field is level, but in a criminal court it is deliberately not. The defendant is not required to prove his innocence; it is for the prosecution to prove him guilty beyond reasonable doubt. The lack of balance is deliberate. Courts are supposed to convict the guilty and acquit the innocent. In an imperfect world that isn’t always going to happen, and the legal system has to allow for when it doesn’t.
There are two different ways in which things can go wrong, just as with Type I and Type II errors in statistics. The view that we as a society have come to is that while it is undesirable that a crime should go unpunished, it is far worse for an innocent person to be convicted. So we shift the balance to make it less likely that this will happen, which naturally means that we have to accept a greater probability that a criminal will be set free. In the same way, when we act on the basis of evidence that is not conclusive, we are saying that we have reason to be concerned that something is hazardous and we are sufficiently worried about the consequences that we are willing to go without it, or at least to delay its introduction until we have more evidence.

Neither the burden of proof nor the precautionary principle is an algorithm for decision making. A jury still has to decide whether the defendant is guilty beyond reasonable doubt – and even what they are prepared to accept as “reasonable doubt”. In the same way, even if we accept the precautionary principle, we still have to weigh up the evidence as best we can, and we still have to decide how much reassurance we are going to require before we allow something to proceed. As in ordinary risk assessment, an important factor in this is how much we believe we stand to gain if it does go ahead.

Common Criticisms: There are a lot of criticisms of the precautionary principle around; let us deal with them at the outset:
• ill-defined – Critics sometimes complain that there are so many definitions of the precautionary principle that it cannot be taken seriously. In fact, those that are not covered by the description above come from opponents, each setting up his or her own straw man to knock down. Then, to add insult to injury, they say that anything with so many different definitions is obviously too vague to be useful.
• vacuous – Some people complain that the precautionary principle does not lead to definite decisions. But that isn’t what it’s meant to do. Like the burden of proof, it is something we take into account when we are making decisions.
• incoherent – Others, evidently believing that the precautionary principle does lead to definite decisions, complain that because there can be risks on both sides of an action, it can “ban what it simultaneously requires” [3]. As above, the answer is that the role of the precautionary principle is to influence decision makers, not to do their job for them.
• too weak – The burden of proof in a trial does matter: people are acquitted who would be found guilty if criminal trials were conducted like civil proceedings. See below for examples of where the principle could have a real effect. And while some people say it is too weak, naturally there are others who claim it is …
• too strong – Even with the burden of proof on the prosecution, many people do get convicted. In the same way, even if we adopt the precautionary principle, progress will continue. Almost all innovations will proceed without being challenged, just as they do now.
• anti-scientific – On the contrary, the precautionary principle is all about science. For it to apply at all, there must be scientific grounds for concern, and it then requires that more science be done to allay (or not) those concerns. Sweeping but unsupported assurances that everything will be all right will not do.
• an excuse for protectionism – Anything that can lead to a restriction can be used as an excuse for protectionism. But at least here the innovator has an opportunity to counter the objection, by providing real evidence that the concerns are unwarranted, or at least outweighed by the advantages.
• these matters should be dealt with in the courts – This is really just another version of the misunderstanding that the precautionary principle is an algorithm for taking decisions, which it is not.
Critics of the precautionary principle often make up alarming stories of how it could stop progress dead in its tracks. Here is a typical example:
Given that the dynamics of science are not predictable, it is important to consider the dangers of excessive precaution. One of those is the threat to technological innovation. Imagine it is 1850 and the following version of the precautionary principle is adopted: no innovation shall be approved for use until it is proven safe, with the burden of proving safety placed on the technologist. Under this system, what would have happened to electricity, the internal combustion engine, plastics, pharmaceuticals, the Internet, the cell phone and so forth? [4]
Not only is this all a flight of fancy, it depends crucially on a “straw man” version of the precautionary principle. What the critics do not provide are examples where the precautionary principle was applied and resulted in losses. In contrast, there are many real cases where, if it had been applied, the outcome would have been far better than what actually happened.

Tobacco: The precautionary principle would not have prevented Sir Walter Raleigh from introducing tobacco into Europe, because at that time there was no evidence that it was harmful. We are told that in the late 1940s, when Sir Richard Doll and his colleagues were trying to find out why lung cancer had increased, they had no idea that the cause was tobacco. The most likely candidate seemed to be the great increase in motor traffic during the War. The epidemiological studies provided strong evidence that smoking was an important cause of lung cancer [5]. The sceptics, above all the tobacco industry, refused to accept this conclusion, demanding instead proof in the form of a clearly demonstrated mechanism. As a result, while it has been widely known since about 1950 that smoking is dangerous, it was only much later that governments started to act. Had the precautionary principle been applied, they would not have waited for the proof the industry was demanding. They would have started much earlier to increase the tax on tobacco, ban advertising, especially advertising aimed at children, not allow smoking in public buildings, and so on.

It is obviously impossible to calculate how many lives would have been saved if governments had not waited so long. Above all, we cannot say when they would have judged the evidence strong enough to justify going ahead over the objections of the
tobacco industry. But the WHO MPOWER report [6] estimates that in the twentieth century, tobacco killed 100 million people. If we suppose that only one per cent of those lives would have been saved if the precautionary principle had led countries to act sooner, that is still a million lives. That in itself is a sobering thought, but we can, if we prefer, convert it into monetary terms by using the usual risk assessment valuation of a life as six million US dollars. That gives a total cost of six trillion dollars, massive even when compared with the amounts involved in bailing out the banks.
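The back-of-envelope arithmetic here is simple enough to set out explicitly. The sketch below just multiplies out the figures quoted in the text; the one per cent fraction is the deliberately conservative assumption made above, and the six-million-dollar figure is the conventional "value of a statistical life" used in risk assessment:

```python
# Figures from the text: WHO MPOWER estimate of twentieth-century tobacco
# deaths, a deliberately conservative 1% assumed saved by earlier action,
# and the standard risk-assessment valuation of a life.
deaths_20th_century = 100_000_000
fraction_saved = 0.01
value_per_life_usd = 6_000_000

lives_saved = deaths_20th_century * fraction_saved
cost_usd = lives_saved * value_per_life_usd

print(f"lives saved: {lives_saved:,.0f}")        # 1,000,000
print(f"monetary equivalent: ${cost_usd:,.0f}")  # $6,000,000,000,000 (six trillion)
```

Even under this very conservative assumption, the total is of the same order as the sums involved in bailing out the banks.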
Asbestos: Most people probably believe that the danger of asbestos was only discovered 30 or 40 years ago and that action was taken as soon as it was. In fact, it was in London in 1898 that Lucy Deane, one of the first “Women Inspectors of Factories”, noticed that people who worked with asbestos suffered more serious ailments than those who worked in other dusty environments. She even found a reason why: asbestos particles are sharp [7]. For the next hundred years, governments and the asbestos industry kept insisting that there was no conclusive proof that asbestos was as dangerous as we now know it is. They also maintained that there was no alternative to it, though we now know that there is. They therefore only gradually restricted its use. As late as the 1970s it was being installed in buildings in the UK, and it was not until 1998 that it was finally banned altogether in the UK and France. A nice touch, that: a commemoration of the 100th anniversary of Lucy Deane’s report.

For asbestos, as for tobacco, we cannot really estimate how much was lost by ignoring the precautionary principle. We can, however, get an idea of the scale by noting that in 2000 it was estimated that there were about a quarter of a million deaths from mesothelioma yet to come in Europe alone. That converts to about one and a half trillion dollars, not including all the deaths in the twentieth century, or the rest of the world, or asbestosis or other costs. This is something to bear in mind the next time someone complains about the unreasonable cost of adopting the precautionary principle.

The Bradford Hill Criteria: The precautionary principle had not been formulated at the time that Sir Richard Doll’s results were published. Those involved in the research were, however, very conscious of the problem. They had what they considered to be convincing evidence that smoking causes lung cancer, but the government and the tobacco industry were refusing to accept it.
And while the sceptics had their own reasons for wanting not to believe the result, they also had some logic on their side. For while epidemiology can show that there is an association between two variables, that does not necessarily mean that one is the cause of the other. Something more is needed to establish causation. This led one of the investigators, Sir Austin Bradford Hill, a professor of medical statistics in the University of London, to produce what are now called the Bradford Hill criteria [8]. These seem to be very well known in the world of medicine but much less so more widely:
• How strong is the association? The death rate from lung cancer was over nine times as high in smokers as in non-smokers.
• Are the results consistent? By 1965 there had been 36 different inquiries, not all using the same methodology, and all had found an association between smoking and lung cancer.
• Is the phenomenon specific? Death rates for smokers are higher for many causes of death, but the increase is much greater for lung cancer, so there does appear to be a specific connection.
• Temporality: Did the purported cause occur before the effect? This is not always obvious, for instance in the case of diseases that take a very long time to become apparent.
• Dose response: The death rate from lung cancer increases with the number of cigarettes smoked.
• Plausibility: If we do not know the mechanism (if we did, we would not need these criteria), is there a plausible candidate?
• Coherence: If our present knowledge does not even suggest a plausible mechanism, does it actually rule out the possibility?
• Experiment: If people stop smoking, does the death rate from lung cancer fall?
• Analogy: Are there analogous examples? Since the effects of thalidomide and rubella became known, it has been much easier to make the case that some other birth defect could be due to a drug or a viral disease.
Bradford Hill himself insisted that what he was proposing was not a checklist where all the boxes have to be ticked. In any real situation, some of the criteria may not be met. For example, there is no dose response when you take a drug overdose: you either die or you don’t. What is deemed ‘plausible’ can also change over time. In the nineteenth century it was thought totally implausible that doctors not washing their hands could be responsible for the deaths of women in maternity wards. But the criteria do suggest the sorts of questions we should ask when we are faced with a prima facie case for hazard and we are trying to decide whether action is warranted.

A current example: Childhood leukaemia: There is a long-standing debate about whether children who live near nuclear power stations are more likely to develop leukaemia. There is evidence for clusters around certain nuclear installations, such as Sellafield, Aldermaston and Rosyth in the UK, but the numbers involved are small. This makes it hard to achieve statistical significance, and the lack of significance is often taken as proving that there is no effect, which of course it does not. The authorities also point out that the amount of radiation involved is generally believed to be far too small to have an effect, and so they have suggested other possible causes, generally to do with the influx of workers into an area that had previously had a small, stable, rural population, though no one seems to have proposed a plausible mechanism by which that could cause leukaemia.

Now a new study has been carried out by the German Bundesamt für Strahlenschutz (BfS). Instead of just looking at the number of cases of childhood leukaemia in the area around a power station, they carried out a detailed study both of children with leukaemia and of controls in the same area [9]. They found a significant correlation
between proximity to the reactor and the incidence of leukaemia. It has also been pointed out that the radiation from a reactor is significantly higher when it has been opened up for its annual maintenance, and there is evidence that this might have an effect on unborn children at a sensitive time in their development, manifesting itself a year or two later as leukaemia. Because what matters is the dispersal of radioactive gases, the radiation may not be evenly distributed over an area, so the dose at one location at a particular time may be considerably higher than the mean annual level for the entire region. Thus two more of the Bradford Hill criteria have been met: there is a dose response, and there is a plausible mechanism. In contrast, there is still no plausible mechanism for the hypothesis that the cause is something other than radiation. Surely this counts as a prima facie case and warrants further investigation. Yet in 2008 the UK government published a White Paper [10] laying out its proposals to build a new generation of nuclear power plants. This is what it had to say on the issue of clusters of childhood leukaemia:
2.107 During the course of our consultation in July 2007, a separate report identified that leukaemia rates were higher in children and young people living near nuclear facilities [11]. However, it concluded that there was no clear explanation for this and that further research is needed before firm conclusions can be drawn from the report. A report was also published by the German Federal Office for Radiation Protection on a study into childhood cancers in the vicinity of nuclear power stations in Germany [9]. The report concluded that whilst in Germany it believes that there is a correlation between the distance of the child’s home from the nearest nuclear power station and the risk of developing leukaemia, it did not follow that ionising radiation emitted by German nuclear power stations was the cause. Childhood cancer is also related to socio-economic factors and this does not seem to have been taken into account in the German study. The study also covers a relatively small sample in comparison to COMARE’s 11th report [12] which contains 32,000 cases.
According to the White Paper, because no mechanism for the increase in leukaemia rates has been demonstrated, there is no proof that radiation from nearby nuclear plants is responsible, and the epidemiological evidence from two separate studies (another of the Bradford Hill criteria satisfied) can therefore be ignored. This is a clear application of what we might call the “anti-precautionary principle”: any innovation must be permitted unless and until it can be proven to be unsafe. This has, of course, been the traditional view of the tobacco industry.

Evidence: Far from being anti-science, the precautionary principle relies on science at every stage. It does not come into play unless there is at least prima facie scientific evidence of a hazard, it requires scientific evidence to determine whether or not restrictions are justified, and, if they are, further scientific evidence might lead to their being lifted. Scientific evidence is also required to assess the benefits we may lose: do we need
GM food to feed the planet, will the lights go out all over Europe if we decide not to go for nuclear power, is there really no alternative to asbestos for brake linings?

Policy makers will therefore have to draw on scientific evidence, and this can be more problematic than it appears. Ordinarily, when we want to know about science, we ask scientists, and most of the time we can be confident that the answers we get will be as factual as they can be, bearing in mind the uncertainties inherent in science and the fact that in some of the most interesting areas there is not yet a consensus. In the sorts of issues to which the precautionary principle can apply, however, many of the scientists with the relevant knowledge are likely to be connected directly or indirectly with the innovation being considered. That does not mean that their advice should be ignored, but it should be treated with some caution, as we would treat any advice from someone with a vested interest. As a British government minister said in reply to a question in Parliament, “… where somebody is paying, one questions whether the research will be reflective of scientific rigour or not.” [13]

It has, for example, been shown in both pharmaceuticals and nutrition that research sponsored by industry is far more likely to produce results favourable to industry than research done by independent scientists [14-16]. That may be because of the way the experiments are designed, or because results not favourable to the industry are not published, or for other reasons, but the effect is beyond doubt. One would expect that, in the same way, the evidence given to governments and their regulators by industry-funded scientists will also tend to favour the interests of the companies that support them. This does seem to be the case, but so far there do not appear to have been any formal studies.

It is not just that the scientists who are consulted may be selective about the evidence they choose to present.
The evidence itself is selective. Research is expensive, and so the people who hold the purse strings have a major say in deciding which lines of research are followed up and which are not. In most of the cases where the precautionary principle is relevant, much of the research will have been funded by industry, either in their own laboratories or in universities or research institutions. Government funding too will have been largely directed towards supporting industry. Even those scientists who are free to choose their own topics are going to be attracted by the prospect of making a profit out of their work, and their institutions will be putting pressure on them to concentrate on research that will lead to major grants and to patents. Research in these areas is therefore largely directed towards the development of new products: new GMOs, new pharmaceuticals, new applications of nanotechnology, new nuclear power plants, and so on. Far less attention will be paid to studying the possible dangers. What is more, scientists who do carry out research into hazards, or who become aware of hazards in the course of research that was not specifically aimed at finding them, are likely to come under great pressure from industry [17-21]. A full account of the ways in which industry can ensure that the evidence on which decisions are made is biased in its favour is clearly beyond the scope of this article, but by way of illustration here is one example that has recently attracted attention.
Early in 2008, the US Environmental Protection Agency (EPA) invited comments ahead of a meeting it was holding on two proposals concerning GM crops. In response, 26 scientists, all of them experts in the area and none known to be opposed to GM, wrote to the EPA to complain that they were unable to do proper research in the area because anyone who buys GM seeds is required to sign a stewardship agreement. This forbids them not only from saving seeds from the crop to plant the following year, but also from carrying out any research without the express permission of the seed company. Some of the scientists had even obtained such permission, and then, when the results were not turning out as the company had hoped, had it withdrawn [22]. As a result, when the EPA or anyone else is trying to assess the benefits and hazards of any GM variety, or of GM crops in general, the only scientific evidence they will have will be what the biotechnology companies want them to have. In fact, that is almost the case already, often through refusals to release data on the grounds of “commercial confidentiality”, but the industry is trying to close the last loopholes.

Research into climate change provides another example of the problems of evidence. Most of the work was done by people such as meteorologists and oceanographers who were working for universities and for government establishments with adequate resources and no incentives or a priori bias one way or the other. Imagine what would have happened if all the experts, their equipment, their data and their supercomputers had belonged to the oil industry. When the American scientists had completed their report, however, the Bush administration altered the conclusions to suit the industries that are major producers or users of fossil fuels. Fortunately, the research was international, and they were unable to prevent the results from becoming known.
It is bound to be difficult for lay people to take decisions that are based on science when there is disagreement about what the evidence is and what implications can be drawn from it, but the problem occurs in other contexts as well. When technical issues are important in a court case, expert witnesses are called. Often their evidence is accepted without challenge. Sometimes it is not, and when that happens, the witnesses are cross-examined by the lawyers just as other witnesses are. Policy makers should do the same. Where there is disagreement about the science, they should require all the scientists they consult to explain precisely what evidence their statements are based on. Where there are opposing scientific opinions, each side should be encouraged to comment on the other’s submission. It is best for them to give evidence together, because a lay person may not know the key questions to ask, or whether a particular response is adequate, yet may still be able to judge at the end of the discussion which side has the better case. The ordinary citizens who serve on juries are expected to be able to do this, and so should regulators.

Non-scientists may not realise how much of a typical paper can be understood by someone with no expertise in the field. For example, a recent paper in the prestigious journal Science has the title “Suppression of Cotton Bollworm in Multiple Crops in China in Areas with Bt Toxin–Containing Cotton” [23]. In the abstract, the authors state “Our data suggest that Bt cotton not only controls H. armigera [the cotton
bollworm] on transgenic cotton designed to resist this pest but also may reduce its presence on other host crops and may decrease the need for insecticide sprays in general.” If, however, we read through to the last two sentences of the paper we find: “Nevertheless, as a result of decreased spraying of broad-spectrum pesticides for controlling cotton bollworm in Bt cotton fields, mirids have recently become key pests of cotton in China. Therefore, despite its value, Bt cotton should be considered only one component in the overall management of insect pests in the diversified cropping systems common throughout China.” A lay person reading this will make up his or her own mind about how effective Bt cotton is in reducing pest infestation and pesticide use.

It is not just lay people who should read the whole paper for themselves. Scientists do not read carefully every paper they cite in their work; indeed, they may never have looked at some of them at all, and know what they contain only through the abstract or a brief mention in another paper. As a result, what becomes part of the common knowledge base in the subject may not be what was actually found. This is yet another reason why policy makers should insist that the scientists who advise them provide the original evidence for what they say.

Equally, the public should be wary about taking at face value what the policy makers say about science. The Bush administration were not the only ones to seek to massage the evidence. The UK White Paper [10] quoted in the last section states that the BfS report “believes” there is a correlation, when in fact the investigators found a statistically significant correlation. It also points out that the BfS study used a smaller sample than COMARE, but neglects to mention that the analysis was more detailed and that it did find statistical significance. One need only read the BfS and COMARE reports [9,12] for oneself to get a better picture.
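The point about sample size and statistical significance can be illustrated with a short calculation. The numbers below are invented for illustration, not taken from the BfS or COMARE data: the same 75 per cent excess of cases over expectation is far from significant in a small sample, but overwhelmingly significant in a sample twenty-five times larger (a one-sided Poisson test, using only the standard library):

```python
import math

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam), summed directly in the tail."""
    total = 0.0
    # sum far enough beyond k for the remaining terms to be negligible
    for i in range(k, k + int(20 * math.sqrt(lam)) + 50):
        total += math.exp(i * math.log(lam) - lam - math.lgamma(i + 1))
    return total

# Illustrative numbers only: a 75% excess of observed over expected cases.
p_small = poisson_tail(7, 4.0)      # 7 cases where 4 were expected
p_large = poisson_tail(175, 100.0)  # same relative excess, 25x the counts

print(f"small sample:  p = {p_small:.3f}")   # well above 0.05: "no effect"?
print(f"large sample:  p = {p_large:.2e}")   # overwhelmingly significant
```

Failing to reach significance in the small sample proves nothing about the absence of an effect; it merely reflects the lack of statistical power, which is exactly the trap the White Paper falls into.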
Finally, when health and safety are involved, policy makers should not confine themselves to the peer-reviewed literature. Peer review is a useful part of the scientific process, but it has a number of serious weaknesses and cannot be the sole test of what is or is not science [24-26]. It is probably still the best way we have of deciding which papers should be published and become part of the scientific literature, but when there are plausible reports of risk to humans or the environment, the precautionary principle – and ordinary common sense – tells us we should not ignore them until some learned journal has decided to publish them [27].

Conclusion: By itself, the precautionary principle does not stop anything. What it does is prevent governments and regulators from deliberately ignoring a strong scientific case by using the excuse that there is no proof of danger. It prevents companies from insisting that they must be allowed to carry on until absolutely conclusive scientific proof is available. It would make it much more difficult for companies to demand damages from regulators, as the Ethyl Corporation was able to do when the Canadian government passed legislation banning the fuel additive MMT [28]. Because the precautionary principle shifts the burden of proof on to the innovator, and given the doubts about MMT (there was already a partial ban in the US), it would have been for Ethyl to show that it was safe.

If the precautionary principle were implemented, most innovations would proceed without interference, just as they do now. Some, however, would not. There would be a cost attached to that, as there is in all regulation, but history has shown us that the cost of ignoring warnings can be very great indeed.

Notes and References:
1. Science and Environmental Health Network (1998). The Wingspread Consensus Statement on the Precautionary Principle. Accessed 4/12/09.
2. European Commission (2000). Communication from the Commission on the Precautionary Principle.
3. Sunstein CR (2008). Throwing precaution to the wind. Boston Globe, 13 July, 2008.
4. Graham JD (2003). The Perils of the Precautionary Principle: Lessons from the American and European Experience. Speech at the Heritage Foundation Regulatory Forum. Accessed 4/12/09.
5. Doll R and Bradford Hill A (1950). Smoking and carcinoma of the lung. British Medical Journal 2, 739-748.
6. World Health Organisation (2008). WHO Report on the Global Tobacco Epidemic, 2008: The MPOWER Package. WHO, Geneva. ISBN 978 92 4 159628 2.
7. Harremoës P, Gee D, MacGarvin M, Stirling A, Keys J, Wynne B and Guedes Vaz S (2001). Late Lessons from Early Warnings: the Precautionary Principle 1896-2000. European Environment Agency, Copenhagen. ISBN 92-9167-323-4.
8. Bradford Hill A (1965). The environment and disease: Association or causation? Proc R Soc Med 58, 295-300.
9. Bundesamt für Strahlenschutz (2007). Background Information on the KiKK Study.
10. Department for Business, Enterprise & Regulatory Reform (2008). Meeting the Energy Challenge: A White Paper on Nuclear Power. The Stationery Office.
11. Baker PJ and Hoel DG (2007). Meta-analysis of standardized incidence and mortality rates of childhood leukemia in proximity to nuclear facilities. European Journal of Cancer Care 16, 355-363.
12. Committee on Medical Aspects of Radiation in the Environment (2006). Eleventh Report: The distribution of childhood leukaemia and other childhood cancers in Great Britain 1969-1993. Health Protection Agency.
13. Norris D (2009). Reply to oral question to [UK] Department of Environment, Food and Rural Affairs. Hansard, 29 October 2009.
14. Lesser LI, Ebbeling CB, Goozner M, Wypij D and Ludwig DS (2007). Relationship between funding source and conclusion among nutrition-related scientific articles. PLoS Med 4.
15. Davidson RA (1986). Source of funding and outcome of clinical trials. J. Gen. Intern. Med. 1, 155-158.
16. Lexchin J, Bero LA, Djulbegovic B and Clark O (2003). Pharmaceutical industry sponsorship and research outcome and quality: Systematic review. BMJ 326, 1167-1170.
17. Ho MW and Ryan A (1999). Pusztai Publishes Amidst Fresh Storm of Attack – The Sorry State of ‘Sound Science’. ISIS News (now Science in Society) 3.
18. Saunders PT and Ho MW (2001). Big Business = Bad Science? ISIS News (now Science in Society) 9, 11-12.
19. Saunders PT (2006). Actonel: Drug company keeps data from collaborating scientists. Science in Society 30, 48.
20. Saunders PT (2007). Actonel and the dog that did not bark in the night. Science in Society 36, 4-6.
21. Healy D (2004). Let Them Eat Prozac. New York University Press, New York and London. See also the review by PT Saunders (2008): The depressing side of medical science. Science in Society 39, 50-51.
22. Pollack A (2009). “Crop scientists say biotechnology companies are thwarting research”. New York Times, 23 February 2009. (Refers to EPA Docket EPA-HQ-OPP-2008-0836.)
23. Wu KM, Lu YH, Feng HQ, Jiang YY and Zhao JZ (2008). Suppression of Cotton Bollworm in Multiple Crops in China in Areas with Bt Toxin–Containing Cotton. Science 321, 1676-1678.
24. Saunders PT (2008). Peer review under the spotlight. Science in Society 38, 31-32.
25. Peters D and Ceci S (1982). Peer-review practices of psychological journals: the fate of submitted articles, submitted again. Behavioral and Brain Sciences 5, 187-255.
26. Smith R (2006). Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine 99, 178-182.
27. This is not a hypothetical point. In 2007, the UK Food Standards Agency, on receiving a report on research it had itself commissioned, refused to do anything until the work had been published. See PT Saunders (2008): Food colouring confirmed bad for children: Food Standards Agency refuses to act. Science in Society 36, 30-31.
28. See for example the bill introduced into the California State Legislature by Senator
The suits mentioned in the bill arise under the North American Free Trade Agreement, not the laws of Canada, Mexico or the United States. This demonstrates how the precautionary principle is especially relevant in international dealings, where trade has been given a higher status than other values.