
Risk / Risk Analysis

The idea of risk combines two notions: the likelihood of something happening, or failing to happen, and the undesirability or disvalue of that outcome. Utilitarian decision-making involves three tasks: a) identifying the range of courses of action available in a given context; b) assigning utility values to the foreseeable outcomes of each course; and c) estimating the probability of each outcome. Most commonly such thinking focuses on positive values for utility, but negative utilitarianism argues for minimising negatives (inutilities) and is therefore more inclined to think in terms of risks than of opportunities. Risk analysis is now a common practice in the corporate, government and financial sectors, particularly in relation to environmental, economic and social impacts. Apart from the obvious challenge of predicting outcomes, there are several ethical and philosophical questions about risk analysis. First, the nature of the values to be considered: is it happiness, health, length of life, opportunity, or all of these and others besides? Second, the constituency over which risk is defined: is it those immediately affected or also those remotely affected, and should there be discounting for distance? Should it include future as well as existing generations? This last question introduces the 'person-affecting' issue, since different courses will lead to different people, and different numbers of people, being born. Third, there is the issue of the inevitability of risk and negative outcomes. No course of action is risk free, and the selection of which risks to consider, and the setting aside or neglect of others, may indicate question-begging biases.
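A minimal sketch of tasks a) to c) in code, with invented courses of action, utilities and probabilities; it also shows how a negative-utilitarian rule that only minimises expected disvalue can diverge from straightforward expected-utility maximisation:

    # Illustrative only: hypothetical courses of action, each listed with
    # (probability, utility) pairs for its foreseeable outcomes -- tasks a), b), c).
    courses = {
        "A": [(0.95, 100), (0.05, -500)],  # high expected gain, small chance of severe harm
        "B": [(0.90, 50), (0.10, -20)],    # modest gain, mild downside
        "C": [(1.00, 0)],                  # do nothing
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    def expected_disutility(outcomes):
        # Negative utilitarianism: attend only to the harms (negative utilities).
        return sum(p * -u for p, u in outcomes if u < 0)

    best = max(courses, key=lambda c: expected_utility(courses[c]))                # "A" (EU = 70)
    least_harmful = min(courses, key=lambda c: expected_disutility(courses[c]))    # "C" (no expected harm)
    print(best, least_harmful)  # the two rules recommend different courses here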

  • https://scholar.lib.vt.edu
    • PDF
    • Suggested

Philosophical Perspectives on Risk
Sven Ove Hansson, Royal Institute of Technology, Stockholm
Techné 8:1, Fall 2004

The Concept of Risk

In non-technical contexts, the word "risk" refers, often rather vaguely, to situations in which it is possible but not certain that some undesirable event will occur. In technical contexts, the word has many uses and specialized meanings. The most common ones are the following:

(1) risk = an unwanted event which may or may not occur.
(2) risk = the cause of an unwanted event which may or may not occur.
(3) risk = the probability of an unwanted event which may or may not occur.
(4) risk = the statistical expectation value of unwanted events which may or may not occur.
(5) risk = the fact that a decision is made under conditions of known probabilities ("decision under risk").

Examples: Lung cancer is one of the major risks (1) that affect smokers. Smoking also causes other diseases, and it is by far the most important health risk (2) in industrialized countries. There is evidence that the risk (3) of having one's life shortened by smoking is as high as 50%. The total risk (4) from smoking is higher than that from any other cause that has been analyzed by risk analysts. The probabilities of various smoking-related diseases are so well known that a decision whether or not to smoke can be classified as a decision under risk (5).

The third and fourth of these meanings are the ones most commonly used by engineers. The fourth, in particular, is the standard meaning of "risk" in professional risk analysis. In that discipline, "risk" often denotes a numerical representation of severity that is obtained by multiplying the probability of an unwanted event with a measure of its disvalue (negative value). When, for instance, the risks associated with nuclear energy are compared in numerical terms to those of fossil fuels, "risk" is usually taken in this sense. Indeed, all the major variants of technological risk analysis are based on one and the same formal model of risk, namely objectivist expected utility, which combines objectivist probabilities with objectivist utilities (Hansson 1993). By an objectivist probability is meant a probability that is interpreted as an objective frequency or propensity, and thus not (merely) as a degree of belief. Similarly, a utility assignment is objectivist if it is interpreted as (a linear function of) some objective quantity.

It is often taken for granted that this sense of risk is the only one that we need. In studies of "risk perception," the "subjective risk" reported by the subjects is compared to the "objective risk," which is identified with the value obtained in this way. However, from a philosophical point of view it is far from obvious that this model of risk captures all that is essential. I will try to show why it is insufficient and how it should be supplemented. In doing this, I will also show how the issue of risk gives rise to important new problems for several areas of philosophy, such as epistemology, philosophy of science, decision theory and, in particular, ethics.
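The difference between the third and fourth senses above can be made concrete with a small sketch (the figures are invented): a hazard can be the larger risk in sense (3) while being the smaller risk in sense (4), which is why the choice of sense matters when, say, energy technologies are compared.

    # Hypothetical figures: "risk" in sense (3), the probability of the unwanted event,
    # versus sense (4), the expectation value (probability multiplied by disvalue).
    hazards = {
        # name: (probability of the unwanted event, disvalue if it occurs)
        "frequent minor failure": (0.10, 2),
        "rare major accident": (0.001, 1000),
    }

    for name, (p, disvalue) in hazards.items():
        risk_sense_3 = p             # sense (3)
        risk_sense_4 = p * disvalue  # sense (4)
        print(name, risk_sense_3, risk_sense_4)

    # Sense (3) ranks the frequent minor failure higher (0.10 > 0.001);
    # sense (4) ranks the rare major accident higher (1.0 > 0.2).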
Let us begin with epistemology.

Epistemology

In all the senses of "risk" referred to above, the use of this term is based on a subtle combination of knowledge and uncertainty. When there is a risk, there must be something that is unknown or has an unknown outcome; hence there must be uncertainty. But for this uncertainty to constitute a risk for us, something must be known about it. This combination of knowledge and lack thereof contributes to making issues of risk so difficult to come to grips with in practical technological applications. It also gives rise to important philosophical issues for the theory of knowledge.

Risk and Uncertainty

In decision theory, lack of knowledge is divided into the two major categories "risk" and "uncertainty". In decision-making under risk, we know what the possible outcomes are and what their probabilities are. Perhaps a more adequate term for this would be "decision-making under known probabilities". In decision-making under uncertainty, probabilities are either not known at all or known only with insufficient precision.

Only very rarely are probabilities known with certainty. Therefore, strictly speaking, the only clear-cut cases of "risk" (known probabilities) seem to be idealized textbook cases that refer to devices such as dice or coins that are supposed to be known with certainty to be fair. More typical real-life cases are characterized by (epistemic) uncertainty that does not, primarily, come with exact probabilities. Hence, almost all decisions are decisions "under uncertainty". To the extent that we make decisions "under risk," this does not mean that these decisions are made under conditions of completely known probabilities. Rather, it means that we have chosen to simplify our description of these decision problems by treating them as cases of known probabilities.

It is common to treat cases where experts have provided exact probabilities as cases of decision-making under risk. And of course, to give just one example, if you are absolutely certain that current estimates of the effects of low-dose radiation are accurate, then decision-making referring to such exposure may be decision-making under risk. However, if you are less than fully convinced, then this too is a case of decision-making under uncertainty. Experts are known to have made mistakes, and a rational decision-maker should take into account the possibility that this may happen again. Experts often do not realize that for the non-expert, the possibility of the experts being wrong may very well be a dominant part of the risk (in the informal sense of the word) involved, e.g., in the use of a complex technology. When there is a wide divergence between the views of experts and those of the public, this is certainly a sign of failure in the social system for division of intellectual labor, but it does not necessarily follow that this failure is located within the minds of the non-experts who distrust the experts. It cannot be a criterion of rationality that one takes experts for infallible. Therefore, even when experts talk about risk, and give exact probability statements, the real issue for most of us may nevertheless be one of epistemic uncertainty.
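The practical difference between treating a decision as one "under risk" and acknowledging the underlying epistemic uncertainty can be pictured with a small sketch (all figures are hypothetical): once we only trust a probability estimate to within some bounds, an expected-harm figure becomes an interval rather than a single number.

    # Hypothetical figures only.
    harm = 1_000_000  # disvalue of the unwanted event, in arbitrary units

    # Decision treated as being "under risk": the probability is taken as known.
    p_known = 1e-4
    expected_harm = p_known * harm  # 100.0

    # Decision under (epistemic) uncertainty: the expert estimate is only trusted to
    # within a factor of ten, so the expected harm is an interval, not a point value.
    p_low, p_high = 1e-5, 1e-3
    expected_harm_interval = (p_low * harm, p_high * harm)  # (10.0, 1000.0)

    print(expected_harm, expected_harm_interval)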
The Reduction of Uncertainty

One possible approach to all this epistemic uncertainty, and perhaps at first hand the most attractive one, is that we should always take all the uncertainty there is into account, and that all decisions should be treated as decisions under epistemic uncertainty. However, attractive though this approach may seem, it is not in practice feasible, since human cognitive powers are insufficient to handle such a mass of unsettled issues. In order to grasp complex situations, we therefore reduce the prevailing epistemic uncertainty to probabilities ("There is a 90% chance that it will rain tomorrow") or even to full beliefs ("It will rain tomorrow"). This process of uncertainty-reduction, or "fixation of belief" (Peirce 1934), helps us to achieve a cognitively manageable representation of the world, and thus increases our competence and efficiency as decision-makers.

Another possible approach to uncertainty is provided by Bayesian decision theory. According to the Bayesian ideal of rationality, all statements about the world should have a definite probability value assigned to them. Non-logical propositions should never be fully believed, but only assigned high probabilities. Hence, epistemic uncertainty is always reduced to probability, but never to full belief. The resulting belief system is a complex web of interconnected probability statements (Jeffrey 1956).

[Figure 1. The reduction of epistemic uncertainty. The left column ("our predicament") represents our situation as it looks in practice: most of our beliefs are uncertain, and only in a few cases do we have certainty or precise probabilistic knowledge ("risk"). The middle column represents the Bayesian simplification, in which uncertainty is reduced to risk. The right column ("what we do") represents the simplification that we perform in practice, treating many of our uncertain beliefs provisionally as if they were certain knowledge.]

In practice, the degree of uncertainty-reduction provided by Bayesianism is insufficient to achieve a manageable belief system. Our cognitive limitations are so severe that massive reductions to full beliefs (certainty) are indispensable if we wish to be capable of reaching conclusions and making decisions. As one example of this, since all measurement practices are theory-laden, no reasonably simple account of measurement would be available in a Bayesian approach (McLaughlin 1970). On the other hand, neither can Bayesianism account for the fact that we also live with some unreduced epistemic uncertainties.

In my view, it is a crucial drawback of the Bayesian model that it does not take into account the cognitive limitations of actual human beings. Of course, we may wish to reflect on how a rational being with unlimited cognitive capabilities should behave, but these are speculations with only limited relevance for actual human beings. A much more constructive approach is to discuss how a rational being with limited cognitive capabilities can make rational use of these capabilities. In practice, in order to grasp complex situations, we need to reduce the prevailing epistemic uncertainty not only to probabilities but also to full beliefs. Such reductions will have to be temporary, so that we can revert from full belief to probability, or even to uncertainty, when there are reasons to do so. This is how we act in practice, and it also seems to be the only sensible thing to do, but we do not yet have a theory that clarifies the nature of this process (see Figure 1).

There are important lessons for risk research to draw from this. In risk analysis, it is mostly taken for granted that a rational individual's attitude to uncertain possibilities should be representable in terms of probability assignments.
Due to our cognitive limitations, this assumption is not always correct. In many instances, cruder attitudes such as "This will not happen" or "It is possible that this may happen" may be more serviceable. Transitions between probabilistic and non-probabilistic attitudes to risk seem to be worth careful investigation, both from an empirical and a normative point of view. I believe, for instance, that such transitions are common in the process of technological design. An engineer designing a new product typically questions some parts of the construction at a time, while at least temporarily taking the reliability of the other parts for granted. This way of reasoning keeps uncertainty at a level at which it can be handled.

The process of uncertainty reduction is not a value-free or "purely epistemic" process. We are more reluctant to ignore remote or improbable alternatives when the stakes are high. Suppose that, when searching for mislaid ammunition, I open and carefully check a revolver, concluding that it is empty. I may then say that I know that the revolver is unloaded. However, if somebody then points the revolver at my head asking, "May I then pull the trigger?", it would not be unreasonable or inconsistent of me to say "No," and to use the language of probability or uncertainty when explaining why. In this case, we revert from full belief to uncertainty when the stakes involved are changed.
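One way to picture the stake-sensitivity illustrated by the revolver example is as an acceptance rule whose tolerance depends on the cost of being wrong. The sketch below is only an illustration of that idea, with invented numbers and an invented threshold, not a reconstruction of Hansson's own proposal:

    # A stake-sensitive acceptance rule (illustrative numbers only): a residual
    # probability is reduced to the full belief "this will not happen" only while
    # the expected cost of being wrong stays negligible.
    def accept_as_certain(p_error, cost_if_wrong, tolerable_expected_cost=1.0):
        """Treat the proposition as settled iff the expected cost of error is tolerable."""
        return p_error * cost_if_wrong <= tolerable_expected_cost

    p_loaded = 1e-6  # residual probability after carefully checking the revolver

    # Searching a drawer for ammunition: being wrong costs little, so I "know" it is empty.
    print(accept_as_certain(p_loaded, cost_if_wrong=10))      # True

    # Someone points it at my head: the stakes explode, so we revert to probability talk.
    print(accept_as_certain(p_loaded, cost_if_wrong=10**8))   # False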

  • http://www.bioethics.org.au
    • PDF
    • Suggested

The ethics of risk

What is risk? Strictly, two categories have been identified: 'risk' (based upon known probabilities) and 'uncertainty' (epistemic: based upon what cannot be clearly identified)¹. While life typically contains a measure of uncertainty, certain probabilities can be quantified, much like the tossing of a coin or the roll of a die.

For each individual, even though they live with quantifiable 'risk', life is always risky, and generally people have difficulty resolving risk-benefit conflicts. "People's perceptions frequently fail to match up with the actual dangers risks pose. Few people have a 'feel' for what a chance of dying, say a chance of one in a million, really means... We tend to emphasize low probabilities and underestimate those that are high"². The use of statistics is one way to overcome this uncertainty; however, denial and overconfidence are more common ways to deal with it³.

The main concern for regulatory bodies is to determine what risks are acceptable, both ethically and politically (what is ethically acceptable, and what will the public accept?). While realistically risk management must be undertaken by regulatory bodies (a certain amount of paternalism seems unavoidable), lack of trust is a critical factor underlying controversy over technological hazards. The playing field is tilted towards distrust, and 'the system destroys trust'⁵. Even being honest is more difficult than it appears, and scientists and policy makers who point out the gamble often taken in assessing risk are frequently resented for the anxiety their frankness provokes⁶.

One of the philosophical theories proposed to address risk assessment is utilitarianism. Notwithstanding the fact that utilitarianism has been subject to devastating philosophical critique⁷, it is still strongly favoured by many bioethicists. But where the literature on risk is concerned, it seems to be generally agreed that utilitarianism (applied as cost-benefit analysis) is not an appropriate way to address this problem⁸. Five main reasons have been identified:
● it is not democratic
● it ignores unequal distribution of possible outcomes⁹ (see the numerical sketch after this list)
● it is value-blind¹⁰
● it attempts to measure what human life is worth¹¹
● it tries to reach a definite conclusion from insufficient information¹²
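The distributional objection in the list above is easy to state numerically; the following sketch uses invented figures to show two policies that an aggregate cost-benefit tally cannot tell apart:

    # Invented figures: two policies with the same aggregate net benefit, one of which
    # concentrates disastrous losses on a small group.
    population = 1_000

    benefits_a = [10] * population            # Policy A: everyone gains 10 units
    benefits_b = [20] * 990 + [-980] * 10     # Policy B: 990 gain 20 each, 10 lose 980 each

    total_a = sum(benefits_a)  # 10,000
    total_b = sum(benefits_b)  # 19,800 - 9,800 = 10,000

    print(total_a == total_b)  # True: the totals are identical, so a pure cost-benefit
                               # comparison is blind to the very unequal distribution in B.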
As an alternative to utilitarianism, contract theories have also been applied to risk assessment, and have been criticised on other grounds, whether consent is actual or hypothetically derived. Criticism has been made because:
● obtaining a genuine consent is unrealistic and seeking it could create a stalemate¹³
● there can be too much faith in contracts, leading to the undesirable consequence that "only consent can turn a potential violation of a right into a permissible act"¹⁴

Even if an ethic of 'natural rights' is used as a basis for risk regulation, the fact is that harms vary qualitatively and their probabilities vary quantitatively. Some writers view this as insurmountable and therefore sufficient grounds on which to dismiss a natural rights theory of risk. This problem is sometimes called causal dilution¹⁵: "Given that a certain moral theory prohibits a certain action because it has the property P, under what conditions should a generalized version of the same theory prohibit an action because it possibly, but not certainly, has the property P?"¹⁶

Altham rejects utilitarian, contractual, and natural rights theories, citing fatal problems with each. His solution is to have no principle on which to base risk management, but simply to find the level of risk at which people do not feel anxious (based upon their own assessment of their quality of life) regardless of the facts; their desire to reach agreement will cause them to accept the risk. For him risk is therefore entirely subjective.

McKerlie¹⁷ argues that moral views based on rights cannot deal with risk. Not only is a hostile intention not necessary for a rights violation, but "an action that turns out to be harmless may nevertheless have been prohibited, while an action that causes harm may nevertheless have been permissible", e.g. driving down one's street versus Russian roulette. He also rejects the view that rights correspond with autonomy¹⁸. McKerlie also argues that "In many examples the rights view chooses an action that will not lead to the best outcome. For example, it will tell us not to kill an innocent person even if that would somehow save the lives of many others." In saying this, of course, McKerlie does not identify a problem with natural law reasoning per se. He just assumes we have the right to do whatever is necessary to achieve what, on other grounds, he wants the outcome to be.

It also appears that 'scientific' risk assessment cannot be separated from either the values or the management issues. "Analysis cannot replace process, only inform it"¹⁹. And science cannot pretend to be 'objective'²⁰. What is studied, how the research is designed, and how it is interpreted and reported are all value-laden²¹. This leads to three corollaries:
● disagreement about risk will not disappear in the face of 'scientific evidence'²²
● 'public education' is not the sole answer²³
● we don't need improved risk assessment techniques; we need greater participation in decision-making²⁴

Most writers therefore suggest that ethical guidelines, or values, should be established first. The process of risk regulation should be, at least to some extent, democratic, because the procedure of decision-making is as important as the outcome²⁵. Therefore much research into the public's 'perceptions of risk' has been and is being conducted.

For example, a comparison of rival theories of risk perception considers the following²⁶:
1. knowledge theory: people perceive things to be dangerous because they know them to be dangerous.
2. personality theory: some love risk-taking, others are averse to it.
3. economic theory: (a) the rich are more willing to take risks because they benefit more and are shielded from the consequences, while the poor feel the opposite; (b) "post-materialist": living standards have improved, so the rich are more interested in social relations and better health.
4. political theory: struggles over interests, i.e. explanatory power lies in social and demographic characteristics.
5. cultural theory: individuals choose what to fear (and how much to fear it) in order to support their way of life. (Renn categorises people as hierarchists, individualists or egalitarians.)

Wildavsky and Dake conclude that cultural theory provides the best prediction of how people will perceive risk in a variety of situations. This corresponds with Teuber's conclusion that "we are not only end-oriented; we are also ideal-oriented. We do not care just about where we end up; we care about the kind of people we have to become in order to end up in one place or another"²⁷.

An evaluation of risk cannot just measure it against the benefits. Other characteristics are also important: control, familiarity, knowledge, and immediacy of benefits²⁸. For example, voluntariness and control are important: people will accept more risk if they perceive themselves to have more control (e.g. the "irrational" preference for driving over flying)²⁹. But should the fact that people have double standards for different types of risk (feelings of dread, unfamiliarity, involuntary acceptance, control) be imposed upon industry?³⁰

So how do we determine acceptable levels of risk? Considerations of policy regarding risks are highly complex. The acceptability of risk is a relative concept and involves consideration of different factors. Considerations in these judgments may include: the certainty and severity of the risk; the reversibility of the health effect; the knowledge or familiarity of the risk; whether the risk is voluntarily accepted or involuntarily imposed; whether individuals are compensated for their exposure to the risk; the advantages of the activity; and the risks and advantages of any alternatives³¹.

Some authors have compared a variety of ways to answer the question "which risks are acceptable?"³²:
a) cost-benefit analysis
b) revealed preferences, based on behaviour (assumes people have information and can use it optimally)
c) expressed preferences, obtained by directly asking people what they prefer (considered more democratic)
d) natural standards ('biological wisdom': assuming that the optimal level of exposure is the one under which the species evolved)
e) multiple hazards (considering many hazards at once, and therefore needing to prioritise)
f) facing political realities (you cannot please everyone at once)
g) muddling through intelligently (no approach is clearly superior, so careful consideration should be given to all aspects, and good analysis should be insightful but not necessarily conclusive)
h) a combined approach (using the various approaches well enough in combination so that they complement one another's strengths rather than compound each other's weaknesses); this is the one recommended by these authors.

Some consider that muddling through may be the only ethical way to make choices about public risk³³. Fischhoff presents a detailed description of 'orderly muddling' in risk regulation³⁴. He and others suggest that the reasonable person standard could be used to determine generally acceptable tradeoffs, and that acceptable tradeoffs must be ones that citizens endorse in principle (rather than by actual or hypothetical consent)³⁵.

IMPLICATIONS FOR GMOS

There has been a tendency for a 'Mexican stand-off' between companies involved in experiments with GMOs and large sections of the community that seem to distrust the safety of, and necessity for, these developments. Coming from the scientific and industry side, the developmental stages in risk management identified by Fischhoff seem to have been applied so far:
● All we have to do is get the numbers right
● All we have to do is tell the public the numbers
● All we have to do is explain what we mean by the numbers
● All we have to do is show the public that they have accepted similar risks in the past
● All we have to do is show the public that it's a good deal for them
● All we have to do is treat the public nicely

Footnotes:
1 Hansson S O, "What is Philosophy of Risk?", Theoria 62 (1996), pp 169-186
2 Teuber A, "Justifying Risk", Daedalus 119 (1990), pp 235-253
3 Slovic P, Fischhoff B and Lichtenstein S, "Cognitive Processes and Societal Risk Taking", in Slovic P (ed.), The Perception of Risk, Earthscan, London, 2000, pp 32-50
4 Slovic P, "Perceived Risk, Trust and Democracy", in Slovic P (ed.), The Perception of Risk, Earthscan, London, 2000, pp 316-326
5 ibid
6 Slovic, Fischhoff and Lichtenstein (2000) op cit
7 Cf Smart J J C and Williams B, Utilitarianism: For and Against, Cambridge University Press, Cambridge, 1990; Williams B, Ethics and the Limits of Philosophy, Fontana Press, London, 1985
8 Fischhoff B, "Acceptable Risk: A Conceptual Proposal", Risk: Health, Safety and the Environment 5 (1), 1994
9 ibid (for example, benefits for many but disastrous consequences for a few)
10 Baron D S, "The Abuses of Risk Assessment", in Waterstone M (ed.), Risk and Society: The Interaction of Science, Technology and Public Policy, Kluwer Academic Publishers, 1992, pp 173-177
11 Teuber (1990) op cit; the problem of saving lives immediately at risk
12 Hansson S O, "Philosophical Perspectives on Risk", Ambio 28 (6), September 1999, pp 539-542; also Hansson S O, "The False Promises of Risk Assessment", Ratio 6 (1), June 1993, pp 16-26
16 Fischhoff (1994) op cit; Hansson S O, "What is Philosophy of Risk?", Theoria 62 (1996), pp 169-186
17 Teuber (1990) op cit; Hansson (1999) op cit
18 ibid
19 Fischhoff (1994) op cit
20 Thompson P B, "Risk Objectivism and Risk Subjectivism: When are Risks Real?", Risk: Health, Safety and the Environment 1:3 (1989); also at www.fplc.edu/risk/vol1/winter/thompson.htm
21 For example Cranor C, "Scientific Conventions, Ethics and Legal Institutions", Risk: Health, Safety and the Environment 1 (1989): 155; presentation given at Symposium on Public Participation in Risk Management, www.fplc.edu/risk/vol1/spring/cranor.htm; the ethics of confidence and type I and II errors
22 Slovic, Fischhoff and Lichtenstein (2000) op cit
23 Freudenberg W R and Rursch J A, "The risks of 'putting the numbers in context': a cautionary tale", in Löfstedt R and Frewer L, The Earthscan Reader in Risk and Modern Society, Earthscan, London, 1998, pp 77-90
24 Teuber (1990) op cit
25 Renn O, "Concepts of risk: a classification", in Krimsky S and Golding D (eds), Social Theories of Risk, Praeger, London, 1992, ch 3, pp 53-79; Fischhoff (1994) op cit
26 Wildavsky A and Dake K, "Theories of risk perception: who fears what and why?", Daedalus 119 (1990), pp 41-60
27 Teuber (1990) op cit
28 Slovic (2000) op cit
29 Renn (1992) op cit; Slovic (2000) op cit
30 Fischhoff (1994) op cit
31 53 Fed. Reg. 28,513, cited in Fischhoff (1994) op cit
32 Slovic, Fischhoff and Lichtenstein (2000) op cit
33 Teuber (1990) op cit
34 Fischhoff (1994) op cit
35 Fischhoff (1994) op cit; Thompson (1989) op cit
Also cited: Altham J E J, "Ethics of Risk", Proceedings of the Aristotelian Society 1984, pp 15-29; McKerlie D, "Rights and Risk", Canadian Journal of Philosophy 16 (2), June 1986, pp 239-252

  • https://engagedscholarship.csuohio.edu
    • PDF
    • Suggested

Cleveland State Law Review, Volume 38, Issue 1 (1990), Symposium: Natural Law and Legal Reasoning. John Finnis (Oxford University), "Allocating Risks and Suffering: Some Hidden Traps", 38 Clev. St. L. Rev. 193 (1990), available at https://engagedscholarship.csuohio.edu/clevstlrev/vol38/iss1/13

ALLOCATING RISKS AND SUFFERING: SOME HIDDEN TRAPS
JOHN FINNIS*

I

In his first lectures on jurisprudence, Adam Smith told his students that the imperfect rights which are the subject of distributive justice do not properly belong to jurisprudence. For they fall not under the jurisdiction of the laws but rather under "a system of morals." "We are therefore in what follows to confine ourselves entirely to the perfect rights, and what is called commutative justice."¹ Of course, he did not in fact confine himself to commutative justice, but discoursed broadly on the foundations of government, on family and slavery, and on the wealth of nations ("national opulence" as he then called it) which can meet the natural wants of mankind by a division of labor based on the industry of the people and proportioned to an appropriately regulated system of commerce. All this as Jurisprudence (and for students aged from 14 to 16)!

His The Theory of the Moral Sentiments, published three years earlier in 1759, convincingly demonstrates Adam Smith's real familiarity with the classics. Yet when he speaks of "distributive justice," to dismiss it from Jurisprudence, he uses the term quite differently from Aristotle.² For Aristotle, distributive justice concerns the problem of allocating the community's common stock of resources, opportunities, profits and advantages, roles and offices, responsibilities, taxes and other burdens. In Adam Smith's formal treatments of justice, this whole problem has disappeared, along with the corresponding virtue of Aristotelian distributive justice. Smith's "distributive justice" (and hereabouts he wishes to be relying on Grotius, Pufendorf and his own Scottish philosophical mentor Hutcheson, as well as his admired friend David Hume) concerns only such matters as the weak, imperfect and metaphorical "right" of brilliant or learned people to be praised, or the right of beggars to seek other people's charity (which is not a right to be given anything, nor indeed an enforceable right even to beg).

Adam Smith's jurisprudence is crippled by inattention to distributive justice, to the allocation of the common stock of goods and roles (with the risks incident to them). But in our day jurisprudence not only risks forgetting commutative justice, but also risks misunderstanding (and mislocating the rationality of) the principles for resolving the problems of distributive justice.

II

The economic analysis of which Adam Smith is a principal founder is helpful in practical reasoning about problems of justice precisely insofar as it systematically calls attention to the side-effects of individual choices and actions and behavior. Adam Smith's interest in side-effects is intense and pointed even in his Theory of the Moral Sentiments, published 17 years before his The Wealth of Nations. In a brilliant chapter of the earlier treatise he identifies the way in which admiration for technical accomplishment and potential utility overwhelms reasonableness in human action: the way in which people, in my jargon, confuse capabilities and attainments in the fourth order with reasonable goals and fulfillments in the third.³ Anticipating the age of mail-order electronic gadgetry, gentlemen of his time could be found with "their pockets stuffed with little conveniences." "How many people ruin themselves by laying out money on trinkets of frivolous utility? What pleases these lovers of toys, is not so much the utility as the aptness of the machines which are fitted to promote it."⁴ And in this sort of confusion and misdirection of sentiment and of purpose, Adam Smith sees something which is "often the secret motive of the most serious and important pursuits of both private and public life."⁵

Seen without the distorting influence of this confusion, power and riches are nothing but "enormous and operose machines contrived to produce a few trifling conveniences to the body..."⁶ But it is good that nature induces in us this sort of confusion. For "It is this deception which arouses and keeps in continual motion the industry of mankind."⁷ For, from the heap of wealth produced by this industry in the deluded pursuit of riches and power,

the rich only select... what is most precious and agreeable. They consume little more than the poor; and in spite of their natural selfishness and rapacity, though they mean only their own conveniency, though the sole end which they propose from the labors of all the thousands whom they employ be the gratification of their own vain and insatiable desires, they divide with the poor the produce of all their improvements. They are led by an invisible hand to make nearly the same distribution of the necessaries of life which would have been made had the earth been divided into equal portions among all its inhabitants; and thus, without intending it, without knowing it, advance the interest of the society... When providence divided the earth among a few lordly masters, it neither forgot nor abandoned those who had been left out of the partition. These, too, enjoy their share of all that it produces. In what constitutes the real happiness of human life, they are in no respect inferior to those who would seem so much above them.⁸

Here the invisible hand accomplishes what might have been the work of the missing distributive justice.

* Professor of Law and Legal Philosophy, Oxford University.
¹ A. SMITH, LECTURES ON JURISPRUDENCE 9 (R. Meek, D. Raphael, & P. Stein eds. 1978).
² A. SMITH, THE THEORY OF THE MORAL SENTIMENTS pt. VII, § ii, ch. 1, 436 (West ed. 1976) [hereinafter A. SMITH].
³ On the four orders of reality in which human life is lived, see Finnis, Natural Law and Legal Reasoning, 38 Clev. St. L. Rev. 1, 8-9 (1990).
⁴ A. SMITH, supra note 2, at pt. IV, ch. 1, 299.
⁵ Id. This use of "motive" is not to be confused with reasons which motivate by entering into conscious deliberation and choice.
⁶ Id. at 302.
⁷ Id. at 303.
⁸ Id. at 304-305 (emphasis added).
In The Wealth of Nations, the same or a similar "hand" accomplishes rather the initial productive work of enhancing (domestic, national) wealth: the individual engaging in or, rather, directing production "intends only his own gain," but "he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention."⁹ In this later, more famous passage, Adam Smith has qualified the reference to unawareness of the side-effects ("without knowing it" is replaced by: not knowing "how much he is promoting it"); he now focuses rather on what is essential: the contrast with intention. A side-effect, as I use the term, simply is an effect not intended, not chosen either as end or as means, however much it may have been foreseen by the one who intends, chooses and acts.

But you will have noticed Adam Smith's optimistic partiality in attending to the side-effects of production. Not only is he sublimely optimistic about the automaticity of the trickle-down benefits to the poor. He is also entirely inattentive to the question of harmful side-effects of the processes of production, distribution and consumption by the "vain and insatiable" wealthy. Doubtless this inattention is little more than one aspect of Adam Smith's failure to anticipate certain important aspects of the impending industrial revolution. Still, it is notable. If providence, without human attention or intention, is allocating to the poor a share in the wealth produced at the direction of the rich, it is also allocating to them an abundant share in the environmental degradation and risk of environmental catastrophe engendered by the economic and security policies of the rich(er).

The brilliance of Adam Smith's first account of the invisible hand consists in his transformation of the classical reasons for denying that wealth and power are the point of human existence. Without rejecting the classical critique of confusing mere means with intrinsic ends, Smith deepens it by identifying the attraction of these means as an admiration for excellence ("aptness"), but excellence in an order (I call it the technical) which is not to be confused with the existential order of human choices oriented by intrinsic human goods. But the irony is this: Adam Smith's own account, at least in this work, falls prey to the very fascination with the technical which it denounces. Captivated by his genuine and fruitful mastery of a new technique (let us call it economics), the technique of identifying causal systems created and constituted by the side-effects of human choices whose intentions lie quite elsewhere, Smith neglects to subordinate the technique to the ends and principles of reasonable human existence, ends and principles which would have directed him to an impartial survey of the risks and suffering engendered by the same invisible hand of the economic system he misleadingly styles "providence."

Of course, the developed analytical art of economics is well able and used to make such an even-handed survey. And that, I repeat, is its true utility as a help-mate for ethics and political theory and thus for jurisprudence.

⁹ A. SMITH, THE WEALTH OF NATIONS pt. IV, § ii, ch. 10, at 476 (Campbell & Skinner eds. 1976).
Still, it would be a mistake to conclude that we need only a more adequate account of the benefits and burdens up for distribution or allocation by those responsible for the common good or general fate. We need also to bear in mind what Smith did not forget and what economics does not comprehend: the requirements of commutative justice. To see this, we may look at one of the economic and security policies of the rich which was developed in England in the half-century after Adam Smith's flourishing: the policy of laying hidden traps.

III

These mantraps were, typically, spring-guns: heavily loaded shot-guns, with triggers attached to springs and wires arranged in hidden lines along which the blast of shot would travel when anybody tripped them. They were set in woods and gardens to deter, disable and punish poachers, who under the law of the day were no more than trespassers.

The ground of principle on which such lethal outdoor mantraps were eventually prohibited by law is clearly stated by Holmes giving the majority judgment of the Supreme Court in 1921 in United Zinc & Chemical Co. v. Britt: "The liability for spring guns and mantraps arises from the fact that the defendant has not rested in [the] assumption [that trespassers would obey the law and not trespass], but on the contrary has expected the trespasser and prepared an injury which is no more justified than if he had held the gun and fired it."¹⁰ In other words, shooting a trespasser who is engaged in no act of violence against oneself or another is simply killing with intent to kill, i.e. murder, or at least with intent to do serious bodily harm; and setting a spring gun is just arranging to do the same "without personally firing the shot."¹¹ What one lawfully cannot, with intent, accomplish "directly" (in person) one cannot, with the same intent, accomplish "indirectly" (mechanically).

It is embarrassing to have to say that when this argument was squarely put before a strong Court of King's Bench in 1820 it was unanimously rejected.¹² The arguments employed by the English judges to distinguish shooting by machine from shooting in person are weak and of little interest. Of greater interest is the preliminary argument employed in the first two of the four judgments, an argument to which Holmes is responding when he says that the defendant landowner has not been content to assume that trespassers will not trespass but has expected the trespasser

¹⁰ 258 U.S. 268 at 275 (1922) (emphasis added).
¹¹ Addie v. Dumbreck, A.C. 358, 376, 98 L.J.P.C. 119 (H.L. 1929).
¹² Ilott v. Wilkes, 3 B. & Ald. 304, 311, 106 Eng. Rep. 674 (K.B. 1820). See the opening sentences of the argument of counsel for the plaintiff-respondent. Id. at 307, 106 Eng. Rep. at 676.

  • https://www.youtube.com
    The Ethics of Risk

When is it morally acceptable for a person, business, or government knowingly to expose others to risk without their consent? It is tempting to think that the answer is never, and that it is wrong to expose someone to risk unless they have agreed. But a moment's thought shows that this would make normal life impossible. Driving a car imposes risks on other drivers and pedestrians. Even walking imposes some risks on others. And as we have recognised recently, staying at home imposes risks on those who make deliveries. Risk is an unavoidable part of life, yet surely not all risk imposition is acceptable. What makes the difference? In this talk we discussed how the ethical issues can be addressed, and what problems still remain.

Speaker introduction: Jo Wolff. Jonathan Wolff is the Alfred Landecker Professor of Values and Public Policy and Governing Body Fellow at Wolfson College. He was formerly Blavatnik Chair in Public Policy at the School, and before that Professor of Philosophy and Dean of Arts and Humanities at UCL. He is currently developing a new research programme on revitalising democracy and civil society, in accordance with the aims of the Alfred Landecker Professorship. His other current work largely concerns equality, disadvantage, social justice and poverty, as well as applied topics such as public safety, disability, gambling, and the regulation of recreational drugs, which he has discussed in his books Ethics and Public Policy: A Philosophical Inquiry (Routledge 2011) and The Human Right to Health (Norton 2012). His most recent book is An Introduction to Moral Philosophy (Norton 2018).

#EthicsOfRisk #EthicalReading #JonathanWolf

Join us: individual membership of the Ethical Reading movement is free; register at https://ethicalreading.org.uk/. Organisations are welcome to join as Partners; contact [email protected] to find out more.

  • https://onlinelibrary.wiley.com

    Issues of risk are important in many fields of applied ethics. Risk assessments tend to have implicit value assumptions, which need to be identified so that they can be subject to discussions and del...

  • https://plato.stanford.edu

Stanford Encyclopedia of Philosophy

Risk
First published Tue Mar 13, 2007; substantive revision Thu Dec 8, 2022

Since the 1970s, studies of risk have grown into a major interdisciplinary field of research. Although relatively few philosophers have focused their work on risk, there are important connections between risk studies and several philosophical subdisciplines. This entry summarizes the most well-developed of these connections and introduces some of the major topics in the philosophy of risk. It consists of seven sections dealing with the definition of risk and with treatments of risk related to epistemology, the philosophy of science, the philosophy of technology, ethics, decision theory, and the philosophy of economics.

1. Defining risk
2. Epistemology
3. Philosophy of science
4. Philosophy of technology
5. Ethics
  5.1 A difficulty for moral theories
  5.2 Utilitarianism
  5.3 Rights-based moral theories
  5.4 Deontological moral theories
  5.5 Contract theories
  5.6 Summary and outlook
6. Decision theory
  6.1 Decision weights
  6.2 Pessimism, cautiousness and the precautionary principle
7. Risk in economic analysis
  7.1 Measures of economic risks
  7.2 Measures of attitudes to risks
  7.3 Experimental economics
  7.4 Risk-benefit analysis
Bibliography
Academic Tools
Other Internet Resources
Related Entries

1. Defining risk

In non-technical contexts, the word "risk" refers, often rather vaguely, to situations in which it is possible but not certain that some undesirable event will occur. In technical contexts, the word has several more specialized uses and meanings. Five of these are particularly important since they are widely used across disciplines:

1. risk = an unwanted event that may or may not occur. An example of this usage is: "Lung cancer is one of the major risks that affect smokers."

2. risk = the cause of an unwanted event that may or may not occur. An example of this usage is: "Smoking is by far the most important health risk in industrialized countries." (The unwanted event implicitly referred to here is a disease caused by smoking.)

Both (1) and (2) are qualitative senses of risk. The word also has quantitative senses, of which the following is the oldest one:

3. risk = the probability of an unwanted event that may or may not occur. This usage is exemplified by the following statement: "The risk that a smoker's life is shortened by a smoking-related disease is about 50%."

4. risk = the statistical expectation value of an unwanted event that may or may not occur. The expectation value of a possible negative event is the product of its probability and some measure of its severity. It is common to use the number of killed persons as a measure of the severity of an accident. With this measure of severity, the "risk" (in sense 4) associated with a potential accident is equal to the statistically expected number of deaths. Other measures of severity give rise to other measures of risk. Although expectation values have been calculated since the 17th century, the use of the term "risk" in this sense is relatively new. It was introduced into risk analysis in the influential Reactor Safety Study, WASH-1400 (Rasmussen et al., 1975, Rechard 1999). Today it is the standard technical meaning of the term "risk" in many disciplines. It is regarded by some risk analysts as the only correct usage of the term.

5. risk = the fact that a decision is made under conditions of known probabilities ("decision under risk" as opposed to "decision under uncertainty").

In addition to these five common meanings of "risk" there are several other more technical meanings, which are well-established in specialized fields of inquiry. Some of the major definitions of risk that are used in economic analysis will be introduced below in section 7.1.

Although most of the above-mentioned meanings of "risk" have been referred to by philosophers, a large part of the philosophical literature on risk refers to risk in the more informal sense that was mentioned at the beginning of this section, namely as a state of affairs in which an undesirable event may or may not occur. Several philosophers have criticized the technical definitions of risk for being too limited and not covering all aspects that should be included in risk assessments (Buchak 2014; Pritchard 2015; Shrader-Frechette 1991). Linguistic evidence indicates that technical definitions of risk have had virtually no impact on the non-technical usage of the word (Boholm et al. 2016).

Terminological note: Some philosophers distinguish between "subjective" and "objective" probabilities. Others reserve the term "probability" for the subjective notion. Here, the former terminology is used, i.e. "probability" can refer either to subjective probability or to objective chances.

2. Epistemology

When there is a risk, there must be something that is unknown or has an unknown outcome. Therefore, knowledge about risk is knowledge about lack of knowledge. This combination of knowledge and lack thereof contributes to making issues of risk complicated from an epistemological point of view.

In non-regimented usage, "risk" and "uncertainty" differ along the subjective-objective dimension. Whereas "uncertainty" seems to belong to the subjective realm, "risk" has a strong objective component. If a person does not know whether or not the grass snake is poisonous, then she is in a state of uncertainty with respect to its ability to poison her. However, since this species has no poison there is no risk to be poisoned by it. The relationship between the two concepts "risk" and "uncertainty" seems to be in part analogous to that between "truth" and "belief".

Regimented decision-theoretical usage differs from this. In decision theory, a decision is said to be made "under risk" if the relevant probabilities are available and "under uncertainty" if they are unavailable or only partially available. Partially determined probabilities are sometimes expressed with probability intervals, e.g., "the probability of rain tomorrow is between 0.1 and 0.4". (The term "decision under ignorance" is sometimes used about the case when no probabilistic information at all is available.)

Although this distinction between risk and uncertainty is decision-theoretically useful, from an epistemological point of view it is in need of clarification. Only very rarely are probabilities known with certainty. Strictly speaking, the only clear-cut cases of "risk" (known probabilities) seem to be idealized textbook cases that refer to devices such as dice or coins that are supposed to be known with certainty to be fair.
In real-life situations, even if we act upon a determinate probability estimate, we are not fully certain that this estimate is exactly correct, hence there is uncertainty. It follows that almost all decisions are made “under uncertainty”. If a decision problem is treated as a decision “under risk”, then this does not mean that the decision in question is made under conditions of completely known probabilities. Rather, it means that a choice has been made to simplify the description of this decision problem by treating it as a case of known probabilities. This is often a highly useful idealization in decision theory. However, in practical applications it is important to distinguish between those probabilities that can be treated as known and those that are uncertain and therefore much more in need of continuous updating. Typical examples of the former are the failure frequencies of a technical component that are inferred from extensive and well-documented experience of its use. The latter case is exemplified by experts’ estimates of the expected failure frequencies of a new type of component. A major problem in the epistemology of risk is how to deal with the limitations that characterize our knowledge of the behaviour of unique complex systems that are essential for estimates of risk, such as the climate system, ecosystems, the world economy, etc. Each of these systems contains so many components and potential interactions that important aspects of it are unpredictable. However, in spite of this uncertainty, reasonably reliable statements about many aspects of these systems can be made. Anthropogenic climate change is an example of this. Although many details are unknown, the general picture is clear, and there is no reasonable doubt about the existence of anthropogenic climate change, about its major causes and mechanisms, or the overall nature of the risks that it creates for our societies (Mayer et al. 2017; Lewandowsky et al. 2018). The epistemological status of such partial knowledge about complex systems, and the nature of the uncertainty involved, are still in need of further clarification (McKinney 1996, Shrader-Frechette 1997). In the risk sciences, it is common to distinguish between “objective risk” and “subjective risk”. The former concept is in principle fairly unproblematic since it refers to a frequentist interpretation of probability. The latter concept is more ambiguous. In the psychometric literature on risk from the 1970s, subjective risk was often conceived as a subjective estimate of objective risk. In more recent literature, a more complex picture has emerged. Subjective appraisals of (the severity of) risk depend to a large extent on factors that are not covered in traditional measures of objective risk, such as control and tampering with nature. If the terms are taken in this sense, subjective risk is influenced by the subjective estimate of objective risk, but cannot be identified with it. In the psychological literature, subjective risk is often conceived as the individual’s overall assessment of the seriousness of a danger or alleged danger. Such individual assessments are commonly called “risk perception”, but strictly speaking the term is misleading. This is not a matter of perception, but rather a matter of attitudes and expectations. Subjective risk can be studied with methods of attitude measurement and psychological scaling (Sjöberg 2004). 
The probabilistic approach to risk dominates in philosophy as well as in other disciplines, but some philosophers have investigated an alternative, modal account of risk. According to this account, "[t]o say that a target event is risky is to say that (keeping relevant initial conditions for that event fixed) it obtains in close possible worlds" (Pritchard 2016, 562). The risk is smaller, the more distant the nearest possible world is in which the target event takes place. This approach has interesting connections with psychological accounts of risk, but it is far from clear how distance between possible worlds should be defined and determined.

3. Philosophy of science

The role of values in science has been particularly controversial in issues of risk. Risk assessments have frequently been criticized for containing "hidden" values that induce a too high acceptance of risk (Cranor 2016; 2017; Intemann 2016; Heinzerling 2000). There is also a discussion on the need to strengthen the impact of certain values in risk assessment, such as considerations of justice (Shrader-Frechette 2005a), human rights (Shrader-Frechette 2005b), and the rights and welfare of future people (Caney 2009; Ng 2005).

Issues of risk have also given rise to heated debates on what levels of scientific evidence are needed for policy decisions. The proof standards of science are apt to cause difficulties whenever science is applied to practical problems that require standards of proof or evidence different from those of science. A decision to accept or reject a scientific statement (for instance an hypothesis) is in practice always subject to the possibility of error. The chance of such an error is often called the inductive risk (Hempel 1965, 92). There are two major types of errors. The first of these consists in concluding that there is a phenomenon or an effect when in fact there is none. This is called an error of type I (false positive). The second consists in missing an existing phenomenon or effect. This is called an error of type II (false negative). In the internal dealings of science, errors of type I are in general regarded as more problematic than those of type II. The common scientific standards of statistical significance substantially reduce the risk of type I errors but do not protect against type II errors (Shrader-Frechette 2008; John 2017).

Many controversies on risk assessment concern the balance between risks of type I and type II errors. Whereas science gives higher priority to avoiding type I errors than to avoiding type II errors, the balance can shift when errors have practical consequences. This can be seen from a case in which it is uncertain whether there is a serious defect in an airplane engine. A type II error, i.e., acting as if there were no such defect when there is one, would in this case be counted as more serious than a type I error, i.e., acting as if there were such a defect when there is none. (The distinction between type I and type II errors depends on the delimitation of the effect under study. In discussions of risk, this delimitation is mostly uncontroversial. Lemons et al. 1997; van den Belt and Gremmen 2002.) In this particular case it is fairly uncontroversial that avoidance of type II error should be given priority over avoidance of type I error. In other words, it is better to delay the flight and then find out that the engine was in good shape than to fly with an engine that turns out to malfunction.
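The trade-off between the two error types can be made concrete with a standard textbook calculation (not taken from this entry): a one-sided z-test with known standard deviation and an illustrative effect size and sample size. Tightening the significance level reduces type I errors but raises the type II error rate.

    # Illustrative power calculation for a one-sided z-test (assumed setup: known sigma,
    # true effect delta, sample size n; all numbers are made up).
    from math import sqrt
    from statistics import NormalDist

    norm = NormalDist()                 # standard normal distribution
    sigma, delta, n = 1.0, 0.5, 20      # noise level, true effect size, sample size
    shift = delta * sqrt(n) / sigma     # displacement of the test statistic under the effect

    for alpha in (0.05, 0.01, 0.001):        # progressively stricter significance levels
        z_crit = norm.inv_cdf(1 - alpha)     # rejection threshold
        beta = norm.cdf(z_crit - shift)      # type II error rate (missing a real effect)
        print(f"alpha={alpha}: type II rate = {beta:.3f}")
    # The type II rate climbs from roughly 0.28 to 0.80: fewer false positives are
    # bought at the price of many more false negatives.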
However, in other cases the balance between the two error types is more controversial. Controversies are common, for instance, over what degree of evidence should be required for actions against possible negative effects of chemical substances on human health and the environment.

Figure 1. The use of scientific data for policy purposes.

Such controversies can be clarified with the help of a simple but illustrative model of how scientific data influence both scientific judgments and practical decisions (Hansson 2008). Scientific knowledge begins with data that originate in experiments and other observations. (See Figure 1.) Through a process of critical assessment, these data give rise to the scientific corpus (arrow 1). Roughly speaking, the corpus consists of those statements that could, for the time being, legitimately be made without reservation in a (sufficiently detailed) textbook. When determining whether or not a scientific hypothesis should be accepted, for the time being, as part of the corpus, the onus of proof falls on its adherents. Similarly, those who claim the existence of an as yet unproven phenomenon have the burden of proof. These proof standards are essential for the integrity of science.

The most obvious way to use scientific information for policy-making is to employ information from the corpus (arrow 2). For many purposes, this is the only sensible thing to do. However, in risk management decisions, exclusive reliance on the corpus may have unwanted consequences. Suppose that toxicological investigations are performed on a substance that has not previously been studied from a toxicological point of view. These investigations turn out to be inconclusive. They give rise to science-based suspicions that the substance is dangerous to human health, but they do not amount to full scientific proof in the matter. Since the evidence is not sufficient to warrant an addition to the scientific corpus, this information cannot influence policies in the standard way (via arrows 1 and 2). There is a strict requirement to avoid type I errors in the process represented by arrow 1, and this process filters out information that might in this case have been practically relevant and justified certain protective measures.

In cases like this, a direct road from data to policies is often taken (arrow 3). This means that a balance between type I and type II errors is determined in the particular case, based on practical considerations, rather than relying on the standard scientific procedure with its strong emphasis on the avoidance of type I errors.

It is essential to distinguish here between two kinds of risk-related decision processes. One consists in determining which statements about risks should be included in the scientific corpus. The other consists in determining how risk-related information should influence practical measures to protect health and the environment. It would be a strange coincidence if the criteria of evidence in these two types of decisions were always the same. Strong reasons can be given for strict standards of proof in science, i.e. high entry requirements for the corpus. At the same time, there can be valid policy reasons to allow risk management decisions to be influenced by scientifically plausible indications of danger that are not yet sufficiently well-confirmed to qualify for inclusion into the scientific corpus.
The term inductive risk is usually reserved for the (type I and type II) risks that follow directly from the acceptance or rejection of a hypothesis. The term epistemic risk is used for a wider category of risks in belief formation, such as risks taken when choosing a methodology, accepting a background assumption, or deciding how to interpret data (Biddle 2016).

Policy issues concerning risk have often been the targets of extensive disinformation campaigns characterized by science denial and other forms of pseudoscience (Oreskes 2010). Several philosophers have been active in the repudiation of invalid claims and the defence of science in risk-related issues (Cranor 2005; 2016; 2017; Goodwin 2009; Prothero 2013; Shrader-Frechette 2014; Hansson 2017).

4. Philosophy of technology

Safety and the avoidance of risk are major concerns in practical engineering. Safety engineering has also increasingly become the subject of academic investigations. However, these discussions are largely fragmented between different areas of technology. The same basic ideas or “safety philosophies” are discussed under different names, for instance in chemical, nuclear, and electrical engineering. Nevertheless, much of the basic thinking seems to be the same in the different areas of safety engineering (Möller and Hansson 2008).

Simple safety principles, often expressible as rules of thumb, have a central role in safety engineering. Three of the most important of these are inherent safety, safety factors, and multiple barriers.

Inherent safety, also called primary prevention, consists in the elimination of a hazard. It is contrasted with secondary prevention, which consists in reducing the risk associated with a hazard. For a simple example, consider a process in which inflammable materials are used. Inherent safety would consist in replacing them by non-inflammable materials. Secondary prevention would consist in removing or isolating sources of ignition and/or installing fire-extinguishing equipment. As this example shows, secondary prevention usually involves added-on safety equipment. The major reason to prefer inherent safety to secondary prevention is that as long as the hazard still exists, it can be realized by some unanticipated triggering event. Even with the best of control measures, if inflammable materials are present, some unforeseen chain of events can start a fire.

Safety factors are numerical factors employed in the design process for our houses, bridges, vehicles, tools, etc., in order to ensure that our constructions are stronger than the bare minimum expected to be required for their functions. Elaborate systems of safety factors have been specified in norms and standards. A safety factor most commonly refers to the ratio between a measure of the maximal load not leading to a specified type of failure and a corresponding measure of the maximal expected load. It is common to make bridges and other constructions strong enough to withstand two or three times the predicted maximal load. This means that a safety factor of two or three is employed. Safety factors are also used in regulatory toxicology and ecotoxicology. For instance, in food toxicology, the highest dose allowed for human exposure has traditionally been calculated as one hundredth of the highest dose (per kilogram body weight) that gave no observable negative effect in experimental animals. Today, safety factors higher than 100 have become common (Dourson and Stara 1983; Pressman et al. 2017).
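The two uses of safety factors just described can be illustrated with a short sketch; the functions and all numbers below are invented for illustration.

```python
# Sketch of the safety-factor idea: structural design and regulatory toxicology.

def required_capacity(expected_max_load: float, safety_factor: float) -> float:
    """Minimum capacity a structure should withstand, given a safety factor."""
    return expected_max_load * safety_factor

def acceptable_daily_intake(noael_mg_per_kg: float, safety_factor: float = 100.0) -> float:
    """Traditional toxicological rule: divide the highest no-observed-effect dose
    (per kg body weight, from animal studies) by a safety factor of 100 or more."""
    return noael_mg_per_kg / safety_factor

# A bridge expected to carry at most 40 tonnes, designed with a safety factor of 3:
print(required_capacity(40.0, 3.0))        # 120.0 tonnes of load-bearing capacity
# A substance with a no-effect level of 5 mg/kg/day in animal studies:
print(acceptable_daily_intake(5.0))        # 0.05 mg/kg/day allowed for human exposure
```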
Safety barriers are often arranged in chains. Ideally, each barrier is independent of its predecessors, so that if the first fails, then the second is still intact, and so on. For example, in an ancient fortress, if the enemy managed to pass the first wall, then additional layers would protect the defending forces. Some engineering safety barriers follow the same principle of concentric physical barriers. Others are arranged serially in a temporal or functional rather than a spatial sense. One of the lessons that engineers learned from the Titanic disaster is that improved construction of early barriers is not of much help if it leads to neglect of the later barriers (in that case lifeboats).

The major problem in the construction of safety barriers is how to make them as independent of each other as possible. If two or more barriers are sensitive to the same type of impact, then one and the same destructive force can eliminate all of them at once. For instance, if three safety valves are installed in one and the same factory hall, each with a probability of 1/1,000 of failure, it does not follow that the probability of all three failing is \(1 \times 10^{-9}\). The three valves may all be destroyed in the same fire, or damaged by the same mistake in maintenance operations. This is a common situation for many types of equipment.

Inherent safety, safety factors, and multiple barriers have an important common feature: they all aim at protecting us not only against risks that can be assigned meaningful probability estimates, but also against dangers that cannot be probabilized, such as the possibility that some unanticipated type of event gives rise to an accident. It remains, however, for philosophers of technology to investigate the principles underlying safety engineering in more detail and to clarify how they relate to other principles of engineering design (Doorn and Hansson 2015).

Many attempts have been made to predict the risks associated with emerging and future technologies. The role of philosophers in these endeavours has often been to point out the difficulties and uncertainties involved in such predictions (Allhoff 2009; Gordijn 2005).

Experience shows that even after extensive efforts to make a new product safe, there is a need for post-market surveillance (PMS) in order to discover unexpected problems. For instance, before the massive introduction of automobile airbags around 1990, safety engineers performed laboratory tests of different crash scenarios with dummies representing a variety of body weights and configurations (including infants and pregnant women). But in spite of the design adjustments that these tests gave rise to, inflated airbags caused a considerable number of (mostly minor) injuries. By carefully analyzing experiences from actual accidents, engineers were able to substantially reduce the frequency and severity of such injuries (Wetmore 2008). For pharmaceutical drugs and some medical devices, post-market surveillance is legally required in many jurisdictions.
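Returning to the valve example above, the following sketch shows why barrier failure probabilities cannot simply be multiplied when a shared cause can disable all barriers at once; the common-cause probability is an invented figure.

```python
# Invented figures illustrating the barrier-independence point: three valves that each
# fail with probability 1/1,000 only give a combined failure probability of 1e-9 if the
# failures are statistically independent.

p_single = 1e-3

# Idealized case: fully independent failures.
p_all_fail_independent = p_single ** 3           # 1e-9

# A crude common-cause model: with some probability a shared event (a fire, a maintenance
# error) disables all three valves at once, regardless of their individual reliability.
p_common_cause = 1e-4                             # assumed, for illustration only
p_all_fail_realistic = p_common_cause + (1 - p_common_cause) * p_single ** 3

print(f"independence assumed: {p_all_fail_independent:.1e}")   # 1.0e-09
print(f"with a common cause:  {p_all_fail_realistic:.1e}")     # about 1.0e-04
```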
5. Ethics

5.1 A difficulty for moral theories

Until recently, problems of risk have not been treated systematically in moral philosophy. A possible defence of this limitation is that moral philosophy can leave it to decision theory to analyse the complexities that indeterminism and lack of knowledge give rise to in real life. According to the conventional division of labour between the two disciplines, moral philosophy provides assessments of human behaviour in well-determined situations. Decision theory takes assessments of these cases as given, adds the available probabilistic information, and derives assessments for rational behaviour in an uncertain and indeterministic world. On this view, no additional input of moral values is needed to deal with indeterminism or lack of knowledge, since decision theory operates exclusively with criteria of rationality.

Examples are easily found that exhibit the problematic nature of this division between the two disciplines. Compare the act of throwing down a brick on a person from a high building to the act of throwing down a brick from a high building without first making sure that there is nobody beneath who can be hit by the brick. The moral difference between these two acts is not obviously expressible in a probability calculus. An ethical analysis of the difference will have to refer to the moral aspects of risk impositions as compared to intentional ill-doing.

More generally speaking, a reasonably complete account of the ethics of risk must distinguish between intentional and unintentional risk exposure, and between voluntary risk-taking, risks imposed on a person who accepts them, and risks imposed on a person who does not accept them. This cannot be done in a framework that treats risks as probabilistic mixtures of outcomes. In principle, these outcomes can be so widely defined that they include all relevant moral aspects, including rights infringements as well as intentionality and other pertinent mental states. However, this would still not cover the moral implications of risk-taking per se, since these are not inherent properties of any of the potential outcomes. Methods of moral analysis are needed that can guide decisions on risk-takings and risk impositions. A first step is to investigate how standard moral theories can deal with problems of risk that are presented in the same way as in decision theory, namely as the (moral) evaluation of probabilistic mixtures of (deterministic) scenarios.

5.2 Utilitarianism

In utilitarian theory, there are two obvious approaches to such problems. One is the actualist solution. It consists in assigning to a (probabilistic) mixture of potential outcomes a utility that is equal to the utility of the outcome that actually materializes. To exemplify this approach, consider a decision whether or not to reinforce a bridge before it is used for a single, very heavy transport. There is a 50% risk that the bridge will fall down if it is not reinforced. Suppose that a decision is made not to reinforce the bridge and that everything goes well; the bridge is not damaged. According to the actualist approach, the decision was right. This is, of course, contrary to common moral intuitions.

The other established utilitarian approach is the maximization of expected utility. This means that the utility of a mixture of potential outcomes is defined as the probability-weighted average of the utilities of these outcomes. The expected utility criterion has been criticized along several lines. One criticism is that it disallows a common form of cautiousness, namely disproportionate avoidance of large disasters. For example, provided that human deaths are valued equally and additively, as most utilitarians are prone to do, this framework does not allow one to prefer a probability of 1 in 1,000 that one person will die to a probability of 1 in 100,000 that fifty persons will die.
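The arithmetic behind this example can be spelled out in a few lines; the framing in terms of expected numbers of deaths follows the assumption just stated that deaths are valued equally and additively.

```python
# Expected-value arithmetic behind the disaster-aversion example above.

lotteries = {
    "1 in 1,000 that one person dies": (1 / 1_000, 1),
    "1 in 100,000 that fifty persons die": (1 / 100_000, 50),
}

for label, (probability, deaths) in lotteries.items():
    print(f"{label}: {probability * deaths:.4f} expected deaths")

# The second option has the lower expected number of deaths (0.0005 vs. 0.0010), so
# expected utility maximization must favour it, leaving no room for a disproportionate
# aversion to the fifty-death disaster.
```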
The expected utility framework can also be criticized for disallowing a common expression of strivings for fairness, namely disproportionate avoidance of high-probability risks for particular individuals. Hence, in the choice between exposing one person to a probability of 0.9 of being killed and exposing each of one hundred persons to a probability of 0.01 of being killed, it requires that the former alternative be chosen. In summary, expected utility maximization prohibits what seem to be morally reasonable standpoints on risk-taking and risk imposition.

However, it should be noted that the expected utility criterion does not necessarily follow from utilitarianism. Utilitarianism in a wide sense (Scanlon 1982) is compatible with other ways of evaluating uncertain outcomes (most notably with actual consequence utilitarianism, but in principle also, for instance, with a maximin criterion). Therefore, criticism directed against expected utility maximization does not necessarily show a defect in utilitarian thinking.

5.3 Rights-based moral theories

The problem of dealing with risk in rights-based moral theories was formulated by Robert Nozick: “Imposing how slight a probability of a harm that violates someone’s rights also violates his rights?” (Nozick 1974, 74). An extension of a rights-based moral theory to indeterministic cases can be obtained by prescribing that if A has a right that B does not bring about a certain outcome, then A also has a right that B does not perform any action that (at all) increases the probability of that outcome. Unfortunately, such a strict extension of rights is untenable in social practice. Presumably, A has the right not to be killed by B, but it would not be reasonable to extend this right to all actions by B that give rise to a very small increase in the risk that A dies, such as driving a car in the town where A lives. Such a strict interpretation would make human society impossible. Hence, a right not to be risk-exposed will have to be defeasible, so that it can be overridden in some (but not necessarily all) cases when the increase in probability is small. However, it remains to find a credible criterion for when it should be overridden. As Nozick observed, a probability limit is not credible in “a tradition which holds that stealing a penny or a pin or anything from someone violates his rights. That tradition does not select a threshold measure of harm as a lower limit, in the case of harms certain to occur” (Nozick 1974, 75).

5.4 Deontological moral theories

The problem of dealing with risks in deontological theories is similar to the corresponding problem in rights-based theories. The duty not to harm other people can be extended to a duty not to perform actions that increase their risk of being harmed. However, society as we know it is not possible without exceptions to this rule. The determination of criteria for such exceptions is problematic in the same way as for rights-based theories.

All reasonable systems of moral obligations will contain a general prohibition against actions that kill another person. Such a prohibition can (and should) be extended to actions that involve a large risk that a person is killed. However, it cannot be extended to all actions that lead to a minuscule increase in the risk that a person is killed, since that would exclude many actions and behaviours that few of us would be willing to give up. A limit must be drawn between reasonable and unreasonable impositions of risk.
It seems as if such delimitations will have to appeal to concepts, such as probabilities and/or the size of the benefits obtained by taking a risk, that are not part of the internal resources of deontological theories.

5.5 Contract theories

Contract theories may appear somewhat more promising than the theories discussed above. The criterion that they offer for the deterministic case, namely consent among all those involved, can also be applied to risky options. It could be claimed that risk impositions are acceptable if and only if they are supported by a consensus. Such a consensus, as conceived in contract theories, is either actual or hypothetical.

Actual consensus is unrealistic in a complex society where everyone performs actions with marginal but additive effects on many people’s lives. According to the criterion of actual consensus, any local citizen will have a veto against anyone else who wants to drive a car in the town where she lives. In this way citizens can block each other, creating a society of stalemates.

Hypothetical consensus has been developed as a criterion in contract theory in order to deal with inter-individual problems. We are invited to consider a hypothetical initial situation in which the social order of a future society has not yet been decided. When its future citizens meet to choose a social order, each of them is ignorant of her or his position in any of the social arrangements which they can choose among. According to John Rawls’s theory of justice, they will then all opt for a maximin solution, i.e. a social order in which the worst position that anyone can have in that society is as good as possible. In arguing for that solution, Rawls relied heavily on the assumption that none of the participants knows anything at all about the probability that she will end up in one or other of the positions in a future social order (Rawls 1957; 1971; 1974).

John Harsanyi, who discussed this problem prior to Rawls, assumed that the probability of finding oneself in a particular social position is equal to the share of the population that will have the position in question, and that this is also known by all participants. Hence, if a fifth of the population in a certain type of society will be migrant workers, then each participant in Harsanyi’s initial situation will assume that she has a twenty per cent probability of becoming a migrant worker, whereas none of the participants in Rawls’s initial situation will have a clue what that probability can be. In Harsanyi’s initial situation, the participants will choose the social order with the highest expected utility (probability-weighted utility), thus taking all potential future positions into account, rather than only the least favourable one (Harsanyi 1953; 1955; 1975).

However, in discussions about various risks in our existing societies we do not have much use for the hypothetical initial situations of contract theory. The risks and uncertainties in real life are of quite a different nature from the hypothetical uncertainty (or ignorance) about one’s own social position and conditions, which is a crucial requirement in the initial situation. The thought experiment of an initial situation does not seem to provide us with any intellectual tools for the moral appraisal of risks in addition to those to which we have access even without trying to think away who we are.
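The contrast between the two choice rules can be made concrete with a toy comparison; the two social orders and their utility numbers below are invented for illustration.

```python
# Toy comparison of the Rawlsian and Harsanyian choice rules discussed above.
# Each social order is a list of (population share, utility of that position) pairs.

orders = {
    "order A": [(0.2, 2), (0.8, 10)],   # e.g. a fifth of the population are migrant workers
    "order B": [(0.2, 4), (0.8, 7)],
}

def maximin(order):
    """Rawls: judge an order by its worst-off position (probabilities treated as unknown)."""
    return min(utility for _, utility in order)

def expected_utility(order):
    """Harsanyi: weight each position by the share of the population occupying it."""
    return sum(share * utility for share, utility in order)

print(max(orders, key=lambda name: maximin(orders[name])))           # order B: best worst-case
print(max(orders, key=lambda name: expected_utility(orders[name])))  # order A: highest average
```

With these invented numbers the two rules recommend different social orders, which is the point of the disagreement between Rawls and Harsanyi.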
5.6 Summary and outlook

In summary, the problem of appraising risks from a moral point of view does not seem to have any satisfactory solution in the common versions of the above-mentioned types of moral theories. The following are three possible elements of a solution:

  • It may be useful to shift the focus from risks, described two-dimensionally in terms of probability and severity (or one-dimensionally as the product of these), to actions of risk-taking and risk-imposing. Such actions have many morally relevant properties in addition to the two dimensions mentioned, such as who contributes causally to the risk, in what ways and with what intentions, and how the risk and its associated benefits are distributed.

  • Important moral intuitions are accounted for by assuming that each person has a prima facie moral right not to be exposed to risk of negative impact, such as damage to her health or her property, through the actions of others. However, this is a prima facie right that has to be overridden in quite a few cases, in order to make social life at all possible. Therefore, the recognition of this right gives rise to what can be called an exemption problem, namely the problem of determining when it is rightfully overridden.

  • Part of the solution to the exemption problem may be obtained by allowing for reciprocal exchanges of risks and benefits. Hence, if A is allowed to drive a car, exposing B to certain risks, then in exchange B is allowed to drive a car, exposing A to the corresponding risks. In order to deal with the complexities of modern society, this principle must also be applied to exchanges of different types of risks and benefits. Exposure of a person to a risk can then be regarded as acceptable if it is part of an equitable social system of risk-taking that works to her advantage. Such a system can be required to contain mechanisms that eliminate, or compensate for, social inequalities that are caused by disease and disability (Hansson 2003; 2013).

Discussions of the overall issue of risk acceptance can be found in Macpherson (2008), Hansson (2013) and Oberdiek (2014). Justice in risk impositions is discussed in Ferretti (2010) and Heyward and Roser (2016). Issues of rights and risks are discussed in Thomson (1986) and, with a particular emphasis on responsibilities, in Kermisch (2012) and van de Poel et al. (2012).

6. Decision theory

Decision theory is concerned with determining the best way to achieve as valuable an outcome as possible, given the values that we have. In decision theory, our values and goals are taken as given, and the analysis concerns how best to achieve them to as high a degree as possible. Decision-making under risk and uncertainty is one of the major topics in decision theory. It is usually assumed that if the values of a set of potential outcomes are known (for instance from moral philosophy), then purely instrumental considerations are sufficient for determining how best to act under risk or uncertainty in order to achieve the best possible result. (For a critical discussion of that presumption, see Hansson 2013, 49–51.) The values taken as given in decision-theoretical analysis can, but need not, be moral values of the types that are developed and analyzed in moral philosophy.

6.1 Decision weights

Decision theory has traditionally had a predilection for consequentialism, whose structure is suitable for most formal models of decision-making.
The standard decision-theoretical approach to risk is maximization of expected utility, which can be seen as a smooth extension of (act) utilitarianism. In expected utility theory, the value associated with an uncertain situation is equal to the sum of the probability-weighted values of all its possible outcomes. Let \(p\) be a function that assigns probabilities to outcomes, and \(u\) a function that assigns values to them. Then the value associated with a situation with three possible outcomes \(x_1\), \(x_2\) and \(x_3\) is equal to \[p(x_1) \cdot u(x_1) + p(x_2) \cdot u(x_2) + p(x_3) \cdot u(x_3).\]

However, influential proposals have been made for alternative decision rules. There are two major types of justification for such endeavours. First, examples have been put forward in which it would seem implausible to claim that expected utility maximization is the only normatively reasonable decision rule (Allais 1953; Ellsberg 1961). Secondly, numerous psychological experiments have shown that human decision-makers tend to deviate substantially from expected utility maximization. The first type of justification puts the normative soundness of expected utility in question, whereas the second exposes its shortcomings as a descriptive model.

In an important class of alternative decision rules, the probabilities used in expected utility calculations are replaced by some other numbers (“decision weights”). This approach was proposed by William Fellner (1961). In most of these constructions, all probabilities are transformed by some transformation function \(r\). Instead of maximizing the standard expected utility \[\sum_i p(x_i) \cdot u(x_i),\] the agent will then maximize \[\sum_i r(p(x_i)) \cdot u(x_i).\]

Several decision rules with this structure have been proposed. One of the earliest was Handa (1977). Currently, the best known proposal in this tradition is prospect theory (Kahneman and Tversky 1979; Tversky and Kahneman 1986), which was developed in order to describe observations from psychological decision experiments more accurately than expected utility theory does. Prospect theory is a fairly complex theory that also deviates in other ways from expected utility theory.
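A minimal sketch of this probability-transformation step (not of prospect theory as a whole) is given below; the inverse-S weighting function, its parameter, and the lottery are illustrative assumptions rather than anything specified above.

```python
# Sketch of the decision-weight idea: probabilities p are replaced by r(p) before being
# multiplied with utilities. Prospect theory itself also transforms outcomes and treats
# gains and losses differently; only the weighting step is shown here.

def expected_utility(lottery):
    """Standard rule: the sum of p(x) * u(x) over all possible outcomes."""
    return sum(p * u for p, u in lottery)

def weighted_utility(lottery, r):
    """Decision-weight rule: the sum of r(p(x)) * u(x)."""
    return sum(r(p) * u for p, u in lottery)

def r(p, gamma=0.61):
    """An inverse-S probability weighting function: small probabilities are overweighted,
    moderate and large ones underweighted (gamma is an assumed illustrative value)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# A lottery with a small chance of a large loss and a large chance of a small gain.
lottery = [(0.01, -1000), (0.99, 10)]

print(f"{expected_utility(lottery):.1f}")     # -0.1
print(f"{weighted_utility(lottery, r):.1f}")  # about -46: the rare loss dominates once overweighted
```

The rare large loss receives far more weight than its raw probability, which is one way such models can capture cautious attitudes that expected utility maximization cannot accommodate.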

  • https://onlinelibrary.wiley.com

    Ethical analysis is often needed in the preparation of policy decisions on risk. A three-step method is proposed for performing an ethical risk analysis (eRA). In the first step, the people concerned...

  • https://www.youtube.com
    The History of Risk

    Risk has become one of the defining features of modern society. Almost daily, we are preoccupied with assessing, discussing, or preventing a wide variety of risks. It is a cornerstone notion for businesses and organizations, but also for nation states and their many levels of government. And even for individuals, risk, and the avoidance or embracing thereof, is a key theme. The course Risk in Modern Society sheds light on the broad concept of risk. In five distinctive weeks, this course closely examines various types of safety and security risks, and how these are perceived and dealt with in a wide array of professional and academic fields, ranging from criminology, counter-terrorism and cyber security, to philosophy, safety and medical science. Developed in collaboration with scholars from three universities (Leiden, Delft and Erasmus), this course will search for answers to questions such as: “what is risk?”, “how do we study and deal with risk?”, “does ‘perceived risk’ correspond to ‘real’ risk?”, and “how should we deal with societal perceptions of risk, safety and security?” Subscribe here: https://www.coursera.org/learn/risk-in-modern-society/home/info

  • Sustainable Risk Management
    https://play.google.com

    Here, expert authors delineate approaches that can support both decision makers and their concerned populations in overcoming unwarranted fears and in elaborating policies based on scientific evidence. Four exemplary focus areas were chosen for in-depth review, namely:

      • The scientific basis of risk management
      • Risk management in the area of environmental and ecological policy
      • Risk management in radiation medicine
      • Risk management in context with digitalization and robotics

    General as well as specific recommendations are summarized in a memorandum. Fundamental thoughts on the topic are presented in the introductory part of the book. The idea for and contents of the book were developed at a workshop on “Sustainable Risk Management: How to manage risks in a sensible and responsible manner?” held in Feldafing at Lake Starnberg (Germany) on April 14 to 16, 2016. The book offers important information and advice for scientists, entrepreneurs, administrators and politicians.