
Human Enhancement

Human beings have long sought to acquire additional powers, to develop those they already have, or to diminish or eliminate what they regard as defects. They have pursued this desire through mental and physical training, and through selective breeding – from the informal choosing of mates with desired hereditary characteristics to social policies of restricted reproduction (eugenics). Until relatively recently, however, these means were rather limited and often unsuccessful. With the development and spread of medical technology, artificial intelligence, information technology, robotics and applied genetics, the ambition and possibilities of human enhancement have increased. Advocates argue that just as we have used medicine, exercise and diet to improve human health, so we should use these new techniques to elevate human nature. Critics counter that this (a) risks unforeseen harmful consequences, (b) deepens existing social inequalities or creates new ones, and (c) expresses a kind of existential dissatisfaction that such enhancement will never ease but only intensify.

  • https://www.thenewatlantis.com

    New biotechnologies promise to revolutionize human existence — not only by delivering therapeutic treatments and cures, but also by offering physical and mental “enhancements”: creating stronger bodies and more powerful minds for ourselves and for the children we will carefully select. Biotechnology will offer us the option of controlling our genetic composition in ways that were previously unimaginable, as we — in British bioethicist John Harris’s formulation — replace “natural selection with deliberate selection, Darwinian evolution with ‘enhancement evolution.’” Those bioethicists who, like Harris, express great enthusiasm for our “post-human future” often dismiss the reservations of critics concerned about biotech...

  • http://www.vanderbilt.edu
    • PDF

    BEYOND THERAPY: Biotechnology and the Pursuit of Happiness. A Report of The President's Council on Bioethics, Washington, D.C., October 2003 (www.bioethics.gov). Front matter: Letter of Transmittal to the President; Contents; Members of The President's Council on Bioethics; Council Staff and Consultants; Preface. Chapter 1, Biotechnology and the Pursuit of Happiness: I. The Golden Age: Enthusiasm and Concern; II. The Case for Public Attention; III. Defining the Topic; IV. Ends and Means; V. The Limitations of the "Therapy vs. Enhancement" Distinction; VI. Beyond Natural Limits: Dreams of Perfection and Happiness; VII. Structure of the Inquiry: The Primacy of Human Aspirations; VIII. Method and Spirit; Endnotes. Chapter 2, Better Children: I. Improving Native Powers: Genetic Knowledge and Technology; A. An Overview; B. Technical Possibilities; 1. Prenatal Diagnosis and Screening Out; 2. Genetic Engineering of Desired Traits ("Fixing Up"); 3. Selecting Embryos for Desired Traits ("Choosing In"); C. Ethical Analysis; 1. Benefits...

  • https://www.youtube.com
    Does Enhancement Violate Human "Nature"?

    Abstract: I place the term ‘nature’ in quotation marks in the title as one of the fundamental questions that must be addressed before we can inquire about the ethics of enhancing human beings is whether there is an essential nature in which all human beings share. Various affirmative responses to this perennial philosophical question have been challenged by discoveries in evolutionary biology, cultural anthropology, cognitive psychology, and other relevant fields. Nevertheless, several theories of human nature persist in light of such discoveries. Whether there is a universal human nature and what essential qualities define that nature inform whether it may be altered using biotechnological or other means of enhancement. On one side of the debate are bioconservatives who reject any non-therapeutic interventions that could alter the putatively definitive qualities of human nature. On the other side are transhumanists who argue in favor of “morphological freedom” to reshape ourselves in any non-harmful way we choose. Between these views are several proposals that allow for certain forms of enhancement that may increase individual or collective well-being within the limits of a defined human nature, eschewing the creation or transformation of human beings into a new ontological species of “posthumans.” I have previously defended certain moderate forms of physical, cognitive, emotive, or moral enhancement as compatible with a Thomistic account of human nature and flourishing (Eberl 2014b, 2017a, 2018). In this paper, I will not be defending a specifically Thomistic account of human nature. Rather, I will canvass several views of human nature that have informed diverse perspectives on the ethics of enhancement. I begin by considering why we need an account of human nature to adequately engage questions concerning enhancement.
    Faculty bio: https://www.slu.edu/arts-and-sciences/bioethics/faculty/eberl-jason.php
    Chapters:
    0:00:00 - Presentation
    0:46:53 - Q&A

  • https://plato.stanford.edu

    Stanford Encyclopedia of Philosophy

    Neuroethics
    First published Wed Feb 10, 2016; substantive revision Wed Mar 3, 2021

    Neuroethics is an interdisciplinary field focusing on ethical issues raised by our increased and constantly improving understanding of the brain and our ability to monitor and influence it.

    Contents: 1. The rise and scope of neuroethics; 2. The ethics of neuroscience (2.1 The ethics of enhancement; 2.1.1 Arguments for Enhancement; 2.1.2 Arguments against Enhancement; 2.2 Cognitive liberty; 2.2.1 Privacy; 2.2.2 Autonomy and authenticity; 2.3 Personal Identity; 2.4 Consciousness, life, and death; 2.5 Practical neuroethics; 2.6 Public perception of neuroscience; 2.6.1 The seductive allure; 2.6.2 Media Hype; 2.7 Neuroscience and justice); 3. The Neuroscience of Ethics; 4. Looking forward: New neurotechnologies; Bibliography

    1. The rise and scope of neuroethics

    Neuroethics focuses on ethical issues raised by our continually improving understanding of the brain, and by consequent improvements in our ability to monitor and influence brain function. Significant attention to neuroethics can be traced to 2002, when the Dana Foundation organized a meeting of neuroscientists, ethicists, and other thinkers, entitled Neuroethics: Mapping the Field. A participant at that meeting, columnist and wordsmith William Safire, is often credited with introducing and establishing the meaning of the term “neuroethics”, defining it as ‘the examination of what is right and wrong, good and bad about the treatment of, perfection of, or unwelcome invasion of and worrisome manipulation of the human brain’ (Marcus, 2002, p. 5). Others contend that the word “neuroethics” was in use prior to this (Illes, 2003; Racine, 2010), although all agree that these earlier uses did not employ it in a disciplinary sense, or to refer to the entirety of the ethical issues raised by neuroscience. Another attendee at that initial meeting, Adina Roskies, in response to a perceived lack of recognition of the potential novelty of neuroethics, penned “Neuroethics for the new millennium” (Roskies, 2002), an article in which she proposed a bipartite division of neuroethics into the “ethics of neuroscience”, which encompasses the kinds of ethical issues raised by Safire, and “the neuroscience of ethics”, thus suggesting an extension of the scope of neuroethics to encompass our burgeoning understanding of the biological basis of ethical thought and behavior and the ways in which this could itself influence and inform our ethical thinking. This broadening of the scope of neuroethics highlights the obvious and not-so-obvious ways that understanding our own moral thinking might affect our moral views; it is one aspect of neuroethics that distinguishes it from traditional bioethics. Another way of characterizing the field is as a study of ethical issues arising from what we can do to the brain (e.g. with neurotechnologies) and from what we know about it (including, for example, understanding the basis of ethical behavior). Although Roskies’ definition remains influential, it has been challenged in various ways.
Some have argued that neuroethics should not be limited to the neuroscience of ethics, but rather be broadened to the cognitive science of ethics (Levy, personal communication), since so much work that enables us to understand the brain takes place in disciplines outside of neuroscience, strictly defined. This is in fact in the spirit of the original proposal, since it has been widely recognized that the brain sciences encompass a wide array of disciplines, methods, and questions. However, the most persistent criticisms have been from those who have questioned whether the neuroscience of ethics should be considered a part of neuroethics at all: they argue that understanding our ethical faculties is a scientific and not an ethical issue, and thus should not be part of neuroethics. This argument is usually followed by a denial that neuroethics is sufficiently distinct from traditional bioethics to warrant being called a discipline in its own right. The response to these critics is different: Whether or not these various branches of inquiry form a natural kind or are themselves a focus of ethical analysis is quite beside the point. Neuroethics is porous. One cannot successfully engage with many of the ethical issues without also understanding the science. In addition, academic or intellectual disciplines are at least in part (if not entirely) social constructs. And in this case the horse is out of the barn: It is clear that interesting and significant work is being pursued regarding the brain bases of ethical thought and behavior, and that this theoretical understanding has influenced, and has the potential to influence, our own thinking about ethics and our ethical practices. That neuroethics exists is undeniable: Neuroethical lines of research have borne interesting fruit over the last 10–15 years; neuroethics is now recognized as an area of study both nationally and internationally; neuroethics courses are taught at many universities; and training programs, professional societies, and research centers for neuroethics have already been established. The NIH BRAIN Initiative has devoted considerable resources to encouraging neuroscientific projects that incorporate neuroethical projects and analyses. Neuroethics is a discipline in its own right in part because we already structure our practices in ways that recognize it as such. What is most significant about neuroethics is not whether both the ethics of neuroscience and the neuroscience of ethics are given the same overarching disciplinary name, but that there are people working on both endeavors and that they are in dialogue (and sometimes, the very same people do both). Of course, to the extent that neuroethicists ask questions about disease, treatment, and so on, the questions will look familiar, and for answers they can and should look to extant work in traditional bioethics so as not to reinvent the wheel. But, ultimately, Farah is correct in saying that “New ethical issues are arising as neuroscience gives us unprecedented ways to understand the human mind and to predict, influence, and even control it. These issues lead us beyond the boundaries of bioethics into the philosophy of mind, psychology, theology, law and neuroscience itself. It is this larger set of issues that has…earned it a name of its own” (Farah 2010, p. 2).
    2. The ethics of neuroscience

    Neuroethics is driven by neurotechnologies: it is concerned with the ethical questions that attend the development and effects of novel neurotechnologies, as well as other ethical and philosophical issues that arise from our growing understanding of how brains give rise to the people that we are and the social structures that we inhabit and create. These questions are intimately bound up with scientific questions about what kinds of knowledge can be acquired with particular techniques: what are the scope and limits of what a technique can tell us? With many new techniques, answers to these questions are obscure not only to the lay public, but often to the scientists themselves. The uncertainty about the reach of these technologies adds to the challenge of grappling with the ethical issues raised. Many new neurotechnologies enable us to monitor brain processes and increasingly, to understand how the brain gives rise to certain behaviors; others enable us to intervene in these processes, to change and perhaps to control behaviors, traits, or abilities. Although it will be impossible to fully canvass the range of questions neuroethics has thus far contemplated, discussion of the issues raised by a few neurotechnologies will allow me to illustrate the range of questions neuroethics entertains. The following is a non-exhaustive list of topics that fall under the general rubric of neuroethics.

    2.1 The ethics of enhancement

    While medicine’s traditional goal of treating illness is pursued by the development of drugs and other treatments that counteract the detrimental effects of disease or insult, the same kinds of compounds and methods that are being developed to treat disease may also enhance normal cognitive functioning. We already possess the ability to improve some aspects of cognition above baseline, and will certainly develop other ways of doing so. Thus, a prominent topic in neuroethics is the ethics of neuroenhancement: What are the arguments for and against the use of neurotechnologies to enhance one’s brain’s capacities and functioning? Proponents of enhancement are sometimes called “transhumanists,” and opponents are identified as “bioconservatives”. These value-laden appellations may unnecessarily polarize a debate that need not pit extreme viewpoints against each other, and that offers many nuanced intermediate positions that recognize shared values (Parens, 2005) and make room for embracing the benefits of enhancement while recognizing the need for some type of regulation (e.g. Lin and Allhoff, 2008). The relevance of this debate itself depends to some extent upon a philosophical issue familiar to traditional bioethicists: the notorious difficulty of identifying the line between disease and normal function, and the corresponding difference between treatment and enhancement. However, despite the difficulty attending the principled drawing of this line, there are already clear instances in which a technology such as a drug is used with the aim of improving a capacity or behavior that is by no means clinically dysfunctional, or with the goal of improving a capacity beyond the range of normal functioning. One common example is the use, now widespread on college campuses and beyond, of methylphenidate, a stimulant typically prescribed for the treatment of ADHD. Known by the brand name Ritalin, methylphenidate has been shown to improve performance on working memory, episodic memory and inhibitory control tasks.
Many students use it as a study aid, and the ethical standing of such off-label use is a focus of debate among neuroethicists (Sahakian, 2007; Greely et al., 2008). As in the example above, the enhancements neuroethicists most often discuss are cognitive enhancements: technologies that allow normal people to function cognitively at a higher level than they might without use of the technology (Knafo and Venero, 2015). One standing theoretical issue for neuroethics is a careful and precise articulation of whether, how and why cognitive enhancement has a philosophical status different than any other kind of enhancement, such as enhancement of physical capacities by the use of steroids (Dresler, 2019). Often overlooked are other interesting potential neuroenhancements. These are less frequently discussed than cognitive enhancements, but just as worthy of consideration. They include social/moral enhancements, such as the use of oxytocin to enhance pro-social behavior, and other noncognitive but biological enhancements, such as potential physical performance enhancers controlled by brain-computer interfaces (BCIs) (see, e.g. Savulescu and Persson, 2012; Douglas, 2008; Dubljević and Racine, 2017; Annals of NYAC, 2004). In many ways, discussions regarding these kinds of enhancement effectively recapitulate the cognitive enhancement debate, but in some respects they raise different concerns and prompt different arguments.

    2.1.1 Arguments for Enhancement

    Naturalness: Although the aim of cognitive enhancement may at first seem ethically questionable at best, it is plausible that humans naturally engage in many forms of enhancement, including cognitive enhancement. Indeed, we typically applaud and value these efforts. After all, the aim of education is to cognitively enhance students (which, we now understand, occurs by changing their brains), and we look askance at those who devalue this particular enhancement, rather than at those who embrace it. So some kinds of cognitive enhancement are routine and unremarkable. Proponents of neuroenhancement will argue that there is no principled difference between the enhancements we routinely engage in, and enhancement by use of drugs or other neurotechnologies. Many in fact argue that we are a species whose nature it is to develop and use technology for augmenting our capacities, and that continual pursuit of enhancement is a mark of the human.

    Cognitive liberty: Those who believe that “cognitive liberty” (see section 2.2 below) is a fundamental right argue that an important element of the autonomy at stake in cognitive liberty is the liberty to determine for ourselves what to do with our minds and to them, including cognitive enhancement, if we so choose. Although many who champion “cognitive liberty” do so in the context of a strident political libertarianism (e.g. Boire, 2001), one can recognize the value of cognitive liberty without swallowing an entire political agenda. So, for example, even if we think that there is a prima facie right to determine our own cognitive states, there may be justifiable limits to that right. More work needs to be done to establish the boundaries of the cognitive liberty we ought to safeguard.

    Utilitarian arguments: Many proponents of cognitive enhancement point to the positive effects of enhancement and argue that the benefits outweigh the costs. In these utilitarian arguments it is important to consider the positive and negative effects not only for individuals, but also for society more broadly (see, e.g. Selgelid, 2007).
    Deontological arguments: Sometimes enhancements are argued to be an avenue for leveling the playing field, in pursuit of fairness and equity. Such arguments are bolstered by the finding that at least for some interventions, enhancement effects are greater for those who have lower baseline functioning than those starting with a higher baseline (President’s Commission on Bioethics, 2015).

    Practical arguments: These often point to the difficulty in enforcing regulations of extant technology, or the detrimental effects of trying to do so. They tend to be not really arguments in favor of enhancement, but rather reasons not to oppose its use.

    2.1.2 Arguments against Enhancement

    There are a variety of arguments against enhancement. Most fall into the following types:

    Harms: The simplest and most powerful argument against enhancement is the claim that brain interventions carry with them the risk of harm, risks that make the use of these interventions unacceptable. The low bar for acceptable risk is an effect of the context of enhancement: risks deemed reasonable to incur when treating a deficiency or disease with the potential benefit of restoring normal function may be deemed unreasonable when the payoff is simply augmenting performance above a normal baseline. Some suggest that no risk is justified for enhancement purposes. In evaluating the strength of a harm-based argument against enhancement, several points should be considered: 1) What are the actual and potential harms and benefits (medical and social) of a given enhancement? 2) Who should make the judgments about appropriate tradeoffs? Different individuals may judge differently at what point the risk/benefit threshold occurs, and their judgments may depend upon the precise natures of the risks and benefits. Notice, too, that the harm argument is toothless against enhancements that don’t pose any risks.

    Unnaturalness: A number of thinkers argue, in one form or another, that use of drugs or technologies to enhance our capacities is unnatural, and the implication is that unnatural implies immoral. Of course, to be a good argument, more reason has to be given both for why it is unnatural (see an argument for naturalness, above), and for why naturalness and morality align. Some arguments suggest that manipulating our cognitive machinery amounts to tinkering with “God-given” capacities, and usurping the role of God as creator can be easily understood as transgressive in a religious-moral framework. Despite its appeal to religious conservatives, a neuroethicist may want to offer a more ecumenical or naturalistic argument to support the link between unnatural and immoral, and will have to counter the claim, above, that it is natural for humans to enhance themselves.

    Diminishing human agency: Another argument suggests that the effect of enhancement will be to diminish human agency by undermining the need for real effort, and allowing for success with morally meaningless shortcuts. Human life will lose the value achieved by the process of striving for a goal and will be belittled as a result (see, e.g. Schermer, 2008; Kass, 2003). Although this is a promising form of argument, more needs to be done to undergird the claims that effort is intrinsically valuable. Recent work suggests no general argument to this effect is forthcoming (Douglas, 2019). After all, few find compelling the argument that we ought to abandon transportation by car for horses, walking, or bicycling, because these require more effort and thus have more moral value.
    The hubris objection: This interesting argument holds that the type of attitude that seems to underlie pursuit of such interventions is morally defective in some way, or is indicative of a morally defective character trait. So, for example, Michael Sandel suggests that the attitude underlying the attempt to enhance ourselves is a “Promethean” attitude of mastery that overlooks or underappreciates the “giftedness of human life.” It is the expression and indulgence of a problematic attitude of dominion toward life to which Sandel primarily objects: “The moral problem with enhancement lies less in the perfection it seeks than in the human disposition it expresses and promotes” (Sandel, 2002). Others have pushed back against this tack, arguing that the hubris objection against enhancement is at base a religious one, or that it fundamentally misunderstands the concepts it relies upon (Kahane, 2011).

    Equality and Distributive Justice: One question that routinely arises with new technological advances is “who gets to benefit from them?” As with other technologies, neuroenhancements are not free. However, worries about access are compounded in the case of neuroenhancements (as they may also be with other learning technologies). As enhancements increase capacities of those who use them, they are likely to further widen the already unconscionable gap between the haves and have-nots: We can foresee that those already well-off enough to afford enhancements will use them to increase their competitive advantage against others, leaving further behind those who cannot afford them. Not all arguments in this vein militate against enhancement. For example, the finding mentioned above -- that at least with some cognitive enhancement technologies, those who have lower baseline functioning experience greater improvements than those starting at a higher level -- could ground pro-enhancement fairness and equity arguments for leveling the playing field (President’s Commission on Bioethics, 2015). As public consciousness about racial and economic disparities increases, we should expect more neuroethical work on this topic. Although one can imagine policy solutions to distributive justice concerns, such as having enhancements covered by health insurance, having the state distribute them to those who cannot afford them, etc., widespread availability of neuroenhancements will inevitably raise questions about coercion.

    Coercion: The prospect of coercion is raised in several ways. Obviously, if the state decides to mandate an enhancement, treating its beneficial effects as a public health issue, this is effectively coercion. We see this currently in the backlash against vaccinations: they are mandated with the aim of promoting public health, but in some minds the mandate raises concerns about individual liberty. I would submit that the vaccination case demonstrates that at least on some occasions coercion is justified. The question is whether coercion could be justifiable for enhancement, rather than for harm prevention. Although some coercive ideas, such as the suggestion that we put Prozac or other enhancers in the water supply, are unlikely to be taken seriously as a policy issue (however, see Appel 2010 [2011]), less blatant forms of coercion are more realistic.
For example, if people immersed in tomorrow’s competitive environment are in the company of others who are reaping the benefits from cognitive enhancement, they may feel compelled to make use of the same techniques just to remain competitive, even though they would rather not use enhancements. The danger is that respecting the autonomy of some may put pressure on the autonomy of others.

    There is unlikely to be any categorical resolution of the ethics of enhancement debate. The details of a technology will be relevant to determining whether a technology ought to be made available for enhancement purposes: we ought to treat a highly enhancing technology that causes no harm differently from one that provides some benefit at noticeable cost. Moreover, the magnitude of some of the equality-related issues will depend upon empirical facts about the technologies. Are neurotechnologies equally effective for everyone? As mentioned, there is evidence that some known enhancers such as the psychostimulants are more effective for those with deficiencies than for the unimpaired: studies suggest the beneficial effects of these drugs are proportional to the degree to which a capacity is impaired (Hussain et al., 2011). Other reports claim that normal subjects’ capacities are not actually enhanced by these drugs, and some aspects of functioning may actually be impaired (Mattay, et al., 2000; Ilieva et al., 2013). If this is a widespread pattern, it may alleviate some worries about distributive justice and contributions to social and economic stratification, since people with a deficit will benefit proportionately more than those using the drug for enhancement purposes. Bear in mind, however, that biology is rarely that equitable, and it would be surprising if this pattern turned out to be the norm. Since the technologies that could provide enhancements are extremely diverse, ranging from drugs to implants to genetic manipulations, assessment of the risks and benefits and the way in which these technologies bear upon our conception of humanity will have to be empirically grounded.

    2.2 Cognitive liberty

    Freedom is a cornerstone value in liberal democracies like our own, and one of the most cherished kinds of freedom is freedom of thought. The main elements of freedom of thought, or “cognitive liberty” as it is sometimes called (Sententia, 2013), include privacy and autonomy. Both of these can be challenged by the new developments in neuroscience. The value of, potential threat to, and ways to protect these aspects of freedom are a concern for neuroethics. Several recent papers have posited novel rights in this realm, such as rights to cognitive liberty, to mental privacy, to mental integrity, and to psychological continuity (Ienca and Andorno, 2017), or to psychological integrity and mental self-determination (Bublitz, 2020).

    2.2.1 Privacy

    As the framers of our constitution were well aware, freedom is intimately linked with privacy: even being monitored is considered potentially “chilling” to the kinds of freedoms our society aims to protect. One type of freedom that has been championed in American jurisprudence is “the right to be let alone” (Warren and Brandeis, 1890), to be free from government or other intrusion in our private lives. In the past, mental privacy could be taken for granted: the first-person accessibility of the contents of consciousness ensured that the contents of one’s mind remained hidden to the outside world, until and unless they were voluntarily disclosed.
Instead, the battles for freedom of thought were waged at the borders where thought meets the outside world -- in expression -- and were won with the First Amendment’s protections for those freedoms (note, however, that these protections are only against government infringement). Over the last half century, technological advances have eroded or impinged upon many traditional realms of worldly privacy. Most of the avenues for expression can be (and increasingly are) monitored by third parties. It is tempting to think that the inner sanctum of the mind remains the last bastion of real privacy. This may still be largely true, but even the privacy of the mind can no longer be taken for granted. Our neuroscientific achievements have already made significant headway in allowing others to discern some aspects of our mental content through neurotechnologies. Noninvasive methods of brain imaging have revolutionized the study of human cognition and have dramatically altered the kinds of knowledge we can acquire about people and their minds. Neither is the threat to mental privacy as simple as the naive claim that neuroimaging can read our thoughts, nor are the capabilities of imaging so innocuous and blunt that we needn’t worry about that possibility. A focus of neuroethics is to determine the real nature of the threat to mental privacy, and to evaluate its ethical implications, many of which are relevant to legal, medical, and other social issues (Shen, 2013). For example, in a world in which the bastion of the mind may be lowering its drawbridges, do we need extra protections? Doing so effectively will require both a solid understanding of the neuroscientific technologies and the neural bases of thought, as well as a sensitivity to the ethical problems raised by our growing knowledge and ever-more-powerful neurotechnologies. These dual necessities illustrate why neuroethicists must be trained both in neuroscience and in ethics. In what follows I briefly discuss the most relevant neurotechnology and its limitations and then canvass a few ways in which privacy may be infringed by it.

    An illustration: Potential threats to privacy with functional MRI

    One of the most prominent neurotechnologies poised to pose a threat to privacy is Magnetic Resonance Imaging, or MRI. MRI can provide both structural and functional information about a person’s brain with minimal risk and inconvenience. In general, MRI is a tool that allows researchers noninvasively to examine or monitor brain structure and activity, and to correlate that structure or function with behavior. Structural or anatomical MRI provides high-resolution structural images of the brain. While structural imaging in the biosciences is not new, MRI provides much higher resolution and better ability to differentiate tissues than prior techniques such as x-rays or CT scans. However, it is not structural but functional MRI (fMRI) that has revolutionized the study of human cognition. fMRI provides information about correlates of neuronal activity, from which neural activity can be inferred. Recent advances in analysis methods for neuroimaging data such as multi-voxel pattern analysis and related techniques now allow relatively fine-grained “decoding” of brain activity. Decoding involves probabilistic matching, using machine learning, of an observed pattern of brain activation with experimentally established correlations between activity patterns and some kind of functional variable, such as task, behavior, or content.
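    To make the notion of “decoding” concrete, the following is a minimal sketch (in Python; it is not from the SEP entry itself) of pattern classification in the multi-voxel style described above: a classifier is trained on labeled activation patterns and evaluated on held-out trials. The data are simulated, and every detail (the trial counts, the two conditions, the scikit-learn logistic-regression decoder) is an illustrative assumption rather than any particular study's pipeline; real analyses add preprocessing, alignment across subjects, and more careful cross-validation.

        # Toy MVPA-style decoding: supervised matching of activation
        # patterns to experimental conditions. All data are simulated.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_voxels = 200, 500

        # Two hypothetical stimulus conditions (say, face vs. house);
        # condition 1 shifts the mean response of 20 "informative" voxels.
        labels = rng.integers(0, 2, size=n_trials)
        patterns = rng.normal(size=(n_trials, n_voxels))
        patterns[labels == 1, :20] += 0.5

        # Probabilistic matching: the classifier learns the established
        # pattern/condition correlations, then predicts held-out trials.
        decoder = LogisticRegression(max_iter=1000)
        accuracy = cross_val_score(decoder, patterns, labels, cv=5).mean()
        print(f"cross-validated decoding accuracy: {accuracy:.2f}")

    Accuracy reliably above the 0.5 chance level on held-out trials is what licenses the inference that the measured patterns carry information about the condition; the same logic, scaled up to richer label sets, underlies the mental-privacy worries discussed below.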
    The kind of information provided by functional imaging promises to be useful for three goals: decoding mental content, diagnosis, and prediction. Neuroethical questions arise in all these areas. Before discussing these issues, it is important to remember that neuroimaging is a technology that is subject to a number of significant limitations, and these technical issues limit how precise the inferences can be. For example:

    • The correlations between the fMRI signal and neural activity are rough: the signal is delayed in time from the neuronal activity, and spatially smeared, thus limiting the spatial and temporal precision of the information that can be inferred. A number of dynamic factors relate the fMRI signal to activity, and the precise underlying model is not yet well understood.
    • There is relatively low signal-to-noise, necessitating averaging across trials and often across people.
    • Individual brains differ both in brain structure and in function. Variability makes it difficult to determine when differences are clinically or scientifically relevant, and leads to noisy data. Due to natural individual variability in structure and function, and brain plasticity (especially during development), even large differences in structure or deviation from the norm may not be indicative of any functional deficiency. Cognitive strategies can also affect variability in the data. These sources of variability can complicate the analysis of data and provide even more leeway for differences to exist without implying dysfunction.
    • Activity in a brain area does not entail that the region is necessary for performance of the task.
    • fMRI is so sensitive to motion that it would be virtually impossible to get information from a noncompliant subject. This makes the prospect of reading content from an unwilling mind virtually impossible.

    Without appreciating these technical issues and the resulting limits to what can legitimately be inferred from fMRI, one is likely to overestimate or mischaracterize the potential threat that it poses. In fact, much of the fear of mindreading expressed in non-scientific publications stems from a lack of understanding of the science (Racine, 2015). For example, there is no scientific basis to the worry that imaging would enable the reading of mental content without our knowing it. Thus, fears that the government is able to remotely or covertly monitor the thoughts of citizens are unfounded.

    2.2.1.1 Decoding of mental content

    Noninvasive ways of inferring neural activity have led many to worry that mindreading is possible, not just in theory, but even now. Using decoding techniques, fMRI can be used, for example, to reconstruct a visual stimulus from activity of the visual cortex while a subject is looking at a scene, or to determine whether a subject is looking at a familiar face, or hearing a particular sound. If mental content supervenes on the physical structure and function of our brains, as most philosophers and neuroscientists think it does, then in principle it should be possible to read minds by reading brains. Because of the potential to identify mental content, decoding raises issues about mental privacy. Despite the remarkable advances in brain imaging technology, however, when it comes to mental content our current abilities to “mind-read” are relatively limited, but continually improving (Roskies, 2015, 2020). Although some aspects of content can be decoded from neural data, these tend to be quite general and nonpropositional in character.
The ability to infer semantic meaning from ideation or visual stimulation tends to work best when the realm of possible contents is quite constrained. Our current abilities allow us to infer some semantic atoms, such as representations denoting one of a prespecified set of concrete objects, but not unconstrained content, or entire propositions. Of course, future advances might make worries about mindreading more pressing. For example, if we develop robust means for decoding compositional meaning, we may one day come to be able to decode propositional thought. Still, some worries are warranted. Even if neuroimaging is not at the stage where mindreading is possible, it can nonetheless threaten aspects of privacy in ways that should give us pause. It is possible to identify individuals on the basis of their brain scans (Valizadeh et al., 2018). In addition, neuroimaging can provide some insights into attributes of people that they may not want known or disclosed. In some cases, subjects may not even know that these attributes are being probed, thinking they are being scanned for other purposes. A willing subject may not want certain things to be monitored. In what follows, I consider a few of these more realistic worries.

    Implicit bias: Although explicitly acknowledged racial biases are declining, this may be due to a reporting bias attributable to the increased negative social valuation of racial prejudice. Much contemporary research now focuses on examining implicit racial biases, which are automatic or unconscious reflections of racial bias. With fMRI and EEG, it is possible to interrogate implicit biases, sometimes without the subject’s awareness that that is what is being measured (Chekroud, 2014). While there is disagreement about how best to interpret implicit bias results (e.g., as a measure of perceived threat, as in-group/out-group distinctions, etc.), and what relevance they have for behavior, the possibility that implicit biases can be measured, either covertly or overtly, raises scientific and ethical questions (Molenberghs and Louis, 2018). When ought this information to be collected? What procedures must be followed for subjects legitimately to consent to implicit measures? What significance should be attributed to evidence of biases? What kind of responsibility should be attributed to people who hold them? What predictive power might they hold? Should they be used for practical purposes? One can imagine obvious but controversial potential uses for implicit bias measures in legal situations, in employment contexts, in education, and in policing, all areas in which concerns of social justice are significant.

    Lie detection: Several neurotechnologies are being used to detect deception or neural correlates of lying or concealing information in experimental situations. For example, both fMRI measures and EEG analysis techniques relying on the P300 signal have been used in the laboratory to detect deception with varying levels of success. These methods are subject to a variety of criticisms (Farah et al., 2014). For example, almost all experimental studies fail to study real lying or deception, but instead investigate some version of instructed misdirection. The context, tasks, and motivations differ greatly between actual instances of lying and these experimental analogs, calling into question the ecological validity of these experimental techniques.
Moreover, accuracy, though significantly higher than chance, is far from perfect, and because of the inability to determine base rates of lying, error rates cannot be effectively assessed. Thus, we cannot establish their reliability for real-world uses. Finally, both physical and mental countermeasures decrease the accuracy of these methods (Hsu et al., 2019). Despite these limitations, several companies have marketed neurotechnologies for this purpose.

    Character traits: Neurotechnologies have shown some promise in identifying or predicting aspects of personality or character. In an interesting study aimed at determining how well neuroimaging could detect lies, Greene and colleagues gave subjects in the fMRI scanner a prediction task in a game of chance that they could easily cheat on. By using statistical analysis the researchers could identify a group of subjects who clearly cheated and others who did not (Greene and Paxton, 2009). Although they could not determine with neuroimaging on which trials subjects cheated, there were overall differences in brain activation patterns between cheaters and those who played fair and were at chance in their predictions. Moreover, Greene and colleagues repeated this study at several months’ remove, and found that the character trait of honesty or dishonesty was stable over time: cheaters the first time were likely to cheat again (indeed, they cheated even more the second time), and honest players remained honest the second time around. Also interesting was the fact that the brain patterns suggested that cheaters had to activate their executive control systems more than noncheaters, not only when they cheated, but also when deciding not to cheat. While the differential activations cannot be linked specifically to the propensity to cheat rather than to the act of cheating, the work suggests that these task-related activation patterns may reflect correlates of trustworthiness. The prospect of using methods for detecting these sorts of traits or behaviors in real-world situations raises a host of thorny issues. What level of reliability should be required for their employment? In what circumstances should they be admissible as evidence in the courtroom? For other purposes? Using lie detection or decoding techniques from neuroscience in legal contexts may raise constitutional concerns: Is brain imaging a search or seizure as protected by the 4th Amendment? Would its forcible use be precluded by 5th Amendment rights? These questions, though troubling, might not be immediately pressing: in a landmark case (US v. Semrau, 2012) the court ruled that fMRI lie detection is inadmissible, given its current state of development. However, the opinion left open the possibility that it may be admissible in the future, if methods improve. Finally, to the extent that relevant activation patterns correlate significantly with activation patterns on other tasks, or with a task-free measure such as default-network activity, information about a person’s character could be inferred merely by scanning them while they do something innocuous, without their knowledge of the kind of information being sought. Thus, there are multiple dimensions to the threat to privacy posed by imaging techniques.

    2.2.1.2 Diagnosis

    Increasingly, neuroimaging information can bear upon diagnoses for diseases, and in some instances may provide predictive information prior to the onset of symptoms.
Work on the default network is promising for improving diagnosis in certain diseases without requiring that subjects perform specific tasks in the scanner (Buckner et al., 2008). For some diseases, such as Alzheimer’s disease, MRI promises to provide diagnostic information that previously could only be established at autopsy (Liu et al., 2018). fMRI signatures have also been linked to a variety of psychiatric diseases, although not yet with the reliability required for clinical diagnosis (Aydin et al., 2019). Neuroethical issues also arise regarding ways to handle incidental findings, that is, evidence of asymptomatic tumors or potentially benign abnormalities that appear in the course of scanning research subjects for non-medical purposes (Illes et al. 2006; Illes and Sahakian, 2011). The ability to predict future functional deficits raises a host of issues, many of which have been previously addressed by genethics (the ethics of genetics), since both provide information about future disease risk. What may be different is that the diseases for which neurotechnologies are diagnostically useful are those that affect the brain, and thus potentially mental competence, mood, personality, or sense of self. As such they may raise peculiarly neuroethical questions (see below).

    2.2.1.3 Prediction

    As discussed, decoding methods allow one to associate observed brain activity with previously observed brain/behavior correlations. In addition, such methods can also be used to predict future behaviors, insofar as these are correlated with observations of brain activity patterns. Some studies have already reported predictive power over upcoming decisions (Soon et al., 2008). Increasingly, we will see neuroscience or neuroimaging data that will give us some predictive power over longer-range future behaviors. For example, brain imaging may allow us to predict the onset of psychiatric symptoms such as psychotic or depressive episodes. In cases in which this behavior is indicative of mental dysfunction it raises questions about stigma, but also may allow more effective interventions. One confusion regarding neuroprediction should be clarified immediately: When neuroimages are said to “predict” future activity, it means they

  • On Genome Editing and the Politics of the Human Future

    Inevitable Progress: Genome Editing, Sovereign Science & the Politics of the Human Future
    J. Benjamin Hurlbut, School of Life Sciences, Arizona State University
    Bar-Ilan Colloquium for Science, Technology and Society, Sunday, 17 January, 18:00

    Following the advent of CRISPR/Cas9, leading scientists expressed worries that this powerful and accessible genome editing tool might be applied to human embryos, creating heritable genetic changes in the human germline. Even as they called for strict limits, many also asserted that heritable human genome editing was inevitable. Several years later this prophecy was fulfilled when the world learned that a young Chinese scientist, He Jiankui, had produced babies whose genomes had been edited. This talk will explore how an imaginary of inevitability shapes approaches to ethical deliberation and governance of emerging biotechnology, focusing on the case of human genome editing. Drawing on interviews with He Jiankui and his colleagues, this talk will examine his motivations, the advice and support he received from senior figures in the sciences and government, and the reactions from the international scientific community that followed. I show how He’s project was situated within, rather than an aberration from, an approach to ethics and governance that is regulated by the presumption of technological inevitability. I argue that the imaginary of inevitability is an imaginary of right governance: it asserts relations between science, technology and society that construct ethical deliberation as necessarily reactive, science as at once intrinsically progressive and sovereign, and governance as driven by and subsidiary to technological innovation. Predicting the inevitable illicitly authorizes science to define the parameters of deliberation even as it empowers scientists to declare what the future shall be.

  • https://www.rep.routledge.com

    Moral enhancement
    By Forsberg, Lisa and Douglas, Thomas
    Routledge Encyclopedia of Philosophy, Taylor and Francis. DOI: 10.4324/9780415249126-L169-1. Version v1, published online 2021. Retrieved June 06, 2023, from https://www.rep.routledge.com/articles/thematic/moral-enhancement/v-1

    Contents: Article Summary; 1. What is moral enhancement?; 2. Arguments for moral bioenhancement; 3. Objections to successful moral bioenhancement; 4. Misuse or misfiring; 5. Further questions; Bibliography

    Article Summary: Moral enhancements aim to morally improve a person, for example by increasing the frequency with which an individual does the right thing or acts from the right motives. Most of the applied ethics literature on moral enhancement focuses on moral bioenhancement – moral enhancement pursued through biomedical means – and considers examples such as the use of drugs to diminish aggression, suppress implicit racial biases, or amplify empathy. A number of authors have defended the voluntary pursuit of moral bioenhancement, or the development of technologies that would enable it. They have highlighted the need for humans to morally improve themselves in order to address moral failures such as the oppression of women, the mistreatment of animals, and anthropogenic climate change. They have also emphasised the moral similarities between moral bioenhancement and more familiar forms of moral enhancement, such as that achieved through childhood education, introspective reflection, and engagement with literature. Critics of moral enhancement have argued that it may undermine our freedom to ‘fall’ (i.e. be immoral), and therefore our moral agency, or exacerbate the domination of individuals by political authorities. They have also questioned the potential for biomedical interventions to produce the deepest and most valuable forms of moral improvement, and have highlighted the risks that technologies for moral bioenhancement might misfire or be intentionally misused, thereby producing moral deterioration. Underlying some of these worries is the observation that there is little agreement on which psychological transformations would constitute moral improvements, and in which contexts. Defenders of moral enhancement have made various proposals for resolving or side-stepping these disagreements, but it remains unclear how far these proposals can take us beyond establishing consensus on the worst types of moral failure.

  • https://www.youtube.com
    CRISPR, Gene Editing, and Human Flourishing

    On December 4th we were grateful to partner with the Cade Museum for Creativity and Invention to host Stanford neurobiologist Bill Hurlbut. Bill, a physician, research scientist, ethicist, and Trinity Forum Senior Fellow, discussed exciting advancements in the field of gene editing, and the moral and social implications of this technological achievement. Hurlbut has referred to CRISPR technology as “the Swiss Army knife of genetics,” noting that it has opened exciting possibilities for the treatment, even eradication, of various genetic diseases. At the same time, it has made deep and thorny ethical dilemmas urgent, such that Hurlbut has also called it “the deepest challenge our species has ever faced.” We hope you enjoy this conversation! Special thanks to our sponsors: Richard and Phoebe Miles and Bill and Lee Schroeder. The song is Life by Matthew L. Fisher - https://www.youtube.com/watch?v=bZf0qpYnukk The painting is California Ranch by William Keith, 1908. Click here to support the work of the Trinity Forum: https://rb.gy/pdugyd

  • https://www.youtube.com
    Manufacturing Minds

    Please enjoy this recording of Manufacturing Minds: Collegium Institute's annual Fall Magi Project Event. The Magi Project for Science & Theology hosts and delivers courses, talks, seminars and other outreach activities in science and faith, helping people to think about their understanding of the physical Universe and their relationship with God, and how these ideas fit together in a complementary way. Can human minds be manufactured? What is the meaning of consciousness, and how might neuroscientists resolve the mysteries of mind and its implications for decision making? This conversation with two eminent Stanford neuroscientists, Prof. William Newsome (Vincent V.C. Woo Director of the Wu Tsai Neurosciences Institute) and Prof. William Hurlbut, MD (Stanford Medical School) explores these questions and more. Featuring: Prof. William Newsome is the Harman Family Provostial Professor of Neurobiology at the Stanford University School of Medicine, as well as Director of the Neurosciences Institute at Stanford, and a leading investigator in systems and cognitive neuroscience. His research on the neural mechanisms underlying visual perception and decision making has garnered numerous awards, including the Rank Prize in Optoelectronics, the Spencer Award, the Distinguished Scientific Contribution Award of the American Psychological Association, the Dan David Prize of Tel Aviv University, the Karl Spencer Lashley Award of the American Philosophical Society, and the Champalimaud Vision Award. Prof. William Hurlbut, MD, is Adjunct Professor and Senior Research Scholar in Neurobiology at the Stanford Medical School. He is the author of numerous publications on science and ethics including the co-edited volume Altruism and Altruistic Love: Science, Philosophy, and Religion in Dialogue (2002, Oxford), and “Science, Religion and the Human Spirit” in the Oxford Handbook of Religion and Science (2008). Formerly, he worked for NASA and was a member of the Chemical and Biological Warfare Working Group at the Center for International Security and Cooperation, and he served for nearly a decade on the President’s Council on Bioethics. This panel discussion is moderated by philosopher Janice Tzuling Chik, Ph.D., Inaugural John and Daria Barry Foundation Fellow at the University of Pennsylvania’s Program for Research on Religion and Urban Civil Society (PRRUCS) and Collegium Institute Senior Scholar. Prof. Chik is Assistant Professor of Philosophy at Ave Maria University and an Associate Member of the Aquinas Institute, Blackfriars Hall, Oxford. This webinar was cosponsored by the Program for Research on Religion and Urban Civil Society (PRRUCS) at the University of Pennsylvania, the Cornell Chapter of the Society of Catholic Scientists, the Zephyr Institute, the University of Pennsylvania Biological Basis of Behavior (BBB) program, and the University of Pennsylvania Department of Neuroscience.