80 results found (0.051 seconds)

  • https://stpaulcenter.com
    • Suggested

    Steven J. Jensen, Nova et Vetera, Spring 2014 (Vol. 12, No. 2)

  • https://www.popepaulvi.com
    • PDF
    • Suggested

    Essay

Transhumanist Medicine: Can We Direct Its Power to the Service of Human Dignity?

Renée Mirkes, OSF, PhD (Center for NaProEthics, the ethics division of the Pope Paul VI Institute, Omaha, NE, USA)

The Linacre Quarterly 1-12, © Catholic Medical Association 2019. Article reuse guidelines: sagepub.com/journals-permissions. DOI: 10.1177/0024363919838134. journals.sagepub.com/home/lqr. Corresponding author: Renée Mirkes, OSF, PhD, Center for NaProEthics, the ethics division of the Pope Paul VI Institute, Omaha, NE, USA. Email: ethics@popepaulvi.com

Abstract: The medicalization of transhumanist technologies demands our prompt and undivided attention. This article surveys the principal body/mind enhancement goals of transhumanist medicine and the means it would employ (genetic, robo-, info-, and nanotechnologies) to accomplish those ends (Part One). Second, it engages Christian anthropological and natural law principles to evaluate the populist and essentialist concerns these therapeutic/enhancement interventions provoke (Part Two). And, third, it proposes the formation of a Catholic medical think tank to appraise whether transhumanist biotechnologies can serve human dignity and, to the extent they can, to formulate wise clinical/administrative guidelines for their inclusion in US Catholic healthcare settings (Part Three).

Nontechnical summary: This article explores the body/mind enhancement goals of transhumanist medicine, evaluates the biotechnological means to accomplish those therapeutic/enhancement goals, and suggests the formation of a Catholic medical think tank to formulate wise clinical/administrative guidelines for the inclusion of genetic, robo-, info-, and nanotechnologies in US Catholic healthcare settings.

Keywords: Body/mind enhancement; GRIN (genetic, robo-, info-, nanotechnologies); Transhumanism; Transhumanist medicine

In his Discourse on Method (1637), philosopher and mathematician René Descartes envisioned a radically new kind of medicine, one that would make humans healthy and fulfilled ad infinitum: healthy bodies beyond aging and degeneration and vigorous minds beyond their natural powers and competencies. The potential of current and future biotechnological interventions to alter what it means to be human might very well convert Descartes's dream into reality. The medicalization of these transhumanist technologies demands our prompt and undivided attention.

This article surveys the principal body/mind enhancement goals of transhumanist medicine and the means it would employ (genetic, robo-, info-, and nanotechnologies) to accomplish those ends (Part One). Second, it engages Christian anthropological and natural law principles to evaluate the populist and essentialist concerns these therapeutic/enhancement interventions provoke (Part Two). And, third, it proposes the formation of a Catholic medical think tank to appraise whether transhumanist biotechnologies can serve human dignity and, to the extent they can, to formulate wise clinical/administrative guidelines for their inclusion in US Catholic healthcare settings (Part Three).

Part One: Transhumanist Medicine: Goals and Means

Julian Huxley, first director general of UNESCO and president of the British Eugenics Society from 1959 to 1962, coined the term transhumanism. In his 1957 essay of the same name, he wrote: "the human species can, if it wishes, transcend itself—not just sporadically, an individual here in one way, an individual there in another way, but in its entirety, as humanity." And, once human beings finally take hold of their biological destiny, they will be "on the threshold of a new kind of existence, as different from ours as ours is from the Peking Man" (Huxley 1957).
Implicit in Huxley's statement is a definition of transhumanism that is, at root, a medical ideology, one promoting a technologically mediated evolution that, according to the contemporary World Transhumanist Association (WTA), will enhance the mind, body, and psyche of the human being, taking the human body beyond its species-typical structure, function, and abilities (Wolbring 2008). The WTA also defines the basic premise of transhumanism and, therefore, of transhumanist medicine: the belief that the present form of Homo sapiens does not represent the end of its development but a relatively nascent phase. GRIN technologies (genetic, robotic, info-, and nanotechnologies) will eventually artificially accelerate the natural evolutionary process, freeing the human being from the vagaries of random mutations and the incremental nature of variation and adaptation. As the Transhumanist Declaration states: "We favor morphological freedom—the right to modify and enhance one's body, cognition, and emotions" (Trippett 2018).

According to Oxford philosopher and cofounder of the World Transhumanist Association Nick Bostrom, transhumanism's goal is "to make good the half-baked project that is human nature" (McNamee and Edwards 2006, 514). Therefore, when transhumanist medicine sets its sights on overcoming evolution, it also presupposes surmounting disease, death, and human nature itself. This model of medicine replaces the traditional concept of medical therapy (using its biotechnical capacity to treat patients with disease or disabilities in order to restore them to a normal state of health) with the notion of enhancement: the technological alteration not of disease processes "but the normal workings of the human body and psyche, to augment or improve their native capacities and performances" (President's Council on Bioethics [PCB] 2003, 30). The insignia of the transhumanist movement, "h+" or "humanity plus," speaks for itself.
It defines enhancement beyond species-typical functioning as productive of at least two scenarios. "Humanity plus" people, transhumans, or superhumans: people who are better than well; people who have superpowers; people who retain their human bodies but are much faster, smarter, stronger, and healthier, and live longer (forever young) lives than unenhanced people. Or they will generate posthumans: people who, after abandoning their bodies completely, upload their consciousness or even their entire brains to computers, so they can "live a virtual life forever" on the earth or in space. Ray Kurzweil, Google's director of engineering, predicts we will be able to upload our entire brains to computers by 2045 ("How Soon Will We Be Able to Upload Our Minds to a Computer?" 2018).

The Transhumanist Declaration succinctly depicts the goals of the transhumanist model of medicine: "We envision the possibility of broadening human potential by overcoming aging, cognitive shortcomings, involuntary suffering, and our confinement to earth" (Sutton 2015, 117). The means of realizing these transhumanist goals are the various GRIN technological interventions, some of which are described below. Note the consistent pattern: initially, the biotechnology is prescribed for therapeutic ends for sick people; subsequently, it is used solely for enhancement ends for healthy persons.

Neuro-enhancements

● Early and primitive brain-machine interfaces (BCIs) have already been used for therapeutic purposes: to help restore some mobility to those with paralysis or to give partial sight to people with certain kinds of blindness (Masci 2016). Patients equipped with BCIs use their minds to control their wheelchairs, advanced neuroprosthetic limbs, and drones (Bohan 2017). Scientists predict that, in the not so distant future, BCIs will do everything from helping stroke victims regain speech and mobility to bringing people out of locked-in syndrome.
Daniel Faggella, a futurist who founded TechEmergence, a market research firm focusing on cognitive enhancement and the crossroads of technology and psychology, anticipates that BCI technology intended to ameliorate medical conditions will inevitably be put to enhancement uses. "Once we have boots on the ground and the ameliorative stuff becomes more normal," Faggella argues, "people will then start to say: we can do more with this." Doing more inevitably will involve augmenting brain function, which, in a relatively simple way, has also already begun. For instance, scientists have been using electrodes placed on the head to run a mild electrical current through the brain, a procedure known as transcranial direct-current stimulation (tDCS). Research shows that tDCS may increase brain plasticity, making it easier for neurons to fire. This, in turn, improves cognition, making it easier for people taking tests to learn and retain things, anywhere from new languages to mathematics (Masci 2016).

● In 2016, Elon Musk inaugurated the idea of "neural lace," an advanced BCI in which a person's biological brain seamlessly meshes with nonbiological computing (Bohan 2017). Although neural lace may not yet be ready for clinical application, Musk is dedicating hefty amounts of money to its development in his new research firm, Neuralink. And he's collaborating with another Silicon Valley futurist, Bryan Johnson, whose Kernel start-up is working on similar projects. For one, Kernel is focusing on "neuroprosthetics." Its researchers have broken the code for the storage and retrieval of memories in the hippocampus, paving the way for memory augmentation by an implant. Like mechanical prosthetics, neuroprosthetics will first be tested out in patients who are already suffering the progressive loss of their cognitive faculties and memory.
However, as leaders of Stanford University's NeuroTechnology Initiative predict, neuroprosthetics will be perfected to the point where they will be accessed as a valuable enhancement. As these researchers argue, BCIs "will transform medicine, technology, and society" and "future devices will likely not only restore, but also augment, human capacities" (Tracinski 2017). Similarly, Musk argues that adding a layer of digital intelligence to one's normally functioning brain with a neural lace implant, using it exclusively for enhancement ends, will allow humans to compete with artificial intelligence.

● Synthetic blood has thus far been manufactured for therapeutic goals. Engineered to clot more quickly than natural human blood, artificial blood could prevent people from bleeding to death and could also monitor a person's arteries, keeping them free of plaque and preventing a heart attack. Whether produced through nano- or genetic engineering, one obvious task for "smart blood" would be to increase the amount of oxygen a person's hemoglobin can carry. Anders Sandberg, neuroscientist and fellow at Oxford University's Future of Humanity Institute, explains: "In principle, the way our blood stores oxygen is very limited. So we could dramatically enhance our physical selves if we could increase the carrying capacity of hemoglobin. Smart blood would give you a lot more energy, which would be a kind of cognitive enhancement" (Masci 2016).

● Nootropic drugs (from nous, the Greek word for mind) are drugs that affect and theoretically enhance cognition. Popular with residents of Silicon Valley as a way to attain sharp mental function, nootropics come from a combination of exotic dietary supplements and research chemicals that gives an individual an edge in his job: improved memory, increased clarity, and enhanced problem-solving without side effects (Tracinski 2017).
In search of enhanced performance, some people are experimenting with the drug modafinil, a treatment intended for narcolepsy. Others regularly take selective serotonin reuptake inhibitors like Paxil and Zoloft to regulate their moods. Transhumanist researchers predict these drugs are the forerunners of a new generation of neuro-enhancers that promises shortcuts to ever greater intellectual prowess (Honigsbaum 2013).

Body Enhancements

Dr. Gregor Wolbring, a bioethicist and science and technology studies researcher at the University of Calgary, points out that the ever-increasing appearance of internal and external enhancements of the human body to treat injuries promotes a growing cultural demand for, and approval of, modifications of the human body, its structure, function, and abilities, beyond species-typical boundaries (Berger 2008).

● The development of artificial or bionic muscles is progressing rapidly (Berger 2007). Researchers discovered that the solution to the production of fast-contracting muscles is to use nanotechnology. The challenge for scientists is to simulate the intricacy of natural muscle in their artificial muscle systems. These bionic muscles would initially have therapeutic uses for patients whose muscles have been wasted by disease or destroyed in catastrophic events. But when the technology advances beyond the capacity of natural muscles, people could opt, for enhancement ends, to swap normal, but less agile, natural muscles for their bionic counterparts.

● Biohackers, citizen cyborgs, are enthusiastically getting radio frequency identification (RFID) chips implanted in their hands or wrists in do-it-yourself surgery in tattoo parlors. Possible uses: making tap-and-go payments, registering boarding passes, and opening a home or office door electronically. The chip would eliminate the need to carry keys and could also replace public transport cards.
With respect to more serious applications, RFIDs could soon be used on a national scale for identification and security, to replace paper passports, and to record personal medical data. Accident victims wearing RFIDs who are brought to the ER in need of a blood transfusion could immediately be scanned for their blood type and allergies, for their medical power of attorney, for their organ donor wishes, and for their end-of-life directives (Bohan 2017).

● Bionics and prosthetics are a form of bodily augmentation already being tested out by a small number of special users. Right now you can attend the Cyborg Olympics, a competition testing whose bionic limbs and robotic exoskeletons are the best. Exoskeletons that don't replace the normal human body but give it extra strength and, in some cases, extra dexterity are currently being used to help the paralyzed walk or, as a robotic glove, to help people with limited strength or range of motion in their hands. Exoskeletons are also beginning to be used in industrial applications to help factory workers execute heavy lifts more safely. The military sees significant value in exoskeletons that could help soldiers travel farther and faster and carry heavier loads, all with less fatigue. The ultimate goal for military applications is an armored robotic supersuit like "Iron Man" (Tracinski 2017).

Genetic Engineering

The CRISPR revolution began when Jennifer Doudna, University of California, Berkeley; Emmanuelle Charpentier, Max Planck Institute, Berlin; and Feng Zhang, Broad Institute of Harvard and MIT, realized that the CRISPR system in bacteria is programmable, that is, it can be customized to locate and then edit (disable, repair, or augment) any gene in any species: microorganisms, plants, animals, and humans.
In sum, the designable CRISPR-Cas9 is revolutionary in giving scientists and clinicians the ability to wield unparalleled control over the human genome, with the singular result of a radical face-lift for genetic research and genomic medicine.

Transhumanists have their eye on two current human applications of CRISPR technology. The first showcases Dr. Carl June, who recently led researchers from three institutions in the first clinical CRISPR trial. June enrolled approximately 18 terminal cancer patients in this phase-1 study, comprising the most extensive manipulation of the human genome to date. In this first-ever US CRISPR trial involving patients, June and his team are treating the patients' cancers (multiple myeloma, melanoma, and sarcoma) with CRISPR-edited cells. If these trials are even remotely successful, transhumanists predict (Bohan 2017) it won't be long before the public demands the use of these editing tools on early human IVF embryos to prevent genetic diseases. And, down the road, to design babies, that is, to edit early IVF embryos according to parents' wishes for traits such as eye and hair color and, even further down the road, for characteristics, like intelligence or athleticism, involving the engineering of a whole complex of genes.

And that's precisely why transhumanists are focused on the second of the CRISPR applications. Shoukhrat Mitalipov, director of the Center for Embryonic Cell and Gene Therapy, Oregon Health & Science University, led a team of researchers in programming CRISPR-Cas9 to target the MYBPC3 gene mutation that can cause hypertrophic cardiomyopathy (HCM), a disease causing sudden death in young athletes. Then they produced fifty-eight lab embryos by co-injecting CRISPR and sperm from a man who carried one copy of the mutant gene into the cytoplasm of each donor egg. Study results showed that CRISPR efficiently targeted the MYBPC3 gene mutation in 72.2 percent of the embryos.
Second, forty-two of the CRISPR'd embryos corrected the majority of the targeted mutations by copying the normal gene from the egg donor. And, third, all of the CRISPR'd embryos showed no off-target cuts and developed normally to their morula stage. These data suggested to Mitalipov et al. that human embryonic CRISPR therapy, if it should ever meet future safety, reliability, and ethical standards, could someday be used "to reduce the burden of [an] heritable disease [like HCM] on the family and eventually the human population" (Mirkes 2017).

Antiaging Technologies

● As futurist Nick Bostrom, director of the Future of Humanity Institute, a think tank at Oxford University, explains: "This may be the area where serious [genetic] enhancement first becomes possible, because it's much easier to do many things at the embryonic stage than it is in adults using traditional drugs or machine implants" (Masci 2016).

● George M. Church is a geneticist holding positions with Harvard Medical School and MIT. Recently, he wrote an article in the MIT Technology Review recounting his plans to reverse aging in dogs by correcting genetic errors that shorten the lives of certain canine breeds. But the subtitle of the article, "Biologist George Church says the idea is to live to 130 in the body of a 22-year-old," gets at the endgame of his genetic engineering endeavors. His company, Rejuvenate Bio, hopes to abolish biological mortality by curing aging, humanity's primary disease. Church and other proponents of antiaging medicine, hailing from respected universities and institutions throughout the country, have begun to convince lawmakers and healthcare providers to support their focus on treating conditions that accelerate aging. They insist that "technologies exist that will rejuvenate the aged and give them lives that resemble those of the super-agers" (Cox 2018).

● AgeX, a subsidiary of BioTime, is another research entity working on radical rejuvenation.
Founder Michael West hopes to accomplish his antiaging goals not by permanently altering the embryonic genome, as Church's germline genetic engineering research would do, but by temporarily reactivating embryonic gene pathways that would bring older adults to the state of a healthy twenty-year-old. In the not so distant future, West's adult patient, say age sixty, would undergo induced tissue regeneration (iTR), which would theoretically restore the patient to the twenty-year-old stage, from which point he would age normally. To stop the aging process altogether, the individual would have to repeat the iTR periodically when, for example, the patient once more ages to sixty, so the person would continually revert to the vitality of a twenty-year-old. The Salk Institute and the Weizmann Institute are also working on ways to regenerate people who are approaching old age.

● Another company, BioViva, is using gene-editing technology to lengthen the telomeres at the ends of chromosomes and thus surpass the Hayflick limit (forty to sixty normal human cell population divisions), defying cellular senescence and aging.

Part Two: Ethics Critique

The following ethics assessment relies on the populist and essentialist concerns President George W. Bush's bioethics council applied to the enhancement technologies that existed in the early years of the twenty-first century. I raise these concerns here because they help to adequately evaluate the more developed and sophisticated GRIN technologies of 2019.

Populist Concerns See Transhumanist Technologies as Threats to:

Safety and Efficacy

Using BCIs such as neural lace to produce smarter people, as Elon Musk hopes to do, runs the very real risk of overloading the brain's "carrying capacity." Neuroscience experts, including Martin Dresler, argue that the natural evolutionary process has already "forced brains to develop toward optimal ... functioning" (Masci 2016). When we try to ramp up intelligence beyond that point, we do so at our own risk.
Furthermore, given scientific ignorance regarding the interconnectivity between body and mind, changing the neural system may have unpredictable and deleterious impacts on other bodily systems. The effort to genetically engineer smarter people runs up against the wall of the genetic complexity of human intelligence. Many scientists estimate that a dance of thousands of genes is responsible for

  • https://academic.oup.com
    • Suggested


  • The Ethics of Generating Posthumans: Philosophical and Theological Reflections on Bringing New Persons into Existence

    Should transhuman and posthuman persons ever be brought into existence? And if so, could they be generated in a good and loving way? This study explores how society may respond to the actual generation of new kinds of persons from ethical, philosophical, and theological perspectives. Contributors to this volume address a number of essential questions, including the ethical ramifications of generating new life, the relationships that generators may have with their creations, and how these creations may consider their generation. This collection's interdisciplinary approach traverses the philosophical writings of Aristotle, Aquinas, Kant, Nietzsche, and Heidegger, alongside theological considerations from Jewish, Christian, and Islamic traditions. It invites academics, faith leaders, policy makers, and stakeholders to think through the ethical gamut of generating posthuman and transhuman persons.

  • https://www.youtube.com
    Engineering and Transhumanism

    Antonio Diéguez. Professor of Logic and Philosophy of Science, Universidad de Málaga. Transhumanism and bionics. Paolo Benanti. Professor of Ethics, Moral Theology and Bioethics, Pontificia Università Gregoriana. Robotics: the human is perfected, robots are humanized. Carissa Véliz. Associate Professor at the Faculty of Philosophy and the Institute for Ethics in AI, University of Oxford. Artificial intelligence and ethics. Privacy is power. Moderated by José Miguel Mohedano, Assistant Director of Integral Training at the Polytechnic School, Universidad Francisco de Vitoria. expandedreasonawards.org

  • https://plato.stanford.edu

    Stanford Encyclopedia of Philosophy

Neuroethics
First published Wed Feb 10, 2016; substantive revision Wed Mar 3, 2021

Neuroethics is an interdisciplinary field focusing on ethical issues raised by our increased and constantly improving understanding of the brain and our ability to monitor and influence it.

Entry contents: 1. The rise and scope of neuroethics; 2. The ethics of neuroscience (2.1 The ethics of enhancement; 2.1.1 Arguments for Enhancement; 2.1.2 Arguments against Enhancement; 2.2 Cognitive liberty; 2.2.1 Privacy; 2.2.2 Autonomy and authenticity; 2.3 Personal Identity; 2.4 Consciousness, life, and death; 2.5 Practical neuroethics; 2.6 Public perception of neuroscience; 2.6.1 The seductive allure; 2.6.2 Media Hype; 2.7 Neuroscience and justice); 3. The Neuroscience of Ethics; 4. Looking forward: New neurotechnologies; Bibliography; Academic Tools; Other Internet Resources; Related Entries

1. The rise and scope of neuroethics

Neuroethics focuses on ethical issues raised by our continually improving understanding of the brain, and by consequent improvements in our ability to monitor and influence brain function. Significant attention to neuroethics can be traced to 2002, when the Dana Foundation organized a meeting of neuroscientists, ethicists, and other thinkers, entitled Neuroethics: Mapping the Field.
A participant at that meeting, columnist and wordsmith William Safire, is often credited with introducing and establishing the meaning of the term “neuroethics”, defining it as ‘the examination of what is right and wrong, good and bad about the treatment of, perfection of, or unwelcome invasion of and worrisome manipulation of the human brain’ (Marcus, 2002, p.5). Others contend that the word “neuroethics” was in use prior to this (Illes, 2003; Racine, 2010), although all agree that these earlier uses did not employ it in a disciplinary sense, or to refer to the entirety of the ethical issues raised by neuroscience. Another attendee at that initial meeting, Adina Roskies, in response to a perceived lack of recognition of the potential novelty of neuroethics, penned “Neuroethics for the new millennium” (Roskies, 2002), an article in which she proposed a bipartite division of neuroethics into the “ethics of neuroscience”, which encompasses the kinds of ethical issues raised by Safire, and “the neuroscience of ethics”, thus suggesting an extension of the scope of neuroethics to encompass our burgeoning understanding of the biological basis of ethical thought and behavior and the ways in which this could itself influence and inform our ethical thinking. This broadening of the scope of neuroethics highlights the obvious and not-so-obvious ways that understanding our own moral thinking might affect our moral views; it is one aspect of neuroethics that distinguishes it from traditional bioethics. Another way of characterizing the field is as a study of ethical issues arising from what we can do to the brain (e.g. with neurotechnologies) and from what we know about it (including, for example, understanding the basis of ethical behavior). Although Roskies’ definition remains influential, it has been challenged in various ways. 
Some have argued that neuroethics should not be limited to the neuroscience of ethics, but rather be broadened to the cognitive science of ethics (Levy, personal communication), since so much work that enables us to understand the brain takes place in disciplines outside of neuroscience, strictly defined. This is in fact in the spirit of the original proposal, since it has been widely recognized that the brain sciences encompass a wide array of disciplines, methods, and questions. However, the most persistent criticisms have been from those who have questioned whether the neuroscience of ethics should be considered a part of neuroethics at all: they argue that understanding our ethical faculties is a scientific and not an ethical issue, and thus should not be part of neuroethics. This argument is usually followed by a denial that neuroethics is sufficiently distinct from traditional bioethics to warrant being called a discipline in its own right. The response to these critics is different: Whether or not these various branches of inquiry form a natural kind or are themselves a focus of ethical analysis is quite beside the point. Neuroethics is porous. One cannot successfully engage with many of the ethical issues without also understanding the science. In addition, academic or intellectual disciplines are at least in part (if not entirely) social constructs. And in this case the horse is out of the barn: It is clear that interesting and significant work is being pursued regarding the brain bases of ethical thought and behavior, and that this theoretical understanding has influenced, and has the potential to influence, our own thinking about ethics and our ethical practices. 
That neuroethics exists is undeniable: Neuroethical lines of research have borne interesting fruit over the last 10–15 years; neuroethics is now recognized as an area of study both nationally and internationally; neuroethics courses are taught at many universities; and training programs, professional societies, and research centers for neuroethics have already been established. The NIH BRAIN Initiative has devoted considerable resources to encouraging neuroscientific projects that incorporate neuroethical projects and analyses. Neuroethics is a discipline in its own right in part because we already structure our practices in ways that recognize it as such. What is most significant about neuroethics is not whether both the ethics of neuroscience and the neuroscience of ethics are given the same overarching disciplinary name, but that there are people working on both endeavors and that they are in dialogue (and sometimes, the very same people do both). Of course, to the extent that neuroethicists ask questions about disease, treatment, and so on, the questions will look familiar, and for answers they can and should look to extant work in traditional bioethics so as not to reinvent the wheel. But, ultimately, Farah is correct in saying that "New ethical issues are arising as neuroscience gives us unprecedented ways to understand the human mind and to predict, influence, and even control it. These issues lead us beyond the boundaries of bioethics into the philosophy of mind, psychology, theology, law and neuroscience itself. It is this larger set of issues that has…earned it a name of its own" (Farah 2010, p. 2).
2. The ethics of neuroscience

Neuroethics is driven by neurotechnologies: it is concerned with the ethical questions that attend the development and effects of novel neurotechnologies, as well as other ethical and philosophical issues that arise from our growing understanding of how brains give rise to the people that we are and the social structures that we inhabit and create. These questions are intimately bound up with scientific questions about what kinds of knowledge can be acquired with particular techniques: what are the scope and limits of what a technique can tell us? With many new techniques, answers to these questions are obscure not only to the lay public, but often to the scientists themselves. The uncertainty about the reach of these technologies adds to the challenge of grappling with the ethical issues raised. Many new neurotechnologies enable us to monitor brain processes and, increasingly, to understand how the brain gives rise to certain behaviors; others enable us to intervene in these processes, to change and perhaps to control behaviors, traits, or abilities. Although it will be impossible to fully canvass the range of questions neuroethics has thus far contemplated, discussion of the issues raised by a few neurotechnologies will allow me to illustrate the range of questions neuroethics entertains. The following is a non-exhaustive list of topics that fall under the general rubric of neuroethics.

2.1 The ethics of enhancement

While medicine's traditional goal of treating illness is pursued by the development of drugs and other treatments that counteract the detrimental effects of disease or insult, the same kinds of compounds and methods that are being developed to treat disease may also enhance normal cognitive functioning. We already possess the ability to improve some aspects of cognition above baseline, and will certainly develop other ways of doing so.
Thus, a prominent topic in neuroethics is the ethics of neuroenhancement: What are the arguments for and against the use of neurotechnologies to enhance one’s brain’s capacities and functioning? Proponents of enhancement are sometimes called “transhumanists,” and opponents are identified as “bioconservatives”. These value-laden appellations may unnecessarily polarize a debate that need not pit extreme viewpoints against each other, and that offers many nuanced intermediate positions that recognize shared values (Parens, 2005) and make room for embracing the benefits of enhancement while recognizing the need for some type of regulation (e.g. Lin and Allhoff, 2008). The relevance of this debate itself depends to some extent upon a philosophical issue familiar to traditional bioethicists: the notorious difficulty of identifying the line between disease and normal function, and the corresponding difference between treatment and enhancement. However, despite the difficulty attending the principled drawing of this line, there are already clear instances in which a technology such as a drug is used with the aim of improving a capacity or behavior that is by no means clinically dysfunctional, or with the goal of improving a capacity beyond the range of normal functioning. One common example is the use, now widespread on college campuses and beyond, of methylphenidate, a stimulant typically prescribed for the treatment of ADHD. Known by the brand name Ritalin, methylphenidate has been shown to improve performance on working memory, episodic memory and inhibitory control tasks. Many students use it as a study aid, and the ethical standing of such off-label use is a focus of debate among neuroethicists (Sahakian, 2007; Greely et al., 2008).
As in the example above, the enhancements neuroethicists most often discuss are cognitive enhancements: technologies that allow normal people to function cognitively at a higher level than they might without use of the technology (Knafo and Venero, 2015). One standing theoretical issue for neuroethics is a careful and precise articulation of whether, how and why cognitive enhancement has a philosophical status different from any other kind of enhancement, such as enhancement of physical capacities by the use of steroids (Dresler, 2019). Often overlooked are other interesting potential neuroenhancements. These are less frequently discussed than cognitive enhancements, but just as worthy of consideration. They include social/moral enhancements, such as the use of oxytocin to enhance pro-social behavior, and other noncognitive but biological enhancements, such as potential physical performance enhancers controlled by brain-computer interfaces (BCIs) (see, e.g. Savulescu and Persson, 2012; Douglas, 2008; Dubljević and Racine, 2017; Annals of NYAC, 2004). In many ways, discussions regarding these kinds of enhancement effectively recapitulate the cognitive enhancement debate, but in some respects they raise different concerns and prompt different arguments.

2.1.1 Arguments for Enhancement

Naturalness: Although the aim of cognitive enhancement may at first seem ethically questionable at best, it is plausible that humans naturally engage in many forms of enhancement, including cognitive enhancement. Indeed, we typically applaud and value these efforts. After all, the aim of education is to cognitively enhance students (which, we now understand, occurs by changing their brains), and we look askance at those who devalue this particular enhancement, rather than at those who embrace it. So some kinds of cognitive enhancement are routine and unremarkable.
Proponents of neuroenhancement will argue that there is no principled difference between the enhancements we routinely engage in, and enhancement by use of drugs or other neurotechnologies. Many in fact argue that we are a species whose nature it is to develop and use technology for augmenting our capacities, and that continual pursuit of enhancement is a mark of the human.

Cognitive liberty: Those who believe that “cognitive liberty” (see section 2.2 below) is a fundamental right argue that an important element of the autonomy at stake in cognitive liberty is the liberty to determine for ourselves what to do with our minds and to them, including cognitive enhancement, if we so choose. Although many who champion “cognitive liberty” do so in the context of a strident political libertarianism (e.g. Boire, 2001), one can recognize the value of cognitive liberty without swallowing an entire political agenda. So, for example, even if we think that there is a prima facie right to determine our own cognitive states, there may be justifiable limits to that right. More work needs to be done to establish the boundaries of the cognitive liberty we ought to safeguard.

Utilitarian arguments: Many proponents of cognitive enhancement point to the positive effects of enhancement and argue that the benefits outweigh the costs. In these utilitarian arguments it is important to consider the positive and negative effects not only for individuals, but also for society more broadly (see, e.g. Selgelid, 2007).

Deontological arguments: Sometimes enhancements are argued to be an avenue for leveling the playing field, in pursuit of fairness and equity. Such arguments are bolstered by the finding that at least for some interventions, enhancement effects are greater for those who have lower baseline functioning than those starting with a higher baseline (President’s Commission on Bioethics, 2015).
Practical arguments: These often point to the difficulty of enforcing regulations of extant technology, or the detrimental effects of trying to do so. They tend to be not really arguments in favor of enhancement, but rather reasons not to oppose its use.

2.1.2 Arguments against Enhancement

There are a variety of arguments against enhancement. Most fall into the following types:

Harms: The simplest and most powerful argument against enhancement is the claim that brain interventions carry with them the risk of harm, risks that make the use of these interventions unacceptable. The threshold for acceptable risk is low in the context of enhancement: risks deemed reasonable to incur when treating a deficiency or disease, where the potential benefit is restoring normal function, may be deemed unreasonable when the payoff is simply augmenting performance above a normal baseline. Some suggest that no risk is justified for enhancement purposes. In evaluating the strength of a harm-based argument against enhancement, several points should be considered: 1) What are the actual and potential harms and benefits (medical and social) of a given enhancement? 2) Who should make the judgments about appropriate tradeoffs? Different individuals may judge differently at what point the risk/benefit threshold occurs, and their judgments may depend upon the precise natures of the risks and benefits. Notice, too, that the harm argument is toothless against enhancements that don’t pose any risks.

Unnaturalness: A number of thinkers argue, in one form or another, that use of drugs or technologies to enhance our capacities is unnatural, and the implication is that unnatural implies immoral. Of course, to be a good argument, more reason has to be given both for why it is unnatural (see the argument from naturalness, above), and for why naturalness and morality align.
Some arguments suggest that manipulating our cognitive machinery amounts to tinkering with “God-given” capacities, and usurping the role of God as creator can be easily understood as transgressive in a religious-moral framework. Despite its appeal to religious conservatives, a neuroethicist may want to offer a more ecumenical or naturalistic argument to support the link between the unnatural and the immoral, and will have to counter the claim, above, that it is natural for humans to enhance themselves.

Diminishing human agency: Another argument suggests that the effect of enhancement will be to diminish human agency by undermining the need for real effort, and allowing for success with morally meaningless shortcuts. Human life will lose the value achieved by the process of striving for a goal and will be belittled as a result (see, e.g. Schermer, 2008; Kass, 2003). Although this is a promising form of argument, more needs to be done to undergird the claim that effort is intrinsically valuable. Recent work suggests no general argument to this effect is forthcoming (Douglas, 2019). After all, few find compelling the argument that we ought to abandon transportation by car for horses, walking, or bicycling, because these require more effort and thus have more moral value.

The hubris objection: This interesting argument holds that the type of attitude that seems to underlie pursuit of such interventions is morally defective in some way, or is indicative of a morally defective character trait. So, for example, Michael Sandel suggests that the attitude underlying the attempt to enhance ourselves is a “Promethean” attitude of mastery that overlooks or underappreciates the “giftedness of human life.” It is the expression and indulgence of a problematic attitude of dominion toward life to which Sandel primarily objects: “The moral problem with enhancement lies less in the perfection it seeks than in the human disposition it expresses and promotes” (Sandel, 2002).
Others have pushed back against this tack, arguing that the hubris objection to enhancement is at base a religious one, or that it fundamentally misunderstands the concepts it relies upon (Kahane, 2011).

Equality and distributive justice: One question that routinely arises with new technological advances is “who gets to benefit from them?” As with other technologies, neuroenhancements are not free. However, worries about access are compounded in the case of neuroenhancements (as they may also be with other learning technologies). Because enhancements increase the capacities of those who use them, they are likely to further widen the already unconscionable gap between the haves and have-nots: we can foresee that those already well-off enough to afford enhancements will use them to increase their competitive advantage over others, leaving further behind those who cannot afford them. Not all arguments in this vein militate against enhancement. For example, the finding mentioned above -- that at least with some cognitive enhancement technologies, those who have lower baseline functioning experience greater improvements than those starting at a higher level -- could ground pro-enhancement fairness and equity arguments for leveling the playing field (President’s Commission on Bioethics, 2015). As public consciousness about racial and economic disparities increases, we should expect more neuroethical work on this topic. Although one can imagine policy solutions to distributive justice concerns, such as having enhancements covered by health insurance or having the state distribute them to those who cannot afford them, widespread availability of neuroenhancements will inevitably raise questions about coercion.

Coercion: The prospect of coercion is raised in several ways. Obviously, if the state decides to mandate an enhancement, treating its beneficial effects as a public health issue, this is effectively coercion.
We see this currently in the backlash against vaccinations: they are mandated with the aim of promoting public health, but in some minds the mandate raises concerns about individual liberty. I would submit that the vaccination case demonstrates that at least on some occasions coercion is justified. The question is whether coercion could be justifiable for enhancement, rather than for harm prevention. Although some coercive ideas, such as the suggestion that we put Prozac or other enhancers in the water supply, are unlikely to be taken seriously as a policy issue (however, see Appel 2010 [2011]), less blatant forms of coercion are more realistic. For example, if people immersed in tomorrow’s competitive environment are in the company of others who are reaping the benefits from cognitive enhancement, they may feel compelled to make use of the same techniques just to remain competitive, even though they would rather not use enhancements. The danger is that respecting the autonomy of some may put pressure on the autonomy of others.

There is unlikely to be any categorical resolution of the ethics of enhancement debate. The details of a technology will be relevant to determining whether it ought to be made available for enhancement purposes: we ought to treat a highly enhancing technology that causes no harm differently from one that provides some benefit at noticeable cost. Moreover, the magnitude of some of the equality-related issues will depend upon empirical facts about the technologies. Are neurotechnologies equally effective for everyone? As mentioned, there is evidence that some known enhancers such as the psychostimulants are more effective for those with deficiencies than for the unimpaired: studies suggest the beneficial effects of these drugs are proportional to the degree to which a capacity is impaired (Hussain et al., 2011).
Other reports claim that normal subjects’ capacities are not actually enhanced by these drugs, and that some aspects of functioning may actually be impaired (Mattay et al., 2000; Ilieva et al., 2013). If this is a widespread pattern, it may alleviate some worries about distributive justice and contributions to social and economic stratification, since people with a deficit will benefit proportionately more than those using the drug for enhancement purposes. Bear in mind, however, that biology is rarely that equitable, and it would be surprising if this pattern turned out to be the norm. Since the technologies that could provide enhancements are extremely diverse, ranging from drugs to implants to genetic manipulations, assessment of the risks and benefits, and of the way in which these technologies bear upon our conception of humanity, will have to be empirically grounded.

2.2 Cognitive liberty

Freedom is a cornerstone value in liberal democracies like our own, and one of the most cherished kinds of freedom is freedom of thought. The main elements of freedom of thought, or “cognitive liberty” as it is sometimes called (Sententia, 2013), include privacy and autonomy. Both of these can be challenged by new developments in neuroscience. The value of, potential threats to, and ways to protect these aspects of freedom are a concern for neuroethics. Several recent papers have posited novel rights in this realm, such as rights to cognitive liberty, to mental privacy, to mental integrity, and to psychological continuity (Ienca and Andorno, 2017), or to psychological integrity and mental self-determination (Bublitz, 2020).

2.2.1 Privacy

As the framers of our constitution were well aware, freedom is intimately linked with privacy: even being monitored is considered potentially “chilling” to the kinds of freedoms our society aims to protect.
One type of freedom that has been championed in American jurisprudence is “the right to be let alone” (Warren and Brandeis, 1890), to be free from government or other intrusion in our private lives. In the past, mental privacy could be taken for granted: the first-person accessibility of the contents of consciousness ensured that the contents of one’s mind remained hidden from the outside world until and unless they were voluntarily disclosed. Instead, the battles for freedom of thought were waged at the borders where thought meets the outside world -- in expression -- and were won with the First Amendment’s protections for those freedoms (note, however, that these protections are only against government infringement). Over the last half century, technological advances have eroded or impinged upon many traditional realms of worldly privacy. Most of the avenues for expression can be (and increasingly are) monitored by third parties. It is tempting to think that the inner sanctum of the mind remains the last bastion of real privacy. This may still be largely true, but even the privacy of the mind can no longer be taken for granted. Our neuroscientific achievements have already made significant headway in allowing others to discern some aspects of our mental content through neurotechnologies. Noninvasive methods of brain imaging have revolutionized the study of human cognition and have dramatically altered the kinds of knowledge we can acquire about people and their minds. Neither is the threat to mental privacy as simple as the naive claim that neuroimaging can read our thoughts, nor are the capabilities of imaging so innocuous and blunt that we needn’t worry about that possibility. A focus of neuroethics is to determine the real nature of the threat to mental privacy, and to evaluate its ethical implications, many of which are relevant to legal, medical, and other social issues (Shen, 2013).
For example, in a world in which the bastion of the mind may be lowering its drawbridges, do we need extra protections? Answering that question effectively will require a solid understanding of the neuroscientific technologies and the neural bases of thought, as well as sensitivity to the ethical problems raised by our growing knowledge and ever-more-powerful neurotechnologies. These dual necessities illustrate why neuroethicists must be trained both in neuroscience and in ethics. In what follows I briefly discuss the most relevant neurotechnology and its limitations, and then canvass a few ways in which privacy may be infringed by it.

An illustration: Potential threats to privacy with Functional MRI

One of the most prominent neurotechnologies poised to pose a threat to privacy is Magnetic Resonance Imaging, or MRI. MRI can provide both structural and functional information about a person’s brain with minimal risk and inconvenience. In general, MRI is a tool that allows researchers noninvasively to examine or monitor brain structure and activity, and to correlate that structure or function with behavior. Structural or anatomical MRI provides high-resolution structural images of the brain. While structural imaging in the biosciences is not new, MRI provides much higher resolution and a better ability to differentiate tissues than prior techniques such as x-rays or CT scans. However, it is not structural but functional MRI (fMRI) that has revolutionized the study of human cognition. fMRI provides information about correlates of neuronal activity, from which neural activity can be inferred. Recent advances in analysis methods for neuroimaging data, such as multi-voxel pattern analysis and related techniques, now allow relatively fine-grained “decoding” of brain activity.
Decoding involves probabilistic matching, using machine learning, of an observed pattern of brain activation with experimentally established correlations between activity patterns and some kind of functional variable, such as task, behavior, or content. The kind of information provided by functional imaging promises to provide important evidence useful for three goals: decoding mental content, diagnosis, and prediction. Neuroethical questions arise in all these areas. Before discussing these issues, it is important to remember that neuroimaging is a technology subject to a number of significant limitations, and these technical issues limit how precise the inferences drawn from it can be. For example:

- The correlations between the fMRI signal and neural activity are rough: the signal is delayed in time from the neuronal activity and spatially smeared, limiting the spatial and temporal precision of the information that can be inferred. A number of dynamic factors relate the fMRI signal to activity, and the precise underlying model is not yet well understood.
- The signal-to-noise ratio is relatively low, necessitating averaging across trials and often across people.
- Individual brains differ both in structure and in function. This variability makes it difficult to determine when differences are clinically or scientifically relevant, and leads to noisy data. Because of natural individual variability and brain plasticity (especially during development), even large differences in structure or deviations from the norm may not be indicative of any functional deficiency. Cognitive strategies can also affect variability in the data. These sources of variability complicate the analysis of data and provide even more leeway for differences to exist without implying dysfunction.
- Activity in a brain area does not entail that the region is necessary for performance of the task.
- fMRI is extremely sensitive to motion, making it virtually impossible to get usable information from a noncompliant subject. This makes the prospect of reading content from an unwilling mind remote.

Without appreciating these technical issues and the resulting limits to what can legitimately be inferred from fMRI, one is likely to overestimate or mischaracterize the potential threat that it poses. In fact, much of the fear of mindreading expressed in non-scientific publications stems from a lack of understanding of the science (Racine, 2015). For example, there is no scientific basis for the worry that imaging would enable the reading of mental content without our knowing it. Thus, fears that the government is able to remotely or covertly monitor the thoughts of citizens are unfounded.

Decoding of mental content

Noninvasive ways of inferring neural activity have led many to worry that mindreading is possible, not just in theory, but even now. Using decoding techniques, fMRI can be used, for example, to reconstruct a visual stimulus from activity of the visual cortex while a subject is looking at a scene, or to determine whether a subject is looking at a familiar face or hearing a particular sound. If mental content supervenes on the physical structure and function of our brains, as most philosophers and neuroscientists think it does, then in principle it should be possible to read minds by reading brains. Because of the potential to identify mental content, decoding raises issues about mental privacy. Despite the remarkable advances in brain imaging technology, however, when it comes to mental content, our current abilities to “mind-read” are relatively limited, though continually improving (Roskies, 2015, 2020). Although some aspects of content can be decoded from neural data, these tend to be quite general and nonpropositional in character.
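The probabilistic pattern-matching at the heart of decoding can be made concrete with a toy sketch. Everything below is synthetic: random "voxel" patterns stand in for the responses to two stimulus classes, and a nearest-centroid rule is a deliberately minimal stand-in for the machine-learning classifiers used in real multi-voxel pattern analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "voxel" patterns for two stimulus classes (say, faces vs. houses):
# each class elicits a characteristic 50-voxel activation pattern plus noise.
n_voxels, n_train, n_test = 50, 80, 20
pattern_a = rng.normal(0.0, 1.0, n_voxels)
pattern_b = rng.normal(0.0, 1.0, n_voxels)

def simulate_trials(pattern, n):
    # Each trial is the class pattern corrupted by measurement noise.
    return pattern + rng.normal(0.0, 1.5, (n, len(pattern)))

train_a, train_b = simulate_trials(pattern_a, n_train), simulate_trials(pattern_b, n_train)
test_a, test_b = simulate_trials(pattern_a, n_test), simulate_trials(pattern_b, n_test)

# Nearest-centroid decoder: label a new pattern by its correlation with the
# mean training pattern of each class.
centroid_a, centroid_b = train_a.mean(axis=0), train_b.mean(axis=0)

def decode(trial):
    corr_a = np.corrcoef(trial, centroid_a)[0, 1]
    corr_b = np.corrcoef(trial, centroid_b)[0, 1]
    return "A" if corr_a > corr_b else "B"

predictions = [decode(t) for t in test_a] + [decode(t) for t in test_b]
truth = ["A"] * n_test + ["B"] * n_test
accuracy = sum(p == t for p, t in zip(predictions, truth)) / len(truth)
print(f"decoding accuracy: {accuracy:.2f}")  # typically well above the 0.50 chance level
```

The point of the sketch is that decoding is statistical inference over learned correlations, not a direct readout of thoughts: it works only for classes the decoder was trained on, and its accuracy degrades as noise grows or the set of candidate contents expands.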
The ability to infer semantic meaning from ideation or visual stimulation tends to work best when the realm of possible contents is quite constrained. Our current abilities allow us to infer some semantic atoms, such as representations denoting one of a prespecified set of concrete objects, but not unconstrained content or entire propositions. Of course, future advances might make worries about mindreading more pressing. For example, if we develop robust means for decoding compositional meaning, we may one day come to be able to decode propositional thought. Still, some worries are warranted. Even if neuroimaging is not at the stage where mindreading is possible, it can nonetheless threaten aspects of privacy in ways that should give us pause. It is possible to identify individuals on the basis of their brain scans (Valizadeh et al., 2018). In addition, neuroimaging can provide some insights into attributes of people that they may not want known or disclosed. In some cases, subjects may not even know that these attributes are being probed, thinking they are being scanned for other purposes. A willing subject may not want certain things to be monitored. In what follows, I consider a few of these more realistic worries.

Implicit bias: Although explicitly acknowledged racial biases are declining, this may be due to a reporting bias attributable to the increased negative social valuation of racial prejudice. Much contemporary research now focuses on examining implicit racial biases, which are automatic or unconscious reflections of racial bias. With fMRI and EEG, it is possible to interrogate implicit biases, sometimes without the subject’s awareness that that is what is being measured (Chekroud, 2014).
While there is disagreement about how best to interpret implicit bias results (e.g., as a measure of perceived threat, as in-group/out-group distinctions, etc.), and about what relevance they have for behavior, the possibility that implicit biases can be measured, either covertly or overtly, raises scientific and ethical questions (Molenberghs and Louis, 2018). When ought this information to be collected? What procedures must be followed for subjects legitimately to consent to implicit measures? What significance should be attributed to evidence of biases? What kind of responsibility should be attributed to people who hold them? What predictive power might they hold? Should they be used for practical purposes? One can imagine obvious but controversial potential uses for implicit bias measures in legal situations, in employment contexts, in education, and in policing, all areas in which concerns of social justice are significant.

Lie detection: Several neurotechnologies are being used to detect deception, or neural correlates of lying or concealing information, in experimental situations. For example, both fMRI measures and EEG analysis techniques relying on the P300 signal have been used in the laboratory to detect deception with varying levels of success. These methods are subject to a variety of criticisms (Farah et al., 2014). For example, almost all experimental studies fail to study real lying or deception, but instead investigate some version of instructed misdirection. The context, tasks, and motivations differ greatly between actual instances of lying and these experimental analogs, calling into question the ecological validity of these experimental techniques. Moreover, accuracy, though significantly higher than chance, is far from perfect, and because of the inability to determine base rates of lying, error rates cannot be effectively assessed. Thus, we cannot establish their reliability for real-world uses.
Finally, both physical and mental countermeasures decrease the accuracy of these methods (Hsu et al., 2019). Despite these limitations, several companies have marketed neurotechnologies for this purpose.

Character traits: Neurotechnologies have shown some promise in identifying or predicting aspects of personality or character. In an interesting study aimed at determining how well neuroimaging could detect lies, Greene and colleagues gave subjects in the fMRI scanner a prediction task in a game of chance that they could easily cheat on. By using statistical analysis, the researchers could identify a group of subjects who clearly cheated and others who did not (Greene and Paxton, 2009). Although they could not determine with neuroimaging on which trials subjects cheated, there were overall differences in brain activation patterns between cheaters and those who played fair and were at chance in their predictions. Moreover, Greene and colleagues repeated this study at several months’ remove, and found that the character trait of honesty or dishonesty was stable over time: cheaters the first time were likely to cheat again (indeed, they cheated even more the second time), and honest players remained honest the second time around. Also interesting was the fact that the brain patterns suggested that cheaters had to activate their executive control systems more than noncheaters, not only when they cheated, but also when deciding not to cheat. While the differential activations cannot be linked specifically to the propensity to cheat rather than to the act of cheating, the work suggests that these task-related activation patterns may reflect correlates of trustworthiness. The prospect of using methods for detecting these sorts of traits or behaviors in real-world situations raises a host of thorny issues. What level of reliability should be required for their employment? In what circumstances should they be admissible as evidence in the courtroom? For other purposes?
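The base-rate worry raised in the lie-detection discussion above can be made concrete with a short Bayesian calculation. The sensitivity and specificity figures below are hypothetical, chosen only to show how sharply the positive predictive value of a "deception detected" result falls as actual lying becomes rarer.

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(subject is actually lying | detector says 'lying'), via Bayes' theorem."""
    true_positives = sensitivity * base_rate
    false_positives = (1.0 - specificity) * (1.0 - base_rate)
    return true_positives / (true_positives + false_positives)

# Hypothetical detector: 85% sensitivity and 85% specificity.
for base_rate in (0.50, 0.10, 0.01):
    ppv = positive_predictive_value(0.85, 0.85, base_rate)
    print(f"base rate {base_rate:.0%}: PPV {ppv:.2f}")
# When half the subjects lie, PPV is 0.85; at a 1% base rate it falls
# to about 0.05, i.e., most "detected lies" would be false positives.
```

This is why accuracy figures from laboratory studies, where lying occurs on a known fraction of trials, say little about reliability in the field, where the base rate of lying is unknown.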
Using lie detection or decoding techniques from neuroscience in legal contexts may raise constitutional concerns: Is brain imaging a search or seizure as regulated by the 4th Amendment? Would its forcible use be precluded by 5th Amendment rights? These questions, though troubling, might not be immediately pressing: in a landmark case (US v. Semrau, 2012) the court ruled that fMRI lie detection is inadmissible, given its current state of development. However, the opinion left open the possibility that it may be admissible in the future, if methods improve. Finally, if the relevant activation patterns turn out to correlate significantly with activation patterns on other tasks, or with a task-free measure such as default-network activity, information about character could be inferred merely by scanning someone doing something innocuous, without their knowledge of the kind of information being sought. Thus, there are multiple dimensions to the threat to privacy posed by imaging techniques.

Diagnosis

Increasingly, neuroimaging information can bear upon diagnoses of disease, and in some instances may provide predictive information prior to the onset of symptoms. Work on the default network is promising for improving diagnosis of certain diseases without requiring that subjects perform specific tasks in the scanner (Buckner et al., 2008). For some diseases, such as Alzheimer’s disease, MRI promises to provide diagnostic information that previously could only be established at autopsy (Liu et al., 2018). fMRI signatures have also been linked to a variety of psychiatric diseases, although not yet with the reliability required for clinical diagnosis (Aydin et al., 2019). Neuroethical issues also arise regarding ways to handle incidental findings, that is, evidence of asymptomatic tumors or potentially benign abnormalities that appear in the course of scanning research subjects for non-medical purposes (Illes et al.
2006; Illes and Sahakian, 2011). The ability to predict future functional deficits raises a host of issues, many of which have been previously addressed by genethics (the ethics of genetics), since both provide information about future disease risk. What may be different is that the diseases for which neurotechnologies are diagnostically useful are those that affect the brain, and thus potentially mental competence, mood, personality, or sense of self. As such, they may raise peculiarly neuroethical questions (see below).

Prediction

As discussed, decoding methods allow one to associate observed brain activity with previously observed brain/behavior correlations. In addition, such methods can also be used to predict future behaviors, insofar as these are correlated with observations of brain activity patterns. Some studies have already reported predictive power over upcoming decisions (Soon et al., 2008). Increasingly, we will see neuroscience or neuroimaging data that will give us some predictive power over longer-range future behaviors. For example, brain imaging may allow us to predict the onset of psychiatric symptoms such as psychotic or depressive episodes. In cases in which such behavior is indicative of mental dysfunction, this raises questions about stigma, but it may also allow more effective interventions. One confusion regarding neuroprediction should be clarified immediately: When neuroimages are said to “predict” future activity, it means they

  • https://www.youtube.com
    Manufacturing Minds

    Please enjoy this recording of Manufacturing Minds: Collegium Institute's annual Fall Magi Project Event. The Magi Project for Science & Theology hosts and delivers courses, talks, seminars and other outreach activities in science and faith, helping people to think about their understanding of the physical Universe and their relationship with God, and how these ideas fit together in a complementary way. Can human minds be manufactured? What is the meaning of consciousness, and how might neuroscientists resolve the mysteries of mind and its implications for decision making? This conversation with two eminent Stanford neuroscientists, Prof. William Newsome (Vincent V.C. Woo Director of the Wu Tsai Neurosciences Institute) and Prof. William Hurlbut, MD (Stanford Medical School) explores these questions and more. Featuring: Prof. William Newsome is the Harman Family Provostial Professor of Neurobiology at the Stanford University School of Medicine, as well as Director of the Neurosciences Institute at Stanford, and a leading investigator in systems and cognitive neuroscience. His research on the neural mechanisms underlying visual perception and decision making have garnered numerous awards, including the Rank Prize in Optoelectronics, the Spencer Award, the Distinguished Scientific Contribution Award of the American Psychological Association, the Dan David Prize of Tel Aviv University, the Karl Spencer Lashley Award of the American Philosophical Society, and the Champalimaud Vision Award. Prof. William Hurlbut, MD, is Adjunct Professor and Senior Research Scholar in Neurobiology at the Stanford Medical School. He is the author of numerous publications on science and ethics including the co-edited volume Altruism and Altruistic Love: Science, Philosophy, and Religion in Dialogue (2002, Oxford), and “Science, Religion and the Human Spirit” in the Oxford Handbook of Religion and Science (2008). 
Formerly, he worked for NASA and was a member of the Chemical and Biological Warfare Working Group at the Center for International Security and Cooperation, and he served for nearly a decade on the President’s Council on Bioethics. This panel discussion is moderated by philosopher Janice Tzuling Chik, Ph.D., Inaugural John and Daria Barry Foundation Fellow at the University of Pennsylvania’s Program for Research on Religion and Urban Civil Society (PRRUCS) and Collegium Institute Senior Scholar. Prof. Chik is Assistant Professor of Philosophy at Ave Maria University and an Associate Member of the Aquinas Institute, Blackfriars Hall, Oxford. This webinar was cosponsored by the Program for Research on Religion and Urban Civil Society (PRRUCS) at the University of Pennsylvania, the Cornell Chapter of the Society of Catholic Scientists, the Zephyr Institute, the University of Pennsylvania Biological Basis of Behavior (BBB) program, and the University of Pennsylvania Department of Neuroscience.

  • Interreligious Perspectives on Mind, Genes and the Self: Emerging Technologies and Human Identity.

    Attitudes towards science, medicine and the body are all profoundly shaped by people’s worldviews. When discussing issues of bioethics, religion often plays a major role. In this volume, the role of genetic manipulation and neurotechnology in shaping human identity is examined from multiple religious perspectives. This can help us to understand how religion might affect the impact of initiatives such as the UNESCO Declaration on Bioethics and Human Rights. The book features bioethics experts from six major religions: Buddhism, Confucianism, Christianity, Islam, Hinduism, and Judaism. It includes a number of distinct religious and cultural views on the anthropological, ethical and social challenges of emerging technologies in the light of human rights and in the context of global bioethics. The contributors work together to explore issues such as: cultural attitudes to gene editing; neuroactive drugs; the interaction between genes and behaviours; the relationship between the soul, the mind and DNA; and how clinical applications of these technologies can benefit the developing world. This is a significant collection, demonstrating how religion and modern technologies relate to one another. It will, therefore, be of great interest to academics working in bioethics, religion and the body, interreligious dialogue, and religion and science, technology and neuroscience.

  • https://www.youtube.com
    Augustine and transhumanism

    Celia Deane-Drummond engages Augustine’s account of curiositas to critique transhumanist approaches to human flourishing. Scholar bio: https://www.christianflourishing.com/celia-deane-drummond "Human Flourishing in a Technological World: A Christian Vision" is a three year research project seeking to answer the question, "What does it mean to be human in a technological world?" Starting in 2018, an established group of scholars from the fields of biology, general humanities, philosophy, psychiatry, and theology have sought to provide a comprehensive theological assessment of recent technologies' impact on human nature and human life. Dr. Jens Zimmermann, Dr. Michael Burdett, Dr. John Behr and several other leading scholars contribute (see the full list here: https://www.christianflourishing.com/...). View all the free content released by these scholars: https://www.christianflourishing.com/ #HumanFlourishing #Technology #ChristianEthics #Embodiment --- Homepage: https://www.christianflourishing.com/ Blog: https://www.patheos.com/blogs/humanfl... Facebook: https://www.facebook.com/humanflouris... Project Directors: Jens Zimmermann: https://www.christianflourishing.com/... Michael Burdett: https://www.christianflourishing.com/... All scholars: https://www.christianflourishing.com/

  • https://www.youtube.com
    Gender Theory & Transhumanism

    Jason T. Eberl is an author, Professor of Philosophy and Director of the Albert Gnaegi Center for Health Care Ethics at Saint Louis University. His latest book is "The Nature of Human Persons: Metaphysics and Bioethics (Notre Dame Studies in Medical Ethics and Bioethics)" Spotify: https://spoti.fi/3dmnZ72 Apple Podcast: https://apple.co/3cgdlgL Android: https://bit.ly/2TTZ6rx Facebook - https://www.facebook.com/KZNGRM/ Instagram - https://www.instagram.com/kazingramdialogue/ Twitter - https://twitter.com/KZNGRM