
Moral Disagreement

Disagreement is to be distinguished from mere moral difference. Two people or groups may have different moral values without those values being in conflict. Disagreement occurs, however, where opposing judgements are made about the same contested general or particular matters, e.g. abortion, capital punishment, drug-use, euthanasia, sexual ethics, warfare, etc. The fact that such disagreements seem to characterise moral discussion and prove intractable is often cited as evidence that morality is just a matter of individual or group attitudes and is not objective. But disagreement does not by itself show this; in fact it tends to favour the assumption that there is some truth of the matter that one or another side is failing to recognise or accept. Where A and B are in moral disagreement about some issue, this may be due to (a) their not taking account of the same considerations, (b) their differing interpretations of the significance of those considerations, or (c) the possibility that there is more than one reasonable and defensible position. This last may seem to amount to relativism, but again that need not be so, for there may simply be a plurality of reasonable positions each emphasising different values or principles (e.g. in relation to abortion, 'the rights of the mother' and 'the rights of the unborn child'). There is also the fact that of their nature moral issues are difficult to understand and work through (otherwise there would not be ongoing debate about them). Important in this connection, then, is the idea of reasonable moral disagreement, in which each side acknowledges that the other is neither stupid nor wicked, but is concerned to arrive at the truth. This can go far to remove rancour from such disagreements.

  • https://core.ac.uk

    CORE (provided by Notre Dame Law School: NDLScholarship). Recommended citation: John M. Finnis, "Natural Law and the Ethics of Discourse," 43 Am. J. Juris. 53 (1998). Available at: https://scholarship.law.nd.edu/law_faculty_scholarship/872

    NATURAL LAW AND THE ETHICS OF DISCOURSE

    JOHN FINNIS

    I

    In launching (or relaunching on a sound basis) the philosophical discourse about natural law which has continued to this day, Plato explored with still unsurpassed penetration the ethics of discourse itself. For the dialogue of the Gorgias facilitates its readers' meditative appropriation both of morality's deepest sources (principia) and of the conditions for very significant kinds of action: truth-seeking dialogue, discussion or discourse, and meditation or reflective deliberation. The framework of the dialogue satisfies the procedural conditions for fruitful discourse. The parties (Socrates and Gorgias, Polus, Callicles, and Chaerephon) are equals in freedom of status and of speech,[1] unconstrained by any pressure for proximate decision and action, united in the mutual comprehension afforded by a shared and highly articulate and reflective culture, and assembled among free and equal fellow citizens who similarly are culturally united and (unlike, say, the audience for Socrates' apologia) unconstrained.

    And from the outset, and again and again, Socrates points to further conditions.[2] The first of those further conditions, the one most overtly articulated, is that the parties to discourse shall set aside speech-making and engage only in discussion,[3] in which answer follows and responds to question and is not employed to block further questions. But there are other conditions, and Socrates, while indicating them here and there throughout the dialogue,[4] states them most summarily on the occasion when he also articulates the formal relation between truth and consensus under ideal conditions of discourse. Under conditions we today would call "ideal," Socrates/Plato affirms, persons engaged in discourse will agree: that is a mark of truth.[5] The conditions? "Knowledge, good will, and frankness":[6] (i) a sound, wide-ranging education; (ii) good will towards the other parties to the discourse/discussion (indeed, the kind regard one has towards one's friends); and (iii) willingness to speak frankly (even when that involves admitting one's mistakes, self-contradictions, and self-refutation), and not to feign agreement.[7] In the absence of these conditions, even universal assent to a proposition would be no evidence (let alone a guarantee) of its truth.[8] And when the conditions are fulfilled, the discussants' convergence is not a criterion of truth, a standard to which one can appeal to discriminate, within argument, between sound and unsound judgments. Rather it is a mark of truth, a welcome and confirmatory consequence of their common willingness to attend to what every truth-seeking discussion must have as its objective: "things which are" (the truth of the matter) (ta onta),[9] what "is so,"[10] "what is true and false concerning the matters of which we speak—for it is of common good to all that the thing itself become manifest."[11] But this reality to which true propositions correspond is not something accessible or intelligible (still less is it adequately imaginable) otherwise than by question and answer, coherent, self-consistent thought, attention to all relevant evidence, all pertinent considerations.[12] Nor, therefore, can there be any other basis for rationally affirming or denying that "correspondence," in relation to any particular subject matter of discursive or reflective inquiry.[13]

    The indispensable conditions on which discussion is worthwhile, then, can be reduced to respect-and-concern for the two human goods which Socrates/Plato keeps tirelessly before the attention of the reader of the Gorgias: truth (and knowledge of it), and friendship (goodwill towards other human persons). These conditions are rich and powerfully exclusive. The reader cannot fail to observe what Socrates never explicitly affirms: many of the participants in actual discourse-communities, not least (and not most) in wealthy democracies, do not meet those conditions. It is therefore impossible, I suggest, to justify

        a modern discourse ethics [which] adopts the intersubjective approach of pragmatism and conceives of practical discourse as a public practice of shared, reciprocal perspective taking: each individual finds himself compelled to adopt the perspective of everyone else in order to test whether a proposed regulation is also acceptable from the perspective of every other person's understanding of himself and the world.[14]

    The proposal that in discourse, and equally in choosing "regulations" of social life generally, we should "adopt the perspective of every other person's understanding of himself and the world" is incoherent. That is, it refutes itself in the manner Socrates explores and comments upon.[15] For: some participants in discourse and in social life generally, perhaps many participants, understand themselves in more or less uncritical conventional patterns of thought picked up from the surrounding culture (perhaps under comforting descriptions such as "pious," "traditional," "enlightened," or "modern").[16] And some, perhaps many, understand themselves just like Polus and Callicles, in their different ways: as more or less covert admirers and desirers of power's gratifications and rewards, which they prefer to any interest in truth or friendship; they understand themselves as unconcerned, on principle (so to say), with the interests or perspectives of other people as such. "Perspectives" such as these should be, not adopted, but rather rejected, for the sake of discourse (not demagoguery), truth (not mendacious or myth-ridden propaganda), friendship (not self-seeking flattery), and the real interests of all (including those wrongly interested in adhering to and acting upon their immoral "perspectives").[17]

    II

    Does that entail that Plato/Socrates' own willingness to discourse on friendly terms with Polus and Callicles, these inward admirers of tyranny, is itself performatively inconsistent? By no means. While they are at all willing to listen, he can and will try to illustrate and explain, to them as well as to any bystanders of goodwill, the worth, the desirability, of a friendship (including a public politics) based on shared acknowledgement of and respect for intrinsic human goods such as truth and (such) friendship. Such goods can be elements of a common good. That the good of truth, and of getting to know it for its own sake, is one among these basic aspects of that human well-being which can be truly common (a koinon agathon)[18] is a truth which Socrates finds dozens of

    Notes

    1. They meet and discourse in the city where there is "more freedom of speech than anywhere in Greece" (i.e. in the world): Gorgias 461e. Note: In general I quote from the translation by R.E. Allen, The Dialogues of Plato, vol. I (New Haven: Yale University Press, 1984). Allen's prefatory "Comment" on the Gorgias (ibid., 189-230) is valuable, not least his demonstration of the wide philosophical superiority of Plato's Callicles (not to mention Callicles' philosophical superior, Socrates!) to Nietzsche: ibid., 219-221; and his showing (206) that the fallacies in Socrates' arguments often denounced by modern commentators (cf. Terence Irwin, Plato: Gorgias [New York: Oxford University Press, 1979], v) are liable to be in the eye of the beholder.
    2. Gorgias 461d: "observe one condition . . . bridle that long answer method."
    3. Discussion: dialegesthai (447c, contrasted with "a performance"; 449b, contrasted with "that lengthy kind of discourse [logōn] Polus began"; 453c, discussion as discourse motivated by desire to really know its subject matter).
    4. Especially 449b.
    5. 486e5-6; also 487e, 513d. On "marks of truth," see the discussion of Wiggins in Finnis, Fundamentals of Ethics (Washington, D.C.: Georgetown University Press, 1983), 63-4.
    6. Gorgias 487a2-3: epistēmē, eunoia, parrēsia.
    7. 487a-e; see also 473a, 492d, 495a, 500b-c, 521a.
    8. See e.g. 472a, 475e. One must add what is not so often noted by those who speak of the "burdens of judgment" and the "fact of pluralism": that in nonideal conditions (i.e. all actual and foreseeable conditions) the absence of universal assent to, and the existence of widespread dissent from, a proposition is no evidence of its falsity.
    9. 495a8.
    10. 509b1.
    11. 505e4-6.
    12. This article was written for a conference in 1998 on, and with, Jürgen Habermas. So it is worth noting here that Plato is scarcely a "Platonist" as that figure appears in Habermas's pages. Plato as I read him (and particularly the Plato/Socrates of the Gorgias) would assent without difficulty to the position which Habermas articulates thus (under the description "pragmatism"): "'Real' is what can be represented in true statements, whereas 'true' can be explained in turn by reference to the claim one person raises before others by asserting a proposition. With the assertoric sense of her statement, a speaker raises a criticizable claim to the validity of the asserted proposition, and because no one has direct access to uninterpreted conditions of validity, 'validity' (Gültigkeit) must be understood in epistemic terms as 'validity (Geltung) proven for us.' A justified truth claim should allow its proponent to defend it with reasons against the objections of possible opponents; in the end she should be able to gain the rationally motivated agreement of the interpretation community as a whole." Habermas, Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy, trans. W. Rehg (Cambridge, Mass.: MIT Press, 1996), 14. While assenting to all this, Plato, Aristotle, and the tradition would be inclined to add (quite reasonably) that there is a legitimate reflective inquiry into what it is about the world (including people) and rationality that makes possible this expectation that fully rational and reasonable people considering the relevant data would concur.
    13. "The illusion which underpins most denials of the objectivity of ethics is this: that to which true judgments have their truth by corresponding ('the facts,' 'the world,' 'reality' . . .) somehow lies open to an inspection conducted otherwise than by rationally arriving at true judgments of the type in question (scientific, historical, cryptographic . . . and, why not? evaluative . . .). That illusion is the root of all those reductive programmes which we call philosophical empiricism—programmes like those of Hobbes and Hume and successors of theirs . . .": Finnis, Fundamentals of Ethics, 64. Among those successors is, in his own curious way and despite his own intentions, Kant: ibid., 122-4. And, confronted by the assertion that, after an "irreversible critique" of metaphysics, this is a "postmetaphysical era," one should add "metaphysical" to the parenthetical list of types of true judgment (for the reasons indicated by Rawls, "Reply to Habermas," in Political Liberalism, paperback ed. [New York: Columbia University Press, 1996], 379n.).
    14. Habermas, Justification and Application: Remarks on Discourse Ethics, trans. Ciaran P. Cronin (Cambridge, Mass.: MIT Press, 1993), 154 (emphases substituted). Habermas himself from time to time observes that "discourse ethics" envisages "ideal conditions . . . including . . . freedom of access, equal rights to participate, truthfulness on the part of participants, absence of coercion in taking positions, and so forth": ibid., 56 (emphasis added), and (ibid.) a "cooperative quest for the truth."
    15. E.g. Gorgias 495a, 509a.
    16. Habermas himself, of course, is well aware of this, and from time to time emphasizes it strongly. But I have failed to discover the basis on which he supposes that this fact is compatible with reaching moral conclusions by the method he recommends (scil. of adopting the perspective of every other person's understanding of himself and the world). It is one thing to favor the true interests of each and every person, quite another to favor or adopt the self-understanding of those who do not know or do not care what is truly in their interests.
    17. So we must read with due reserve Aquinas' (Aristotle's) generous-minded praise of his opponents in discourse; it is due only on the assumption of their goodwill, an assumption often falsified in other contexts. Sententia super Metaphysicam XII lect. 9 n. 14: Since, in choosing what to believe and what to reject, we ought to be guided more by truth's groundedness than by affection or ill-will towards those who hold an opinion, so we should love both those whose opinion we follow and those whose opinion we reject, for they each were seeking to inquire after truth, and each assisted us to do so.
    18. Gorgias 505e6, quoted above at n. 11.

  • https://plato.stanford.edu

    Stanford Encyclopedia of Philosophy

    Disagreement
    First published Fri Feb 23, 2018

    We often find ourselves in disagreement with others. You may think nuclear energy is so volatile that no nuclear energy plants should be built anytime soon. But you are aware that there are many people who disagree with you on that very question. You disagree with your sister regarding the location of the piano in your childhood home, with you thinking it was in the primary living area and her thinking it was in the small den. You and many others believe Jesus Christ rose from the dead; millions of others disagree. It seems that awareness of disagreement can, at least in many cases, supply one with a powerful reason to think that one's belief is false. When you learned that your sister thought the piano had been in the den instead of the living room, you acquired a good reason to think it really wasn't in the living room, as you know full well that your sister is a generally intelligent individual, has the appropriate background experience (she lived in the house too), and is about as honest, forthright, and good at remembering events from childhood as you are. If, in the face of all this, you stick with your belief that the piano was in the living room, will your retaining that belief be reasonable? In the piano case there is probably nothing important riding on the question of what to do in the face of disagreement. But in many cases our disagreements are of great weight, both in the public arena and in our personal lives.
    You may disagree with your spouse or partner about whether to live together, whether to get married, where you should live, or how to raise your children. People with political power disagree about how to spend enormous amounts of money, or about what laws to pass, or about wars to fight. If only we were better able to resolve our disagreements, we would probably save millions of lives and prevent millions of others from living in poverty. This article examines the central epistemological issues tied to the recognition of disagreement. Compared to many other topics treated in this encyclopedia, the epistemology of disagreement is a mere infant. While the discussion of disagreement isn't altogether absent from the history of philosophy, philosophers didn't start, as a group, thinking about the topic in a rigorous and detailed way until the 21st century. For that reason, it is difficult to know what the primary issues and questions are concerning the general topic. At this early stage of investigation we are just getting our feet wet. In this essay, we begin by trying to motivate what we think should be the primary issues and questions before we move on to look at some of the main ideas in the literature. In so doing we also introduce some new terminology and make some novel distinctions that we think are helpful in navigating this relatively recent debate.

    1. Disagreement and Belief
    2. Belief-Disagreement vs. Action-Disagreement
    3. Response to Disagreement vs. Subsequent Level of Confidence
    4. Disagreement with Superiors, Inferiors, Peers, and Unknowns
    5. Peer Disagreements
       5.1 The Equal Weight View
       5.2 The Steadfast View
       5.3 The Justificationist View
       5.4 The Total Evidence View
       5.5 Other Issues
    6. Disagreement By the Numbers
    7. Disagreement and Skepticism
    Bibliography
    Academic Tools
    Other Internet Resources
    Related Entries

    1. Disagreement and Belief

    To a certain extent, it may seem that there are just three doxastic attitudes to adopt regarding the truth of a claim: believe it's true, believe it's false (i.e., disbelieve it), and suspend judgment on it. In the most straightforward sense, two individuals disagree about a proposition when they adopt different doxastic attitudes toward the same proposition (i.e., one believes it and one disbelieves it, or one believes it and one suspends judgment). But of course there are levels of confidence one can have regarding a proposition as well. We may agree that global warming is occurring but you may be much more confident than I am. It can be useful to use 'disagreement' to cover any difference in levels of confidence: if \(X\) has one level of confidence regarding belief \(B\)'s truth while \(Y\) has a different level of confidence, then they "disagree" about \(B\)—even if this is a slightly artificial sense of 'disagree'. These levels of confidence, or degrees of belief, are often represented as point values on a 0–1 scale (inclusive), with larger values indicating greater degrees of confidence that the proposition is true. Even if somewhat artificial, such representations allow for more precision in discussing cases. We are contrasting disagreements about belief with disagreements about matters of taste. Our focus is on disagreements where there is a fact of the matter, or at least the participants are reasonable in believing that there is such a fact.

    2. Belief-Disagreement vs. Action-Disagreement

    Suppose Jop and Dop are college students who are dating. They disagree about two matters: whether it's harder to get top grades in economics classes or philosophy classes, and whether they should move in together this summer. The first disagreement is over the truth of a claim: is the claim (or belief) 'It is harder to get top grades in economics classes compared to philosophy classes' true or not?
The second disagreement is over an action: should we move in together or not (the action = moving in together)? Call the first kind of disagreement belief-disagreement; call the second kind action-disagreement. The latter is very different from the former. Laksha is a doctor faced with a tough decision regarding one of her patients. She needs to figure out whether it’s best, all things considered, to just continue with the medications she has been prescribing or stop them and go with surgery. She confers closely with some of her colleagues. Some of them say surgery is the way to go, others say she should continue with medications and see what happens, but no one has a firm opinion: all the doctors agree that it’s a close call, all things considered. Laksha realizes that as far as anyone can tell it really is a tie. In this situation Laksha should probably suspend judgment on each of the two claims ‘Surgery is the best overall option for this patient’ and ‘Medication is the best overall option for this patient’. When asked ‘Which option is best?’ she should suspend judgment. That’s all well and good, but she still has to do something. She can’t just refuse to treat the patient. Even if she continues to investigate the case for days and days, in effect she has made the decision to not do surgery. She has made a choice even if she dithers. The point is this: when it comes to belief-disagreements, there are three broad options with respect to a specific claim: believe it, disbelieve it, and suspend judgment on it. (And of course there are a great many levels of confidence to take as well.) But when it comes to action-disagreements, there are just two options with respect to an action \(X\): do \(X\), don’t do \(X\). Suspending judgment just doesn’t exist when it comes to an action. 
Or, to put it a different way, suspending judgment on whether to do \(X\) does exist but is pretty much the same thing as not doing \(X\), since in both cases you don’t do \(X\) (Feldman 2006c). Thus, there are disagreements over what to believe and what to do. Despite this distinction, we can achieve some simplicity and uniformity by construing disagreements over what to do as disagreements over what to believe. We do it this way: if we disagree over whether to do action \(X\), we are disagreeing over the truth of the claim ‘We should do \(X\)’ (or ‘I should do \(X\)’ or ‘\(X\) is the best thing for us to do’; no, these aren’t all equivalent). This translation of action-disagreements into claim-disagreements makes it easy for us to construe all disagreements as disagreements about what to believe, where the belief may or may not concern an action. Keep in mind, though, that this “translation” doesn’t mean that action-disagreements are just like belief-disagreements that don’t involve actions: the former still requires a choice on what one is actually going to do. With those points in mind, we can formulate the primary questions about the epistemology of disagreement. However, it is worth noting that agreement also has epistemological implications. If learning that a large number and percentage of your epistemic peers or superiors disagree with you should probably make you lower your confidence in your belief, then learning that those same individuals agree with you should probably make you raise your confidence in your belief—provided they have greater confidence in it than you did before you found out about their agreement. In posing the questions we start with a single individual who realizes that one or more other people disagree/agree with her regarding one of her beliefs. 
We can formulate the questions with regard to just disagreement or to agreement and disagreement; we also have the choice of focusing on just agreement/disagreement or going with levels of confidence. Here are the primary epistemological questions for just disagreement and no levels of confidence:

    Response Question: Suppose you realize that some people disagree with your belief \(B\). How must you respond to the realization in order for that response to be epistemically rational (or perhaps wise)?

    Belief Question: Suppose you realize that some people disagree with your belief \(B\). How must you respond to the realization in order for your subsequent position on \(B\) to be epistemically rational?

Here are the questions for agreement/disagreement plus levels of conviction:

    Response Question*: Suppose you realize that some people have a confidence level in \(B\) that is different from yours. How must you respond to the realization in order for that response to be epistemically rational (or perhaps wise)?

    Belief Question*: Suppose you realize that some people have a confidence level in \(B\) that is different from yours. How must you respond to the realization in order for your subsequent position on \(B\) to be epistemically rational?

    3. Response to Disagreement vs. Subsequent Level of Confidence

A person can start out with a belief that is irrational, obtain some new relevant evidence concerning that belief, respond to that new evidence in a completely reasonable way, and yet end up with an irrational belief. This fact is particularly important when it comes to posing the central questions regarding the epistemology of disagreement (Christensen 2011). Suppose Bub's belief that Japan is a totalitarian state, belief \(J\), is based on a poor reading of the evidence and a raging, irrational bias that rules his views on this topic. He has let his bias ruin his thinking through his evidence properly.
Then he gets some new information: some Japanese police have been caught on film beating government protesters. After hearing this, Bub retains his old confidence level in \(J\). We take it that when Bub learns about the police, he has not acquired some new information that should make him think ‘Wait a minute; maybe I’m wrong about Japan’. He shouldn’t lose confidence in his belief \(J\) merely because he learned some facts that do not cast any doubt on his belief! The lesson of this story is this: Bub’s action of maintaining his confidence in his belief as a result of his new knowledge is reasonable even though his retained belief itself is unreasonable. Bub’s assessment of the original evidence concerning \(J\) was irrational, but his reaction to the new information was rational; his subsequent belief in \(J\) was (still) irrational (because although the video gives a little support to \(J\), it’s not much). The question, ‘Is Bub being rational after he got his new knowledge?’ has two reasonable interpretations: ‘Is his retained belief in \(J\) rational after his acquisition of the new knowledge?’ vs. ‘Is his response to the new knowledge rational?’ On the one hand, “rationality demands” that upon his acquisition of new knowledge Bub drop his belief \(J\) that Japan is a totalitarian state: after all, his overall evidence for it is very weak. On the other hand, “rationality demands” that upon his acquisition of new knowledge Bub keep his belief \(J\) given that that acquisition—which is the only thing that’s happened to him—gives him no reason to doubt \(J\). This situation still might strike you as odd. After all, we’re saying that Bub is being rational in keeping an irrational belief! But no: that’s not what we’re saying. The statement ‘Bub is being rational’ is ambiguous: is it saying that Bub’s retained belief \(J\) is rational or is it saying that Bub’s retaining of that belief was rational? 
The statement can take on either meaning, and the two meanings end up with different verdicts: the retained belief is irrational but the retaining of the belief is rational. In the first case, a state is being evaluated; in the second, an action is being evaluated. Consider a more mundane case. Jack hears a bump in the night and irrationally thinks there is an intruder in his house (he has long had three cats and two dogs, so he should know by now that bumps are usually caused by his pets; further, he has been a house owner long enough to know full well that old houses like his make all sorts of odd noises at night, pets or no). Jack has irrational belief \(B\): there is an intruder upstairs or there is an intruder downstairs. Then after searching upstairs he learns that there is no intruder upstairs. Clearly, the reasonable thing for him to do is infer that there is an intruder downstairs—that's the epistemically reasonable cognitive move to make in response to the new information—despite the fact that the new belief 'There is an intruder downstairs' is irrational in an evidential sense (Feldman 2006c). These two stories show that one's action of retaining one's belief—that intellectual action—can be epistemically fine even though the retained belief is not. And, more importantly, we have to distinguish two questions about the acquisition of new information (which need not have anything at all to do with disagreement):

    After you acquire some new information relevant to a certain belief \(B\) of yours, what should your new level of confidence in \(B\) be in order for your new level of confidence regarding \(B\) to be rational?

    After you acquire some new information relevant to a certain belief \(B\) of yours, what should your new level of confidence in \(B\) be in order for your response to the new information to be rational?
The latter question concerns an intellectual action (an intellectual response to the acquisition of new information), whereas the former question concerns the subsequent level of confidence itself, the new confidence level you end up with, which comes about partially as a causal result of the intellectual action. As we have seen with the Japan and intruder stories, the epistemic reasonableness of the one is partially independent of that of the other.

    4. Disagreement with Superiors, Inferiors, Peers, and Unknowns

A child has belief \(B\) that Hell is a real place located in the center of the earth. You disagree. This is a case in which you disagree with someone who you recognize to be your epistemic inferior on the question of whether \(B\) is true. You believe that Babe Ruth was the greatest baseball player ever. Then you find out that a sportswriter who has written several books on the history of baseball disagrees, saying that so-and-so was the greatest ever. In this case, you realize that you're disagreeing with an epistemic superior on the matter, since you know that you're just an amateur when it comes to baseball. In a third case, you disagree with your sister regarding the name of the town your family visited on vacation when you were children. You know from long experience that your memory is about as reliable as hers on matters like this one; this is a disagreement with a recognized epistemic peer. There are several ways to define the terms 'superior', 'inferior', and 'peer' (Elga 2007; see section 5 below). You can make judgments about how likely someone is compared to you when it comes to answering 'Is belief \(B\) true?' correctly.
If you think she is more likely (e.g., you suppose that the odds that she will answer it correctly are about 90% whereas your odds are just around 80%), then you think she is your likelihood superior on that question; if you think she is less likely, then you think she is your likelihood inferior on that question; if you think she is about equally likely, then you think she is your likelihood peer on that question. Another way to describe these distinctions is by referencing the epistemic position of the various parties. One’s epistemic position describes how well placed one is, epistemically speaking, with respect to a given proposition. The better one’s epistemic position, the more likely one is to be correct. There are many factors that help determine one’s epistemic position, or how likely one is to answer ‘Is belief \(B\) true?’ correctly. Here are the main ones (Frances 2014):

- cognitive ability had while answering the question
- evidence brought to bear in answering the question
- relevant background knowledge
- time devoted to answering the question
- distractions encountered in answering the question
- relevant biases
- attentiveness when answering the question
- intellectual virtues possessed

Call these Disagreement Factors. Presumably, what determines that \(X\) is more likely than \(Y\) to answer ‘Is \(B\) true?’ correctly are the differences in the Disagreement Factors for \(X\) and \(Y\). For any given case of disagreement between just two people, the odds are that they will not be equivalent on all Disagreement Factors: \(X\) will surpass \(Y\) on some factors and \(Y\) will surpass \(X\) on others. If you are convinced that a certain person is clearly lacking compared to you on many Disagreement Factors when it comes to answering the question ‘Is \(B\) true?’, then you’ll probably say that you are more likely than she is to answer the question correctly, provided you are not lacking compared to her on other Disagreement Factors.
If you are convinced that a certain person definitely surpasses you on many Disagreement Factors when it comes to answering ‘Is \(B\) true?’, then you’ll probably say that you are less likely than she is to answer the question correctly, provided you have no advantage over her when it comes to answering it. If you think the two of you differ in Disagreement Factors but the differences do not add up to one person having a net advantage (so you think any differences cancel out), then you’ll think you are peers on that question. Notice that in this peer case you need not think that the two of you are equal on each Disagreement Factor. On occasion, a philosopher will define ‘epistemic peer’ so that \(X\) and \(Y\) are peers on belief \(B\) if and only if they are equal on all Disagreement Factors. If \(X\) and \(Y\) are equal on all Disagreement Factors, then they will be equally likely to judge \(B\) correctly, but the reverse does not hold: deficiencies of a peer in one area may be compensated for by advantages in other areas, with the final result that the two individuals are in an equally good epistemic position despite some inequalities on particular Disagreement Factors. In order to understand the alternative definitions of ‘superior’, ‘inferior’, and ‘peer’, we will look at two cases of disagreement (Frances 2014). Suppose I believe \(B\), that global warming is happening. Suppose I also believe \(P\), that Taylor is my peer regarding \(B\) in this sense: I think we are equally likely to judge \(B\) correctly. I have this opinion of Taylor because I figure that she knows about as well as I do the basic facts about expert consensus, she understands and respects that consensus about as much as I do, and she based her opinion of \(B\) on those facts. (I know she has some opinion on \(B\) but I have yet to actually hear her voice it.) Thus, I think she is my likelihood peer on \(B\).
But in another sense I don’t think she is my peer on \(B\). After all, if someone asked me ‘Suppose you find out later today that Taylor sincerely thinks \(B\) is false. What do you think are the odds that you’ll be right and she’ll be wrong about \(B\)?’ I would reply with ‘Over 95%!’ I would answer that way because I’m very confident in \(B\)’s truth and if I find out that Taylor disagrees with that idea, then I will be quite confident that she’s wrong and I’m right. So in that sense I think I have a definite epistemic advantage over her: given how confident I am in \(B\), I think that if it turns out we disagree over \(B\), there is a 95% chance I’m right and she’s wrong. Of course, given that I think that we are equally likely to judge \(B\) correctly and I’m very confident in \(B\), I’m also very confident that she will judge \(B\) to be true; so when I’m asked to think about the possibility that Taylor thinks \(B\) is false, I think I’m being asked to consider a very unlikely scenario. But the important point here is this: if I have the view that if it turns out that she really thinks \(B\) is false then the odds that I’m right and she’s wrong are 95%, then in some sense my view is that she’s not “fully” my peer on \(B\), as I think that when it comes to the possibility of disagreement I’m very confident that I will be in the right and she won’t be. Now consider another case. Suppose Janice and Danny are the same age and take all the same math and science classes through high school. They are both moderately good at math. In fact, they almost always get the same grades in math. On many occasions they come up with different answers for homework problems. As far as they have been able to determine, in those cases 40% of the time Janice has been right, 40% of the time Danny has been right, and 20% of the time they have both been wrong. Suppose they both know this interesting fact about their track records! Now they are in college together. 
Danny believes, on the basis of their track records, that on the next math problem they happen to disagree about, the probability that Janice’s answer is right equals the probability that his answer is right—unless there is some reason to think one of them has some advantage in this particular case (e.g., Danny has had a lot more time to work on it, or some other significant discrepancy in Disagreement Factors). Suppose further that on the next typical math problem they work on, Danny thinks that neither of them has any advantage over the other this time around. And then Danny finds out that Janice got an answer different from his. In this math case Danny first comes to think that \(B\) (his answer) is true. But he also thinks that if he were to discover that Janice thinks \(B\) is false, the probability that he is right and Janice is wrong is equal to the probability that he is wrong and Janice is right. That’s very different from the global warming case, in which I thought that if I were to discover that Taylor thinks \(B\) is false, the probability that I’m right and she’s wrong is 19 times the probability that I’m wrong and she’s right (95% is 19 times 5%). Let’s say that I think you’re my conditional peer on \(B\) if and only if, before I find out your view on \(B\) but after I have come to believe \(B\), I think that if it turns out that you disbelieve \(B\), then the chance that I’m right about \(B\) is equal to the chance that you’re right about \(B\). So although I think Taylor is my likelihood peer on the global warming belief, I don’t think she is my conditional peer on that belief. I think she is my conditional inferior on that matter. But in the math case Danny thinks Janice is his likelihood peer and his conditional peer on the relevant belief.
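The odds arithmetic behind these two cases can be made explicit with a short sketch. Python is used purely for illustration, and the function name is mine, not a term from the literature:

```python
def odds_in_favor(p_right_given_disagreement: float) -> float:
    """Convert the probability of being the one who's right,
    conditional on a disagreement arising, into odds (p : 1-p)."""
    return p_right_given_disagreement / (1.0 - p_right_given_disagreement)

# Global warming case: I give myself a 95% chance of being right
# if Taylor turns out to disagree, i.e. roughly 19-to-1 odds.
taylor_case = odds_in_favor(0.95)   # ~19.0

# Math case: Danny gives himself and Janice equal chances,
# i.e. even (1-to-1) odds.
janice_case = odds_in_favor(0.50)   # 1.0
```

On this way of putting it, likelihood peerhood concerns the unconditional chances of judging correctly, while conditional peerhood concerns these odds conditional on a disagreement actually arising.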
So, central to answering the Response Question and the Belief Question is the following:

Better Position Question: Are the people who disagree with \(B\) in a better epistemic position to correctly judge the truth-value of the belief than the people who agree with \(B\)?

Put in terms of levels of confidence we get the following:

Better Position Question*: Are the people who have a confidence level in \(B\) that is different from yours in a better epistemic position to correctly judge the truth-value of the belief than the people who have the same confidence level as yours?

The Better Position Question is often not easy to answer. In the majority of cases of disagreement, when \(X\) realizes she disagrees with \(Y\), \(X\) will not have much evidence to think \(Y\) is her peer, superior, or inferior when it comes to correctly judging \(B\). For instance, if I am discussing with a neighbor whether our property taxes will be increasing next year, and I discover that she disagrees with me, I may have very little idea how we measure up on the Disagreement Factors. I may know that I have more raw intelligence than she has, but I probably have no idea how much she knows about local politics, how much she has thought about the issue before, etc. I will have little basis for thinking I’m her superior, inferior, or peer. We can call these the unknown cases. Thus, when you discover that you disagree with someone over \(B\), you need not think, or have reason to think, that she is your peer, your superior, or your inferior when it comes to judging \(B\). A related question is whether there is any important difference between cases where you are justified in believing your interlocutor is your peer and cases where you are justified in believing that your interlocutor is not your peer but lack any reason to think that either of you is in the better epistemic position.
Peerhood is rare, if not an outright fictional idealization, yet in many real-world cases of disagreement we are not justified in making a judgment about which party is better positioned to answer the question at hand. The question here is whether different answers to the Response Question and the Belief Question should be given in these two kinds of case. Plausibly, the answer is no. An analogy may help. It is quite rare for two people to have the very same weight, so for any two people it is quite unlikely that they are ‘weight peers’. That said, in many cases it may be entirely unclear which party weighs more, even if they agree that it is unreasonable to believe they weigh the exact same amount. Rational decisions about what to do, where the weight of the parties matters, do not seem to differ between cases where there are ‘weight peers’ and cases where the parties simply lack a good reason to believe either party weighs more. Similarly, it seems that the answers to the Response Question and the Belief Question will not differ between cases of peer disagreement and cases where the parties simply lack any good reason to believe that either party is epistemically better positioned on the matter. Another challenge in answering the Better Position Question occurs when you are a novice about some topic and you are trying to determine who the experts on the topic are. This is what Goldman terms the ‘novice/expert problem’ (Goldman 2001). While novices ought to turn to experts for intellectual guidance, a novice in some domain seems ill-equipped even to determine who the experts in that domain are. Hardwig (1985, 1991) claims that such novice reliance on an expert must necessarily be blind, and thus exhibit an unjustified trust. In contrast, Goldman explores five potential evidential sources for reasonably determining someone to be an expert in a domain:

- Arguments presented by the contending experts to support their own views and critique their rivals’ views.
- Agreement from additional putative experts on one side or other of the subject in question.
- Appraisals by “meta-experts” of the experts’ expertise (including appraisals reflected in formal credentials earned by the experts).
- Evidence of the experts’ interests and biases vis-à-vis the question at issue.
- Evidence of the experts’ past “track-records”. (Goldman 2001, 93.)

The vast majority of the literature on the epistemic significance of disagreement, however, concerns recognized peer disagreement (for disagreement with superiors, see Frances 2013). We turn now to this issue.

5. Peer Disagreements

Before we begin our discussion of peer disagreements it is important to set aside a number of cases. Epistemic peers with respect to \(P\) are in an equally good epistemic position with respect to \(P\). Peers about \(P\) can both be in a very good epistemic position with respect to \(P\), or they could both be in a particularly bad epistemic position with respect to \(P\). Put differently, two fools could be peers. However, disagreement between fool peers has not been of particular epistemic interest in the literature. The literature on peer disagreement has instead focused on disagreement between competent epistemic peers, where competent peers with respect to \(P\) are in a good epistemic position with respect to \(P\)—they are likely to be correct about \(P\). Our discussion of peer disagreement will be restricted to competent peer disagreement. In the literature on peer disagreements, four main views have emerged: the Equal Weight View, the Steadfast View, the Justificationist View, and the Total Evidence View.

5.1 The Equal Weight View

The Equal Weight View is perhaps the most prominently discussed view on the epistemic significance of disagreement. Competitor views of peer disagreement are best understood as rejections of various aspects of the Equal Weight View, so it is a fitting place to begin our examination.
As we see it, the Equal Weight View is a combination of three claims:

Defeat: Learning that a peer disagrees with you about \(P\) gives you a reason to believe you are mistaken about \(P\).

Equal Weight: The reason to think you are mistaken about \(P\) coming from your peer’s opinion about \(P\) is just as strong as the reason to think you are correct about \(P\) coming from your opinion about \(P\).

Independence: Reasons to discount your peer’s opinion about \(P\) must be independent of the disagreement itself.

Defenses of the Equal Weight View in varying degrees can be found in Bogardus 2009, Christensen 2007, Elga 2007, Feldman 2006, and Matheson 2015a. Perhaps the best way to understand the Equal Weight View is to explore the motivation that has been given for it. We can distinguish three broad kinds of support for the view: examining central cases, theoretical considerations, and the use of analogies. The central case that has been used to motivate the Equal Weight View is Christensen’s Restaurant Check Case:

The Restaurant Check Case. Suppose that five of us go out to dinner. It’s time to pay the check, so the question we’re interested in is how much we each owe. We can all see the bill total clearly, we all agree to give a 20 percent tip, and we further agree to split the whole cost evenly, not worrying over who asked for imported water, or skipped dessert, or drank more of the wine. I do the math in my head and become highly confident that our shares are $43 each. Meanwhile, my friend does the math in her head and becomes highly confident that our shares are $45 each. (Christensen 2007, 193.)
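The arithmetic the diners are disputing is straightforward to reconstruct. Christensen's quotation does not state the bill total, so the figure below is a hypothetical one, chosen only to land near the disputed $43/$45 range:

```python
# Hypothetical reconstruction of the Restaurant Check arithmetic.
BILL = 183.00     # pre-tip total (not given in the quotation; assumed here)
TIP_RATE = 0.20   # the agreed 20 percent tip
DINERS = 5        # five people splitting the whole cost evenly

share = BILL * (1 + TIP_RATE) / DINERS
print(f"each share: ${share:.2f}")   # each share: $43.92
```

The disagreement is over which mental calculation of this quantity went wrong, not over the inputs, which both parties can see clearly.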
Understood as a case of peer disagreement, where the friends have a track record of being equally good at such calculations, and where neither party has a reason to believe that on this occasion either party is especially sharp or dull, Christensen claims that upon learning of the disagreement regarding the shares he should become significantly less confident that the shares are $43 and significantly more confident that they are $45. In fact, he claims that these competitor propositions ought to be given roughly equal credence. The Restaurant Check Case supports Defeat since in learning of his peer’s belief, Christensen becomes less justified in his belief. His decrease in justification is seen in the fact that he must lower his confidence to be in a justified position on the issue. Learning of the disagreement gives him reason to revise and an opportunity for epistemic improvement. Further, the Restaurant Check Case supports Equal Weight, since the reason Christensen gains to believe he is mistaken is quite strong. Since he should be equally confident that the shares are $45 as that they are $43, his reasons equally support these claims. Giving the peer opinions equal weight has typically been understood to require ‘splitting the difference’ between the peer opinions, at least when the two peer opinions exhaust one’s evidence about the opinions on the matter. Splitting the difference is a kind of doxastic compromise that calls for the peers to meet in the middle. So, if one peer believes \(P\) and one peer disbelieves \(P\), giving the peer opinions equal weight would call for each peer to suspend judgment about \(P\). Applied to the richer doxastic picture that includes degrees of belief, if one peer has a 0.7 degree of belief that \(P\) and the other has a 0.3 degree of belief that \(P\), giving the peer opinions equal weight will call for each peer to adopt a 0.5 degree of belief that \(P\).
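The 'splitting the difference' move on degrees of belief can be sketched as a simple pooling rule. A straight average is one natural reading of giving the opinions equal weight; the function below is only an illustration, not the sole formalization discussed in the literature:

```python
def split_the_difference(my_credence: float, peer_credence: float) -> float:
    """Equal-weight pooling of two credences: meet at the midpoint."""
    return (my_credence + peer_credence) / 2.0

# Belief (0.7) meets doubt (0.3): both parties move to roughly 0.5,
# the degree-of-belief analogue of suspending judgment.
pooled = split_the_difference(0.7, 0.3)   # ~0.5
```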
It is important to note that what gets ‘split’ is the peers’ attitudes, not the content of the relevant propositions. For instance, in the Restaurant Check Case, splitting the difference does not require believing that the shares are $44. Perhaps it is obvious that the shares are not an even amount. Splitting the difference applies only to the disparate doxastic attitudes concerning any one proposition (the disputed target proposition); the content of the propositions believed by the parties is not where the compromise occurs. Finally, the Restaurant Check Case supports Independence. The reasons that Christensen could have to discount his peer’s belief about the shares could include that his friend had a little too much to drink tonight, that she is especially tired, that Christensen double-checked but his friend didn’t, etc., but could not include that the shares actually are $43, that Christensen disagrees, etc. Theoretical support for the Equal Weight View comes from first thinking about ordinary cases of testimony. Learning that a reliable inquirer has come to believe a proposition gives you a reason to believe that proposition as well. The existence of such a reason does not seem to depend upon whether you already have a belief about that proposition. Such testimonial evidence is some evidence to believe the proposition regardless of whether you agree, disagree, or have never considered the proposition. This helps motivate Defeat, since a reason to believe the proposition when you disbelieve it amounts to a reason to believe that you have made a mistake regarding that proposition. Similar considerations apply to more fine-grained degrees of confidence. Testimonial evidence that a reliable inquirer has adopted a 0.8 degree of belief that \(P\) gives you a reason to adopt a 0.8 degree of belief toward \(P\), and this seems to hold regardless of whether you already have a level of confidence that \(P\).
Equal Weight is also motivated by considerations regarding testimonial evidence. The weight of a piece of testimonial evidence is proportional to the epistemic position of the testifier (or what the hearer’s evidence supports about the epistemic position of the testifier). So, if you have reason to believe that Jai’s epistemic position with respect to \(P\) is inferior to Mai’s, then discovering that Jai believes \(P\) will be a weaker reason to believe \(P\) than discovering that Mai believes \(P\). However, in cases of peer disagreement, both parties are in an equally good epistemic position, so it would follow that their opinions on the matter should be given equal weight. Finally, Independence has been theoretically motivated by examining what kind of reasoning its denial would permit. In particular, a denial of Independence has been thought to permit a problematic kind of question-begging by allowing one to use one’s own reasoning to conclude that one’s peer is mistaken. Something seems wrong with the following line of reasoning: “My peer believes not-\(P\), but I concluded \(P\), so my peer is wrong”, or “I thought \(S\) was my peer, but \(S\) thinks not-\(P\), and I think \(P\), so \(S\) is not my peer after all” (see Christensen 2011). Independence forbids both of these ways of blocking the reason to believe that you are mistaken arising from the discovery of the disagreement. The Equal Weight View has also been motivated by way of analogies. Of particular prominence are analogies to thermometers. Thermometers take in pieces of information as inputs and give temperature verdicts as outputs. Humans are a kind of cognitive machine that takes in various kinds of information as inputs and gives doxastic attitudes as outputs. In this way, humans and thermometers are analogous. Support for the Equal Weight View has come from examining what it would be rational to believe in a case of peer thermometer disagreement.
Suppose that you and I know we have equally reliable thermometers and, while investigating the temperature of the room we are in, discover that our thermometers give different outputs (yours reads ‘75’ and mine reads ‘72’). What is it rational for us to believe about the room temperature? It seems it would be irrational for me to continue believing it was 72 simply because that was the output of the thermometer I was holding. Similarly, it seems irrational for me to believe that your thermometer is malfunctioning simply because my thermometer gave a different output. It seems that I would need some information independent of this ‘disagreement’ to discount your thermometer. So, it appears that I have been given a reason to believe that the room’s temperature is not 72 by learning of your thermometer’s reading, that this reason is as strong as my reason to believe it is 72, and that this reason is only defeated by independent considerations. If the analogy holds, then we have reason to accept each of the three theses of the Equal Weight View. The Equal Weight View is not the only game in town when it comes to the epistemic significance of disagreement. In what follows we will examine the competitor views of disagreement, highlighting where and why they depart from the Equal Weight View.

5.2 The Steadfast View

On the spectrum of views on the epistemic significance of disagreement, the Equal Weight View and the Steadfast View lie on opposite ends. While the Equal Weight View is quite conciliatory, the Steadfast View maintains that sticking to one’s guns in a case of peer disagreement can be rational. That is, discovering a peer disagreement does not mandate any doxastic change. While the Equal Weight View may be seen to emphasize intellectual

  • https://www.youtube.com
    Can American Politics Survive Pluralism?

    A traditional motto of the United States is E pluribus unum: “out of many, one.” In recent decades, though, American unity has been increasingly challenged by the fact that citizens hold radically different worldviews. We disagree about both what policies we should pursue and how to live together in peace. Frustration with this pluralism has led many on the political Left and Right to express hatred and intolerance. But what if pluralism could be better understood, or even accepted? What if it could be transformed from a political weakness into a political strength? And what might this look like in terms of political institutions and practices? Renowned authors Yuval Levin and John Inazu joined David Corey on October 19th to explore how we can live together across our deep differences. Baylor in Washington was pleased to co-sponsor this event with the Institute for Human Ecology and AEI's Initiative on Faith and Public Life. Panel: Yuval Levin is the author most recently of The Fractured Republic: Renewing America’s Social Contract (2017) and A Time to Build: From Family and Community to Congress and the Campus, How Recommitting to Our Institutions Can Revive the American Dream (2020). He is the director of Social, Cultural, and Constitutional Studies at the American Enterprise Institute (AEI) where he also holds the Beth and Ravenel Curry Chair in Public Policy. John Inazu is the author most recently of Confident Pluralism: Surviving and Thriving Through Deep Difference (2016), and co-editor (with Tim Keller) of Uncommon Ground: Living Faithfully in a World of Difference (2020). He is the Sally D. Danforth Distinguished Professor of Law and Religion at Washington University in St. Louis. Moderated By: Dr. David Corey is the Director of Baylor in Washington and a professor of Political Science focusing on political philosophy in the Honors Program at Baylor University. He is also an affiliated member of the departments of Philosophy and Political Science. 
He is the author of two books, The Just War Tradition (with J. Daryl Charles) (2012) and The Sophists in Plato’s Dialogues (2015).

  • https://www.youtube.com
    Social Peace in a Divided Time

    This is a keynote address of the 2021 Ethics Awareness Week sponsored by Utah Valley University's Center for the Study of Ethics. Entitled, "Social Peace in a Divided Time," this session features Yuval Levin, Senior Fellow at the American Enterprise Institute and editor-in-chief of National Affairs. More information about the 28th annual Ethics Awareness Week and other events can be found at https://uvu.edu/ethics Chapters: 0:00 Welcome and Introduction 3:24 Roots of our divisions 8:46 The missing structures of social life 12:25 Waning trust in institutions 16:46 Institutions failing to impose an ethic on the people 23:02 What can we do? 33:15 Question: values of a public university 37:41 Question: virtue signaling at universities 39:40 Question: media as an institution 45:16 Question: accounting for marginalization from institutions

  • The Sum Total of Our Disagreements: The Common Good and Liberal Governance

Prof. Capizzi delivered this lecture as part of the Thomistic Circles Conference titled "What is the Common Good?" (Feb. 26-28). The common good is both one (in the sense of indivisible) and universal (in the sense of serving the good of every particular member of that community). When political communities fail to pursue the common good, they diminish their members' flourishing and likewise when the members of political communities fail to order their activities toward the common good, not only do they undermine their own flourishing but they also diminish the community of which they are a part. Due to technical difficulties, video is only available for some of the lectures. ABOUT THOMISTIC CIRCLES: Our Thomistic Circles Conferences at the Dominican House of Studies in Washington, D.C. bring together prominent professors (principally in theology and philosophy), graduate students, seminarians, and Dominican brothers to provide a forum for examining contemporary questions from the perspective of classical Catholic theology, and to encourage the renewal of theology and philosophy in the Thomistic tradition. These conferences are distinctive not only because of their academic quality, but also because they take place in the context of a vibrant Dominican studium and religious community. As befits the Dominican tradition, the serious study of theology and philosophy is integrated with the contemplation of the mysteries of the faith. Thomistic Circles have been held under the auspices of the faculty at the Dominican House of Studies (founded in Washington, D.C. in 1905) for most of its history. ABOUT THE SPEAKER: Joseph E. Capizzi is Ordinary Professor of Moral Theology at the Catholic University of America. 
He teaches in the areas of social and political theology, with special interests in issues in peace and war, citizenship, political authority, and Augustinian theology. He has written, lectured, and published widely on just war theory, bioethics, the history of moral theology, and political liberalism.

  • https://www.youtube.com
    Civic Friendship

    John Haldane outlines how reasonable disagreement should work in a divided society.

  • The Epistemic Benefits of Disagreement
    https://play.google.com

This book presents an original discussion and analysis of epistemic peer disagreement. It reviews a wide range of cases from the literature and extends the current definition of epistemic peerhood to account for the actual variability found in real-world examples. The book offers a number of arguments supporting the variability in the nature and range of disagreements, and outlines the main benefits of disagreement among peers, i.e., what the author calls the benefits to inquiry argument.