
How (Not) to Conciliate in Cases of Peer Disagreement


Work in progress. Comments are welcome. Please cite with permission.


Consider the following two familiar scenarios:

Case 1: My friend and I, two equally attentive and well-sighted individuals, stand side-by-side at the finish line of a horse race. The race is extremely close. At time t0, just as the first horses cross the finish line, it looks to me as though Horse A has won the race in virtue of finishing slightly ahead of Horse B; on the other hand, it looks to him as though Horse B has won in virtue of finishing slightly ahead of Horse A. At time t1, an instant later, we discover that we disagree about which horse has won the race.[1] [1: Adapted from Kelly (2010).]

Case 2: My friend and I have been going out to dinner for many years. We always tip 20% and divide the bill equally, and we always do the math in our heads. We're quite accurate, but on those occasions where we've disagreed in the past, we've been right equally often. This evening seems typical, in that I don't feel unusually tired or alert, and neither my friend nor I have had more wine or coffee than usual. I get $43 in my mental calculation, and become quite confident of this answer. But then my friend says she got $45.[2] [2: Adapted from Christensen (2010).]

What these two cases have in common is that I come to find myself disagreeing on some issue with my friend, whom I consider my epistemic peer regarding the relevant domain: someone who I have no reason to believe lacks any evidence that I have or is less competent than I am in assessing evidence of the relevant sort. What should I do? Can I just stick to my guns? Or am I rationally obligated to revise my belief?

One would expect that, in Case 1, I will reduce my confidence that Horse A has won the race and, likewise in Case 2, that I will become less confident that the bill comes to $43. This is perhaps what usually happens when we find our peers disagree with us on a particular issue. If one thinks that this is also what ought to happen, that amounts to an affirmative answer to the third question asked above and thereby puts one in the camp of what is often called conciliationism.

But how exactly are disagreeing peers supposed to conciliate? They ought to split the difference, meeting their peer's opinion halfway. I shall show this by rejecting competing conciliatory and non-conciliatory views. I will argue, in Section 2, that disagreeing peers ought to conciliate and, in Section 3, that the right way to conciliate is to split the difference. But, first, we need to know what it is for two persons to be in peer disagreement.

1. Preliminaries

The following is a working definition of peer disagreement that I shall assume in this paper:

Two subjects, A and B, are in peer disagreement regarding p only if:
1) they believe with differing confidence (or credence) that p; and
2) they reasonably see each other as epistemic peers regarding p.

A few comments are in order about this definition. Firstly, about (1). It specifies the condition for disagreement and depends on a more fine-grained differentiation of propositional attitudes. This is not only because it has become standard practice to explicate disagreement in terms of degrees of belief but, more importantly, because doing so, as we shall see, allows us to draw more nuanced distinctions among competing views regarding peer disagreement.

Secondly, on the above definition, the kind of peers involved in peer disagreement are perceived peers. It matters not so much that two subjects have responded with equal competence to the same body of evidence regarding p as that they each reasonably believe the other to have done so. A reasonably sees B as her epistemic peer regarding p if A reasonably believes that A and B are exposed to the same batch of evidence, that they are equally reliable in assessing evidence of that sort, that they are equally likely to make mistakes in this particular case, etc. This is so even if A is in fact mistaken.

Moreover, the problem of peer disagreement concerns what the disagreeing peers are rationally required to do: Ought they to revise their beliefs? Or is it reasonable for them to continue to believe what they had believed before they found out about the disagreement? And so forth. Note that the notion of rationality or reasonableness in play here is epistemic, as opposed to practical or moral. Moreover, it is largely deontological in nature. Suppose, as I shall argue, that the disagreeing peers ought to revise their beliefs. This would imply that, if one doesn't, one is epistemically at fault.[3] But being epistemically blameworthy doesn't mean that one is thereby irrational. For one could be more or less rational in holding a belief, and the crucial question for the disagreeing peers is: given the facts of their disagreement and absent any further relevant evidence, what would be the most reasonable thing to believe?
In arguing that disagreeing peers ought to split the difference, I claim that it is ideally (or maximally) reasonable for them to meet their peer's opinion halfway. To do otherwise is less reasonable or rational, rather than not rational at all. [3: Now, to blame people epistemically for what they believe or how they come to believe it presupposes that they have some voluntary control over their beliefs. See Section 4.2 below.]

Notice, finally, that the definition merely picks out two necessary conditions for peer disagreement. This suffices for my purpose in this paper, so I shall leave it open whether they are also jointly sufficient.[4] [4: There are reasons for thinking not. See, e.g., Richard (typescript).]

2. Why Conciliate?

I will focus on the simplest cases of peer disagreement, ones that involve just two disagreeing peers, A and B. To represent peer disagreement of this sort, suppose that, exposed to the same batch of evidence E*, A believes with confidence a that p and B believes with confidence b that p. For the sake of simplicity, let's stipulate that a > b. The Steadfast View holds that the disagreeing peers may stick to their guns:

(S) Neither A nor B needs to revise: A's confidence remains a and B's confidence remains b.

Conciliatory views, by contrast, hold that the disagreeing peers ought to give some ground to each other:

(C) A should lower her confidence and B should raise her confidence: A's confidence = a - x, B's confidence = b + y (x ≥ 0, y ≥ 0, not both 0).

Since it is left open whether x = y and whether a - x = b + y, (C) could capture various ways of fleshing out the conciliatory insight. Below is a catalog of the major conciliatory views that one may find in the literature on peer disagreement:

C-a) A should lower her confidence and B should raise her confidence, but not to the same extent: A's confidence = a - x, B's confidence = b + y (x > 0, y > 0, x ≠ y);

C-b) A should lower her confidence and B should raise her confidence to the same extent: A's confidence = a - z, B's confidence = b + z (z > 0);[10] [10: See, e.g., Christensen (2007) and Enoch (2010), neither of which specifies how much exactly disagreeing peers should adjust their beliefs. Kelly (2005) seems to defend a version of this view; see fn. 21 below.]

C-c) A and B should split the difference: A's confidence = B's confidence = (a + b)/2, which is not necessarily .5;[11] [11: See, e.g., Elga (2007) (?). "The Equal Weight View," "splitting the difference," and "suspending judgment" are often taken to refer almost exclusively to (C-d). See, e.g., Feldman (2007) and Kelly (2010). This is a mistake, for (C-c) and (C-d) are clearly not equivalent. But see Section 4.2 below.]

C-d) A and B should both suspend their belief: A's confidence = B's confidence = .5;[12] [12: See, e.g., Feldman (2006), (2007), and (2010).]

C-e) A and B should switch camps: A's confidence = b, B's confidence = a.[13] [13: No one that I know of holds this view, but Plantinga (2000) considers and rejects a similar view.]

Some views on the list are more plausible than others. I shall argue for (C-c) by first showing how (C-c) avoids the problems that lead one to reject the other alternatives and then, in the next section, replying to the main objections.

3.1 Against (C-a)

(C-a) can be dismissed rather easily. According to (C-a), A and B should both adjust their beliefs, but not to the same extent; that is, the disagreeing peers should give more weight to their own opinion (or to their peer's). This is often referred to as the Extra Weight View.[14] But it contradicts our initial supposition, namely that A and B recognize each other as peers. If A has no reason, independent of E* and the fact that they disagree, to think that B lacks some evidence she possesses, was not in the right state of mind to assess E*, has made a mistake in assessing E*, etc., then it would be both arbitrary and question-begging for A to give more weight to her own opinion.[15] (And, for similar reasons, A should not give more weight to B's opinion than to her own.) Call this the Arbitrariness problem. The only way to preserve the symmetry between A and B is to make it the case that x = y; that is, A and B should make equally extensive revisions. This means that (C-a) needs to be rejected in favor of (C-b). [14: The former option is discussed more extensively in the literature.] [15: For the Independence Thesis and the question-begging nature of (C-a), see Christensen (2007), (2011).]

3.2 Against (C-b)

But (C-b) is not without problems. To begin with, what value should we assign to z? That depends on how much weight one should give to the higher-order evidence afforded by the fact that one's peer disagrees, and there doesn't seem to be any principled way of settling it. It might be suggested that this is not really a problem, for one could go contextualist here, arguing that how much exactly one should conciliate is context sensitive, depending, for instance, on what the peers disagree about, among other things.

But even if contextualism is a viable option here, a much more serious problem remains. Notice that (C-b) leaves it open whether a - z = b + z. To see why this is problematic, let a = .8, b = .2, and z = .1. According to (C-b), having learned about their disagreement, A should lower her confidence to .7 and B should raise her confidence to .3. But then A and B are, once again, in peer disagreement. This means we have to reapply (C-b): A's confidence now becomes .6 and B's confidence becomes .4. But, again, the disagreement between A and B persists. So new adjustments are required, such that A's confidence lowers to .5 and B's confidence reaches .5. Call this the Persisting Disagreement problem. The moral here is that reapplications of (C-b) are required until A's and B's confidences converge. This means that, in effect, (C-b) collapses into (C-c).[16] [16: One might think that adherents of (C-b) could avoid this problem by stipulating that (C-b) only needs to be applied once. But it's hard to see what the rationale for doing so might be. Thanks to Dane Muckler for raising this worry.]

What's worse, there are cases where A's and B's confidences couldn't possibly converge, cases where the reapplications of (C-b) would have to go on ad infinitum. Let a = .8, b = .1, and z = .1. After the third adjustment, A's confidence = .5 and B's confidence = .4. What happens next is that A's confidence = .4 and B's confidence = .5: A and B switch positions. So now A needs to raise her confidence and B needs to lower her confidence, which means that, after the fifth adjustment, A's confidence = .5 and B's confidence = .4. Absent any further evidence pertaining to p, the adjusting process could go on and on. Call this the Irresolvable Disagreement problem. This is again a reason in favor of (C-c), since (C-c) obviously doesn't have this problem.
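The behavior of iterated (C-b) can be checked with a short simulation (a hypothetical sketch: the helper names apply_cb and iterate and the step size z = .1 merely encode the running example above, not any rule defended in the literature):

```python
def apply_cb(a, b, z=0.1):
    """One application of (C-b): the more confident peer lowers her
    confidence by z and the less confident peer raises hers by z."""
    if a > b:
        a, b = a - z, b + z
    elif b > a:
        a, b = a + z, b - z
    # rounding suppresses floating-point drift across repeated updates
    return round(a, 10), round(b, 10)

def iterate(a, b, steps):
    """Reapply (C-b) `steps` times, recording each pair of confidences."""
    history = [(a, b)]
    for _ in range(steps):
        a, b = apply_cb(a, b)
        history.append((a, b))
    return history

# Persisting Disagreement: a = .8, b = .2 converges after three rounds.
print(iterate(0.8, 0.2, 3))
# [(0.8, 0.2), (0.7, 0.3), (0.6, 0.4), (0.5, 0.5)]

# Irresolvable Disagreement: a = .8, b = .1 reaches (.5, .4), after which
# the peers swap positions forever.
print(iterate(0.8, 0.1, 6)[3:])
# [(0.5, 0.4), (0.4, 0.5), (0.5, 0.4), (0.4, 0.5)]
```

The first trajectory halts once the confidences meet; the second enters a two-step cycle, exactly as the text describes.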

3.3 Against (C-d)

Those who are sympathetic with (C-d) would be quick to point out that (C-d), too, avoids the Persisting Disagreement problem and the Irresolvable Disagreement problem. For, if (C-d) is true, then the disagreeing peers should adjust their confidence that p such that A's confidence = B's confidence = .5. So A's and B's confidences automatically converge.

One problem with (C-d) is that it gets around these two problems only at the cost of running into another, the Arbitrariness problem. For, like (C-a), it fails to preserve the symmetry between the disagreeing peers. Take again the case where a = .8 and b = .1. (C-d) requires that A lower her confidence to .5 and B raise her confidence to .5. In other words, according to (C-d), A ought to become .3 less sure that p whereas B ought to become .4 more sure that p. But if A and B identify each other as peers, what could possibly justify the difference?

Moreover, (C-d) completely ignores the evidential relation between E* and p. After all, the basic idea behind (C-d) is this: whatever your initial confidence regarding p was and whatever the original evidence E* actually supports, you should adjust your confidence to .5 whenever you come to discover that your peer disagrees with you. This would be most puzzling in cases where one of the peers, say A, has in fact correctly responded to E* whereas B has made a terrible error that went unnoticed by both A and B. Suppose, prior to learning about the disagreement, A was .9 confident that p and B was .3 confident that p. Our intuition is strong that, given E*, A's was the (maximally) rational belief whereas B's belief was, if not irrational, much less rational than A's. Now, according to (C-d), when the facts of the disagreement are made known, it is (maximally) rational for both A and B to believe with .5 confidence that p.
How could it be that, whenever peers disagree, believing with .5 confidence that p automatically becomes the default, (maximally) rational position, regardless of what the evidential relation between E* and p is? The original evidence is rendered utterly irrelevant whenever peers disagree: this flies in the face of our strong intuitions.[17] Call this the Original Evidence problem. The original evidence should make a difference with respect to what it is rational for the disagreeing peers to believe regarding p. And this (C-d) cannot accommodate. [17: Kelly (2010) appeals to this intuition in arguing for what he calls the Total Evidence View, according to which what it is reasonable to believe depends both on the original, first-order evidence and on the higher-order evidence that is afforded by the fact that one's peers believe as they do. But it is not clear what exactly the view amounts to. (Kelly seems to think that disagreeing peers should conciliate, but not too much, for what the original evidence actually supports also makes a difference regarding what it is reasonable to believe. He also discusses other factors that might affect how extensively one should adjust one's belief. Seen this way, Kelly's view looks a lot like Jennifer Lackey's Justificationist View, which she defends in (2010a) and (2010b).) Moreover, the view seems to have to deal with the Persisting Disagreement problem. I shall leave it at that.]

3.4 Against (C-e)

If one wants to conciliate in the face of peer disagreement, simply switching camps is the last way to go. First of all, to give up your own original position and adopt your peer's is to completely defer to your peer's opinion, which means you are treating your peer as epistemically superior to you rather than as your peer, i.e., someone who could just as easily have made a mistake as you. So (C-e) faces the Arbitrariness problem. Moreover, switching camps only results in a different disagreement. For this reason, (C-e) faces both the Persisting Disagreement problem and the Irresolvable Disagreement problem.

3.5 Defending (C-c)

To sum up: to avoid the Arbitrariness problem, (C-a) has to collapse into (C-b). But (C-b) has to collapse into (C-c) in order to avoid the Persisting Disagreement problem and the Irresolvable Disagreement problem. (C-d) avoids both problems that (C-b) has, only to run into the Arbitrariness problem and the Original Evidence problem. (C-e) faces the Arbitrariness problem, the Persisting Disagreement problem, and the Irresolvable Disagreement problem. Now, if (C-c) avoids all these difficulties, then we have ample reason to think that disagreeing peers ought to split the difference. It is easy to see how (C-c) avoids the three problems facing (C-e). Recall:

(C-c) A and B should split the difference: A's confidence = B's confidence = (a + b)/2.

On (C-c), neither A nor B is required to make more extensive adjustments to her belief than the other is. Hence there is no arbitrariness of the kind that affects (C-a) and (C-e). And once they have split the difference, A and B are no longer in disagreement.

More needs to be said about how splitting the difference accommodates the original evidence. Consider again the case where A has got it right and believes with confidence .9 that p whereas B believes with confidence .3 that p. According to (C-d), when they find out about the disagreement, A and B ought to withhold belief, that is, believe with confidence .5 that p. This is problematic because the evidential force of E* gets swamped by the disagreement and becomes completely irrelevant. (C-c), on the other hand, requires A and B to split the difference and believe with confidence .6 that p. So (C-c) does take into account the fact that A has correctly evaluated the evidence.
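The contrast comes down to two lines of arithmetic (a sketch; split_difference and suspend are hypothetical helper names, and the rounding is only there to keep the floating-point output tidy):

```python
def split_difference(a, b):
    """(C-c): both peers move to the midpoint of their confidences."""
    return round((a + b) / 2, 10)

def suspend(a, b):
    """(C-d): both peers retreat to .5, whatever a and b were."""
    return 0.5

# A has correctly assessed E* (confidence .9); B has erred (confidence .3).
print(split_difference(0.9, 0.3))  # 0.6 -- A's correct assessment still counts
print(suspend(0.9, 0.3))           # 0.5 -- the force of E* is swamped entirely
```

On (C-c) the post-disagreement confidence varies with what the peers actually believed, and hence indirectly with what E* supported; on (C-d) it does not.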

4. Objections and Replies

I have argued that disagreeing peers ought to conciliate, ought to give some ground to their peer's opinion. More specifically, they ought to split the difference and meet their peer's opinion halfway. In this section, I shall consider some potential objections to the view.

4.1

As I mentioned in Section 3.5, (C-c) preserves the symmetry between A and B and hence avoids the kind of arbitrariness that leads us to reject (C-a), (C-d), and (C-e). But (C-c) seems to be arbitrary in another way. Consider these two cases:

Case 3: Exposed to the same evidence E1, A1 believes with confidence .9 that it is going to rain tomorrow and B1 only gives .1 credence to that proposition. A1 and B1 come together, compare notes, and realize that they are in peer disagreement.

Case 4: Given the same set of evidence E2, A2 believes with confidence .5 that the island was inhabited by humans 2000 years ago and B2 gives .4 credence to that proposition. A2 and B2 meet, compare notes, and realize that they are disagreeing peers.

Now, given (C-c), A1 and B1 ought to adjust their beliefs so that they both become .5 confident that it is going to rain tomorrow; A2 and B2, on the other hand, ought to believe with confidence .45 that the island was inhabited by humans 2000 years ago, after they split the difference. Notice that in Case 3, A1 lowers her confidence from .9 to .5 whereas in Case 4, A2 lowers her confidence only from .5 to .45. But, one might wonder, how could this be? How could the adjustments required of disagreeing peers vary from case to case? Isn't this arbitrary?

It does seem arbitrary to require disagreeing peers to make larger adjustments in one case than in another. But arbitrariness of this sort is exactly what we should expect if we want to accommodate the intuition that what the original evidence supports counts for something toward what it is reasonable for disagreeing peers to believe, that it cannot be swamped without exception.

The apparent arbitrariness arises also because the required adjustments depend on how far apart the disagreeing peers stand on a certain issue. The fact that peers radically disagree usually means that one of them is (more) in error than the other. When the disagreeing peers stand very far apart from each other, as in Case 3 (and suppose A1 has in fact correctly assessed the evidence and come to the right conclusion), by bringing their opinions closer to one another, neither of them ends up very far removed from the truth of the matter. On the other hand, when the disagreement is not that radical, as in Case 4, it usually means that the disagreeing peers are both close to the truth. In cases like this, only small adjustments are necessary. In short, disagreeing peers ought to split the difference, but how much ground they have to give depends on whom they are disagreeing with.

The last point, one might think, spells trouble: Does it mean that one has to constantly adjust one's confidence that p as one meets different peers who think differently?
Yes, if a) one has reason to believe that these peers all come to their opinions independently and b) one's adjusted confidence after meeting with one peer is still at odds with that of another. But one would expect that, in response to roughly the same body of evidence, the opinions of rational agents do not diverge dramatically from one another, at least regarding issues not as controversial as those in religion and politics (and perhaps in philosophy as well).

4.2

One might question the way I have modeled peer disagreement. While people do have some idea of how confident they are, say, that it's going to rain tomorrow, and may become more or less sure about it, is it really the case that people's confidence can be quantified in so fine-grained a way and adjusted as they wish? If it turns out that how confident people are about their beliefs is not quantifiable in the way I have assumed and/or is not reflectively transparent, it would mean that I have been relying on an unrealistic picture of human psychology and that the approach I have adopted is, hence, misguided. Moreover, even if people's confidence could be so quantified, it doesn't seem to be the case that they can freely adjust their confidence as accurately as is required.[18] In addition, if we could only identify three broadly defined propositional attitudes (i.e., belief, agnosticism, and disbelief) in terms of confidence or degrees of belief, the distinctions I have made between different conciliatory views may break down. [18: Elgin (2010) argues that since ought implies can but belief and its withholding are typically not under the requisite kind of voluntary control, we need to adopt a non-conformist policy regarding peer disagreement, namely (S) as I defined it in Section 2.]

All this boils down to, I believe, the worry that I may have to modify my conclusion depending on factors that can only be empirically determined. To the extent that our confidence regarding a certain proposition can be quantified in the way I suggested, is accessible via introspection, and can be freely adjusted, disagreeing peers ought to give ground; more specifically, they ought to split the difference by adjusting their confidence so as to meet their peer's opinion halfway. Now, if it is true that the only propositional attitudes that can be clearly distinguished are the three we are familiar with, then, assuming that we can easily switch from one propositional attitude to another,[19] splitting the difference would simply amount to suspending judgment. This can be seen more clearly when you believe, for instance, that Horse B has just won the race and I believe that it has not. On the confidence model, your confidence that Horse B won equals 1 whereas mine equals 0. Now, if we are to split the difference, we both ought to believe with confidence .5 that Horse B won, which means that we ought to suspend judgment on the issue. [19: Elgin's non-conformist conclusion requires that belief and its withholding are never within our voluntary control, but there is no reason to think that this is the case.]

But if we take suspending judgment as the general conciliatory policy, things may get a little trickier when, for instance, I believe that Horse B won (i.e., confidence = 1) and you are agnostic about it (i.e., confidence = .5). What are we required to do? It seems that whereas you ought to stick to your original opinion, I ought to suspend judgment, which means I am giving ground and you are not. This may sound strange at first. But it becomes less so once I rephrase it: When we come to find out about the disagreement, we both ought to suspend judgment; it's just that this happens to mean different things for us, conciliating in my case and sticking to your guns in yours.

4.3

Splitting the difference may be generally required of disagreeing peers, but there seem to be cases where they could reasonably just stick to their guns. Consider the following case:

You and I are attentive members of a jury charged with determining whether the accused is guilty. The prosecution, following the defense, has just rested its case. But, after being exposed to the same evidence and arguments, I find myself quite confident that the accused is guilty while you find yourself equally confident that he is innocent.[20] [20: Adapted from Kelly (2013).]

It seems that even if we reasonably consider each other as peers,[21] neither of us has to give any ground. If this is the case, then, as a universal thesis, (C-c) is false. [21: There are reasons to think we are not: since they are not professionally trained, jurors may have different background beliefs, may weigh evidence differently, etc.]

Our intuition that disagreeing jurors don't have to give ground to one another rests, I am inclined to think, on a confusion: it conflates legal permissibility with epistemic permissibility. For in this particular context, the opinions of each member of the jury are legally protected, are granted equal weight under the law. So even if members of the jury are epistemically obligated to give ground in cases of disagreement, they are legally permitted not to make that move. Moreover, the court often operates under the assumption that there might be gray areas in legal issues and that, consequently, some disagreements might be irresolvable. In short, disagreements among peers are sometimes expected, tolerated, and even protected for non-epistemic reasons. But this doesn't mean that, epistemically speaking, peers involved in disagreements of this sort are not obligated to conciliate.

4.4

Perhaps the most devastating objection is that any conciliatory view, including (C-c), would be self-undermining when it comes to higher-order disagreement, i.e., disagreement about how to disagree. On conciliatory views, disagreeing peers ought to give ground. Let's grant this in cases of first-order disagreement, cases where peers disagree about horse race results, restaurant bills, weather, etc. But philosophers hold competing views on peer disagreement; they obviously disagree about what is rationally required of disagreeing peers. These philosophers do seem to recognize one another as peers, as no less philosophically competent. Now, for someone who holds a conciliatory view, say (C-c), what should she do when she realizes and reflects on the fact that her peer, an adherent of a steadfast view, believes that disagreeing peers do not have to conciliate? Because of her conciliatory commitments and on pain of inconsistency, she can't simply continue to hold (C-c). But if she is consistent and decides to conciliate, e.g., by lowering her confidence that (C-c) is the right view, then she would be conceding, to some extent, that (C-c) is not the right view.[22] Call this the Self-Undermining problem. [22: Elga (2010) considers an objection along this line.]

One could, of course, hold a partly conciliatory view, according to which one should give ground in the face of disagreement about all issues except when the issue under dispute is disagreement itself. That is, conciliatory views are the right responses to peer disagreement only when the disagreement is not about how to disagree (Elga 2010). This strategy would surely get us around the Self-Undermining objection, but its ad hoc nature makes it less attractive. More importantly, I think this is giving too much ground too quickly, for a couple of reasons. First, a case can be made that people with competing views about peer disagreement are not yet peers. Since this remains a debated issue, it is plausible to think that philosophers are still comparing notes and hence may not share all the relevant evidence.[23] [23: Frances (2010), Fumerton (2010), and van Inwagen (2010) all argue to the effect that it is not blameworthy, in at least some philosophical debates, to be unaffected by the differing opinions of one's peers. But I don't think they believe that "not blameworthy" is the same thing as "rationally ideal."]

Further, in cases of disagreement about disagreement, to conciliate is just what one would expect adherents of (C-c) to do; to do otherwise would be inconsistent. This may be self-undermining to some extent, but it is exactly what rationality requires: it's nothing to be concerned about. The mere fact that conciliationism is self-undermining doesn't make conciliationism false. It just means that, insofar as the jury is still out, conciliators need to give some ground to their disagreeing peers.

Another reason why this isn't worrisome is that the main rival of conciliatory views, the Steadfast View, faces a similar, in fact far more serious, problem when it comes to higher-order disagreement. Suppose A holds (S), believing that it is reasonable for disagreeing peers to stick to their guns (and, as a corollary, that it is not rational, or at the very least less so, for them to conciliate one way or another). Now enter B, who holds a conciliatory view. Upon finding out that B disagrees with him, (S) commits A to saying that B ought to stick to her guns, i.e., to continue to hold the conciliatory view, which implies, in effect, that it is, after all, rational for B to conciliate in cases of peer disagreement. This is an even worse problem, one would think, than the Self-Undermining problem, because it means that (S) is inconsistent. Call this the Inconsistency problem. That is, (S) commits its proponents, on the one hand, to denying that it is rational to conciliate in cases of first-order disagreement and, on the other, to indirectly licensing a conciliatory policy by allowing someone who holds a conciliatory view about first-order disagreement to stick to that view.

So, as far as higher-order disagreement is concerned, conciliatory views remain the better option.

Conclusion

When one has no reason to think that one's peer is more likely to be mistaken and finds out that she thinks differently about some issue, one is rationally required to split the difference. I have defended this view, (C-c), against its conciliatory cousins, against the main steadfast rivals of conciliatory views, and against the main objections to conciliatory views in general and to (C-c) in particular. Like any other view of peer disagreement, (C-c) is not intended as an ultimate solution to peer disagreement. Rather, it only addresses the issue of what disagreeing peers ought to do absent any further relevant information to go on. When more evidence comes in, peers need to update their beliefs accordingly.

References

Christensen, David. (2007). "Epistemology of Disagreement: The Good News." Philosophical Review 116 (2): 187-217.

Christensen, David. (2011). "Disagreement, Question-Begging, and Epistemic Self-Criticism." Philosophers' Imprint 11 (6).

Douven, Igor. (2009). "Uniqueness Revisited." American Philosophical Quarterly 46: 347-361.

Douven, Igor. (2010). "Simulating Peer Disagreements." Studies in History and Philosophy of Science 41: 148-157.

Elga, Adam. (2007). "Reflection and Disagreement." Noûs 41 (3): 478-502.

Elga, Adam. (2010). "How to Disagree about How to Disagree." In Disagreement. Edited by Richard Feldman and Ted A. Warfield, 175-186. Oxford: Oxford University Press.

Elgin, Catherine Z. (2010). "Persistent Disagreement." In Disagreement. Edited by Richard Feldman and Ted A. Warfield, 53-68. Oxford: Oxford University Press.

Enoch, David. (2010). "Not Just a Truthometer: Taking Oneself Seriously (but Not Too Seriously) in Cases of Peer Disagreement." Mind 119 (476): 953-997.

Feldman, Richard. (2006). "Epistemological Puzzles about Disagreement." In Epistemology Futures. Edited by Stephen Hetherington, 216-236. Oxford: Oxford University Press.

Feldman, Richard. (2007). "Reasonable Religious Disagreements." In Philosophers Without Gods: Meditations on Atheism and the Secular. Edited by Louise Antony. Oxford: Oxford University Press.

Feldman, Richard. (2009). "Evidentialism, Higher-Order Evidence, and Disagreement." Episteme 6 (3): 294-312.

Feldman, Richard, and Ted A. Warfield, eds. (2010). Disagreement. Oxford: Oxford University Press.

Frances, Bryan. (2010). "The Reflective Epistemic Renegade." Philosophy and Phenomenological Research 81 (2): 419-463.

Frances, Bryan. (2012). "Discovering Disagreeing Epistemic Peers and Superiors." International Journal of Philosophical Studies 20 (1): 1-21.

Fumerton, Richard A. (2010). "You Can't Trust a Philosopher." In Disagreement. Edited by Richard Feldman and Ted A. Warfield, 91-110. Oxford: Oxford University Press.

Huber, Franz, and Christoph Schmidt-Petri, eds. (2009). Degrees of Belief. Springer.

Kelly, Thomas. (2005). "The Epistemic Significance of Disagreement." In Oxford Studies in Epistemology, Vol. 1. Edited by John Hawthorne and Tamar Szabó Gendler, 167-196. Oxford: Oxford University Press.

Kelly, Thomas. (2010). "Peer Disagreement and Higher Order Evidence." In Disagreement. Edited by Richard Feldman and Ted A. Warfield, 111-174. New York: Oxford University Press.

Kelly, Thomas. (2013). "Disagreement, Dogmatism, and Belief Polarization."

Lackey, Jennifer. (2010a). "A Justificationist View of Disagreement's Epistemic Significance." In Social Epistemology. Edited by Alan Haddock, Adrian Millar, and Duncan Pritchard, 298-325. Oxford: Oxford University Press.

Lam, Barry. (2011). "On the Rationality of Belief-Invariance in Light of Peer Disagreement." Philosophical Review 120 (2).

Plantinga, Alvin. (2000). Warranted Christian Belief. Oxford: Oxford University Press.

Richard, Mark. (typescript). "What Is Disagreement?"

van Inwagen, Peter. (1996). "It is Wrong, Everywhere, Always, and for Anyone, to Believe Anything upon Insufficient Evidence." In Faith, Freedom, and Rationality: Philosophy of Religion Today. Edited by Jeff Jordan and Daniel Howard-Snyder, 137-153. London: Rowman and Littlefield.

van Inwagen, Peter. (2010). "We're Right, They're Wrong." In Disagreement. Edited by Richard Feldman and Ted A. Warfield. Oxford: Oxford University Press.