Lecture Notes, UC Davis Philosophy 102, Theory of Knowledge

Coherentism

G. J. Mattey, Senior Lecturer


Laurence BonJour

Text: Laurence BonJour, The Structure of Empirical Knowledge, 5.3, 8.3, 8.4 (1985)


We saw in the last lecture that a possible account of justification is one whereby all justified beliefs are indirectly justified. The only way this could be without an infinite regress is for some beliefs to be justified "reciprocally." That is, one belief p is at least a partial ground for another belief q, while q is at the same time a partial ground for p. Their relation might be said to be one of "reciprocal" or "mutual" support. A coherentist account of justification allows that at least in some cases, beliefs must support each other mutually if any beliefs are to be justified. A mixed coherentist account allows some directly justified beliefs, while a pure coherentist account allows none.

Historical Precedents

In ancient Greek philosophy, there is little mention of coherence. The reason probably is that considerations of coherence are most obviously important for empirical beliefs where the evidence is not conclusive. This would make coherence only appropriate for opinion, not for knowledge. The skeptics of Plato's Academy denied that there is knowledge, however, so they were apt to take matters of coherence more seriously.

The greatest of the Academic skeptics, Carneades, is reported by Sextus to have taken coherence into account in deciding on how to act (Against the Professors, 7). We begin with an impression that appears true, that is convincing. This is sufficient for "matters of no importance." For weightier decisions, Carneades added that the convincing impression should be "undiverted." This means that there is no other impression which conflicts with the original impression. The reason is that "an impression never stands in isolation but one depends on another like links in a chain." This is a negative standard.

A more elaborate negative standard is that the impression be "thoroughly explored." The model here is intensive cross-questioning. If a number of witnesses tell a consistent story and are in "mutual corroboration," we do not rest until we have checked their stories against one another. This standard was resurrected by Chisholm in his Theory of Knowledge and incorporated, in revised form, into his account of justification.

In the modern period, considerations of coherence were raised by Descartes in the Sixth Meditation. It is coherence which allows us, in the end, to distinguish between waking and dreaming states.

But when I distinctly see where things come from and where and when they come to me, and when I can connect my perceptions of them to the whole of the rest of my life without a break, then I am quite certain that when I encounter these things I am not asleep but awake.
Other modern philosophers such as Berkeley and Hume used coherence considerations for the same purpose.

Coherence came to the forefront of philosophical thinking in the methodology of G. W. F. Hegel, who wrote in the early nineteenth century. He produced a vast metaphysical system in which, he claimed, no part had any significance apart from all the others. (This position can be called "metaphysical holism.") As a result, our knowledge of the parts can be based only on knowledge of the whole. The Hegelian position was developed in the late-nineteenth and early-twentieth century by "absolute idealist" philosophers such as F. H. Bradley, Brand Blanshard and A. C. Ewing. It was largely in reaction to Hegelian coherentism that foundationalism was revived in the twentieth century.

Coherence theories of one kind or another were prominent in the last century. The American philosopher Willard Van Orman Quine was probably the most influential of these theorists. In his famous article "Two Dogmas of Empiricism," Quine proclaimed that no statement is immune from revision. We may be led to discard even beliefs about logic if we make radical enough changes to the rest of our belief system. Donald Davidson and Nicholas Rescher have also made extensive use of coherence considerations.

In mainstream theory of knowledge circles, the two names most closely associated with coherentism are Keith Lehrer and University of Washington Professor Laurence BonJour. Lehrer has laid out an elaborate coherence account of justification in his Theory of Knowledge. BonJour devoted an entire book, The Structure of Empirical Knowledge, to a coherence account. Interestingly, though, BonJour is a foundationalist with respect to a priori knowledge. (See In Defense of Pure Reason.)

BonJour's Rejection of Foundationalism

In the first four chapters of The Structure of Empirical Knowledge, BonJour had argued that there are no directly justified ("basic," in BonJour's terminology) empirical beliefs.

There is no way for the foundationalist's allegedly basic empirical beliefs to be genuinely justified for the believer in question without that justification itself depending on further empirical beliefs which are themselves in need of justification. (p. 84)
His basic argument is that there is no way to produce an unjustified justifier (the ground of a directly justified belief). If any psychological state is to justify, it must itself require justification. To be able to justify, it must have "assertive, or at least representational content." But if it has representational content, there is a question as to whether that content represents correctly, and so there is a need for justification. (See p. 78 for a summary account of the argument.)

The Concept of Coherence

If we are to avoid skepticism, then, we must turn to a coherence theory of justification. In Chapter 5, BonJour explores what a tenable coherence theory would look like. The key feature of coherence is that it in some sense allows for circular justification. This is intolerable to someone who approaches justification in the manner of Aristotle. He, and other foundationalists, see justification as "a one-dimensional sequence of belief, ordered by the relation of epistemic priority, along which epistemic justification is passed from the earlier to the later beliefs in the sequence via connections of inference" (p. 90).

The reason theorists favor this "linear" account of justification, according to BonJour, is that they are concerned with "local" justification, the justification of specific beliefs. Here, circularity appears intolerable. But when we think of "global" justification, the justification of the belief system as a whole, then coherence, or "mutual or reciprocal support," is the only standard we have available.

Thus, when we say that an individual belief is justified, it is not because of its relation to other specific beliefs, but rather because of its relation to a comprehensive coherent system of beliefs. No belief has a superior epistemic status relative to any other, as on Aristotle's view that the premises of a demonstration must, initially, be better known than the conclusion. So there is no linear justification and consequently no way in which the justification can be circular.

There are three steps, then, in the justification of beliefs. The first is to move from the inferential relations between beliefs to the coherence of the overall system of beliefs. The second is to show that a coherent system of beliefs is the basis for justification of individual beliefs. The third step is to justify a given belief by virtue of its inclusion in a coherent system. The material in our text concerns the first step. It is useful to have a relatively clear concept of coherence if we are to judge the extent to which a given belief system is coherent.

It is easy to give an intuitive characterization of coherence, as agreement or close connection among a body of beliefs. These connections may be of several different types. We may infer one belief from others, whether deductively or non-deductively. Some beliefs are evidence for other beliefs. And some beliefs explain other beliefs. It is quite difficult, however, to put together a comprehensive account of coherence which embraces all these elements.

If the task of the account is merely to provide an alternative to foundationalism, then clarification of the concept is not necessary. The reason is that any comprehensive foundationalist theory, not just those that explicitly incorporate coherence elements, must in the end be a mixed coherence theory. Beliefs about the distant past, for example, are not directly justified, and their indirect justification often comes down to which story among many fits together the best with what facts we have.

The task is not really required to provide an alternative to skepticism, either. Although the concept of coherence may be somewhat sketchy, it still appears that a coherence account of justification would be preferable to the embrace of skepticism. But since BonJour is going to be writing at length about coherence, he sees fit to take some steps toward an understanding of the concept.

There are four main elements in his clarification of the concept of coherence: the relation of coherence to consistency, to inferential relations, to explanation, and to conceptual change.

Each of these is discussed in some detail below.

Coherence and Consistency

Many critics of coherentism, and even some of its advocates, equate coherence with logical consistency or lack of contradiction among the beliefs in a system. But a system may be consistent in this sense while lacking in coherence.

A system may be logically consistent but probabilistically inconsistent. If I believe that p and also that p is highly improbable, then my belief system is not as coherent as it would be if I did not have both these beliefs. It may be impossible to avoid all probabilistic inconsistency in the way one can avoid logical inconsistency. If something improbable happens, it seems best to believe it. How much probabilistic inconsistency detracts from coherence is a matter of degree, depending on how many such inconsistencies there are and how improbable the beliefs in question are. The greater the degree of inconsistency, the less coherent the system is, all else being equal.
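
As a rough sketch (my notation, not BonJour's), probabilistic inconsistency arises when a system contains both a proposition and the assessment that it is very improbable:

    \text{the system contains } p \ \text{ and also } \ \Pr(p) \le \varepsilon, \quad \varepsilon \ll \tfrac{1}{2}

The smaller \varepsilon is, and the more such pairs there are, the lower the system's degree of coherence, other things being equal.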

A system may be both logically consistent and highly probabilistically consistent without being coherent. This can occur when the beliefs are relatively unrelated to one another. Their isolation ensures that they will not conflict, but it also ensures that they do not "hang together." There must be some sort of positive connection among the beliefs for there to be coherence.

Coherence and Inferential Relations

The positive connection required for coherence should be inferential. This allows justification of one belief by another. The inferential relations in question should be truth-preserving to some extent. This allows both deductive and non-deductive inferential relations to contribute to coherence. There is general agreement among coherentists over this point.

Disagreement comes in when the inferential relations are specified. An extreme proposal is that of Blanshard: every proposition in the system entails every other one in the system and is entailed by the system. Even if we construe "entailment" liberally, it seems impossible to fulfill. We could follow Ewing and limit the relation to one in which each proposition in the system is entailed by the conjunction of all the others.

At the other end, we have Lewis's very weak requirement of "congruence." The idea here is that the other propositions taken together increase the probability of each proposition in the system. It has already been stated that coherence is a matter of degree, so we can say that Lewis is just describing a very weak form of coherence. Unlike the proposal of Blanshard and Ewing ("the idealists"), this one should be easily realizable.
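
The contrast can be stated compactly (a reconstruction in my own notation, not the authors'). For a system of beliefs p_1, ..., p_n:

    \text{Ewing:} \quad \bigwedge_{j \neq i} p_j \models p_i \quad \text{for each } i

    \text{Lewis (congruence):} \quad \Pr\Big(p_i \,\Big|\, \bigwedge_{j \neq i} p_j\Big) > \Pr(p_i) \quad \text{for each } i

Ewing demands full entailment of each member by the rest, while Lewis asks only that the rest raise each member's probability, which is why congruence is so much easier to satisfy.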

There is a technical problem that arises here, however. Both Lewis and Ewing require only that the relation of support be one-way. This allows systems with two independent sub-systems. Nothing in sub-system A has any relation to anything in sub-system B. Now suppose you have a proposition p drawn from A. It is entailed by or made more probable by the whole system A+B, even though nothing in B does any work. The same goes for any proposition q in B. So all propositions are supported by the whole system.

Yet we would not want to say that the system is coherent, because the two sub-systems do not "hang together." A coherent system is unified. One could accommodate this view by ruling out cases like the one just given, which was Ewing's strategy. BonJour suggests that a better strategy would be to say that the degree of unity is another factor affecting the degree of coherence. The more connections there are, and the stronger they are, the more coherent the system is. The more the system is broken down into unrelated sub-systems, the less coherent it is. Finally, coherence is increased by any given proposition's being involved in multiple inferential relations.

Coherence and Explanation

To say that coherence is a function of inferential relations is not to say much. We can beef up the description by adding that these relations be explanatory. On one account of explanation, the "covering-law model" of Carl Hempel, some inferential relations are explanatory. If I take as premises some initial conditions and a universal law, and then infer a result from them, then I have given an explanation for what has been inferred. Moreover, if I can deduce a law with limited applicability from some more universal law, again there is an explanation of the limited law.
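
Schematically, the covering-law (deductive-nomological) pattern can be displayed as an argument whose premises are laws and initial conditions and whose conclusion is the statement to be explained:

    \frac{L_1, \ldots, L_m \qquad C_1, \ldots, C_k}{E}

Here the L's are universal laws, the C's are statements of initial conditions, and E is the explanandum (which may itself be a law of more limited applicability).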

A relation is not explanatory merely by virtue of its logical form, however. The goal of explanation is to bring widely diverse phenomena under laws that are basic. Where there is a failure to do this (an anomaly), explanatory coherence is decreased. Not only is the goal of explanation thwarted, but the belief in the anomaly is supported by other parts of the belief system. So there is a conflict that is antithetical to coherence.

A final point about explanation is that there are cases in which coherence (to some degree) is attained without explanation. In the Bromberger-Lehrer example, a fact about the positions of an owl and a mouse is deduced with the help of the Pythagorean theorem. This provides coherence in that there is an inferential relation connecting that fact to other things that are believed. But this is not an explanatory inferential relation. The position of the two animals might even be an anomaly, given the predatory habits of owls.
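
For illustration (the setup details and numbers are mine, not from the text): suppose the owl sits atop a pole 3 meters tall and the mouse is on the ground 4 meters from its base. The distance between them follows by the Pythagorean theorem:

    d = \sqrt{3^2 + 4^2} = \sqrt{25} = 5 \text{ meters}

The belief about the distance is inferentially tied to the beliefs about the heights and positions, yet nothing in the theorem explains why the owl and the mouse are where they are.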

Coherence and Conceptual Change

Given that coherence increases with systematic unification, there will be times when coherence is increased with changes in concepts which allows that unification. A new system of theoretical explanation, involving new concepts, can take care of anomalies. (For example, the concept of inertia allowed seventeenth-century physicists to explain the motion of a projectile: something Aristotelian conceptions did not do well.)

The account of coherence given here is sketchy, but it has some identifiable content. It is not so weak as to be easily satisfied. We can use it in our discussions of empirical knowledge.

Justifying Coherentism

BonJour takes it that the defender of any account of justification has the obligation to show why a person who is justified on that account is likely to have true beliefs. The reason is that truth is the goal of cognitive inquiry. The justification of a theory of justification is termed a "metajustification." So the issue before us is whether there is an adequate metajustification of the coherence account of justification.

Before beginning the task of metajustification, BonJour clarifies the requirements for its success. We want to say that someone whose belief system is very coherent is likely to have true beliefs. But truth-conduciveness can be understood in two ways. It might mean that at any given time, a person with a very coherent system is likely to be correct in his beliefs. Or it might mean that such a system would likely yield truth in the long run.

If we stick to the first meaning, we are going to be frustrated. BonJour contends that nobody has succeeded in a metajustification of this sort, no matter what theory of justification they have tried to justify. He adds that it would be sufficient for his purposes to show that a high degree of coherence is likely to indicate reality over the course of time. "Establishing even this result would presumably make it epistemically reasonable to adopt and apply such a standard of justification even in the short run" unless we have available a likely more successful standard.

We must take care to distinguish two kinds of long-range results. The desirable one would stabilize at some point, which would allow coherence over a stretch of time. Less desirable would be a system which, although coherent at some times, is changing so much and in so many ways as not to provide a stable picture of the world.

Another thing to keep in mind is the "Observation Requirement" which had been laid down earlier. The basic idea is that coherence requires sensitivity to observational input. This requirement allows the coherentist to meet the "isolation objection," according to which a coherent system might be completely isolated from reality. The system's ability to respond to new input will be a key factor in the metajustification argument.

A final point is that coherence, like stability, is a matter of degree. So the result of the metajustification argument should be that the likelihood of true belief varies with the degree of coherence and stability. Here, then, is the thesis to be defended.

A system of beliefs which (a) remains coherent (and stable) over the long run and (b) continues to satisfy the Observation Requirement is likely, to a degree which is proportional to the degree of coherence (and stability) and the longness of the run, to correspond closely to independent reality.

The argument, in a nutshell, is that close correspondence to reality is "the obvious" explanation for the stability and coherence of a system over the long run, given that it is reacting to ever-changing observational input. More specifically, there are two premises to the argument. P1 states that it is highly likely that there is some explanation for coherence and stability in the face of the observational input. P2 states that the best explanation for this behavior of the system is that (a) the spontaneous beliefs that are thought to be reliable indicators of the world have a content that reflects reality, and (b) the system of beliefs as a whole corresponds to reality. Also, how good the explanation is varies with the degree of coherence and stability and how long the stable system is maintained.

Elaboration of the Metajustification

BonJour takes P1 to be close to self-evident. Suppose that there were nothing more than chance at work in the production of belief in response to observational input. Then we should predict that the system would be continually disrupted. To restore it, we either would have to make major changes in it, which would undermine its stability, or we would have to discount the observational input, which would flout the Observation Requirement. If there is stability, then, it is very likely not due to chance.

It is much harder to defend P2. The goal would be to show that correspondence to the truth (the "correspondence hypothesis") is the best explanation for stable coherence. Why think that lack of correspondence would undermine stable coherence? The reason the correspondence hypothesis is the best explanation can be seen from the examination of rival explanations.

Rival explanations are divided into two types. Explanations of the first type make "normal hypotheses": reality is pretty much the way we think it is, but in reaction to it our belief system yields "a picture of the world which is in some way inaccurate, incomplete, or distorted." Explanations of the second type make "skeptical hypotheses," according to which reality is very different from our ordinary conception of it. On those hypotheses, we are victims of a Cartesian demon or are brains in a vat.

Rival explanations involving normal hypotheses are not discussed in the excerpts appearing in our text. These are divided again into two types. One type has it that stable coherence results merely from getting the observational facts right. In that case, the theoretical component of the picture of the world may be inaccurate, thus rendering the overall picture inadequate. The second type has it that stable coherence results from a distorted or skewed reaction to observational input. BonJour argues that neither type of explanation is superior.

Skeptical Hypotheses

The first point BonJour makes is that skeptical hypotheses confront every account of justification and must not be ignored. Probably every account will have equal difficulty dealing with them. But in the present context, he is not concerned to show that coherentism is superior to other accounts of justification. Rather, he is concerned to show that stable coherence is better explained by the correspondence hypothesis than by the skeptical hypothesis.

Many traditional responses to skepticism are in fact "themselves merely sophisticated versions of skepticism." That is, they presume that the skeptical hypothesis is no less likely to be true than the correspondence hypothesis. Then they go on to say that the correspondence hypothesis is preferable on non-epistemic grounds, such as practicality.

A common strategy is to argue that skeptical hypotheses are inferior in virtue of the greater simplicity of the correspondence hypothesis. But this faces the problems of understanding what simplicity is, and why it is a criterion for the preferability of explanations. And it also must show that the correspondence hypothesis actually is simpler.

A superior strategy is to argue directly that the correspondence hypothesis is more likely to be true than skeptical hypotheses, in the face of long-run stable coherence of a system of beliefs.

Assuming the truth of probability theory, we can say that there are two factors influencing the relative likelihood of two hypotheses. The first is the degree to which the evidence supports each of the two hypotheses. We may assume this support to be equal in the two cases, since the skeptical hypotheses are designed to be compatible with all the empirical evidence available to us. The second factor is the probability of the hypotheses independently of the evidence. This would have to be decided on a priori grounds.
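
In Bayesian terms (a minimal sketch using the standard formulation, not BonJour's own notation), let C be the correspondence hypothesis, S a skeptical hypothesis, and E the evidence of long-run stable coherence:

    \frac{\Pr(C \mid E)}{\Pr(S \mid E)} \;=\; \frac{\Pr(E \mid C)}{\Pr(E \mid S)} \cdot \frac{\Pr(C)}{\Pr(S)}

If the skeptical hypothesis is tailored so that \Pr(E \mid S) = \Pr(E \mid C), the first ratio on the right equals 1, and the comparison turns entirely on the antecedent probabilities \Pr(C) and \Pr(S).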

BonJour takes a stab at showing that the correspondence hypothesis is antecedently more probable than are the skeptical hypotheses. His treatment of the matter is "preliminary and highly intuitive." The basic idea is that given long-run stable coherence, skeptical hypotheses are less likely to be true (as well as being less satisfying methodologically) because of the fact that they can be adapted to any evidential situation.

The Elaborated Chance Hypothesis

In considering alternative explanations of coherence given normal hypotheses, BonJour had raised the possibility that coherence is the product of pure chance. He rejected this explanation on the grounds that observational beliefs based on chance would likely end up destroying the coherence of the system.

This objection could be surmounted by stipulating in the hypothesis that the observational beliefs produced by chance are produced in such a way as not by themselves to upset the coherence of the belief system. This is what he calls the "elaborated chance hypothesis." Given this hypothesis, the long-run stability of coherence is highly likely. But at the same time, it is not likely to be a true hypothesis. It is no more likely than the unelaborated chance hypothesis.

The probability is low in the a priori sense of probability. That is, this probability can be determined independently of experience. The lowness of the probability is due to probabilistic incoherence between the claim (a) that the observational beliefs are produced by chance and the claim (b) that they lead to coherence in the long run. The hypothesis is, in BonJour's terms, "complex," in that it contains elements which are in tension with one another.

The Evil-Demon Hypothesis

BonJour will focus on the evil-demon hypothesis in the hope that his treatment can be extended to other skeptical hypotheses. There are actually two versions of the hypothesis, corresponding to the two versions of the chance hypothesis. In the first, the demon merely produces my experience, and in the second, he does so with the intention of fooling me by keeping my belief system coherent.

The simple hypothesis meets the same fate as the chance hypothesis. It is not very likely to be true, given long-term stable coherence. The demon could produce practically any configuration of beliefs, and it is unlikely that what he does produce would be conducive to coherence.

The elaborated demon hypothesis, which is the familiar one, is not subject to this objection. It can be stipulated that the goal of the demon is to feed me with observations that are conducive to coherence. In fact, the observations may even be more conducive to coherence than those that we have given the correspondence hypothesis.

The attack on the elaborated demon hypothesis is based on its low a priori probability. As with the elaborated chance hypothesis, the problem is probabilistic incoherence. Given coherence, the chance of the simple demon hypothesis being true is very low. This problem is "internalized" in the case of the elaborated demon hypothesis. That is, what becomes unlikely is that the demon would have just the intentions and purposes that would produce stable coherence, given the range of intentions and purposes he might have.

This does not clinch the argument, however, since it still must be shown that the correspondence hypothesis has a higher a priori probability. There may be an analogy with the simple demon hypothesis. Why should the world be thought to be configured in such a way that it will produce observational beliefs that will meld into a long-term stable coherent system?

So perhaps we should devise an elaborated correspondence hypothesis, to the effect that the world is set up in just such a way as to provide us with true beliefs about it. That takes care of the objection to the simple correspondence hypothesis, but it seems to run into the objection to the elaborated demon hypothesis: that it is extremely unlikely that the world would be this way, given all the ways it could be.

This still leaves open the possibility, however, that the elaborated correspondence hypothesis is more likely than the elaborated demon hypothesis. There are two disanalogies. The first is that the simple correspondence hypothesis is more likely than the simple demon hypothesis, because the world is a regular and orderly place. So the beliefs it produced would, a priori, be more likely to be orderly, which is more consistent with the assumption of coherence.

Biological, cultural, and conceptual evolutionary accounts explain how the correspondence comes to exist. This explanation comes from within the framework of the correspondence hypothesis. The availability of such accounts reduces the unlikelihood of the elaborated correspondence hypothesis.

The fact that the explanation is generated within the framework of the correspondence hypothesis is important. For there are no resources within the demon hypothesis to do this kind of work, and so a corresponding explanation for the demon is unlikely to be true. If there were no internally-generated account (for example, if God were invoked), then the two cases would be parallel.

So the correspondence hypothesis comes out on top, which is all we can ask. It remains very unlikely a priori, but it produces a better explanation. Thus the task of sketching out a metajustification is complete.

There is no need here to go the way of some coherence theories of justification and claim that truth is coherence. This is one of the standard objections to the coherence theory, and removing it clears the way for accepting a coherence theory. Given the problems with foundationalism, it looks as though the coherence theory presented here is "the leading candidate for a correct theory of empirical knowledge."

