The roots of predicate logic lie in the syllogistic logic of Aristotle, which he developed in the fourth century BCE. (See Prior Analytics.)
Aristotle's logic is concerned with the relation of premises to conclusion in arguments. A premise is defined as follows: "A premiss then is a sentence affirming or denying one thing of another" (Bk.I, Pt.1). The thing of which something is affirmed or denied is the subject of the sentence, and that which is affirmed or denied of the subject is the predicate. Subject and predicate are both called terms, which we would now call noun-phrases. So if I affirm that all cats are mammals, 'cats' is the subject-term and 'mammals' is the predicate-term.
Syllogisms relate premises to a conclusion. Here is Aristotle's formal definition.
A syllogism is discourse in which, certain things being stated, something other than what is stated follows of necessity from their being so. I mean by the last phrase that they produce the consequence, and by this, that no further term is required from without in order to make the consequence necessary. (Bk. I, Pt.1)

What Aristotle had in mind here is roughly what we now call a valid argument. His main goal was to discover which kinds of arguments are valid. To do so, he divided sentences into types. Affirmative sentences assert that the subjects have the predicate, while negative sentences deny this. Universal sentences assert of all the subject that it has or lacks the predicate, while particular sentences assert this of only some of the subject. This yields a four-fold classification of sentences (see The Logic Book, page 4 and Chapter 7, Section 6).
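The four types came to be labeled with the vowels A (universal affirmative), E (universal negative), I (particular affirmative), and O (particular negative). As a forward-looking illustration, using the modern predicate-logic notation developed later in this history rather than anything Aristotle had, the four forms can be rendered as:

```latex
\begin{align*}
\textbf{A (universal affirmative):} && \text{All } S \text{ are } P && \forall x\,(Sx \rightarrow Px)\\
\textbf{E (universal negative):} && \text{No } S \text{ are } P && \forall x\,(Sx \rightarrow \neg Px)\\
\textbf{I (particular affirmative):} && \text{Some } S \text{ are } P && \exists x\,(Sx \wedge Px)\\
\textbf{O (particular negative):} && \text{Some } S \text{ are not } P && \exists x\,(Sx \wedge \neg Px)
\end{align*}
```

So 'All cats are mammals' is an A sentence, and 'Some cats are not friendly' is an O sentence.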
For example, consider the following syllogism: all cats are mammals; all mammals are animals; therefore, all cats are animals. All three sentences here are universal affirmatives, so this is an instance of the A-A-A form. Aristotle explains why such a syllogism is valid:
Whenever three terms are so related to one another that the last is contained in the middle as in a whole, and the middle is either contained in, or excluded from, the first as in or from a whole, the extremes must be related by a perfect syllogism. I call that term middle which is itself contained in another and contains another in itself: in position also this comes in the middle. By extremes I mean both that term which is itself contained in another and that in which another is contained. If A is predicated of all B, and B of all C, A must be predicated of all C: we have already explained what we mean by 'predicated of all'. (Bk.1, Pt.4)

The validity of the A-A-A form is understood by reference to a relation of containment between terms, and we would now say that containment is a transitive relation (The Logic Book, Chapter 7, Section 9). But Aristotle had no systematic way of dealing with the containment relation. This was left to later logicians.
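In modern set-theoretic terms (an anachronistic gloss, not Aristotle's own formulation), the containment reading of the A-A-A form is just the transitivity of the subset relation, using the letters of Aristotle's own example:

```latex
C \subseteq B \ \wedge \ B \subseteq A \ \Longrightarrow \ C \subseteq A
```

If the C's are contained among the B's, and the B's among the A's, the C's must be contained among the A's.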
No such question arises with invalid argument-forms. All that is needed to show a form invalid is a case (a counterexample) in which all the premises are true and the conclusion is false. For example, the form 'All A are C; all B are C; therefore all B are A' is shown invalid by taking A to be 'cats,' B to be 'dogs,' and C to be 'mammals': both premises are true, but the conclusion, 'all dogs are cats,' is false.
William of Shyreswood
The next innovator in predicate logic was an Englishman, William of Shyreswood, who wrote a book entitled Introductiones in Logicam in the first half of the thirteenth century. The most notable feature of the book was the treatment of quantifiers, the logical expressions "all," "no," and "some" on which Aristotle's distinction between universal and particular sentences is based. William recognized the equivalence of various combinations of quantifiers and negation.
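The sort of equivalence William recognized can be stated in modern notation (which is ours, not his): a negation can be moved across a quantifier by switching the quantifier for its dual:

```latex
\neg \forall x\, Fx \;\equiv\; \exists x\, \neg Fx
\qquad\qquad
\neg \exists x\, Fx \;\equiv\; \forall x\, \neg Fx
```

For instance, 'Not all cats are friendly' is equivalent to 'Some cats are not friendly,' and 'No cat flies' (that is, 'it is not the case that some cat flies') is equivalent to 'Every cat is non-flying.'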
William devised a way to keep track of the valid syllogistic forms. Each syllogistic form is represented by the first three vowels in a Latin word. For example, A-A-A is signified by 'barbara,' E-A-E by 'celarent,' etc. (We have left out of discussion other aspects of syllogisms such as "figure," which are also reflected in the choice of words.) William devised a poem representing all the valid forms of syllogism.
Barbara celarent darii ferio baralipton
Celantes dabitis fapesmo frisesomorum;
Cesare campestres festino baroco; darapti
Felapton disamis datisi bocardo ferison.
Gottfried Wilhelm Leibniz
The seventeenth-century German philosopher-scientist-mathematician Leibniz was the first to provide a systematic interpretation of predicate logic. (For a somewhat technical discussion of Leibniz's contribution to logic, see Wolfgang Lenzen's paper "Leibniz's Logic.") Unfortunately, virtually nothing of his groundbreaking discoveries was published during his lifetime, and his logical work was rediscovered only in the early twentieth century.
Leibniz's insight was that logic can be treated algebraically, on the analogy of addition, subtraction and multiplication. When this is done, there is finally a systematic way of determining the validity of arguments. Also, logic becomes fully symbolic.
One interesting twist to Leibniz's logic was that in it, terms can be interpreted in two different ways. They can be understood extensionally, as referring to classes of objects. This is how subsequent logicians would come to interpret predicate logic. But Leibniz thought they were best understood intensionally, as referring to properties. So 'cats' could be understood either as the class of all cats or as the property of being a cat. On the intensional interpretation, what makes 'All cats are mammals' true is that the property of being a mammal is a component of the property of being a cat. This is the reverse of the extensional explanation, which is that the class of cats is included in the class of mammals.
The great mathematician Leonhard Euler in 1768 published a way of representing the relations of subject and predicate geometrically. Improvements to Euler's system were made subsequently by the French mathematician J. D. Gergonne some fifty years later, and then by the Englishman John Venn and the American Charles Sanders Peirce in the late nineteenth century. (For an account of these developments, see the Stanford Encyclopedia of Philosophy entry "Diagrams.")
During the nineteenth century, several logicians perfected systems of sentential logic, which had been studied first by the ancient Stoics. Prominent among them was George Boole, who made the analogy between sentential operators and the operators of set-theory, thus giving sentential logic an extensional interpretation. Suppose we have two sentences symbolized by 'P' and 'Q'. And suppose that P is true in cases a and b, while Q is true in cases b and c. The conjunction is true in just those cases in which both P and Q are true, i.e., case b. We can think of this as the intersection of the two sets of cases. The disjunction is true in all cases where either disjunct is true, i.e., the union of the two sets of cases. The negation of P is true in those cases in which P is not true, which is the complement of the set of cases in which it is true. The conditional is true just in case the class of cases in which the antecedent is true is a subset of the cases in which the consequent is true. (For a full account of Boolean Logic, see C.I. Lewis and C.H. Langford, Symbolic Logic, Chapters 1 and 2.)
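Boole's extensional reading can be sketched directly with sets. The small universe of cases a through d below is our own invention for illustration; the operators are exactly the set operations just described:

```python
# Each sentence is identified with the set of cases in which it is true.
cases = {"a", "b", "c", "d"}   # a hypothetical universe of cases

P = {"a", "b"}                 # P is true in cases a and b
Q = {"b", "c"}                 # Q is true in cases b and c

conjunction = P & Q            # intersection: cases where both are true
disjunction = P | Q            # union: cases where at least one is true
negation_of_P = cases - P      # complement: cases where P is not true

# The conditional 'if P then Q' holds just in case the set of P-cases
# is a subset of the set of Q-cases.
conditional_holds = P <= Q

print(conjunction)             # {'b'}
print(sorted(disjunction))     # ['a', 'b', 'c']
print(sorted(negation_of_P))   # ['c', 'd']
print(conditional_holds)       # False: P is true in case a, where Q is not
```

Because P is true in case a while Q is not, the P-cases are not a subset of the Q-cases, so the conditional fails on this interpretation.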
In 1879, the German philosopher Gottlob Frege published Begriffsschrift, the first satisfactory system of predicate logic. The greatest departure of Frege's system from Aristotle's is its generality. It can handle all combinations of quantifiers and negation, as well as conjunctions, disjunctions, conditionals, and biconditionals. Moreover, it can represent relations, whereas Aristotle's system was limited to predicates applying to a single subject.
Frege's inspiration came from the mathematical conception of a function, such as that of addition. The addition function takes two arguments: we write 'x + y,' which represents a certain value. Frege saw that predicates can take arguments as well, as in 'x is the father of y.' This behaves like a function, in that we get a sentence when names replace the variables. So just as '1 + 2' indicates the sum of two numbers, 'George H. W. Bush is father of George W. Bush' indicates a relation between two people. The use of functional notation was a significant break from the past, which Frege defended.
These deviations from what is traditional find their justification in the fact that logic has hitherto always followed ordinary language and grammar too closely. In particular, I believe that the replacement of the concepts subject and predicate by argument and function, respectively, will stand the test of time. (Preface to Begriffsschrift)
The schema for a two-place relation can be represented symbolically as 'F(x,y).' We would like to apply quantifiers and say, for example, that all people have some father. This can be done in Frege's system: all x that are persons are such that there is a y such that y is a father of x. The way in which he symbolized quantifiers and sentential operators, however, was highly graphical and far from intuitive. Here are the symbolizations for the A, E, I, and O sentence-forms.
A: All a that are X are P
E: All a that are X are not P
I: Not all a that are X are not P
O: Not all a that are X are P
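In the modern notation that eventually replaced Frege's, his example of fathers can be written as follows, where 'Px' abbreviates 'x is a person' and 'Fyx' abbreviates 'y is a father of x' (the abbreviations are ours, chosen for illustration):

```latex
\forall x\,\bigl(Px \rightarrow \exists y\, Fyx\bigr)
```

Read aloud: for every x, if x is a person, then there is some y such that y is a father of x.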
Perhaps this is what kept his work from wide notice initially. It was eventually discovered and made famous by Bertrand Russell, who adopted a much more intuitive notation, that of the Italian mathematician Peano, which we still use today.
Frege's project was very ambitious. He wanted to provide a rigorous proof-procedure based on a few axioms and one rule of inference. Moreover, he wanted to be able to use this system to derive all the truths of arithmetic. Unfortunately, Russell discovered a paradox that showed that Frege's system was inconsistent.
After considerable struggle, Russell was able to find a solution to the paradox. With this in hand, he turned anew to Frege's project in the monumental Principia Mathematica, co-authored with Alfred North Whitehead (first volume published in 1910). This work pushed predicate logic into the forefront of philosophical research, spawning many important developments.
Even before this, in 1905, Russell had published a paper demonstrating the power of predicate logic as a weapon with which to attack philosophical problems. In "On Denoting," Russell gave an analysis of definite descriptions, such as "the present King of France." There was a puzzle as to the meaning of a sentence such as "The present King of France is bald." How are we to understand it, given that there is no present King of France? Russell's analysis showed how the sentence can be taken to be false, given its rendering in predicate logic. Russell's technique will be covered in Chapter 7, Section 9.
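In outline (the full treatment comes in Chapter 7, Section 9), Russell renders 'The present King of France is bald' as the claim that there is exactly one present King of France and that he is bald. With 'Kx' for 'x is a present King of France' and 'Bx' for 'x is bald':

```latex
\exists x\,\bigl(Kx \wedge \forall y\,(Ky \rightarrow y = x) \wedge Bx\bigr)
```

Since nothing satisfies 'Kx,' the existential claim fails, and the sentence comes out false rather than meaningless.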
One question left open by the systems of Frege and Russell-Whitehead was whether they were complete. Roughly, what we want to know is whether every sentence that ought to be provable in the system can in fact be proved. Of course, showing whether a system is complete depends on what is considered to be obligatory to prove. Rather than focusing on valid inference, the logicians at the time considered valid sentences of the logic. These are sentences that are true no matter what the symbols are understood as standing for. In his doctoral dissertation in 1930, Gödel proved that all valid sentences are provable in the system of Russell and Whitehead. Other ways of proving completeness, most notably that of Leon Henkin, have been invented. If you wish to go through the version of the completeness proof in our text, see Chapter 11.
In the early 1930s, the Polish logician Alfred Tarski provided a formal interpretation of predicate logic. Until that point, systems of predicate logic were axiomatic: from a small set of axioms and some rules of inference, numerous theorems were derived. But what did the theorems mean? Tarski found a general method, based on a concept of "satisfaction," of interpreting sentences with quantifiers. This method will be examined in detail in Chapter 8 of our text.