
Basic Symbolic Logic

Text for
Logic 2003A: Symbolic Logic I

Kent A. Peacock

Department of Philosophy,
University of Lethbridge.

April 22, 2006



Copyright © 2005, 2006 Kent A. Peacock

Preface

The purpose of this text is to provide a correct but flexible command of symbolic logic up to
but not including the point at which one can begin to study metalogic (the “logic of logic”).
As such, it covers most of the material usually covered in a typical second-year university
course on symbolic logic, including such time-honoured topics as truth tables, truth trees,
and natural deduction for propositional and first-order predicate logic (without and with
identity). A working command of these techniques is simply assumed in the literature on
analytic philosophy, mathematical logic, foundations of mathematics, computer science, and
much of physical science. For enrichment there are also chapters on widely used properties
of relations, the parallels between circuit logic and propositional logic, elementary set theory,
and a short treatment of the categorical syllogism using the tools of predicate logic. (The
chapter on circuit logic may especially be of interest to those many students who come
to logic and philosophy from computer science.) There is more here than can usually be
covered in a single course, but these extra topics give students and instructors plenty to
choose from.
A concluding chapter introduces the reader briefly and qualitatively to some of the riches
that lie beyond, including modal and deviant logics, Gödel, Turing, artificial intelligence,
and quantum computing. Not everyone will wish to explore such advanced topics in detail,
but every student of philosophy and logic should be aware that such things exist, and have
at least a passing acquaintance with them.
The aim of this text is not to present a rigorous development of propositional and
predicate logic from the simplest and most general first principles available, but to offer a
workaday (but of course valid!) system of logic that will be as useful as possible, both for
beginning logicians who may later wish to go on to a more foundational treatment, and also
those many others who are interested in logic more as a tool than for its own sake.
One might naturally ask why the world needs to be blessed with yet another logic
text, when there are so many excellent texts on the market now. Like almost everyone
who has taught logic, I could not avoid working out my own approach to the subject. In
order to teach the subject to myself and others, I had to reconstruct it in my own way,
and I ended up with an approach that I believe is a useful complement to other methods
of teaching this sort of logic. And there is room for new approaches, for the fact remains
that elementary logic, unlike certain parts of elementary mathematics, is still a developing
subject. I modestly believe that the approach I present here has an advantage over many
other texts at a similar level because of its emphasis on flexibility and ease of use.
The choice of topics in this text was designed in particular to meet the needs of the
second-year symbolic logic course at the University of Lethbridge. We have a first year
course that is mostly informal logic and critical thinking (with a smattering of “baby”


deductive logic), a second-year course called Symbolic Logic I, that introduces the formal
methods of classical two-valued Boolean logic (primarily propositional and predicate calcu-
lus) up to the point at which one can begin to do metatheory, and then a third-year course,
Symbolic Logic II, in which those who are so inclined can finally dip into metatheory.
The notation for natural deduction used here is adapted with some simplifications
from that used in the superb but now largely outdated texts by Mates and Lemmon [18, 17],
supplemented with the highly efficient truth tree techniques pioneered by Jeffrey [14]. I
build my natural deduction system on basic rules similar to those of Lemmon, but I have
streamlined the presentation of natural deduction techniques in such a way as to make them
as easy to use as possible. I also use Lemmon’s efficient way of setting up truth tables, with
a few shortcuts of my own thrown in. In the end, I have written the logic text that I think
would have benefited me the most as a student, and I hope that it will be useful to others
as well. Although the book is aimed in the first instance at college and university students,
I hope that it will also be useful for professionals in disciplines such as physics or computer
science, or indeed anyone at all who may wish to refresh their knowledge of the techniques
of logic.
The advantage of the Mates/Lemmon notation for natural deduction over the popular
and graphically explicit Fitch system (as used in, e.g., [2, 13]) is that it is more compact,
more flexible, and that it allows one to introduce a theorem at any point in the deduction.
(This will all be explained below.) Logic does not yet have one notation that has become
universally accepted (unlike most branches of mathematics such as calculus), but once you
have learned one notation, it is not very hard to learn another. Some students may in
the end prefer the Fitch system over the slightly more abstract Mates/Lemmon system
because Fitch allows us to see at a glance when we are in a subproof. On the other hand, the
concision and flexibility of Mates/Lemmon certainly help to make symbolic logic useful —
which places it squarely in the spirit of this text.
Although I tremendously admire Lemmon’s book, I base my teaching of logic on a dif-
ferent pedagogical philosophy. Lemmon, and indeed numerous other authors of logic texts,
believe that the student should learn how to derive every result from first principles, using
as few rules as possible. This “logical boot camp” approach certainly develops technical
adroitness in those students who are willing to tough it out. But it has the disadvantage that
one often requires derivations of great length and complexity in order to prove results that
should be very simple and obvious. My view is that the subject is difficult enough as it is.
I think that (especially for beginning students) the most useful approach is to do just what
one does in mathematics, which is to use, as creatively and as early as possible, previously
derived results to get new results. The ultimate purpose is to streamline derivations and
get useful or interesting formulas as quickly and painlessly as possible, just as one would
in (say) a trigonometry text. Trig would be nearly useless if one had to prove an identity
such as cos²θ + sin²θ = 1 every time one needed to use it. I therefore encourage students
to freely use earlier theorems and sequents, and I provide a formalism that allows one to
do this in an efficient and clear manner. (Lemmon allows the introduction of sequents and
theorems, too, but he does not emphasize it nearly as much as I think would be useful.)
Again, the aim is to develop a system of logic that could actually be useful.
Another respect in which this book differs from some other treatments of symbolic
logic is in its views about the purpose of learning natural deduction. Once the student has
mastered syntactic methods of verifying a given deduction, such as truth trees, she may

well wonder what is the point of struggling through the process of constructing a syntactic
derivation of a result that can be quickly checked with a tree or table.
The experienced logician will know that one reason for learning syntactic methods is
that not all logics are complete, so that once we get beyond first-order predicate logic,
syntactic methods are not merely a redundant gloss on semantic methods.
Another reason for learning natural deduction is that it is a formalization of the way
we actually tend to think about things: presented with a collection of premisses, we do
indeed go through something like natural deduction (although usually sloppily, and with a
free admixture of inductive steps) to arrive at a conclusion. Like Molière’s M. Jourdain,
who was astonished to learn that he had been speaking prose his whole life, we may be
surprised to learn that we use natural deduction whether we know it or not; the methods
of logic simply make it more explicit and precise.
There is yet another reason that natural deduction is worth learning, however, a reason
that I do not think has received sufficient attention. What often happens in real life is that
we are presented with a collection of givens (assumptions or known facts) and we then try to
deduce what may follow from those givens. We often do not have a pre-cooked conclusion
any more than we know the sum total of a list of expenses before we add it up. There
are a number of exercises included here which are meant to develop the skill of using the
rules of natural deduction to work from a given collection of premisses to a conclusion that
may have certain desired or known properties but which has not yet been found. Once we
have a complete chain of reasoning from premisses to conclusion we can, if we wish, use
semantic methods to check the argument structure we have constructed. The relation of
natural deduction to truth trees and tables is something like the relation of long division
to multiplication. In long division we have to guess our trial divisors, but once we have
an answer we can check it in a purely mechanical way using multiplication. Similarly, in
natural deduction we have to guess which rules will help us uncover a conclusion with the
desired properties, but once we have a conclusion we can check the deduction mechanically
with semantic methods. In some of the exercises in this book the student is presented with
a collection of premisses without a conclusion, and invited to explore the consequences of
these premisses using the rules of natural deduction.
A number of rules and other results are used in this course that cannot be proven by
means of the tools introduced in the course. (An example is the Deduction Theorem.) Rest
assured that all of the rules I use are valid and can be rigorously established in a more
advanced approach.
Logic resembles gymnastics or language study in that it can be learned only by practice.
To this end, I have included a generous selection of exercises. I follow the practice of many
old-fashioned math texts and divide these exercises into A, B, and C levels, together with
occasional Questions for Discussion. The A exercises are “finger exercises” that test your
understanding of the definitions and techniques introduced in the chapter. If you cannot do
the A exercises then you have missed something basic in the chapter. The B exercises are
a bit more complex than the A’s and usually combine several concepts and techniques, but
they should be doable by anyone who has worked through the A exercises diligently. The
B exercises are therefore tests of basic competency in the material of the chapter or section.
The C exercises are advanced and may require quite a bit of thought, or even additional
research beyond the material in this text. The solutions to about half of the exercises are
presented at the end of the book, and the rest are available in the Instructors' Handbook,
except for some of the more open-ended C problems.


The author of any text on logic must at some point decide how to spell the word
“premiss” (plural “premisses”). I have decided to follow Lemmon’s convention and use the
version with double s’s because it avoids confusion with the legalistic sense of the term (as
in, for instance, “keep off the premises!”) and because it is probably more widely used in
logic.
Although the book is elementary, its pace is brisk. I assume that my readers are like
me in that they do not enjoy being patronized; this is not Logic for the Complete Dummy.
I would be very grateful for any suggestions as to how this document can be improved.

Acknowledgements
I am grateful to the University of Lethbridge and the Social Sciences and Humanities Re-
search Council of Canada for essential material and financial support. Many thanks for dis-
cussions, advice, suggestions, shorter proofs, or error corrections to Bryson Brown, Sheldon
Chow, Dawn Collins, Mark Ebner, Herb Korté, Laura Smith, Mark Smith, Sam Woodruff,
and John Woods. A special thanks to Ardis Anderson for her help in tracking down M.
Jourdain. Any errors that remain (and a quick induction suggests that there must be some!)
are entirely my own responsibility.
Contents

Preface iii

1 Introduction to Logic: Getting Your Feet Wet 1


1.1 Logic, Logical Form, and Validity . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 How to Teach Yourself Logic . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 How To Use This Text . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Questions for Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2 Propositional Logic: Symbolization and Translation 7


2.1 Notation in Propositional Logic . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Common English Equivalents for Propositional Connectives . . . . . . . . . 8
2.2.1 Negation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.2 Conjunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.3 Disjunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.4 Material Implication . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.5 The Biconditional . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.6 Commutativity and Associativity . . . . . . . . . . . . . . . . . . . . 11
2.2.7 “Unless” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Truth Table Definitions of the Propositional Connectives . . . . . . . . . . . 13
2.3.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4 Interpreting the Material Conditional . . . . . . . . . . . . . . . . . . . . . 14
2.4.1 “Ifs” versus “Only ifs” . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.2 The “Paradoxes” of Material Implication . . . . . . . . . . . . . . . 15
2.5 Syntactic Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5.1 Scope of a Connective . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5.2 Main Connective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5.3 Bracketing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.6 Combinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.6.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.7 Negated Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.7.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.8 Intensional Contexts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.9 Alternative Notation for Propositional Connectives . . . . . . . . . . . . . . 19
2.10 Exercises on Chapter 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21


3 Introduction to Natural Deduction 23


3.1 Basic Rules of Derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.1.1 Alternative Names for the Basic Rules. . . . . . . . . . . . . . . . . . 26
3.2 Comments on Basic Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3 Simple Examples of the Basic Rules . . . . . . . . . . . . . . . . . . . . . . 28
3.3.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.4 Some Useful Sequents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.4.1 One-Way Sequents . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.4.2 Interderivability Results . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4.3 Form of Justification When Using Derived Rules . . . . . . . . . . . 42
3.5 Summary of Basic Sequents and Rules . . . . . . . . . . . . . . . . . . . . . 42
3.5.1 Basic Rules and Sequents . . . . . . . . . . . . . . . . . . . . . . . . 43
3.5.2 Summary of Derived Sequents . . . . . . . . . . . . . . . . . . . . . . 44
3.6 Fallacy Alert! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.6.1 A Common Misuse of vE . . . . . . . . . . . . . . . . . . . . . . . . 46
3.6.2 Affirming the Consequent . . . . . . . . . . . . . . . . . . . . . . . . 46
3.6.3 Denying the Antecedent . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.6.4 Avoid Circularity! . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.7 Exercises on Chapter 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

4 Techniques and Applications 51


4.1 Recognition of Logical Form . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.1.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.2 Combining Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.2.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3 Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.3.1 The “Laws” of Thought . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.4 Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.4.1 Uniform Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.4.2 Nonuniform Substitution . . . . . . . . . . . . . . . . . . . . . . . . 55
4.5 Introduction Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.5.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.6 The Deduction Metatheorem . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.6.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.6.2 A Turnstile is not an Arrow! . . . . . . . . . . . . . . . . . . . . . . 59
4.7 Boolean Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.7.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.8 Filling in the Gaps in an Argument . . . . . . . . . . . . . . . . . . . . . . . 60

5 Propositional Semantics: Truth Tables 63


5.1 Syntax versus Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.2 Truth Table Definitions of the Propositional Connectives . . . . . . . . . . . 64
5.2.1 How Many Possible Truth Tables Are There? . . . . . . . . . . . . . 64
5.2.2 Expressive Completeness . . . . . . . . . . . . . . . . . . . . . . . . 64
5.2.3 Evaluating Complex Wffs . . . . . . . . . . . . . . . . . . . . . . . . 65
5.3 Semantic Classification of Single Wffs . . . . . . . . . . . . . . . . . . . . . 66
5.3.1 Tautology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

5.3.2 Inconsistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5.3.3 One-to-One Relation Between Inconsistencies and Tautologies . . . . 66
5.3.4 Contingency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.3.5 Consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.4 Relation of Syntax to Semantics . . . . . . . . . . . . . . . . . . . . . . . . 67
5.5 Semantic Classification of Pairs of Wffs . . . . . . . . . . . . . . . . . . . . 67
5.5.1 Implication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.5.2 Equivalence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.5.3 Contrariety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.5.4 Subcontrariety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.5.5 Inconsistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.5.6 Consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.5.7 Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.6 Relations Between Semantic Properties of Wffs . . . . . . . . . . . . . . . . 68
5.7 Semantic Classification of Sets of Wffs . . . . . . . . . . . . . . . . . . . . . 69
5.7.1 Equivalence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.7.2 Inconsistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.7.3 Consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.7.4 Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.8 Testing Validity of Sequents Using Truth Tables . . . . . . . . . . . . . . . 69
5.8.1 Deduction Theorem from the Semantic Viewpoint . . . . . . . . . . 71
5.8.2 Short-cuts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.9 Using Short-Cuts in Testing for Other Semantic Properties . . . . . . . . . 72
5.10 Evaluating Long Conjunctions or Disjunctions . . . . . . . . . . . . . . . . . 73
5.11 Sheffer and Nicod Strokes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.12 Review Exercises on Chapter 5 . . . . . . . . . . . . . . . . . . . . . . . . . 75

6 Propositional Semantics: Truth Trees 77


6.1 Branches and Stacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
6.2 Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.3 Stacking and Branching Rules . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.3.1 Substitution of Equivalents . . . . . . . . . . . . . . . . . . . . . . . 80
6.4 Checking Semantic Properties of Formulas with Trees . . . . . . . . . . . . 80
6.4.1 Tautology, Contingency, and Consistency . . . . . . . . . . . . . . . 80
6.4.2 Equivalence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6.5 Checking Validity of Sequents . . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.6 Trees or Tables? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.8 Questions for Further Research . . . . . . . . . . . . . . . . . . . . . . . . . 84

7 Predicate Logic: Symbolization and Translation 85


7.1 Why We Need Predicate Logic . . . . . . . . . . . . . . . . . . . . . . . . . 85
7.2 Predicate Logic Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
7.2.1 Universe of Discourse . . . . . . . . . . . . . . . . . . . . . . . . . . 86
7.2.2 Terms and Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
7.2.3 Monadic Predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
7.2.4 Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

7.2.5 Propositions and Propositional Functions . . . . . . . . . . . . . . . 88


7.2.6 Quantifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
7.2.7 Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
7.2.8 Categorical Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
7.2.9 Don’t Be Fooled. . . ! . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
7.2.10 Combinations of Properties . . . . . . . . . . . . . . . . . . . . . . . 92
7.2.11 Use of “Except” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
7.2.12 Use of “Only” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
7.2.13 Variants of Categorical Forms Using Relations . . . . . . . . . . . . 93
7.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

8 Predicate Logic: Natural Deduction 95


8.1 Duality Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
8.2 Introduction and Elimination Rules . . . . . . . . . . . . . . . . . . . . . . . 95
8.2.1 Universal Elimination (UE) . . . . . . . . . . . . . . . . . . . . . . . 95
8.2.2 Universal Introduction (UI) . . . . . . . . . . . . . . . . . . . . . . . 96
8.2.3 Fallacy Alert! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
8.2.4 Existential Introduction (EI) . . . . . . . . . . . . . . . . . . . . . . 97
8.2.5 Existential Elimination (EE) . . . . . . . . . . . . . . . . . . . . . . 97
8.2.6 Fallacy Alert! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
8.3 Summary of Introduction and Elimination Rules for Quantifiers . . . . . . . 99
8.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
8.5 Distribution Rules for Quantifiers . . . . . . . . . . . . . . . . . . . . . . . . 102
8.6 Proofs of Duality Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
8.7 Rules of Passage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
8.8 Interderivability ⇔ Equivalence Theorem . . . . . . . . . . . . . . . . . . . 104
8.9 What to Memorize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
8.10 A Shortcut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
8.11 Fallacy Alert! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
8.12 Fallacy Alert! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
8.13 Summary of Rules for Predicate Natural Deduction . . . . . . . . . . . . . . 106
8.13.1 Basic Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
8.13.2 Duality Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
8.13.3 Rules of Passage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
8.13.4 Distribution Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
8.13.5 Direct Manipulation of Propositional Functions . . . . . . . . . . . . 108
8.14 Review Exercises for Chapter 8 . . . . . . . . . . . . . . . . . . . . . . . . . 109

9 Predicate Logic With Identity 111


9.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
9.2 Fallacy Alert! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
9.3 Definite Descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
9.3.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
9.3.2 Expressing Numerical Relations . . . . . . . . . . . . . . . . . . . . . 116
9.3.3 Superlatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
9.4 Review Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

Solutions to Selected Exercises 119


Solutions for Chapter 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Solutions for Chapter 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Solutions for Chapter 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Solutions for Chapter 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Solutions for Chapter 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Solutions for Chapter 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Solutions for Chapter 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Solutions for Chapter 9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Chapter 1

Introduction to Logic: Getting Your Feet Wet

1.1 Logic, Logical Form, and Validity


What is logic? There is much disagreement among professionals about this question. I will
offer two definitions that, while no doubt debatable, give us something to work with:

• Logic is the art and science of reasoning.

• Logic (especially symbolic logic) is the mathematics of language.

As the very word “logic” suggests (its root is the Greek term “logos,” meaning word),
logic has a lot to do with language. But it also points to something beyond language,
for logic instantiates deep laws of form that appear in many guises in many subjects and
disciplines [5]. One can get into interesting philosophical debates about the nature of these
mathematical forms. (The Platonists, who believe in the independent reality of mathe-
matical objects, have been debating for millennia with the nominalists, who insist that
the “truths” of logic and mathematics are nothing more than linguistic conventions.) We
could also ask which comes first, logic or mathematics? The school of logicism, headed by
Bertrand Russell and Gottlob Frege (who themselves contributed in important ways to the
discipline that we now call modern symbolic logic) argued that all of mathematics is based
on laws of pure logic. But it is now generally felt that this ambitious project did not work,
and that it is more likely the other way around; i.e., that logic is a sort of applied math-
ematics. Unfortunately, we cannot get into such fascinating disputes in an introductory
work like this. For more on the philosophy and foundations of mathematics and logic, see
[1, 5].
We should begin by noting the important distinction between inductive and deductive
logic. In deductive logic we extract information already contained, or implicit, in a set
of premisses (statements that are accepted as given in the context of an argument). We
might accept the premisses for several reasons: they may be known facts whose implications
we are trying to uncover, for instance, or they might be hypotheses or possibilities whose
consequences we wish to explore. But in deductive reasoning we never go beyond the
information contained in the premisses; rather, we rearrange that implicit information into
forms that may be more interesting or useful to us.


Inductive reasoning is said to be ampliative in that it expands upon the information
given in the premisses. Inductive reasoning is usually implicitly or explicitly probabilistic,
and it becomes quantitative in many scientific contexts. Induction, especially as it is used
in science and engineering, also involves factors that are very difficult to define formally,
such as creativity, intuition, and the grasp of aesthetic relationships.
Most applications of reasoning in real-life situations involve a mixture of inductive and
deductive methods. However, it is very useful to study deductive methods in isolation,
because they are the foundation for an understanding of all other methods of
reasoning. This course is almost entirely concerned with deductive reasoning.
Now, why do we do symbolic logic? The advantages of symbols in logic are much the
same as their advantages in mathematics: precision, concision, and their ability to express
complex relationships that could not be expressed efficiently or at all in words. Also, symbols
have the advantage of greater universality; arguments in different languages that share the
same logical form can be expressed in the same symbols.
A lot of what logicians do (though not all) is analyze arguments. For the logician,
an argument is not a dispute or a shouting match; rather, it is an attempt to establish a
conclusion on the basis of given premisses (singular, premiss). The premisses are statements
that are assumed to be given; as logicians we don’t know where they come from.

Warning! Please spell the word “argument” correctly! One mark off any
assignment or test any time you write “arguement”!

Here’s a simple deductive argument:

If Sundin stays healthy, the Leafs have a chance.
But if Sundin takes his vitamins, he will stay healthy.
∴ If Sundin takes his vitamins, the Leafs have a chance.

(There’s our first symbol: “∴”, which means “therefore”.)


Notice that we don’t worry too much about tense in the examples used here; there are,
however, “tensed” logics for which the point in time at which a proposition was uttered is
important.
Now, we need to distinguish between validity and soundness. Consider the following
argument:

If you are in Lethbridge, then you are in Alberta.
If you are in Alberta, then you are in Canada.
∴ If you are in Lethbridge, you are in Canada.

Both premisses and the conclusion happen to be true statements. Substitute the word
“France” for “Alberta” and we would have an argument with false premisses; but the
conclusion would still seem to follow from the premisses. Therefore, there are arguments
that intuitively seem to be valid in the sense that the conclusions somehow follow from the
premisses, but which still have something missing.
To clarify what is missing, we make a crucial distinction between validity and soundness:

Validity: a deductive argument (or argument form) is valid if and only if (or iff ) it is
impossible for its conclusion to be false when its premisses are true.

Soundness: A deductive argument is sound iff it is valid and has true premisses.

This is the modern usage of the terms “valid” and “sound” as they are applied to arguments
or argument forms. Some older books, such as Lemmon’s Beginning Logic [17], use the word
“sound” where we will use “valid;” that terminology is now out of date.
The following rough-and-ready concept of validity is also useful:

An argument is valid if it follows from its premisses by correct application of


certain rules of deduction.

Eventually, we have to ask whether or not the rules of deduction are themselves valid in
the deeper sense given above. In fact, they are, so that these two definitions of validity
turn out to be the same thing in the long run. Later in the course we will learn other
ways of expressing the notion of validity, all of which are equivalent to the first definition
given above. In a sense, this course is simply about all the ways of defining and testing for
deductive validity, and for validly deriving conclusions from premisses.
The practical value of valid deductive reasoning stems from the fact that if a valid
argument has true premisses, the conclusion is guaranteed to be true as well. Validity
transmits truth from premisses to conclusions.
Surprisingly, there are some arguments that can be valid, for formal reasons, but which
can never be sound. Later on we will see some examples of rather suspicious-looking argu-
ments that are, in fact, valid.
Common parlance often confuses the terms “valid” and “true.” Be careful to avoid
this confusion when you are doing formal logic. Arguments can be valid or invalid, and
propositions can be true or untrue, but there are no such things as “valid” statements or
“true” arguments. As we shall see, however, valid arguments can be transformed by the
techniques of natural deduction into certain kinds of true statements called logical truths.
As logicians, we do not worry about where the premisses came from or whether or not
they are true; we are only concerned about the validity of arguments, not their soundness.
In this respect, logicians might be compared to automotive mechanics, who ensure that your
car runs properly but do not ask where you drive it. As Bertrand Russell once quipped,
“Pure mathematics [which Russell saw as an extension of logic] can be defined as that
subject in which we do not know what we are talking about, nor whether what we are
saying is true.” [20].
Now consider the following argument:

If you are in Hobbiton, then you are in the Shire.


If you are in the Shire, then you are in Middle Earth.
∴ If you are in Hobbiton, you are in Middle Earth.

Compare this to the argument given above about being in Alberta. What is common
to these two arguments is their form, which we could represent in a more or less obvious
way, without even defining the symbols, as follows:

P →Q
Q→R
∴ P → R.
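This form (often called hypothetical syllogism) can be checked mechanically. The following minimal sketch, which is mine and not part of the text, brute-forces all eight truth-value assignments in Python and confirms that no assignment makes both premisses true and the conclusion false:

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

# Collect assignments where the premisses are true but the conclusion false.
counterexamples = [
    (p, q, r)
    for p, q, r in product([True, False], repeat=3)
    if implies(p, q) and implies(q, r) and not implies(p, r)
]

# The form is valid iff there are no counterexamples.
print("valid" if not counterexamples else "invalid")  # prints "valid"
```

This is exactly the truth-table test for validity that later chapters develop in detail.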

So we can distinguish between particular arguments in a given natural language (such


as English, French, Russian, etc.), and their forms, which are more universal. One of the
main skills that you will develop in learning logic is the ability to perceive logical form.
Our main business as logicians is to analyze the validity of arguments by determining the
validity of argument forms.
There are different kinds of logical form that capture different ways in which an argu-
ment can be valid. We can intuitively recognize some kinds of validity by means of what
I shall call “linguistic common sense;” but the powerful tools of logic allow us to test the
validity of arbitrarily complex argument forms.
A deductive form is very much like a computer program which accepts input (the
premisses) and outputs a conclusion. In fact, computers can easily be programmed to
follow the steps of a deductive argument. The validity of a deductive form can be compared
to the reliability of a piece of software. A reliable accounting program, for instance, will
correctly tell you whether you have a profit or a loss in your business so long as you correctly
enter all income and expenses. This is like a valid argument form that can be guaranteed
to return a true conclusion if the premisses are true. Of course, if some of the premisses
are false then there is no telling whether the conclusion will be true or false, just as if you
put incorrect data into your accounting program you might, or might not, get the right
answer back from it. Just as the job of software engineers and programmers is to write
reliable code, the job of logicians (so far as deductive logic is concerned) is to devise valid
arguments.
Another helpful analogy between logic and computing is that we can think of logic as
being like a computer game with levels, where we have to master each level before we can
go on to the next. In this course we cover three main levels, each one of which gives us
more machinery to capture validity. They are:

1. Propositional logic — also known as sentential logic, sentential calculus, or proposi-


tional calculus.

2. Elementary first-order predicate logic.

3. Elementary first-order predicate logic with identity.

Each level has extra machinery that allows us to represent a kind of validity that cannot
be represented by the lower level.
We begin with propositional logic, which is an absolutely basic discipline that appears
in a hundred useful contexts, and serves as the foundation for all higher-level work in logic.

1.2 How to Teach Yourself Logic


I entitled this section “how to teach yourself logic”, not “how to learn logic” or “how to
be taught logic”, because I want to emphasize that logic, like any other complex set of
inter-related skills, is ultimately self-taught no matter what courses you take or what books
you read.
The secret of learning any complex set of skills is attentive repetition, together with the
cultivation of a calm intensity that does not lose its focus on the desired goal, but which at
the same time awaits improvement with almost infinite patience.

I once had the temerity to take a course called Introduction to Theoretical Physics,
given at the University of Toronto by a great teacher and scholar named Professor Lynn
Trainor. The very first equation he presented to us will be the first formula of this text as
well:
Understanding = Familiarity.
What Professor Trainor meant was that in learning a complex technical subject such as
logic or physics, one must accept the fact that fully confident understanding may not come
immediately; rather, it follows from becoming so familiar with the operations of the subject
that they become second nature. In fact, learning logic is very much like learning a language;
in the beginning, you have to work very hard simply to familiarize yourself with the basic
definitions and rules, some of which at first may seem arbitrary and strange. After a while
the structure of the language becomes so obvious that you cannot quite figure out why you
once could not understand it. Yes, in order to learn logic you have to memorize things,
and then diligently practice their use. Learning a subject involves putting the details of the
subject into your head, and becoming adept in their use by dint of long practice.
There are, however, also some important differences between learning logic and learning
a language. Although there are things that must be memorized in learning logic, there are
far fewer of them than in learning a language, where one must memorize thousands of words
as a basis for full fluency. Furthermore, as one becomes more experienced and familiar with
the use of logic, the inter-relations between its concepts become more and more obvious,
and mere memorization becomes less important, until it finally dawns on you that all of
logic is in a deep sense just many different ways of saying the same thing.

1.2.1 How To Use This Text


• Work through on paper the demonstrations given in the text.
• There are a generous number of practice exercises, and solutions to about half of them
are given in the back. Try to work as many of these as you can. Try the A’s first, and
if you cannot do all of them you have missed something important in the previous
section of the text. When you can do the A’s, move on to the B’s.
• If you do all of the exercises in this text — do them again! — and perhaps again, to
the point at which they become as obvious to you as the streets and alleys of your
home town. I repeat, repetition is the secret of learning.
• You may also find it very helpful to try and make up your own problems.
• It is very important to be neat and tidy when writing out your proofs, truth tables,
and so forth. A certain amount of doing logic correctly is nothing more than clerical
accuracy. Writing things out completely, correctly, and legibly will facilitate careful
and accurate thought — and will also encourage our over-worked teaching assistant
to give you better marks.

1.3 Questions for Discussion


1. If all deductive logic is simply a process of reworking information given in the pre-
misses, why aren’t all deductive arguments circular? Or are they?

2. Is logic just a branch of mathematics? Or the other way around? Or neither?

3. Are logic and mathematics discovered or invented ?


Chapter 2

Propositional Logic: Symbolization


and Translation

2.1 Notation in Propositional Logic


As explained in Chapter 1, propositional logic is the first “level” of logic. In propositional
logic we study the logical behavior of propositions, which are linguistic expressions or ut-
terances that are intended by their user to indicate a fact or state of affairs. Propositions
(also sometimes called statements) are distinct from questions, requests, commands, or ex-
clamations in that they can have a truth value (though as we shall see, it may be sometimes
difficult to say just what that value is). Examples of propositions are “This is an example of
a proposition,” “2 is a number,” “D-Day was June 6, 1944,” and “E = mc2 .” Propositions
are so-called because in effect the speaker “proposes” that something is the case.
We will represent propositions by letters such as P, Q, R, . . . , P1 , P2 , . . . .
Some logicians will say that “It is raining” and “Il pleut” represent the same proposition
expressed in different languages; others (perhaps more commonsensically) will say that these
represent different propositions that happen to point to the same state of affairs. We need
not concern ourselves with this subtle distinction in an introductory course.
Propositions can be joined together by certain symbols called propositional connec-
tives. These are also called truth functional connectives because, as we shall eventually see,
the truth value of compound propositions formed by linking simpler propositions together
with these particular connectives is strictly a function of the truth values of the simpler
propositions. Not all ways of forming compound propositions from simpler propositions are
truth-functional. However, truth-functional combinations of propositions are the main ones
we are concerned with here because all that propositional logic is really concerned with are
the truth values of propositions; it is really a calculus of truth values.
We shall be mainly concerned with the following truth-functional connectives: − (not,
or negation); & (and); ∨ (inclusive or); → (the material conditional); and ↔ (the bicon-
ditional). Common translations of these symbols are given in the next section.
This is by no means the smallest set of connectives with which one can do propositional
logic. These connectives have been chosen because they are fairly close to the informal
connective concepts used in many natural languages. It is possible to
define other truth-functional connectives, and we will briefly discuss this in Chapter 5.
Combinations of propositional letters and connectives will be called formulas. Those


strings of symbols that can represent propositions are called well-formed formulas or wffs.
It is possible to give a precise recursive definition of the set of all possible wffs, but we will
not do that in this course. (See, for instance, Lemmon’s Beginning Logic [17].) Instead,
we will simply rely on our knowledge of English to tell us which formulas can stand for
complete sentences and which cannot.
Propositions can be atomic or molecular . Atomic propositions are those that cannot
be broken down into truth functional combinations of other propositions, while molecular
propositions are those that can. An example of an atomic proposition is “Today is Tuesday,”
while an example of a molecular proposition is “Today is Tuesday and I have to go to a
meeting.”

2.2 Common English Equivalents for Propositional Connec-


tives
Below I list some of the most commonly encountered English equivalents for the standard
propositional connectives. It is impossible to list all of the possible English expressions that
could be correctly translated into the standard symbols such as & . We have to rely on
our knowledge of the language, what might be called “linguistic common sense,” in order
to work out a speaker or author’s implicit logical sense. Expressions in natural languages
such as English can be ambiguous, imprecise, or capable of multiple meanings and subtle
rhetorical nuances, and may not always have a unique logical translation.
The rules for translation that we present here certainly do not capture all of the con-
ceivable ways that logical structure can be expressed in a natural language. However, many
years of practical experience shows that the rules we present here (or ones very similar to
them) allow us to define a system of logic that is very useful and which can form a basis
for more advanced logics.
In the examples in this chapter we’ll use the following propositional symbols:

B := “Bob goes to the store”


A := “Alice goes to the store”
C := “Cheryl goes to work”
T := “Tad stays home”

The symbol := means “is defined as . . . ”, and we will use the double-arrow ⇔ to mean “can
be translated as . . . ”.

2.2.1 Negation
To negate a proposition is to deny that it is true or that it is the case. Thus, −P can be
read as “not P ,” or “it is not the case that P ,” “P is not true,” or “P is false.”
Negation is a unary connective, in that it only applies to the proposition written imme-
diately to its right. This proposition could be a combination of other propositions, however.

Example: Bob does not go to the store ⇔ −B.



2.2.2 Conjunction
To conjoin two propositions is to assert or to hold them jointly. We will write conjunction
in the form P & Q. The two conjoined propositions P and Q are called the conjuncts.
Although conjunction is usually taken to be binary, meaning that it applies to only two
propositions, and while we will usually consider cases where only two propositions are
conjoined, there is no reason in principle why any number of conjuncts cannot be linked in
the same conjunction (although we will not prove that fact formally in this course). The
most general conjunction can be written in the form P1 & . . . & Pn .
Here are some common expressions that would all be translated into symbols as P & Q:

P and Q.
Both P and Q.
P but Q.
P although Q.
P despite Q.
P even though Q.
P whereas Q.
P in spite of Q.
In addition to Q, P.
etc.!

Note that the subtle rhetorical distinctions between “but,” “and,” “whereas,” etc., are all
lost in moving to propositional notation. If these distinctions make a difference to the logic
of an argument, then they must somehow be expressed clearly enough that they can be
translated into propositional notation.
Example 1 : Cheryl goes to work and Tad stays home ⇔ C & T .
Example 2 : Alice goes to the store while Tad stays home ⇔ A & T .
Example 3 : Bob and Alice both go to the store ⇔ B & A.
Example 4 : Bob and Alice go to the store while Cheryl goes to work ⇔ B & A & C.
Note that order does not matter in any of these examples, or, indeed, in any correct
usage of &. Thus, Example 2 could be translated just as well as T & A.

2.2.3 Disjunction
To disjoin two propositions is to say that one or the other, or possibly both, is or are the
case. We write disjunction in the form P ∨ Q. The two disjoined propositions P and Q are
called the disjuncts. Like conjunction, disjunction is usually taken to be binary, but also
like conjunction it is perfectly admissible to write disjunctions of any number of disjuncts.
The most general disjunction can be written in the form P1 ∨ . . . ∨ Pn .
Here are common expressions that would be translated as P ∨ Q:

P or Q.
Either P or Q.
P or Q or both.

In propositional logic we will almost always take “or” in the inclusive sense, meaning
that if we accept “P or Q” as true, we are willing to grant that either P or Q is true on
their own, or that P and Q may be true together. This is how “or” is usually used in
English and many other natural languages. Here’s a typical example: “Either negligence or
incompetence accounted for the sinking of the Titanic.” Clearly, it might have been both.
Occasionally, it will be useful to express the concept of exclusive disjunction, also known
as exclusive or , or in circuit theory as XOR. Later on we will see how to construct exclusive
“or” in terms of the usual connectives. Here’s an example in which the fact that a disjunction
is exclusive might make a logical difference: “Every natural number is either even or odd.”
Again, we can usually take “or” to be inclusive unless clearly stated otherwise.
In Latin there are two distinct words for the two senses of “or”: vel for the inclusive
sense and aut for the exclusive sense, so there is less chance of confusing the two
concepts. The usual symbol for inclusive “or,” ∨, was probably adapted from the Latin vel .

Example 1 : Bob or Alice goes to the store ⇔ B ∨ A.


Example 2 : Either Cheryl goes to work or Bob goes to the store ⇔ C ∨ B.
Again, as with &, order does not matter. Thus, Example 2 could just as correctly have
been written as B ∨ C.
Example 3 : Either Bob goes to the store, Alice goes to the store, or Cheryl goes to
work ⇔ B ∨ A ∨ C.
Example 3 shows that, as with &, any number of propositions can be disjoined.
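The exclusive “or” mentioned above can already be sketched. In the following snippet (mine, not the text's), XOR is built from the usual connectives as (P ∨ Q) & −(P & Q), and the construction is verified against the intuitive reading “exactly one of P, Q is true”:

```python
from itertools import product

def xor(p, q):
    # One standard construction: (P or Q) and not (P and Q).
    return (p or q) and not (p and q)

# Exclusive disjunction is true exactly when P and Q differ.
for p, q in product([True, False], repeat=2):
    assert xor(p, q) == (p != q)
```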

2.2.4 Material Implication


It is probably fair to say that all or virtually all types of logic, formal or informal, involve
some notion of logical consequence — the idea that one thing may be taken to follow from
certain others. In propositional logic the consequence relation is called material implication,
and it is crucial to understand its properties, which are sometimes counterintuitive. Material
implication is a strictly binary connective, and is symbolized in this course by →.
In a statement of the form P → Q the proposition P is the antecedent, and Q is the
consequent. The form P → Q is often called the conditional .
Here are common translations of P → Q:
If P then Q.
P only if Q.
Q if P .
If P , Q.
P is a sufficient condition for Q.
Q is a necessary condition for P .
P implies Q.
Q is implied by P .
P entails Q.
Q is entailed by P .
Q while P .
While P , Q.
Given P , Q.

Innumerable minor linguistic variations of these forms are possible.


It is very important not to confuse “P only if Q” with “P if Q”. In these two forms
the implication goes in opposite directions.
Example 1 : If Bob goes to the store then Tad stays home ⇔ B → T .
Example 2 : Bob goes to the store only if Tad stays home ⇔ B → T .
Example 3 : Bob goes to the store if Tad stays home ⇔ T → B.
Example 4 : In order for Alice to go to the store, it is necessary for Cheryl to go to work
⇔ A → C.
Note carefully that if P → Q, it does not always follow that Q → P . Also, unlike ∨
and &, implication is strictly binary.

2.2.5 The Biconditional


The symbol ↔ is called the biconditional , and is defined such that P ↔ Q if and only if both
P implies Q and Q implies P . It is, of course, not generally true of any two propositions
chosen at random that they entail each other.
Here are the most common translations of P ↔ Q.

P if and only if Q. (This is often abbreviated as “P iff Q.”)


P is a necessary and sufficient condition for Q.
P is equivalent to Q.
P implies Q and Q implies P .

Again, there are many possible ways of expressing this notion in natural languages.
Example: Bob goes to the store if and only if Tad stays home ⇔ B ↔ T .

2.2.6 Commutativity and Associativity


Commutativity: An operation is commutative if it returns the same result regardless
of the order in which it is applied. For example, P & Q has the same truth value as
Q&P.

Associativity: An operation is associative if it returns the same result regardless of how


we group the terms to which it is applied. For instance, if we take bracketing to
indicate that the bracketed term is evaluated first, it can be shown that (P ∨ Q) ∨ R
has the same truth value as P ∨ (Q ∨ R).

Conjunction, disjunction, and the biconditional are commutative and associative, while
material implication is not.
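These claims can be verified by brute force over all truth-value assignments. The sketch below (mine, not part of the text) confirms commutativity and associativity for & and ∨, and exhibits a counterexample showing that → is not commutative:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

vals = [True, False]

# Commutativity of & and "or".
assert all((p and q) == (q and p) for p, q in product(vals, repeat=2))
assert all((p or q) == (q or p) for p, q in product(vals, repeat=2))

# Associativity of & and "or".
assert all(((p and q) and r) == (p and (q and r))
           for p, q, r in product(vals, repeat=3))
assert all(((p or q) or r) == (p or (q or r))
           for p, q, r in product(vals, repeat=3))

# -> is not commutative: F -> T is true, but T -> F is false.
assert implies(False, True) != implies(True, False)
```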

2.2.7 “Unless”
“Unless,” like “or,” is a word that is used in more than one way in the English language, and
some confusion has grown up over the right way to translate it into propositional symbols.
Like “or,” “unless” can be used in an inclusive or exclusive sense.
Here is an example of the inclusive sense of “unless:”

Johnny will fail the exam unless he studies.

Sadly, we know that Johnny studying for the exam does not exclude the possibility of
Johnny failing the exam.
Here is an example of the exclusive use of “unless:”

We will have a picnic unless it rains.

The very thing we wish to deny is that we are going to have a picnic in the rain, while at
the same time we affirm that we will have a picnic if it does not rain. This is most easily
translated as exclusive “or”.
In the chapter on propositional semantics we will examine the truth table behavior of
the various senses of “or” and “unless”.
It would be very rare that the logic of an argument would depend on whether or not
a usage of “unless” was exclusive. Therefore, in the large majority of cases, you can follow
the advice of most logic books, which is to translate “P unless Q” as P ∨ Q. But check
the context carefully.
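The difference between the two readings of “unless” can be made vivid with a quick table (a sketch of mine, not the text's):

```python
from itertools import product

# "P unless Q": inclusive reading is P or Q;
# exclusive reading is "exactly one of P, Q".
rows = []
for p, q in product([True, False], repeat=2):
    rows.append((p, q, p or q, p != q))

# The two readings agree except when P and Q are both true.
for p, q, inclusive, exclusive in rows:
    if p and q:
        assert inclusive and not exclusive
    else:
        assert inclusive == exclusive
```

So translating “unless” inclusively or exclusively only matters on the one line where both propositions are true, which is why the inclusive translation is usually safe.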

2.2.8 Exercises
A
1. Translate the following into symbols:

(a) Tad stays home while Cheryl goes to work.


(b) Tad stays home but Cheryl does not go to work.
(c) If Tad stays home then Cheryl does not go to work.
(d) Either Bob goes to the store or Tad stays home.
(e) Alice going to the store is implied by Bob not going to the store.
(f) Bob does not go to the store only if Alice does.
(g) Cheryl goes to work if Bob goes to the store.
(h) For Cheryl to go to work it is sufficient that Tad stays home.
(i) For Cheryl not to go to work it is necessary that Tad not stay home.
(j) Cheryl going to work implies Alice going to the store, and Alice going to the
store implies Cheryl going to work.
(k) Bob does not go to the store if Alice does.
(l) Bob going to the store is equivalent to Tad not staying home.
(m) Tad stays home unless Cheryl does not go to work.

2. Translate the following into clear, grammatically correct, idiomatic English:

(a) B → −A.
(b) B & C.
(c) −B ∨ C.
(d) −C ↔ T .

(e) −A → B.
(f) −A & − T .
(g) −A ∨ −T .
(h) B → T.
(i) T → B.
(j) −T → −B.

2.3 Truth Table Definitions of the Propositional Connectives


All of the propositional connectives we have used in natural deduction can be defined in
terms of their characteristic truth tables. We introduce truth tables now because they will
help us to understand why some of the connectives are translated in certain ways.
Here is the simplest truth table, that of a single proposition P :

P
T
F
This merely expresses the assumption that any proposition is taken to be either definitely
T or definitely F. The table is constructed simply by listing all the possible truth values of
the proposition in a column below it.
Tables for propositions containing two variables have four lines, in order to represent
all possible combinations of truth values of the atomic propositions. We can call these the
“input” values. The truth value of the whole expression is written in a column underneath
the connective; call these values the “outputs”.
Here are the truth-table definitions of the five most commonly used propositional con-
nectives:

Negation:

− P
F T
T F

Conjunction:

(P & Q)
 T T T
 T F F
 F F T
 F F F

Disjunction:

(P ∨ Q)
 T T T
 T T F
 F T T
 F F F

Material Implication:

(P → Q)
 T T T
 T F F
 F T T
 F T F

Biconditional:

(P ↔ Q)
 T T T
 T F F
 F F T
 F T F

It is absolutely necessary for beginners in logic to memorize these basic tables, because
they are the basis for a large part of what we do in this text. We will study truth tables in
much more detail in Chapter Four, where we will see that they offer a powerful method for
the analysis of the validity of arguments.
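For checking your memorization, the tables can also be generated mechanically. Here is a sketch (mine, not part of the text) that defines each of the five connectives as a Python truth function and prints the table for the material conditional:

```python
from itertools import product

# The five connectives as truth functions.
def neg(p):       return not p
def conj(p, q):   return p and q
def disj(p, q):   return p or q
def cond(p, q):   return (not p) or q   # material conditional
def bicond(p, q): return p == q

# Print the table for the material conditional, with T/F as in the text.
for p, q in product([True, False], repeat=2):
    row = ["T" if v else "F" for v in (p, q, cond(p, q))]
    print(" ".join(row))
```

Swapping `cond` for any of the other functions prints the corresponding table.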

2.3.1 Exercises
A
1. If it is true that Bob goes to the store and false that Cheryl goes to work, then what
are the truth-values of the following propositions?

(a) Bob goes to the store and Cheryl goes to work.


(b) If Bob goes to the store then Cheryl goes to work.
(c) Either Bob goes to the store or Cheryl goes to work.
(d) Bob goes to the store if and only if Cheryl does not go to work.
(e) For Bob to go to the store it is sufficient that Cheryl not go to work.

2. If P is T and (P & Q) is F, then state the truth values of

(a) P ∨ Q
(b) P → Q
(c) −P ∨ Q
(d) P ↔ Q
(e) Q → P

2.4 Interpreting the Material Conditional


2.4.1 “Ifs” versus “Only ifs”
As we learn more about how deduction works, it will become apparent that the truth table
for → is the backbone of deductive logic. However, this table has some counterintuitive
features which you may only become fully comfortable with after a lot of experience with
logic. We can note right away, however, that the table will help us to understand the various
ways in which → is translated into English. The reading “if P then Q” should be clear:
from the table, we can see that on all the lines where P → Q is T, Q is true whenever P is
T. In other words, for Q to be T, it is sufficient for P to be true.
The reading “P only if Q” sometimes confuses students (and a few professors from time
to time as well). However, it makes perfect sense if you notice that on all the lines of the
table where P → Q is T, P is T only on the lines in which Q is true as well. Hence, also,
the reading “Q is a necessary condition for P ”: given P → Q, P cannot be T unless Q is
T.
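This reading can be confirmed mechanically. In the sketch below (mine, not the text's), we collect the lines of the table on which P → Q is T and check that on such a line P is T only when Q is T too:

```python
from itertools import product

# Lines of the table where P -> Q is true.
true_lines = [(p, q) for p, q in product([True, False], repeat=2)
              if (not p) or q]

# The only excluded line is (T, F); on every remaining line,
# if P is T then Q is T as well ("P only if Q").
assert (True, False) not in true_lines
assert all(q for p, q in true_lines if p)
```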
The concepts of necessity and sufficiency are easy to mix up. Concrete examples may
help to make them clear. For a person to be adequately nourished, it is sufficient that they
eat East Indian food. However, there are many other kinds of nourishing food, and it is
not necessary to eat East Indian in order to live. (Lovers of curry might disagree with me
about this.) On the other hand, it is necessary to ingest a certain amount of protein in
order to be adequately nourished.
Here is a nice example (thanks to a student!) which illustrates the proper reading of
P → Q. Consider the following statements:

(1) You will win the lottery only if you buy a ticket.

(2) You will win the lottery if you buy a ticket.

These are two very different statements. The first is the generally true claim that you have
to buy a ticket in order to have a chance to win; but this, of course, does not guarantee
that you will win the lottery. The second says that you will win if you buy a ticket, which
is certainly false for most lotteries!
Statement (1) above is equivalent to the following:

To win the lottery it is necessary that you buy a ticket.

On the other hand, statement (2) is equivalent to this:

To win the lottery it is sufficient that you buy a ticket.

Again, the truth conditions for these two statements are clearly quite different.

2.4.2 The “Paradoxes” of Material Implication


Consider the following garden-variety implication:

If you change your oil regularly then your engine will last longer.

We recognize this as true because we know about the causal relevance of regular oil changes
to engine longevity. Here’s another example:

If a party in a Parliamentary system wins the most seats in an election, it gets


to form the government.

Here the relevance of antecedent to consequent is not causal, but legal. Still, anyone familiar
with the way Parliamentary governments work will recognize the statement as true. But
now consider the following:

If Ottawa is the capital of Canada then Caesar crossed the Rubicon.

Both antecedent and consequent are true, and so by the truth table for “if-then” the whole
statement is true. And yet, there seems to be little if any relevance of antecedent to
consequent. This illustrates a fact that is often called the Paradox of Material Implication:

The truth value of a material conditional has absolutely nothing to do with the
meaning of the antecedent and consequent, beyond their truth values.

This is a “paradox” only because it is an affront to our linguistic common sense. In proposi-
tional logic the only “meaning” the propositional letters P, Q, . . . have is their truth values;
propositional logic is purely an algebra of truth values, and any formula is true or false
depending only on the truth values of the input variables and the logical relations between
them.
The challenge to common sense becomes even more acute when we consider cases in
which the antecedent is false, for the table says that such conditionals are true regardless
of the truth value of the consequent. Thus, both of the following are true statements:

If 2 + 2 = 17π, then Canada is in North America.


If 1 = 0 then 1 = 2.

These examples are an illustration of the following principle of classical two-valued logic:

Ex Falso Quodlibet: from a falsehood anything follows.

A free translation of the Latin phrase is, “from a falsehood, conclude whatever you like.”
This principle is sometimes abbreviated Ex Falso, and we shall encounter it again in the
next chapter.
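The truth-table fact behind Ex Falso can be checked directly. The sketch below (mine, not the text's) confirms that a material conditional with a false antecedent is true no matter what the consequent is:

```python
def cond(p, q):
    # Material conditional: false only on the line T -> F.
    return (not p) or q

# With a false antecedent, the conditional is true for either
# value of the consequent.
assert cond(False, True)
assert cond(False, False)

# By contrast, a true antecedent with a false consequent falsifies it.
assert not cond(True, False)
```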
There is an advanced branch of logic called relevance (or relevant) logic in which logi-
cians attempt to construct notions of implication which can express relevance of antecedent
to consequent in a way that is closer to ordinary language. However, long experience has
shown that the simple, stripped-down notion of material consequence presented here is on
the whole the most useful notion of implication we have, since it can be adapted in so many
ways.

2.5 Syntactic Conventions


2.5.1 Scope of a Connective
The scope of a propositional connective is the set of formulas that it operates on. For
instance, in −P , the scope of the negation sign is just P ; in −P & Q, the scope of the
sign of conjunction is −P and Q; and in −(P & Q) the scope of the negation sign is all of
(P & Q).

2.5.2 Main Connective


The main connective of a propositional formula is the connective whose scope is the whole
formula. For instance, in (P ∨ Q) → (P & Q), the → is the main connective; while in
−(P & Q) the main connective is the negation sign. As these examples indicate, scope can
be fixed by brackets if there is any danger of ambiguity.
Is there a main connective in a conjunction or disjunction with more than two terms?
Since conjunction and disjunction are fully associative, it would seem that no one connective
in (P1 & . . . & Pn ) or (P1 ∨ . . . ∨ Pn ) should be privileged. This question illustrates the
drawback of the usual notation we use for disjunction and conjunction, which (as George
Spencer Brown pointed out [5]) are usually represented as if they were binary when in
fact they are not restricted to two variables. The highly flexible notation used by Spencer
Brown in his calculus of indications [5] gets around this problem elegantly. It is also possible,
although we will not rely on it here, to represent conjunctions and disjunctions respectively
by writing something like this:
⋀(P1 , P2 , . . . , Pn ) = ⋀i Pi and ⋁(P1 , P2 , . . . , Pn ) = ⋁i Pi ,

where the connectives in front operate on the whole list of propositions given in brackets
and thus, in effect, are the main connectives for the whole conjunction or disjunction.
In Chapter 5 we will introduce the highly flexible Reverse Polish Notation (RPN), and
show how it can be used to streamline the evaluation of complex formulas, especially those
containing conjunctions or disjunctions with more than two terms.
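Because & and ∨ are fully associative, a fold over a whole list of truth values gives the same result no matter how the pairs are bracketed. A small sketch of the idea in Python, using `reduce` as a stand-in for the big-operator notation above (the variable names are arbitrary):

```python
from functools import reduce

ps = [True, True, False, True]

# Fold the binary connectives over the whole list; by associativity,
# any bracketing of the pairs would give the same value.
conj = reduce(lambda a, b: a and b, ps)  # (P1 & P2 & P3 & P4)
disj = reduce(lambda a, b: a or b, ps)   # (P1 v P2 v P3 v P4)

print(conj, disj)  # prints False True
```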

2.5.3 Bracketing
There are almost as many bracketing conventions as there are logic books. In this book we
will use a very simple convention whose sole purpose is to avoid ambiguity.
Brackets indicate the scope of a connective. Bracket off a sub-formula if there is any
possibility of confusion about which connective applies to which part of the formula. For
instance, in (P & Q) → R, the & connects P and Q, and the → connects the whole
subformula (P & Q) with R. Brackets can be nested, and the general rule is that one
evaluates inner brackets first, and then works outwards. Putting brackets around a whole
formula is optional.
Example: In P & (Q ∨ R), the subformula Q ∨ R is considered as a unit, and this
formula would be translated as “P and either Q or R.” By comparison, (P & Q) ∨ R would
be translated as “Either P and Q or R;” or, perhaps more clearly as “R or both P and Q.”
Be careful! In English and many other natural languages, the scope of logical con-
nectives is ambiguous and must be understood from context and a knowledge of common
usage.
Example: “If Kerry wins then Bush loses and Cheney has to retire” would usually
be understood to be bracketed as follows: “Kerry wins → (Bush loses & Cheney has to
retire).” Strictly speaking, it could also be read as “(Kerry wins → Bush loses) & Cheney
has to retire.” However, few experienced speakers of English would read it this way. In
Chapter Four we will learn how to show that these two versions of the sentence are not
logically equivalent. I discuss further examples like this in the section on Combinations
below.
Remember that we allow unlimited associativity for conjunction, disjunction, and the
biconditional. This means that we can write conjunctions of n formulas as (P1 & . . . & Pn ),
disjunctions of n formulas as (P1 ∨ . . . ∨ Pn ), and equivalences of n formulas as (P1 ↔ . . . ↔
Pn ), without having to bracket off pairs of conjuncts or disjuncts. (There is an exception
to this when we do truth tables, which will be explained in Chapter 5.)
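The Kerry/Bush example can also be checked mechanically: two bracketings of the same string of connectives need not be logically equivalent. A brute-force comparison in Python (a sketch; the variable names are arbitrary), applied here to P & (Q ∨ R) versus (P & Q) ∨ R:

```python
from itertools import product

# Collect every truth assignment on which the two bracketings disagree.
differ = [
    (p, q, r)
    for p, q, r in product([True, False], repeat=3)
    if (p and (q or r)) != ((p and q) or r)
]

print(differ)
```

The list is non-empty (the two formulas disagree whenever P is false and R is true), so the bracketings are not equivalent.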

2.6 Combinations
Truth-functional combinations of propositions can be created in literally an infinite variety
of ways. As indicated in the discussion of bracketing above, a sensitivity to context and
conventional usage is sometimes required to translate combined expressions either from a
natural language to a formula or the reverse.
Example 1: If Bob and Alice go to the store then Tad doesn’t stay home ⇔ (B & A) →
−T .
Example 2: Either Bob goes to the store or Tad stays home and either Alice goes to
the store or Cheryl goes to work ⇔ (B ∨ T ) & (A ∨ C).
Example 3: Either Bob goes to the store only if Alice goes to the store or else Cheryl
goes to work only if Tad stays home ⇔ (B → A) ∨ (C → T ).
Propositional logic can express relationships that would be far too complicated to state
conveniently in ordinary language.

Example: The expression (P1 & . . . & Pn ) → (Q1 ∨ . . . ∨ Qm ) is a perfectly legitimate
formula, and yet few would try to say anything like this in words if n and m were much
larger than, say, 2 or 3.
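A formula of this shape is nevertheless trivial for a computer to evaluate. A sketch in Python, with the built-ins `all` and `any` playing the roles of the n-ary conjunction and disjunction (the helper `implies` is my own label for →):

```python
def implies(p: bool, q: bool) -> bool:
    """Material conditional."""
    return (not p) or q

# (P1 & ... & Pn) -> (Q1 v ... v Qm) with n = m = 5:
ps = [True, True, True, True, True]
qs = [False, False, True, False, False]

print(implies(all(ps), any(qs)))  # prints True: one true Qi is enough
```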

2.6.1 Exercises
A
1. Using the propositional letters defined at the beginning of this chapter, translate the
following English sentences into formulas:

(a) If Bob goes to the store and Tad stays home then Alice goes to the store only if
Cheryl goes to work.
(b) Bob and Alice don’t go to the store if and only if Tad doesn’t stay home.
(c) If Tad stays home then Cheryl goes to work but Cheryl goes to work only if Tad
stays home.
(d) When Bob goes to the store, either Alice also goes to the store or Cheryl doesn’t
go to work.
(e) Either Bob or Alice goes to the store if Tad doesn’t stay home.

2. Translate the following formulas into clear, grammatically correct, idiomatic English:

(a) (A & B & C) → −T
(b) (A & B) ↔ (C ∨ −T )
(c) A → (B → C)
(d) (A → B) → C
(e) (A → B) → (C → T )

2.7 Negated Forms


It is not always obvious how to translate English expressions involving negations that operate
over or within conjunction, disjunction, implication, or equivalence. Here are a few of the
more frequently encountered negated forms:
Neither P nor Q: −(P ∨ Q).

Either −P or −Q: −P ∨ −Q

Not both P and Q: −(P & Q).

Both not P and not Q: −P & − Q.

P does not imply Q: −(P → Q).

P implies not-Q: P → −Q.

P is not equivalent to Q: −(P ↔ Q).

P is equivalent to not-Q: P ↔ −Q.



2.7.1 Exercises
A
1. Using the propositional letters defined at the beginning of this chapter, translate the
following English sentences into formulas:
(a) Neither Bob nor Alice goes to the store.
(b) Either Cheryl doesn’t go to work or Tad doesn’t stay home.
(c) If Tad stays home it does not necessarily mean that Alice goes to the store.
(d) Cheryl going to work is insufficient to guarantee that Tad stays home.
(e) Cheryl goes to work only if Tad doesn’t stay home.
2. Translate the following formulas into clear, grammatically correct, idiomatic English:
(a) −(−A ∨ B)
(b) −(B → (−C ∨ T ))
(c) −(A ↔ B)
(d) A ↔ −B
(e) −(A ∨ B)

2.8 Intensional Contexts


Intensional contexts are statements that report on how a proposition is represented to or in
a person or thing capable of representation. For instance, they might report on someone’s
state of mind or utterances. Here are typical examples:
Joe believes that Santa is real.
Joe wishes that Santa could be real.
“Santa came last night,” said Joe.
Science proves that Santa does not exist.
How should we translate these? The key is to see that the truth value of the whole statement
— for instance, whether or not it is true that Joe said that Santa came last night — is
entirely independent of the truth value of the embedded statement that Joe made. Therefore, the
connection between Joe’s statement and the claim that he made that statement is
not truth-functional. If we are given any statements like the four examples above to translate
into propositional symbols, we have no choice but to treat them as atomic propositions, and
give them simply a single propositional letter such as P .

2.9 Alternative Notation for Propositional Connectives


Virtually every elementary calculus text published anywhere in the world uses nearly the
same notation for the basic operations of the theory. Logic, however, has not achieved the
same standardization, which is a reflection of the fact that logic is still a subject that is
very much under development.
Here are some other notations that one might encounter for the symbols we use in this
text.

−P is often written as ∼P , ¬P , or P̄ (P with a bar over it). (The symbol ∼ is called the tilde.)

P → Q is frequently written as P ⊃ Q. (The symbol ⊃ is pronounced “horseshoe.”)

P & Q is often written as P ∧ Q and sometimes as P · Q, or occasionally as P Q.

P ∨ Q is often written as P + Q.

P ↔ Q is sometimes written as P ≡ Q.
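The algebraic symbols · and + are not accidental: if we write 1 for true and 0 for false, conjunction behaves like multiplication and disjunction like addition capped at 1. A quick illustration in Python (the 0/1 coding is only a convention, not the notation of this text):

```python
# With 1 as true and 0 as false, P.Q is multiplication and
# P + Q is addition capped at 1 (so that 1 + 1 still counts as "true").
for p in (0, 1):
    for q in (0, 1):
        conj = p * q
        disj = min(p + q, 1)
        print(p, q, conj, disj)
```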

2.10 Exercises on Chapter 2


B
[Under construction]
Chapter 3

Propositional Logic: Introduction to Natural Deduction

We begin by setting forth rules and procedures for what logicians call natural deduction.
Some students may well find it to be decidedly unnatural at first (though familiarity will
bring ease of use). However, this system of deductive reasoning is, in fact, a very useful
distillation of the most important deductive principles that we use in ordinary reasoning.
In fact, these principles represent more than merely a sort of anthropology of reasoning: they are
normative, in the sense that they are an attempt to codify, clarify, and make more precise
our day-to-day reasoning. (A normative rule is one that says what we should do; for
instance, “Honour thy father and mother,” or “Change your oil every 3000 km.”) The sort
of logic we set forth here by no means exhausts all possible useful logics. The justification
for the approach we take, out of the many approaches that are mathematically possible, is
pragmatic, in that extensive experience shows that what we set forth here can be applied
to a great deal of the day-to-day reasoning that we actually carry out, and can also serve
as a very useful foundation for further developments.
Logicians call this way of reasoning natural deduction because it is meant to come as
close as one reasonably can to the way people tend to reason deductively in ordinary natural
languages (such as English). There are also numerous axiomatic systems of logic that are
focused on logical economy and generality. Such axiomatic systems play an important role
in research into the basis of mathematics and logic, and in discovering new systems of logic
that might be useful. However, a natural deductive approach designed more for ease of
use and closeness to intuitive methods of reasoning is much more helpful for beginners in
logic, especially those who might want to learn logic mainly because of its numerous useful
applications in other fields such as philosophy, mathematics, computing, and science.
We shall symbolize the logical forms of arguments by sequents, which are expressions
of the general form

P1 , P2 , . . . , Pn ⊢ Q, (3.1)

where the propositions Pi are the premisses (the statements that are taken for granted in
the context of the problem), and Q is the conclusion (the proposition to be arrived at by
valid deductive steps from the premisses).
The symbol ⊢ (called the turnstile) can be read as “therefore,” “proves,” or (more
fussily) as “syntactically entails.” If P ⊢ Q we can also say that Q is derivable from P .


It amounts to the claim that the stated conclusion can be derived validly from the given
premisses by accepted rules of deduction.
Since the premisses of a sequent are taken to be held conjointly, a sequent of the form
(3.1) above is just the same thing as saying

(P1 & . . . & Pn ) ⊢ Q. (3.2)

Note carefully that in general deductive entailment goes only one way; that is, given
that we can show P ⊢ R, we cannot automatically conclude that R ⊢ P . There is an
important class of formulas, however, for which the entailment goes both ways — each
formula entails the other. In such a case, we will write a sequent of the form

P ⊣⊢ Q. (3.3)

Such a formula expresses an interderivability between P and Q. To establish interderivability
we generally have to carry out two proofs, one showing P ⊢ Q, and the other showing
Q ⊢ P , and two such proofs are often quite different in form.
It will be shown that some formulas can be derived without any premisses left over.
The sequents for such formulas can be written in the form

⊢ P, (3.4)

which can be read “Provable P ,” or “P is a theorem.” Theorems are propositions that can
be asserted without the qualification of any premisses at all. Such theorems are sometimes
referred to by the grand name of logical truths.
There is no reason we can’t concatenate sequents together to get something like this:

P, Q ⊢ R ⊢ S ⊢ T, (3.5)

in which a sequence of conclusions are derived one after the other. This reflects common
linguistic practice, where it would not be uncommon to make an argument of the form “P
and Q, therefore R; therefore S, etc.” However, in this book we will be almost entirely
concerned with sequents with only one turnstile.
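Although ⊢ is a proof-theoretic notion, every propositional sequent we will derive can also be checked semantically: no truth assignment should make all the premisses true and the conclusion false. Here is a brute-force checker sketched in Python, anticipating the truth-table semantics of Chapter 5 (the function name `entails` is mine):

```python
from itertools import product

def entails(premisses, conclusion, n_vars):
    """True iff every assignment making all premisses true also
    makes the conclusion true."""
    for vals in product([True, False], repeat=n_vars):
        if all(p(*vals) for p in premisses) and not conclusion(*vals):
            return False
    return True

# Modus Ponens, P, P -> Q |- Q, checks out:
print(entails([lambda p, q: p, lambda p, q: (not p) or q],
              lambda p, q: q, 2))  # prints True

# Entailment goes only one way: the converse Q |- P fails.
print(entails([lambda p, q: q], lambda p, q: p, 2))  # prints False
```

The second call illustrates the point just made above: from P ⊢ R we cannot automatically conclude R ⊢ P.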
The proof or derivation of a sequent will consist of a sequence of lines, beginning with
the statement of the premisses as assumptions, and ending (one hopes) with the desired
conclusion. Each line has to show four things: the dependencies (if any), the line number,
the proposition asserted on the line, and the justification for asserting that proposition.
The dependencies are the line numbers of the assumptions on which the line is based;
if the line itself is merely an assumption, the dependency is the number of the line itself. If
the line is a theorem there will be no dependencies. We shall say that a proposition depends
upon the assumptions listed to the left of it; or equivalently, that the assumptions to the
left of a proposition in a line govern that proposition.
The justification is a statement of the rule used to get the line, together with the
numbers of the previous lines (if any were needed) from which the line was derived by
means of the indicated rule.
The meaning of these terms will be clearer when we consider examples of proofs.

3.1 Basic Rules of Derivation


The following list of rules is very similar to that in Lemmon’s Beginning Logic [17], except
that we will derive MT instead of taking it as basic. Also, I have made explicit certain
assumptions that Lemmon uses without stating.
In the following, the proposition letters P , Q, etc., may denote either atomic or molec-
ular formulas.

1. Rule of Assumption (A)


We may introduce any proposition anywhere in a proof that we wish — so long as we
never forget that it is an unproven assumption!
Dependency: The assumption so introduced depends only on itself, and the line num-
ber at which it is introduced appears in the assumption column to its left.
Form of justification: A

2. Modus Ponens (MP)


From P and P → Q we can derive Q.
Dependency: The conclusion Q depends upon whatever assumptions P and P → Q
depend upon.
Form of justification: m, n MP, where m and n are the lines for the premisses.

3. Double Negation (DN)


From P we can derive − − P , or from − − P we can derive P .
Dependency: The double negation of a line in a proof always depends upon the same
assumption as the line double-negated.
Form of justification: m DN, where m is the line double-negated.

4. & -Introduction (&I)


Given P and Q separately, we can derive P & Q or Q & P .
Dependency: the conclusion depends on whatever assumptions P and Q depended
upon.
Form of justification: m, n &I, where m and n are the lines for the premisses.

5. & -Elimination (&E)


Given the conjunction P & Q, we can derive either of the conjuncts P or Q separately.
Dependency: the conclusion depends upon whatever assumptions the original con-
junction depended upon.
Form of justification: m &E, where m is the line containing the conjunction used.

6. v-Introduction (vI)
Given P , we may derive P ∨ Q or Q ∨ P , where Q is any proposition whatever.
Dependency: The conclusion depends upon whatever assumptions the premiss de-
pended upon.
Form of justification: m vI, where m is the line of the proposition “or-ed” onto.

7. v-Elimination (vE)
From a disjunction P1 ∨ . . . ∨ Pn , we may conclude Q, so long as we can find a
derivation of Q from all n of the disjuncts separately.

Dependency: the conclusion Q depends upon any assumptions that the starting dis-
junct depended upon, together with any additional assumptions that had to be in-
troduced (but which could not be again discharged) in the derivations of Q from the
individual disjuncts.
General form of justification: k, l1 , m1 , . . . , ln , mn vE, where k is the line of the start-
ing disjunction, li is the first line of the i-th subproof, and mi is the last line of the
i-th subproof. See examples below.

8. Conditional Proof (CP)


If Q follows from the assumption P , then we can conclude P → Q.
Dependency: The conclusion P → Q depends upon whatever assumptions P depended
upon, together with any additional assumptions that had to be introduced (but which
could not again be discharged) in the derivation of Q from P .
Form of justification: m, n CP, where m is the line number of the antecedent of the
conditional to be proven, and n is the line number of the consequent.

9. Reductio ad Absurdum (RAA)


If from the assumption P (or −P ) we can derive a contradiction of the form Q & − Q,
we can conclude −P (or P respectively).
Dependency: The conclusion depends upon any assumptions that governed any lines
used in the derivation of the contradiction, but which were not themselves discharged
in that derivation.
Form of justification: m, n RAA, where m is the line at which we assume the nega-
tion of the conclusion to be established, and n is the line at which we arrive at the
contradiction that negates the initial assumption.

10. Reflection (Rfl)


A very basic assumption of the sort of natural deduction we use here, which Lemmon
does not explicitly state himself but which he certainly uses, is that any proposition
may be taken to follow from itself. (If it is true that my name is Kent, then it would
be an odd sort of logic in which I could not immediately though perhaps redundantly
infer that my name is Kent.) This allows us to cite a single proposition as both the
beginning and end of certain kinds of sub-proofs in vE, RAA, and CP, and enormously
streamlines certain derivations. I’ll call this the principle of Reflection, just to give it
a name; though we need not cite it explicitly when it is used.

11. Reiteration (R)


Any line may be reiterated or repeated anywhere it is needed in a proof.
Dependency: The reiterated line has the same dependencies as the line it was copied
from.
Form of justification: m R, where m is the number of the line being reiterated.

12. Definition Introduction (DfIntro)


If P := Q, then P ⊣⊢ Q.

3.1.1 Alternative Names for the Basic Rules.


MP is also known as Detachment, Modus Ponendo Ponens, or Conditional Elimination.

&I is also sometimes called Conjunction

&E is also sometimes called Simplification.

vI also sometimes called Addition.

RAA is sometimes called Indirect Proof or Negation Introduction.

CP is sometimes called Conditional Introduction, → Intro, or ⊃ Intro.

3.2 Comments on Basic Rules


Some students may feel that it is a little fishy to be allowed to assume anything we want at
any time. The fact is that an assumption is nothing more than that — an unjustified
adoption of a proposition, perhaps merely for the sake of argument. There is nothing
logically illegitimate about doing this, so long as you remember that it was just an
assumption. Adopting an assumption is rather like borrowing money on a credit card
that has no credit limit, but on which all debts must ultimately be paid. Once we
have adopted an assumption and use it in the subsequent derivation, everything from
that point on is dependent on the assumption and cannot be asserted unconditionally.
Assumptions can be discharged by CP, RAA, and vE; this is analogous to the process
of paying back borrowed money.

MP is the pattern for all deductive inferences. It expresses the fundamental notion of
logical consequence, that one proposition may follow from another.

If you cannot think of any other way to get a letter into the proof, vI may be the best way
to do it. To some people vI seems a bit like cheating, since it allows us to introduce any
proposition whatsoever into a derivation whenever we need it. You should be able to
see that nothing illegitimate is going on, however, if you can see that the proposition
added to or “or-ed” onto P is not being asserted categorically. An example might
help: if it is true that my name is Kent, it is also safe (although perhaps odd) to say
that either my name is Kent or the moon is made of green cheese.

vE allows us to derive conclusions from disjunctions. Although the general statement of
the rule given above allows us to derive conclusions from disjunctions containing any
number of terms, usually in the propositional logic section of this course it will be
applied to disjunctions of the form P ∨ Q containing only two terms. When we do
predicate logic we will use a rule called Existential Elimination (EE) which is, in
effect, a form of vE over arbitrary numbers of disjuncts. Students often find vE to be
difficult at first, but the difficulties are mostly notational. Practice will make perfect.

RAA is one of the most powerful and useful rules. If you cannot think of any other way
of getting to the desired conclusion, introduce as an assumption the negation of the
desired conclusion and then try to get a contradiction. Sometimes even very well
educated people are surprisingly reluctant to use RAA, since they are uncomfortable
assuming for the sake of argument a proposition that they know or wish to believe is
false. This is unfortunate, since the ability to freely entertain hypotheses, no matter
how outrageous they may seem, is one of the most useful tools of thought. RAA would

not be valid in a logic that was not bivalent (i.e., a logic in which not all propositions
were either definitely true or definitely false).

Reiteration is, of course, a consequence of Reflection. Strictly speaking, Reiteration is
not needed in a system like ours because one can always cite a line that has already
been written (so long as you don’t forget to also cite the assumptions upon which
it depends) anywhere else in the proof. Despite this, beginners will sometimes find
Reiteration to be helpful, and so I have allowed it as a rule even though it is redundant.
Think of it as analogous to training wheels on a bicycle, which you will come to do
without as your skill increases.

It will become increasingly evident as the course progresses that these rules of deduction
are by no means the simplest rules upon which a valid system of propositional logic
could be based. These rules are chosen because they are fairly close to the way people
actually reason in ordinary natural languages; they are therefore designed mainly for
ease of use, not logical economy. It should also become evident that other rules could
just as easily be taken as basic. For instance, the reader might find it a good exercise
to show that we could have taken Disjunctive Syllogism (DS, defined below) as a basic
rule instead of MP, introduced → as a defined symbol (taking the rule of Implication,
also given below, as a definition instead of a derived result), and then derived MP
using DS.

3.3 Simple Examples of the Basic Rules


In this section I state and prove several sequents which demonstrate the use of the Basic
Rules in especially simple and clear ways.

1 (P & Q) → R, P, − − Q ⊢ R (This uses MP, DN, and &I.)


1 (1) (P & Q) → R A
2 (2) P A
3 (3) −−Q A
3 (4) Q 3 DN
2,3 (5) P &Q 2,4 &I
1,2,3 (6) R 1,5 MP

This derivation illustrates the basic pattern for all natural deductions. The centre
column lists the propositions that lead one after the other to the conclusion; the numbers
in brackets are the line numbers. The left-hand column lists the dependencies, that is,
the assumptions that the proposition in the centre column depends upon. The right hand
column lists the rule that justifies the proposition written in that line, together with the
lines to which the rule was applied.
Those who have some familiarity with accountancy can think of the left column as
analogous to a tabulation of debit, and the centre column as credit. Making an assumption
is like borrowing money; the assumptions we make allow us (by means of the rules) to
assert other propositions, but we must keep track of the fact that we are relying on certain
assumptions just as we must keep track of money we owe. Debits and credits must balance
in the end. If we discharge all the assumptions and still have a proposition left over, that

proposition counts as a theorem, that is, a proposition that can be held unconditionally —
like a property with no mortgage on it!
2 P & Q, P → R ⊢ R (This uses &E and MP.)
1 (1) P &Q A
2 (2) P →R A
1 (3) P 1 &E
1,2 (4) R 2,3 MP

3 P ⊢ P ∨ (Q & R) (This uses vI.)


1 (1) P A
1 (2) P ∨ (Q & R) 1 vI

4 P ∨ Q, P → R, Q → R ⊢ R (This uses vE and MP.)


1 (1) P ∨Q A
2 (2) P →R A
3 (3) Q→R A
4 (4) P A (vE)
2,4 (5) R 2,4 MP
6 (6) Q A (vE)
3,6 (7) R 3,6 MP
1,2,3 (8) R 1,4,5,6,7 vE

This is one of the simplest examples I have been able to think of that demonstrates the
use of vE. In some treatments of propositional logic, this sequent is actually called “v-
Elimination.” Here we will call it inference pattern Convergence, since variations on it will
occur often enough that it will be useful to give it a name.
The whole idea of vE is that we use it to establish the claim that from either P or Q
we can get to the desired conclusion. It is like a claim that we can get to Calgary from
Lethbridge via either Highway 2 or the Vulcan Highway. To demonstrate the claim, you
have to test both routes — because the claim is that either route works. This is why we use a
subproof for each disjunct in the starting disjunction. Each subproof starts by temporarily
assuming the disjunct to be tested; when the subproof is done, that temporary assumption
is discharged.
Notice also that in lines 4 and 6 I annotate the assumptions that start the subproofs
with an indication of the rule that I am trying to use in the subproof. This is an entirely
optional step, and whether or not you do it does not affect the validity of the proof. However,
it can help you keep track of which assumption is part of which subproof; this is especially
helpful when you use nested subproofs. (See the proof of DS below.) I have borrowed this
device from Jacquette [13].
5 P ⊢ Q → P (This demonstrates CP and R.)
1 (1) P A
2 (2) Q A (CP)
1 (3) P 1R
1 (4) Q→P 2,3 CP

As in sequent 4, I have marked the assumption in line 2 with the rule that I intend to
use in the subproof.
6 P → Q, P ⊢ Q (This demonstrates RAA.)
1 (1) P →Q A
2 (2) P A
3 (3) −Q A (RAA)
1,2 (4) Q 1,2 MP
1,2,3 (5) Q& − Q 3,4 &I
1,2 (6) Q 3,5 RAA

Of course, this sequent is merely the statement of MP. But it demonstrates in a very
clear way how to use RAA.

3.3.1 Exercises
A

1. Find natural deduction proofs for the following sequents.

(a) P, P → −Q ⊢ −Q
(b) − − P, P → Q ⊢ Q
(c) P & Q, P → R ⊢ R
(d) P, P → R, P → S ⊢ R & S
(e) − − P ⊢ P ∨ R

2. Construct proofs of the following sequents.

(a) P → (Q & R), P ⊢ R ∨ S
(b) P → Q, R → Q, P ∨ R ⊢ Q ∨ S
(c) P ∨ Q, Q → R ⊢ P ∨ R
(d) P, (P & Q) → R ⊢ Q → R
(e) P → Q, Q → R, −R ⊢ −P (Hint: try RAA)

3.4 Some Useful Sequents


In this section we derive from the Basic Rules a series of sequents that are very widely used
throughout logic, and which will be found to be very useful in our future propositional and
predicate natural deductions. Many of these sequents have common names, and you should
learn these names and acquire the ability to recognize these forms in their innumerable
variations.

3.4.1 One-Way Sequents

7 P → Q, −Q ⊢ −P Modus Tollens (MT)

1 (1) P →Q A
2 (2) −Q A
3 (3) P A (RAA)
1,3 (4) Q 1,3 MP
1,2,3 (5) Q& − Q 2,4 &I
1,2 (6) −P 3,5 RAA

MT is one of the most widely used rules in deductive logic.


Verbal example: If Bob goes to work then Alice stays home. But Alice did not stay
home. Therefore, Bob did not go to work.

8 P ∨ Q, −Q ⊢ P Disjunctive Syllogism (DS)

1 (1) P ∨Q A
2 (2) −Q A
3 (3) P A (vE)
4 (4) Q A (vE)
5 (5) −P A (RAA)
2,4 (6) Q& − Q 2,4 &I
2,4 (7) P 5,6 RAA
1,2 (8) P 1,3,3,4,7 vE

Example in Words: Either Bob or Alice paid the phone bill; however, it was not Bob.
Hence, it was Alice who paid the phone bill.
The proof of DS will repay careful study, since it illustrates the use of vE and RAA. The
object is to derive P from P ∨ Q given the condition −Q. The first disjunct is simply the
desired conclusion; from the assumption of the second disjunct we can derive a contradiction
and thereby prove P — or, indeed, anything we want. The conclusion follows by vE. In
the justification for the conclusion at line (8), 1 is the line at which the premiss P ∨ Q
was introduced; 3 is the line at which the working assumption P was introduced and also
the line at which it is discharged (since P is the very conclusion we are aiming at); 4 is the
line at which the second working assumption was introduced and 7 the line at which it was
discharged. If we were working with a disjunction with more than two terms, there would
have to be two line numbers for each working assumption introduced and then discharged;
but we shall rarely use vE on disjunctions with more than two terms.
Note the use of Reflection in line 3, in which P is both assumption and conclusion of
a sub-proof. Reflection also allows us, for example, to establish the theorem ⊢ P → P as
follows:

9 ⊢ P → P Law of Identity (Id)



1 (1) P A
(2) P →P 1,1 CP

Example in Words: Paul Martin is Prime Minister of Canada only if Paul Martin is
Prime Minister of Canada. More generally, every proposition implies itself; this theorem
could be just as well called “self-implication”.
This example illustrates the fact that the utter triviality of many of the basic rules of
logic becomes apparent when they are put into words. It is fascinating how powerful these
apparently “trivial” inference rules can be, however.
Here is another simple sequent that can sometimes come in handy:

10 −P ⊢ −(P & Q) Denial (Den)

1 (1) −P A
2 (2) P &Q A (RAA)
2 (3) P 2 &E
1,2 (4) P & −P 1,3 &I
1 (5) −(P & Q) 2,4 RAA

The reader will notice that the peculiar way in which we used RAA in the proof of
DS would apparently, in principle, allow us to deduce any proposition whatsoever from
a contradiction. This is, indeed, the case, and this phenomenon, which we have already
encountered in the last chapter, is important enough to deserve a name of its own: Ex falso
quodlibet, or more briefly Ex Falso. This Latin phrase literally means “from a contradiction
(or falsehood), whatever. . . ”, or “from a contradiction anything follows.”

11 P, −P ⊢ Q Ex Falso Quodlibet (ExF)

1 (1) P A
2 (2) −P A
3 (3) −Q A (RAA)
1,2 (4) P & −P 1,2 &I
1,2 (5) Q 3,4 RAA

Example in Words: If 2 + 2 = 4 and 2 + 2 ≠ 4 then Captain Kangaroo is the Prime
Minister of Australia.
Here is a slightly different version of ExF that helps to illustrate how the methods work:

12 P & − P ⊢ Q

There are at least two ways to prove this. A “learner’s permit” method uses Reiteration:

1 (1) P & −P A
2 (2) −Q A (RAA)
1 (3) P & −P 1R
1 (4) Q 2,3 RAA

It can be done more compactly as follows:



1 (1) P & −P A
2 (2) −Q A (RAA)
1 (3) Q 2,1 RAA

Note that line 1 is “derived” from line 2 only in the sense that, having already been
asserted, it can be asserted whenever line 2 is. But that is enough to make RAA
work. Ex Falso will seem less odd when we come to truth table semantics in Chapter 5.
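In fact, since the premisses of Ex Falso can never be jointly true, no row of a truth table can refute it. Readers with a little programming can check this mechanically; here is a Python sketch (the helper `valid` and its interface are my own, not part of our formal system) that tests a sequent in every row of its truth table:

```python
from itertools import product

def valid(premisses, conclusion, n_vars=2):
    """A sequent is (semantically) valid if its conclusion is true in
    every truth-table row in which all of its premisses are true."""
    for row in product([True, False], repeat=n_vars):
        if all(prem(*row) for prem in premisses) and not conclusion(*row):
            return False
    return True

# Ex Falso: P, -P |- Q.  No row makes both premisses true,
# so the sequent is vacuously valid.
print(valid([lambda p, q: p, lambda p, q: not p],
            lambda p, q: q))  # True
```

The same helper reports False for genuinely invalid sequents, since the exhaustive search will find a row with true premisses and a false conclusion.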

13 P → Q ` P → (P & Q) Absorption (Abs)

1 (1) P →Q A
2 (2) P A
1,2 (3) Q 1,2 MP
1,2 (4) P &Q 2,3 &I
1 (5) P → (P & Q) 2,4 CP

Example in Words: Given that Bob works only if Alice stays home, then if Bob works
then Bob works and Alice stays home.
This proof is a typical usage of CP. Note how the assumption on line 2 is discharged
when we arrive at the conclusion.

14 P → Q, Q → R ` P → R Hypothetical Syllogism (HS)

1 (1) P →Q A
2 (2) Q→R A
3 (3) P A (CP)
1,3 (4) Q 1,3 MP
1,2,3 (5) R 2,4 MP
1,2 (6) P →R 3,5 CP

Example in Words: If Bob works only if Alice stays home, and if Alice stays home only
if she is off shift, then Bob works only if Alice is off shift.
The next derivation is a good illustration of the use of vE:

15 P → Q, R → S, P ∨ R ` Q ∨ S Constructive Dilemma (CD)

1 (1) P →Q A
2 (2) R→S A
3 (3) P ∨R A
4 (4) P A (vE)
1,4 (5) Q 1,4 MP
1,4 (6) Q∨S 5 vI
7 (7) R A (vE)
2,7 (8) S 2,7 MP
2,7 (9) Q∨S 8 vI
1,2,3 (10) Q∨S 3,4,6,7,9 vE

Example in Words: If Bob works only if Alice stays home, and if Tad works only if
Cheryl stays home, and if either Bob or Tad works, then either Alice or Cheryl stays home.

The following variant on Constructive Dilemma is easily established:

16 P → Q, R → S, −Q ∨ −S ` −P ∨ −R Destructive Dilemma (DD)


The proof is left as a B exercise.

3.4.2 Interderivability Results


In this section we state and prove a number of interderivability results of the general form

P a` Q. (3.6)

It can be shown that for every interderivability of the form (3.6), there is a theorem of the
form

` P ↔ Q. (3.7)

We shall say that any wffs P and Q for which ` P ↔ Q holds are provably equivalent. In
Chapter 5 we will see that expressions are provably equivalent iff they have the same truth
tables.
In syntactic terms (that is, in terms of the use to which this fact can be put in derivations),
this implies that any occurrence of a formula of the form P may be replaced by a formula
of the form Q, and vice versa. We will call this metarule
Equivalence Introduction, and this rule, along with some other introduction rules, will be
discussed in greater detail in the next chapter. For now, we merely note the fact that because
of Equivalence Introduction these interderivabilities are very powerful tools of deduction.
We will use Equivalence Introduction in some of the proofs further on in this chapter.
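As a preview of the Chapter 5 semantics, the claim that provably equivalent wffs have the same truth tables can be tested by brute force. The following Python sketch (the helper name `same_truth_table` is mine, not part of the formal system) compares two wffs in every row:

```python
from itertools import product

def same_truth_table(f, g, n_vars):
    """Two wffs have the same truth table iff they agree in every row;
    semantically, this is what provable equivalence amounts to."""
    return all(f(*row) == g(*row)
               for row in product([True, False], repeat=n_vars))

# Sequent 35 (Equiv) below: P <-> Q and (P & Q) v (-P & -Q) agree.
print(same_truth_table(lambda p, q: p == q,
                       lambda p, q: (p and q) or (not p and not q),
                       2))  # True
```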

De Morgan’s Laws
The following rules, called de Morgan’s Laws, are very important and widely used in logic:

17 −(P & Q) a` −P ∨ −Q de Morgan’s Laws (deM)

(a) −(P & Q) ` −P ∨ −Q


1 (1) −(P & Q) A
2 (2) −(−P ∨ −Q) A (RAA)
3 (3) −P A (RAA)
3 (4) −P ∨ −Q 3 vI
2,3 (5) (−P ∨ −Q) & − (−P ∨ −Q) 2,4 &I
2 (6) P 3,5 RAA
7 (7) −Q A (RAA)
7 (8) −P ∨ −Q 7 vI
2,7 (9) (−P ∨ −Q) & − (−P ∨ −Q) 2,8 &I
2 (10) Q 7,9 RAA
2 (11) P & Q 6,10 &I

1,2 (12) (P & Q) & − (P & Q) 1,11 &I


1 (13) −P ∨ −Q 2,12 RAA

This is an adaptation of the ingenious proof given by Lemmon for his Sequent 36(a). It
shows that RAAs can be nested.

(b) −P ∨ −Q ` −(P & Q)


1 (1) −P ∨ −Q A
2 (2) −P A (vE)
3 (3) P &Q A (RAA)
3 (4) P 3 &E
2,3 (5) −P & P 2,4 &I
2 (6) −(P & Q) 3,5 RAA
7 (7) −Q A (vE)
8 (8) P &Q A (RAA)
8 (9) Q 8 &E
7,8 (10) −Q & Q 7,9 &I
7 (11) −(P & Q) 8,10 RAA
1 (12) −(P & Q) 1,2,6,7,11 vE

18 −(P ∨ Q) a` −P & − Q de Morgan (deM)

(a) −(P ∨ Q) ` −P & − Q


1 (1) −(P ∨ Q) A
2 (2) P A (RAA)
2 (3) P ∨Q 2 vI
1,2 (4) (P ∨ Q) & − (P ∨ Q) 1,3 &I
1 (5) −P 2,4 RAA
6 (6) Q A (RAA)
6 (7) P ∨Q 6 vI
1,6 (8) (P ∨ Q) & − (P ∨ Q) 1,7 &I
1 (9) −Q 6,8 RAA
1 (10) −P & − Q 5,9 &I

(b) −P & − Q ` −(P ∨ Q)


1 (1) −P & − Q A
2 (2) P ∨Q A (RAA)
1 (3) −P 1 &E
1,2 (4) Q 2,3 DS
1 (5) −Q 1 &E
1,2 (6) Q& − Q 4,5 &I
1 (7) −(P ∨ Q) 2,6 RAA

This can, of course, be proven without the use of the derived rule DS by simply, in effect,
filling in the proof of DS between lines (3) and (4).

The de Morgan Laws amount to rules for the distribution of the negation sign; they
show that when we distribute negation over a conjunction it becomes a disjunction of the
negated terms, and when we distribute negation over a disjunction it becomes a conjunction
of the negated terms.
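Since the de Morgan Laws are interderivabilities, the two sides of each should agree in every row of a truth table, and they do. A quick Python sketch (the function name is my own) confirms this:

```python
from itertools import product

def de_morgan_holds():
    """Check both de Morgan equivalences in all four truth-table rows."""
    return all(
        (not (p and q)) == ((not p) or (not q)) and
        (not (p or q)) == ((not p) and (not q))
        for p, q in product([True, False], repeat=2)
    )

print(de_morgan_holds())  # True
```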

Implication Rules
The following two rules about the implication relation → are frequently used. They can
both be cited the same way.

19 P → Q a` −P ∨ Q Implication (Imp)

(a) P → Q ` −P ∨ Q
1 (1) P →Q A
2 (2) −(−P ∨ Q) A (RAA)
2 (3) −−P & −Q 2 deM
2 (4) P & −Q 3 DN
2 (5) P 4 &E
1,2 (6) Q 1,5 MP
2 (7) −Q 4 &E
1,2 (8) Q& − Q 6,7 & I
1 (9) −P ∨ Q 2,8 RAA
The converse is left as a B exercise.
Note the use of deM and DN within the RAA subproof.

20 P → Q a` −(P & − Q) Implication (Imp)


The proofs of this version of Imp use Sequent 18, and they are left as a B exercise.
Example: Here is an example of a valid use of Imp and deM. It is justifiable by the
metarule of Equivalence Introduction, although you do not have to cite that explicitly.
1 (1) (P → Q) & (R → S) A
1 (2) (−P ∨ Q) & (R → S) 1 Imp
1 (3) −(P & − Q) & (R → S) 2 deM
This example illustrates the fact that an interderivability rule such as deM allows us to
substitute an interderivable formula for a subformula in a larger formula.
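Like deM, both Implication rules can be previewed semantically: P → Q, −P ∨ Q, and −(P & − Q) should agree in every truth-table row. Here is a Python sketch (the helper names `implies` and `imp_equivalences_hold` are mine):

```python
def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

def imp_equivalences_hold():
    """Sequents 19 and 20: all three forms agree in every row."""
    rows = [(p, q) for p in (True, False) for q in (True, False)]
    return all(
        implies(p, q) == ((not p) or q) == (not (p and not q))
        for p, q in rows
    )

print(imp_equivalences_hold())  # True
```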
Transposition is also very useful:

21 P → Q a` −Q → −P Transposition (Trans)

P → Q ` −Q → −P
1 (1) P →Q A
2 (2) −Q A (CP)
1,2 (3) −P 1,2 MT
1 (4) −Q → −P 2,3 CP

The converse is left as a B exercise.



Idempotency

22 P & P a` P Idempotency (Idem)


These easy proofs are left as A exercises.

23 P ∨ P a` P Idempotency (Idem)

P ∨ P ` P
1 (1) P ∨P A
2 (2) P A (vE)
3 (3) P A (vE)
1 (4) P 1,2,2,3,3 vE

This is the shortest possible application of vE. The converse is left as an A exercise.

Commutative Laws
The commutative laws are really just a restatement of the Basic Rules of &I and vI, which
allow us to “and” and “or” two propositions together in any order. Their proofs illustrate
that something which is “obvious” still may require a little work in order to be correctly
demonstrated from the Basic Rules.

24 P & Q a` Q & P Commutativity (Comm)

P & Q ` Q & P
1 (1) P &Q A
1 (2) P 1 &E
1 (3) Q 1 &E
1 (4) Q&P 2,3 &I

25 P ∨ Q a` Q ∨ P Commutativity (Comm)

P ∨ Q ` Q ∨ P
1 (1) P ∨ Q A
2 (2) P A (vE)
2 (3) Q ∨ P 2 vI
4 (4) Q A (vE)
4 (5) Q ∨ P 4 vI
1 (6) Q ∨ P 1,2,3,4,5 vE

Associative Laws

26 P & (Q & R) a` (P & Q) & R Associativity (Assoc)


This proof is left as a B exercise.

27 P ∨ (Q ∨ R) a` (P ∨ Q) ∨ R Associativity (Assoc)
This proof is a little tricky, since it requires nested vEs. It is left as a C exercise.
As noted previously, Associativity can be extended to conjunctions and disjunctions
of arbitrary numbers of terms, but the proof of this fact requires mathematical induction,
which is beyond (though not far beyond) the scope of this course.
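The content of the extended law is easy to illustrate even if its inductive proof lies beyond us here: however a many-term conjunction is grouped, its truth value is the same. A Python sketch (the function names are mine) compares left- and right-grouped four-term conjunctions:

```python
from functools import reduce
from itertools import product

def conj_left(values):
    """((v1 & v2) & v3) & v4: grouping from the left."""
    return reduce(lambda a, b: a and b, values)

def conj_right(values):
    """v1 & (v2 & (v3 & v4)): grouping from the right."""
    return reduce(lambda a, b: b and a, reversed(list(values)))

# The two groupings agree under every assignment to four variables.
print(all(conj_left(row) == conj_right(row)
          for row in product([True, False], repeat=4)))  # True
```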

Rules of Dominance
The Rules of Dominance may seem a little strange, but they have a very natural interpre-
tation from the semantic point of view. We will review them in Chapter 5. These rules
simplify the derivation of Equiv (below).
The proofs of the Rules of Dominance are simple but quite instructive. I give some
here, and leave the rest as exercises.

28 P & (Q ∨ −Q) a` P Dominance (Dom)

P ` P & (Q ∨ −Q)
1 (1) P A
2 (2) −(P & (Q ∨ −Q)) A (RAA)
2 (3) −P ∨ −(Q ∨ −Q) 2 deM
2 (4) −P ∨ (−Q & − −Q) 3 deM
2 (5) P → (−Q & − −Q) 4 Imp
1,2 (6) (−Q & − −Q) 1,5 MP
1 (7) P & (Q ∨ −Q) 2,6 RAA
The proof of the converse is left as an A exercise.

29 P & (Q & − Q) a` Q & − Q Dominance (Dom)

30 P ∨ (Q ∨ −Q) a` Q ∨ −Q Dominance (Dom)

31 P ∨ (Q & − Q) a` P Dominance (Dom)

P ∨ (Q & − Q) ` P
1 (1) P ∨ (Q & − Q) A
2 (2) P A (vE)
3 (3) Q& − Q A (vE)
3 (4) P 3 ExF
1 (5) P 1,2,2,3,4 vE

The converse is left as an A exercise.
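The semantic interpretation promised above is simply that a tautology such as Q ∨ −Q dominates a disjunction but drops out of a conjunction, while a contradiction such as Q & − Q does the reverse. A Python sketch (the function name is mine) checks all four Dominance laws at once:

```python
from itertools import product

def dominance_holds():
    """Sequents 28-31 in every truth-table row."""
    return all(
        (p and (q or not q)) == p and               # 28
        (p and (q and not q)) == (q and not q) and  # 29
        (p or (q or not q)) == (q or not q) and     # 30
        (p or (q and not q)) == p                   # 31
        for p, q in product([True, False], repeat=2)
    )

print(dominance_holds())  # True
```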

Distributive Laws

32 P & (Q ∨ R) a` (P & Q) ∨ (P & R) Distributive Laws (Dist)

(a) P & (Q ∨ R) ` (P & Q) ∨ (P & R)



1 (1) P & (Q ∨ R) A
2 (2) −((P & Q) ∨ (P & R)) A (RAA)
2 (3) −(P & Q) & − (P & R) 2 deM
2 (4) (−P ∨ −Q) & (−P ∨ −R) 3 deM
2 (5) (P → −Q) & (P → −R) 4 Imp
2 (6) P → −Q 5 &E
2 (7) P → −R 5 &E
1 (8) P 1 &E
1 (9) Q∨R 1 &E
1,2 (10) −Q 6,8 MP
1,2 (11) −R 7,8 MP
1,2 (12) −Q & − R 10,11 &I
1,2 (13) −(Q ∨ R) 12 deM
1,2 (14) −(Q ∨ R) & (Q ∨ R) 9,13 &I
1 (15) (P & Q) ∨ (P & R) 2,14 RAA

Note the use of RAA.

(b) (P & Q) ∨ (P & R) ` P & (Q ∨ R)


1 (1) (P & Q) ∨ (P & R) A
2 (2) P &Q A (vE)
2 (3) Q 2 &E
2 (4) P 2 &E
2 (5) Q∨R 3 vI
2 (6) P & (Q ∨ R) 4,5 &I
7 (7) P &R A (vE)
7 (8) P 7 &E
7 (9) R 7 &E
7 (10) Q∨R 9 vI
7 (11) P & (Q ∨ R) 8,10 &I
1 (12) P & (Q ∨ R) 1,2,6,7,11 vE

This is a straightforward application of vE.

33 P ∨ (Q & R) a` (P ∨ Q) & (P ∨ R) Distributive Laws (Dist)

(P ∨ Q) & (P ∨ R) ` P ∨ (Q & R)
1 (1) (P ∨ Q) & (P ∨ R) A
1 (2) (−P → Q) & (−P → R) 1 Imp
1 (3) −P → Q 2 &E
1 (4) −P → R 2 &E
5 (5) −P A (CP)
1,5 (6) Q 3,5 MP
1,5 (7) R 4,5 MP
1,5 (8) Q&R 6,7 &I
1 (9) −P → (Q & R) 5,8 CP
1 (10) P ∨ (Q & R) 9 Imp

The converse is left as a B exercise.


The Distributive Laws show how “ORs” can be distributed over “ANDs,” and vice
versa. There is an important branch of advanced logic called quantum logic in which the
seemingly “obvious” distributive laws break down. (For a nicely accessible introduction to
this intriguing subject, see [12].)
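In classical propositional logic, at any rate, distributivity can be confirmed row by row. A Python sketch (the function name is my own):

```python
from itertools import product

def distributivity_holds():
    """Sequents 32 and 33 in every truth-table row."""
    return all(
        (p and (q or r)) == ((p and q) or (p and r)) and
        (p or (q and r)) == ((p or q) and (p or r))
        for p, q, r in product([True, False], repeat=3)
    )

print(distributivity_holds())  # True
```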

Equivalence Rules
There are two Equivalence rules that are widely used.

34 P ↔ Q a` (P → Q) & (Q → P ) Equivalence (Def↔)


This follows by Definition Introduction, since P ↔ Q := (P → Q) & (Q → P ).

35 P ↔ Q a` (P & Q) ∨ (−P & − Q) Equivalence (Equiv)


I give the proof of both directions here, since they are simple but instructive.

(a) P ↔ Q ` (P & Q) ∨ (−P & − Q)


1 (1) P ↔Q A
1 (2) (P → Q) & (Q → P ) 1 Def↔
1 (3) (−P ∨ Q) & (−Q ∨ P ) 2 Imp
1 (4) (−P & (−Q ∨ P )) ∨
(Q & (−Q ∨ P )) 3 Dist (× 2)
1 (5) ((−P & − Q) ∨ (−P & P )) ∨
((Q & − Q) ∨ (Q & P )) 4 Dist (× 2)
1 (6) (−P & − Q) ∨ (Q & P ) 5 Dom
1 (7) (P & Q) ∨ (−P & − Q) 6 Comm (× 2)

Note how Dom allows us to “cancel” out contradictory expressions such as −P & P .

(b) (P & Q) ∨ (−P & − Q) ` P ↔ Q


1 (1) (P & Q) ∨ (−P & − Q) A
2 (2) (P & Q) A (vE)
2 (3) P 2 &E
2 (4) Q 2 &E
2 (5) −Q ∨ P 3 vI
2 (6) −P ∨ Q 4 vI
2 (7) Q→P 5 Imp
2 (8) P →Q 6 Imp
2 (9) (P → Q) & (Q → P ) 7,8 &I
10 (10) (−P & − Q) A (vE)
10 (11) −P 10 &E
10 (12) −Q 10 &E
10 (13) −P ∨ Q 11 vI
10 (14) −Q ∨ P 12 vI
10 (15) P →Q 13 Imp
10 (16) Q→P 14 Imp

10 (17) (P → Q) & (Q → P ) 15,16 &I


1 (18) (P → Q) & (Q → P ) 1,2,9,10,17 vE
1 (19) P ↔Q 18 Def↔

Negated Arrows
The following are easy, given the rules we have now developed.

36 −(P → Q) a` P & − Q Negation of Implication (Neg→)

−(P → Q) ` P & −Q
1 (1) −(P → Q) A
1 (2) −(−P ∨ Q) 1 Imp
1 (3) −−P & −Q 2 deM
1 (4) P & −Q 3 DN
We get the converse essentially by reversing these steps.

37 −(P ↔ Q) a` (P & − Q) ∨ (−P & Q) Negation of Equivalence (Neg↔)

−(P ↔ Q) ` (P & − Q) ∨ (−P & Q)


1 (1) −(P ↔ Q) A
1 (2) −((P → Q) & (Q → P )) 1 Def↔
1 (3) −(P → Q) ∨ −(Q → P ) 2 deM
1 (4) (P & − Q) ∨ (Q & − P ) 3 Neg→
1 (5) (P & − Q) ∨ (−P & Q) 4 Comm
Again, we obtain the converse by reversing the steps. The derivation of this sequent by
using only the Basic Rules would run to many more than five lines.

Distributive Laws for →


It is possible to prove interderivability relations that act, in effect, like distributive laws for
implication. (Thanks to Mark Smith for drawing this to my attention.)

38 P → (Q & R) a` (P → Q) & (P → R) Distributive Law for → (Dist→)


The proofs are left as B exercises.

39 P → (Q ∨ R) a` (P → Q) ∨ (P → R) Distributive Law for → (Dist→)

P → (Q ∨ R) ` (P → Q) ∨ (P → R)
1 (1) P → (Q ∨ R) A
1 (2) −P ∨ (Q ∨ R) 1 Imp
1 (3) (−P ∨ Q) ∨ R 2 Assoc
1 (4) ((−P ∨ Q) ∨ R) ∨ −P 3 vI
1 (5) (−P ∨ Q) ∨ (R ∨ −P ) 4 Assoc
1 (6) (−P ∨ Q) ∨ (−P ∨ R) 5 Comm
1 (7) (P → Q) ∨ (P → R) 6 Imp

(In the next chapter we will show that it is possible to combine steps in proofs like this.)
The converse is left as a B exercise.
Finally, here is a rule that looks almost like a distributive law for →, but isn’t quite.

40 (P & Q) → R a` P → (Q → R) Exportation (Exp)

(P & Q) → R ` P → (Q → R)
1 (1) (P & Q) → R A
2 (2) P A (CP)
3 (3) Q A (CP)
2,3 (4) P &Q 2,3 &I
1,2,3 (5) R 1,4 MP
1,2 (6) Q→R 3,5 CP
1 (7) P → (Q → R) 2,6 CP

This proof of Exp demonstrates nested CPs. The converse is left as a B exercise.

3.4.3 Form of Justification When Using Derived Rules


The general structure of a justification notation in a proof line is that we state the rule
we used to get the line, and give the numbers of the lines we used. RAA, CP, and vE are
slightly different, in that we only have to give the beginning and end lines of the subproofs;
but they are the only rules for which the usual pattern does not hold.
If you are using a Derived Rule such as MT, HS, or CD, the same basic pattern holds,
except that you need to cite the line numbers of all of the propositions that were used
as inputs to the rule. For instance, a line obtained by a rule that takes three lines as
input must mention all three in its justification.
Here is an example to make this clear:

41 P ∨ Q, −P ∨ R, −Q ∨ R ` R

1 (1) P ∨Q A
2 (2) −P ∨ R A
3 (3) −Q ∨ R A
2 (4) P →R 2 Imp
3 (5) Q→R 3 Imp
1,2,3 (6) R∨R 1,4,5 CD
1,2,3 (7) R 6 Idem

Since CD requires three premisses, three lines must be cited in the justification.

3.5 Summary of Basic Sequents and Rules


In this section, I present for convenience a concise summary of the Basic and Derived rules
and sequents we have covered in this chapter. These elementary sequents can occur as
fragments of longer derivations, and it is very important that you learn to recognize them
when they appear. One of the most important skills in logic is the ability to recognize

logical form; for instance, to spot that a very complicated-looking argument is really just a
case of a simple form such as MP, MT, or HS.
Don’t forget that in the following statement of the rules, the propositional variables P ,
Q, R, etc., could be either atomic or molecular propositions.

3.5.1 Basic Rules and Sequents


Some of the Basic Rules are really metarules, meaning that they are statements about what
sort of inferences we can make, given that certain other inferences are possible. Other Basic
Rules can be expressed as sequents.

Basic Metarules
Assumption (A):
Any proposition whatever may be assumed, so long as we do not forget that it was
an assumption.

∨ -Elimination (vE):
If P ` R and Q ` R are provable separately, then we can conclude P ∨ Q ` R.

Conditional Proof (CP):


From P, Q ` R we can conclude P ` Q → R.

Reductio ad Absurdum (RAA):


From P ` R & − R we can conclude −P .

Reiteration (R):
Any line may be repeated anywhere in a proof, so long as we carry its dependencies
with it.

Definition Introduction (DefIntro):


If a set of symbols is defined as a certain truth-functional combination of propositional
variables, then one may be substituted for the other wherever they occur.

Basic Sequents
Modus Ponens (MP):
P, P → Q ` Q

&-Introduction (&I):
P, Q ` P & Q
P, Q ` Q & P .

&-Elimination (&E):
P &Q ` P
P &Q ` Q

v-Introduction (vI):
P `P ∨ Q
P ` Q ∨ P.

Double Negation (DN):


− − P a` P
Note that DN is the only Basic Rule that is an interderivability result.

3.5.2 Summary of Derived Sequents


I summarize here the sequents stated and (mostly) derived earlier in this chapter. There is
a denumerable infinity of propositional sequents that can be derived from the Basic Rules.
The rules in this list are simply those that are used often enough to be given names.

One-Way Sequents
Modus Tollens (MT):
P → Q, −Q ` −P

Disjunctive Syllogism (DS):


P ∨ Q, −Q ` P

Denial (Den):
−P ` −(P & Q)

Ex Falso Quodlibet (ExF):


P, −P ` Q

Hypothetical Syllogism (HS):


P → Q, Q → R ` P → R

Constructive Dilemma (CD):


P → Q, R → S, P ∨ R ` Q ∨ S

Destructive Dilemma (DD):


P → Q, R → S, −Q ∨ −S ` −P ∨ −R

Convergence (Conv):
P ∨ Q, P → R, Q → R ` R

Absorption (Abs):
P → Q ` P → (P & Q)

Interderivability Sequents
de Morgan’s Laws (deM):
−(P & Q) a` −P ∨ −Q
−(P ∨ Q) a` −P & − Q.

Implication Rules (Imp):


P → Q a` −P ∨ Q
P → Q a` −(P & − Q)

Transposition (Trans):
P → Q a` −Q → −P

Idempotency (Idem):
P & P a` P
P ∨ P a` P .

Commutativity (Comm):
P & Q a` Q & P
P ∨ Q a` Q ∨ P

Associativity (Assoc):
P & (Q & R) a` (P & Q) & R
P ∨ (Q ∨ R) a` (P ∨ Q) ∨ R

Dominance (Dom):
P & (Q ∨ −Q) a` P
P & (Q & − Q) a` Q & − Q
P ∨ (Q ∨ −Q) a` Q ∨ −Q
P ∨ (Q & − Q) a` P

Distributivity (Dist):
P & (Q ∨ R) a` (P & Q) ∨ (P & R)
P ∨ (Q & R) a` (P ∨ Q) & (P ∨ R)

Definition of Equivalence (Def↔):


P ↔ Q a` (P → Q) & (Q → P ).

Equivalence (Equiv):
P ↔ Q a` (P & Q) ∨ (−P & − Q).

Negation of Implication (Neg→):


−(P → Q) a` P & − Q

Negation of Equivalence (Neg↔):


−(P ↔ Q) a` (P & − Q) ∨ (−P & Q)

Exportation (Exp):
(P & Q) → R a` P → (Q → R)

Distribution for → (Dist→):


P → (Q & R) a` (P → Q) & (P → R)

Distribution for → (Dist→):


P → (Q ∨ R) a` (P → Q) ∨ (P → R)

3.6 Fallacy Alert!


A fallacy, loosely speaking, is simply some sort of error in reasoning. We fallible humans
commit certain characteristic errors so often and so spontaneously that it has been found
useful to give them names; they are almost like optical illusions of the mind. They are com-
mon precisely because, like optical illusions, they look believable if they are not examined
closely. Many of these fallacies belong to informal logic or inductive logic, and they are

discussed in books on rhetoric and critical thinking. (See [6, 16, 8] for good introductions
to critical thinking and fallacy theory.) There are a few fallacies of deductive reasoning that
occur rather often as well, and I will mention a few of them here. In Chapter 5 we will learn
systematic techniques for diagnosing fallacious deductive reasoning.

3.6.1 A Common Misuse of vE


It is very common for students to attempt the following move:

1 (1) P ∨Q A
1 (2) P 1 vE (???)

This is a glaring error. You can convince yourself that this pattern of reasoning is invalid
by considering the following example:

The number 3 is either even or odd. (This is true, of course.)


Therefore the number 3 is even. (False!)

The rule of vE does not work like &E!


This example also illustrates the fact that we can check the validity of a sequent by
trying to find specific examples of it in which obviously true premisses lead to obviously
false conclusions. If you can do this, the sequent must be invalid.
If you cannot find an example that invalidates a sequent, that doesn’t mean that the
sequent is valid! It just means that you can’t think of an example to show that it is not. In
Chapter 5 we learn general and reliable methods for checking the validity of propositional
logic sequents and arguments.
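For propositional sequents the hunt for counterexamples can in fact be made exhaustive: simply try every row of the truth table. The following Python sketch (the helper `counterexample` is my own) returns the first refuting row, if there is one:

```python
from itertools import product

def counterexample(premisses, conclusion, n_vars):
    """Return the first truth-value row in which every premiss is true
    but the conclusion is false; None if no row refutes the sequent."""
    for row in product([True, False], repeat=n_vars):
        if all(prem(*row) for prem in premisses) and not conclusion(*row):
            return row
    return None

# The misuse of vE above: P v Q |- P is refuted when P is false, Q true.
print(counterexample([lambda p, q: p or q],
                     lambda p, q: p, 2))  # (False, True)
```

Because this search covers every row, a result of None really does establish validity here; this is essentially the systematic method developed in Chapter 5.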

3.6.2 Affirming the Consequent


Another very common fallacy is Affirming the Consequent:

P → Q, Q ` P (????) (3.8)

To see that this is invalid, consider the following example:

If today is Tuesday, we have a logic class.


We are having a logic class.
Therefore, today must be Tuesday.

Even if the premisses are correct, the conclusion could still be false, simply because logic
might be held on other days of the week besides Tuesday. This error is seductive, however,
since it looks so much like the valid rule MP.

3.6.3 Denying the Antecedent


This is an error which is very similar to Affirming the Consequent, except that it is more
like a misuse of MT:

P → Q, −P ` −Q. (3.9)

A simple example demonstrates invalidity:



If today is Tuesday, we have a logic class.


Today is not Tuesday.
Therefore we don’t have logic.

Again, the problem is just that we might have logic on other days of the week.
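Both fallacies are refuted by the same truth-table row, the one in which P is false and Q true (it is not Tuesday, but we have a logic class anyway). A Python sketch of the search (the helper names are mine):

```python
from itertools import product

def implies(a, b):
    """Material implication: false only when a is true and b is false."""
    return (not a) or b

def rows_refuting(premisses, conclusion):
    """All (P, Q) rows where the premisses hold but the conclusion fails."""
    return [(p, q) for p, q in product([True, False], repeat=2)
            if all(prem(p, q) for prem in premisses)
            and not conclusion(p, q)]

# Affirming the Consequent: P -> Q, Q |- P
print(rows_refuting([lambda p, q: implies(p, q), lambda p, q: q],
                    lambda p, q: p))       # [(False, True)]

# Denying the Antecedent: P -> Q, -P |- -Q
print(rows_refuting([lambda p, q: implies(p, q), lambda p, q: not p],
                    lambda p, q: not q))   # [(False, True)]
```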

3.6.4 Avoid Circularity!


From this point onward the student can take any of the derived sequents above and use
them in the proof of further sequents. But you should not forget that these were proven in
a certain order. For instance, de Morgan’s Laws are proven first here, and then used in the
proof of the Implication rules. It is possible to prove Implication using only Lemmon’s Basic
Rules as well. (See Sequent 35 in Lemmon.) It would also be possible to use Implication in
the derivation of the de Morgan Laws — but you must not do that if you used de Morgan
to get Implication! That would be a circular procedure, or “begging the question,” as it
is sometimes called in informal logic. A circular proof is no proof at all, since it merely
amounts (though perhaps in a roundabout way) to a restatement of its assumptions.
It would also be quite legitimate to prove Implication first from the Basic Rules and
then use it in the derivation of de Morgan. (You might want to try this as an exercise.) But
once you have chosen an order of proof you should stick to it, in order to avoid circularity.

3.7 Exercises on Chapter 3


A
1. Find natural deduction proofs for the following named sequents:

(a) P & P ` P (Idem)


(b) P ` P & P (Idem)
(c) P ` P ∨ P (Idem)
(d) P & (Q ∨ −Q) ` P (Dom)
(e) P ` P ∨ (Q & − Q) (Dom)
(f) Q & − Q ` P & (Q & − Q) (Dom)
(g) P & (Q & − Q) ` Q & − Q (Dom)
(h) Q ∨ −Q ` P ∨ (Q ∨ −Q) (Dom)

2. Find natural deduction proofs for the following miscellaneous sequents:

(a) P, Q ` P → Q
(b) Prove the following without the use of R. Your proof should be only three lines
long.
P `Q→P
(c) P → (Q ∨ R), −(Q ∨ R) ` −P
(d) P ∨ −Q, Q ` P
(e) (Q → R), −(Q → R) ` R
(f) (M ∨ N ) → Q ` (M ∨ N ) → (Q & (M ∨ N ))
(g) P → (Q → R), (Q → R) → T ` P → T
(h) −(P & − Q) ` −P ∨ Q
(i) −(P ∨ −(R → S)) ` −P & (R → S)
(j) (X ∨ Y ) → Q ` (−X & − Y ) ∨ Q
(k) −M ∨ N, M ` N
(l) −(X → Y ) → −P, P ` (X → Y )
(m) P → (Q ∨ Q) ` −(P & − Q)
(n) P & (Q → R) ` (P & − Q) ∨ (P & R)
(o) X → (P ↔ Q), (P & − Q) ∨ (−P & Q) ` −X

B
1. Find natural deduction proofs for the following named sequents:

(a) P → Q, R → S, −Q ∨ −S ` −P ∨ −R (DD)
(b) −P ∨ Q ` P → Q (Imp) (Hint: use DS)
(c) P → Q ` −(P & − Q) (Imp)

(d) −(P & − Q) ` P → Q (Imp)


(e) −Q → −P ` P → Q (Trans)
(f) P & (Q & R) ` (P & Q) & R (Assoc)
(g) P ∨ (Q & R) ` (P ∨ Q) & (P ∨ R) (Dist)
(h) (P ∨ Q) & (P ∨ R) ` P ∨ (Q & R) (Dist)
This sequent is proven in the text using CP. Try it using RAA.
(i) P → (Q → R) ` (P & Q) → R (Exp)
(j) P → (Q & R) ` (P → Q) & (P → R) (Dist→)
(k) (P → Q) & (P → R) ` P → (Q & R) (Dist→)
(l) P ∨ (Q ∨ −Q) ` Q ∨ −Q (Dom)
(m) −(P & − Q) ` P → Q (Neg→)
(n) (P → Q) ∨ (P → R) ` P → (Q ∨ R) (Dist→)

2. The following questions require you to find natural deduction proofs for a variety of
sequents:

(a) P, Q ` P ↔ Q
(b) P ↔ Q, Q ↔ R ` P ↔ R
(c) (P → R), (Q → R) ` (P & Q) → R
(d) −P ∨ R, −(Q & − S), −P → Q, −R ` S
(e) R, R ↔ P, −Q → −P ` Q
(f) P → Q, Q → R, R → S, −S ∨ −R ` −P
(g) P ↔ S, P ∨ Q, P ∨ R, −Q ∨ −R ` S
(h) P → (Q ∨ R), Q → S, R → T, −(S ∨ T ) ` −P
(i) P → (Q ∨ R), (P → Q) → X, (P → R) → Y ` X ∨ Y
(j) P → R, −Q → S, −(R ∨ S) ` −(Q → P )

C
1. MT is a basic pattern of inference which is very important in scientific reasoning.
Suppose we have a new scientific theory, and suppose this entire theory is expressed as
a single proposition T . Suppose also the theory deductively entails a certain prediction
P . Finally, suppose that (to our great disappointment) experimentation shows that P
is false. What does MT imply about T ? (Philosophers of science refer to this process
as falsification.) Does this mean that we have to junk all of T and start all over again?
If not, why not?

2. Find a natural deduction proof for the following sequent:

(a) P ∨ (Q ∨ R) ` (P ∨ Q) ∨ R

3. Suppose we had a system of Natural Deduction that was exactly the same as the
one we use in this book, except that, in the place of Modus Ponens (MP), it used
Disjunctive Syllogism (DS) (P ∨ Q, −Q ` P ) as a Basic Deductive Rule.

(a) Show that in this new system we can derive MP using only DS and other Basic
Rules, so long as we define P → Q as −P ∨ Q. (If we don’t have MP as a Basic
Rule, we have to treat → as a defined symbol, since in our usual system → is
defined implicitly by the way it behaves in MP.)
(b) Now suppose, instead, that we define P → Q as −(P & − Q), as is done in
numerous logic texts. (See, e.g., Copi and Cohen [6].) This definition captures
the intuition that to say P implies Q is to say that we never simultaneously have
P and −Q. Show that in this system we can, as well, derive MP using Basic
Rules. Hint: try RAA.
Chapter 4

Techniques and Applications of Propositional Natural Deduction

In this chapter we introduce several techniques that greatly simplify propositional natural
deductions, and indicate some of its possible applications.
Don’t forget that in this chapter the propositional variables P , Q, R, etc., can be either
atomic or molecular unless stated otherwise.

4.1 Recognition of Logical Form


One of the most important skills to develop in symbolic logic is the ability to recognize
logical form. Suppose, for instance, you were asked to deduce a conclusion from the following
seemingly-complicated premisses: ((R ↔ S) & (S ↔ T )) ∨ (R → −S) and R & S. A little
thought will convince you that this is merely an instance of DS, so that an immediate
conclusion is (R ↔ S) & (S ↔ T ). Formally, we could write this out as follows:

1 (1) ((R ↔ S) & (S ↔ T )) ∨ (R → −S) A


2 (2) R&S A
2 (3) −(R → −S) 2 Neg→
1,2 (4) (R ↔ S) & (S ↔ T ) 1,3 DS

Something that threatened to be forbiddingly complicated in fact turns out to be very
simple.
Try to “chunk” complicated formulas together in your mind and see if they start to
look like instances of simpler rules before you start blindly calculating.

4.1.1 Exercises
A
1. Identify the simple rule used in each of the following deductions:

(a) (X ∨ Y ) → (Y ∨ Z), (X ∨ Y ) ` (Y ∨ Z)
(b) ((X ∨ Y ) → Q) → (P & Q), −(P & Q) ` −(((X ∨ Y ) → Q) → (P & Q))
(c) P ` ((M ↔ N ) ∨ (M ↔ (R & S))) ∨ P


(d) P → (Q & X), (Q & X) → T ` P → T


(e) ((P → Q) → R) → (P ↔ (Q & R)) ` −((P → Q) → R) ∨ (P ↔ (Q & R))

4.2 Combining Steps


As you become more experienced in doing proofs you will naturally want to combine steps
when they seem to be obvious. For instance, you might do a fragment of a larger proof like
this:

1,2 (16) −(−P ∨ Q) (from previous lines)


1,2 (17) P & −Q 16 deM,DN

This is perfectly okay, so long as you don’t get ahead of yourself and make a mistake.
Just as you can use more than one rule to get a given line, you can use the same rule
more than once. For instance, you could use Imp twice on the same line as follows:

1,2 (1) P → (P → Q) (from previous lines)


1,2 (2) −P ∨ (−P ∨ Q) 1 Imp (×2)

If you have the slightest doubt about the steps you are taking, it is better painstakingly
to write them all out one by one and thereby reduce the chance that you will make a
mistake. Remember, too, that on tests and assignments the onus is on you to convince your
instructor that you understand how the proof works!

4.2.1 Exercises
A

1. State the justifications for each of the following steps:

(a) (n) P → (Q → P ) TI
(n+1) −P ∨ (−Q ∨ P ) ?
(b) (n) P → (Q & R)
(n+1) (−P ∨ Q) & (−P ∨ R) ?
(c) (n) −P → (Q & R)
(n+1) P ∨ (Q & R) ?
(d) (n) (P → Q) & (P → R)
(n+1) −P ∨ (Q & R) ?
(e) (n) −M → N
(n+1) −N
(n+2) M ?
(f) (n) (P ∨ −P ) ∨ (Q ∨ −Q)
(n+1) (−P ∨ Q) ∨ (−Q ∨ P ) ?

4.3 Theorems
In the last chapter we introduced the concept of the theorem, meaning a wff that can be
proven with no assumptions left over. If P is a theorem, we assert

` P, (4.1)

(read as “Provable P ” or “P is a theorem”), which means that P can stand on its own,
without the support of any premisses. In the next chapter we will see that the theorems of
propositional logic are just the same as its tautologies, which are the formulas that are T
in all lines of their truth tables.
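That semantic characterization suggests a mechanical test: a wff is a tautology just in case it comes out true under every assignment of truth values to its variables. Here is a Python sketch (the helper name `tautology` is mine):

```python
from itertools import product

def tautology(wff, n_vars):
    """True iff the wff is T in every line of its truth table."""
    return all(wff(*row) for row in product([True, False], repeat=n_vars))

print(tautology(lambda p: p or not p, 1))    # ExM: True
print(tautology(lambda p: (not p) or p, 1))  # Id, as -P v P: True
print(tautology(lambda p, q: p or q, 2))     # not a theorem: False
```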
For now, I will give a few simple examples of theorems, to demonstrate typical tech-
niques that can be used to prove them.

42 ` P → P
Here are two proofs of this elementary theorem. The first uses CP and R:

1 (1) P A (CP)
1 (2) P 1R
(3) P →P 1,2 CP

We can also do this without using R:

1 (1) P A
(2) P →P 1,1 CP

This is no doubt the shortest possible application of CP!


Since every proof begins by making assumptions, if we want to derive a result that depends
on no premisses we must have some way of discharging assumptions, and that means we are
going to use either CP or RAA. (The rule vE also discharges assumptions, but only in a sub-proof.)
Here is an important theorem that we can prove using RAA:

43 ` P ∨ −P Law of Excluded Middle (ExM)

1 (1) −(P ∨ −P ) A (RAA)


1 (2) −P & − −P 1 deM
(3) (P ∨ −P ) 1,2 RAA

This illustrates the power of RAA: if you can’t think of any other way to get a result,
assume its negation and try to derive a contradiction.
Here’s another way to get ExM:

1 (1) P A (CP)
(2) P →P 1,1 CP
(3) P ∨ −P 2 Imp,Comm

This theorem is called the Law of Excluded Middle because it says that every proposition
is either true or false, with nothing in between. This is sometimes expressed in Latin as
tertium non datur — “a third is not given”.

Here is another example: Show that any given proposition P is implied by any other
proposition Q. I am going to give five ways of proving this theorem, in order to demonstrate
the variety of proof strategies that can be used for theorems.

44 ` P → (Q → P )

1 (1) P A (CP)
2 (2) Q A (CP)
1 (3) P 1R
1 (4) Q→P 2,3 CP
(5) P → (Q → P ) 1,4 CP

This is a straightforward use of CP. The use of R in line 3 is redundant; see if you can do
it without using R.

1 (1) P A (CP)
1 (2) −Q ∨ P 1 vI
1 (3) Q→P 2 Imp
(4) P → (Q → P ) 1,3 CP

The above proof reminds us of how we can use vI to bring new propositional letters into a
proof.

1 (1) −(Q → P ) A (CP)


1 (2) Q& − P 1 Neg→
1 (3) −P 2 &E
(4) −(Q → P ) → −P 1,3 CP
(5) P → (Q → P ) 4 Trans

The above proof illustrates the idea that if we want to prove a conditional, we might try
to prove the negation of its consequent first and then use Transposition to get the desired
conditional.

1 (1) −(P → (Q → P )) A (RAA)


1 (2) P & − (Q → P ) 1 Neg→
1 (3) P 2 &E
1 (4) −(Q → P ) 2 &E
1 (5) Q& − P 4 Neg→
1 (6) −P 5 &E
1 (7) P & −P 3,6 &I
(8) P → (Q → P ) 1,7 RAA

The above proof is a bit longer than the others, but it illustrates the fact that if all else fails
we can use RAA; that is, assume the negation of what you want to prove, and then derive
a contradiction by brute force.

(1) P ∨ −P ExM
(2) (P ∨ −P ) ∨ −Q 1 vI
(3) −P ∨ (P ∨ −Q) 2 Comm,Assoc
(4) P → (P ∨ −Q) 3 Imp
(5) P → (Q → P ) 4 Comm,Imp

In line 1 of the above proof we introduce a theorem, the Law of Excluded Middle. Theorem
Introduction is discussed further below; the basic idea is that a theorem can be stated at
any point in a proof without justification. We then expand line 1 using vI, rearrange the
results, and apply Imp to get what we wanted. This is the deepest proof, since it builds
directly from ExM. In fact, it would be possible to derive all theorems, and thereby all of
deductive logic, directly from ExM if we wanted to.

4.3.1 The “Laws” of Thought


There is no limit to the number of propositional theorems that could be proved. However,
three simple theorems are important enough that they have come to be known as the “laws
of thought.”

The Law of Excluded Middle (ExM):


` (P ∨ −P )

The Law of Identity (Id):


` (P → P )

The Law of Non-Contradiction (NC):


` −(P & − P ).

We have seen the proofs of the first two; the proof of NC is left as an A exercise.
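Because each of these laws involves only a single propositional letter, they can also be checked semantically by testing both possible truth values of P. Here is a sketch in Python (the helper names `implies` and `is_tautology` are our own, not standard logical notation):

```python
# Check that the three "laws of thought" are tautologies by testing
# them on both possible truth values of P.
def implies(p, q):
    return (not p) or q

def is_tautology(law):
    """law is a one-place truth function; a tautology is true on every input."""
    return all(law(p) for p in (True, False))

excluded_middle   = lambda p: p or (not p)         # P v -P
identity          = lambda p: implies(p, p)        # P -> P
non_contradiction = lambda p: not (p and (not p))  # -(P & -P)

assert is_tautology(excluded_middle)
assert is_tautology(identity)
assert is_tautology(non_contradiction)
print("all three laws check out")
```

Of course, a semantic check like this is not a proof in the natural deduction system; the connection between the two is taken up in Chapter 5.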

4.4 Substitution
Under certain circumstances, it is valid to substitute one wff for the whole or part of another
wff. There are two kinds of substitution, uniform and nonuniform.

4.4.1 Uniform Substitution


We will say that a wff P is substituted uniformly for a wff Q if every instance of Q in a
larger formula is replaced by P . Example: Consider the formula (P & R) → (Q & R). A
uniform substitution of (M ∨ N ) for R would yield (P & (M ∨ N )) → (Q & (M ∨ N )).

4.4.2 Nonuniform Substitution


We will say that a wff P is substituted nonuniformly for a wff Q if some but not all instances
of Q in a larger formula are replaced by P . Example: Consider again (P & R) → (Q & R).
A nonuniform substitution of (M ∨ N ) for R would yield either (P & (M ∨ N )) → (Q & R)
or else (P & R) → (Q & (M ∨ N )).
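The two kinds of substitution can be illustrated mechanically. The following is a naive string-based sketch; a real implementation would substitute on the parse tree of the wff rather than on raw text, but plain replacement is safe here because R occurs only as a whole subformula:

```python
# Illustrate uniform vs. nonuniform substitution on the example above.
formula = "(P & R) -> (Q & R)"

uniform = formula.replace("R", "(M v N)")        # every instance of R
nonuniform = formula.replace("R", "(M v N)", 1)  # only the first instance

print(uniform)     # (P & (M v N)) -> (Q & (M v N))
print(nonuniform)  # (P & (M v N)) -> (Q & R)
```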

4.5 Introduction Rules


Introduction Rules are metarules that allow us to introduce previously accepted results into
a derivation in order to simplify it. In this section we state without formal justification
(since that would be beyond the scope of this course) a number of Introduction Rules that
will greatly facilitate proofs of sequents and theorems. We have already used all of these
rules except Theorem Introduction in Chapter 3.

1. Definition Introduction (DefIntro): Any defined symbol can be replaced by its defini-
tion, or vice versa.
So far, the only symbol we are taking as defined in terms of the basic set of symbols is
↔. But there is no reason that we could not define other symbols as combinations of
basic symbols or other previously defined symbols. For instance, suppose we stipulate
that P ∗ Q := (P ∨ Q) → (P & Q). Then any occurrence of P ∗ Q could be substituted
for by its definition, and any wff that was interderivable with its definition could be
substituted for by P ∗ Q. Any such usage can be justified by citing the line number
that was operated on and then putting down either DefIntro or Def∗.

2. Sequent Introduction (SI): Any uniform substitution instance of a proven sequent can
be introduced in a proof.
We have already used SI many times in Chapter 3, when we applied sequents such as
MT, HS, DS, and so forth in the proofs of further sequents. Do not forget that the
substitution has to be uniform; only uniform substitution instances of sequents can
be guaranteed to be valid. If the sequent you are using has a special name (such as
MP or HS) then just cite the name in the justification column; otherwise, write SI
and give a reference to where the sequent was previously proven (such as in an earlier
exercise).

3. Equivalence Introduction (EqI): Given an equivalence theorem ` P ↔ Q, we may


substitute P for Q (or the other way around) in any occurrences of Q in a valid
sequent; this kind of substitution need not be done uniformly.
If the equivalence rule you are using has a special name (such as deM or Dist) then just
cite the name in the justification column; otherwise, write EqI and give a reference to
where the rule was previously proven (such as in an earlier exercise).

4. Theorem Introduction (TI): Any uniform substitution instance of a theorem may be


written as a line anywhere in a proof. No assumptions will be listed in the assumption
column (because theorems do not depend on assumptions) and the notation TI should
be put in the justification column (together with the name of the theorem if it has one
in brackets).
A theorem is a proposition that we have already shown can be proven with no as-
sumptions left over. Hence, we could introduce it, or a substitution instance of it, at
any point in a proof; in effect, doing so is just shorthand for a statement of the entire
proof of the theorem. If the theorem you are using has a special name (such as ExM
or Id) then just cite the name in the justification column; otherwise, write TI and
give a reference to where the theorem was previously proven (such as in an earlier
exercise).

Example of TI:

45 P → Q, −P → R ` Q ∨ R

1 (1) P →Q A
2 (2) −P → R A
(3) P ∨ −P ExM
1,2 (4) Q∨R 1,2,3 CD

4.5.1 Exercises
A
1. Prove the Law of Non-Contradiction, ` −(P & − P ).

B
1. Prove the following theorems:

(a) ` (P & − P ) → Q
(b) ` P → (−P → Q)
(c) ` P ∨ (P → Q)
(d) ` (P → Q) ∨ (Q → P )
(e) ` (P → Q) ∨ (Q → R)
(f) ` (−P → P ) → P

C
1. Prove ExM using only Basic Rules.

4.6 The Deduction Metatheorem


The following metatheorem is one of the most important results in deductive logic. It
cannot be proven in all generality with the tools we use in this course. However, we will
work out proofs for a few simple cases, and they should make apparent the plausibility of
this centrally important theorem.

Deduction Theorem:
The sequent P1 , P2 , . . . , Pn ` Q is valid iff ` (P1 & P2 & . . . & Pn ) → Q.

Example: Given P ` R, show ` P → R; and given ` P → R, show P ` R. The proofs


of these results are very simple, but they nicely illustrate the application of SI and TI.

(a) Given P ` R, show ` P → R.


1 (1) P A (CP)
1 (2) R 1 SI
(3) P →R 1,2 CP

In line 2 we apply the given sequent P ` R. It doesn’t have a special name (like MT, HS,
etc.), but we can substitute it into the proof as required. Note that we have to cite line 1,
since this is the line that the introduced sequent is applied to.

(b) Given ` P → R, show P ` R.


1 (1) P A
(2) P →R TI
1 (3) R 1,2 MP

On the assumption that P → R is a theorem, we can substitute it in wherever we want;


and we do not have to cite any previous assumptions or lines since a theorem holds without
qualification.
The Deduction Theorem gives us a better understanding of a very useful result that we
used in the previous chapter:

P a` Q if and only if ` P ↔ Q.

Here is an informal argument to establish this result:

(a) Given P a` Q, show ` P ↔ Q.


The expression P a` Q simply means P ` Q and Q ` P . By the Deduction Theorem,
this means that we have ` P → Q and ` Q → P . We can then prove ` P ↔ Q as
follows:

(1) P →Q TI
(2) Q→P TI
(3) (P → Q) & (Q → P ) &I
(4) P ↔Q 3 Def↔

(b) Given ` P ↔ Q, show P a` Q.


From ` P ↔ Q we get ` P → Q and ` Q → P separately; by the Deduction Theorem
this gives us P ` Q and Q ` P , which is just the same as P a` Q.

The main use of the Deduction Theorem, for the purposes of this introductory course,
is simply the fact that whenever we have an interderivability result, we have an equivalence;
and such equivalences can be used freely in transforming expressions into equivalent forms
which may help us get to where we want to go in a deduction.
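The plausibility of the Deduction Theorem can also be spot-checked semantically by brute force. Here is a sketch for one concrete two-premiss instance, the valid sequent P, P → Q ` Q:

```python
# For the sequent P, P -> Q |- Q, check that (i) no valuation makes
# both premisses true and the conclusion false, and (ii) the
# conditional (P & (P -> Q)) -> Q is a tautology.  As the Deduction
# Theorem predicts, the two checks agree.
from itertools import product

def implies(p, q):
    return (not p) or q

valuations = list(product((True, False), repeat=2))

sequent_valid = all(q for p, q in valuations if p and implies(p, q))
conditional_tautology = all(implies(p and implies(p, q), q)
                            for p, q in valuations)

assert sequent_valid and conditional_tautology
```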

4.6.1 Exercises
B

1. Given P, Q ` R, show ` (P & Q) → R.

2. Given ` (P & Q) → R, show P, Q ` R. (These two questions in combination show the


Deduction Theorem for the case of two premisses.)

4.6.2 A Turnstile is not an Arrow!


There is a natural tendency to think of P → Q as saying just the same thing as P ` Q.
This is a mistake, one which can sometimes be glossed over in an elementary course such
as this, but which in a course involving metatheory would be serious. The symbol ` is a
metalogical symbol: it expresses a claim about what can be derived. By contrast, → is a logical
symbol, which (as far as propositional logic is concerned) denotes a relation between truth
values.
Now, the Deduction Theorem assures us that given P ` Q, it is a theorem that P → Q.
But P ` Q and P → Q are different claims. The first says that Q can be derived from P
by the rules of natural deduction; and the Deduction Theorem says that this is the same
thing as saying that P → Q can be derived with no assumptions left over. But P → Q
only says that given P , Q is true. It so happens that elementary propositional logic has the
property of completeness, meaning that every logically true statement can be proven within
the system. Because of completeness, we can sometimes in propositional logic gloss over
the fine distinction between “true” and “provable”. But (as Gödel showed) there are logical
systems that are incomplete; therefore, once we move beyond elementary propositional logic
the distinctions between what can be proven from a given set of rules, and what is true,
have to be kept carefully in mind.

4.7 Boolean Algebra


We can greatly simplify many deductions that involve the use of equivalence relations by
means of Boolean algebra. For our purposes, we will mean by “Boolean algebra” the
manipulation of propositional formulas in a manner very similar to ordinary algebraic ma-
nipulations.
If all we are doing is trying to go from one expression to another which is equivalent
to it, we can write our calculations in a much simpler and more direct way that does not
require the full apparatus of natural deduction. This will seem very familiar to those who
have some experience with the manipulation of algebraic identities in ordinary mathematics.
Here is a simple example:

−(P → Q) ↔ −(−P ∨ Q) Imp


↔ −−P & −Q deM
↔ P & −Q DN

This establishes the theorem ` −(P → Q) ↔ (P & − Q).


As with regular natural deduction, we can combine steps:
−(P → Q) ↔ −(−P ∨ Q) Imp
↔ P & −Q deM,DN
Again, be careful that you don’t get ahead of yourself. If you aren’t quite sure of what you
are doing, don’t combine steps.
We can substitute equivalent expressions for subformulas within a larger formula either
uniformly or nonuniformly:
(−P ∨ Q) & (P → R) ↔ (P → Q) & (P → R) Imp
↔ P → (Q & R) Dist→
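Any equivalence derived by Boolean algebra can be double-checked against the truth tables. Here is a sketch of such a check, in Python, for the theorem established above:

```python
# Confirm -(P -> Q) <-> (P & -Q) holds on all four valuations.
from itertools import product

def implies(p, q):
    return (not p) or q

for p, q in product((True, False), repeat=2):
    assert (not implies(p, q)) == (p and (not q))
print("equivalence holds on every line of the table")
```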

4.7.1 Exercises
B
1. Use Boolean algebra to demonstrate the following equivalence theorems:

(a) ` (P ↔ Q) ↔ (−(P → −Q) ∨ −(−Q → P ))


(b) ` ((Q ∨ R) → P ) ↔ ((Q → P ) & (R → P ))
(c) ` (P → (Q ∨ R)) ↔ ((P → Q) ∨ (P → R))
(d) ` (P → (Q & R)) ↔ ((P → Q) & (P → R))
(e) ` (P ∨ (Q → R)) ↔ ((Q → P ) ∨ (−P → R))

4.8 Filling in the Gaps in an Argument


It is unavoidable that most of this text has to be devoted to explaining the basic techniques
of symbolic logic, and we can’t devote as much space as we should to its possible applications.
This could lead an attentive student to wonder what is the point of going through all the
pain of finding a deductive proof of a conclusion that has already been worked out —
especially when the semantic techniques of the next two chapters allow us to check the
validity of an argument quickly and efficiently without having to construct a proof at all.
The fact remains that when we use natural deduction in “real life” (and despite all
appearances to the contrary, there actually are some people who attempt to reason their
way through real-life problems) we do not have a conclusion worked out in advance. Rather,
we are given some evidence (which we express in premisses) and we try to work out what
that evidence implies. We may not know the conclusion in advance any more than we know
the sum of a column of figures before we add them up. Often, we do have some idea of
what requirements the conclusion should satisfy; for instance, a detective trying to solve a
murder mystery wants to find out who dunnit. Or else we may have a fairly definite idea of
the conclusion, but we may not have all of the premisses required to establish it deductively;
we might have to work backwards to fill in the gaps in the argument.
In this section I will introduce a few exercises in which we have to work out natural
deduction proofs in which part of the argument is missing.
Example: Suppose we know the following: either Bob or Alice paid the phone bill; if
Bob paid the bill he would have the receipt; if Alice paid the bill then she would have the
receipt; Bob does not have the receipt. Who paid the phone bill?
It is pretty obvious that Alice got stuck with the bill. But let’s go through the symbol-
ization and proof to illustrate the general method of attack.
First, let’s define our symbols:

A := Alice pays the bill.


B := Bob pays the bill.
R := Bob has the receipt.
S := Alice has the receipt.

We can put down all of the premisses we have on hand, and try a few steps to see where
they lead:

1 (1) B ∨A A
2 (2) B→R A
3 (3) A→S A
4 (4) −R A
2,4 (5) −B 2,4 MT
1,2,4 (6) A 1,5 DS

We quickly come to the conclusion that it was Alice who paid the bill, and we stop there
because this answers the question. We have therefore established the sequent B ∨ A, B →
R, A → S, −R ` A. Remember, in this sort of problem we usually don’t know in advance
what sequent we are going to be proving; rather, we explore a set of premisses to see where
they will lead. Note, also, that one of the premisses (line 3) turned out to be irrelevant; and
that, of course, is just like real life, where we often have to sort out the useful information
from the noise.
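Exploratory deductions of this sort can also be simulated by brute force: enumerate every assignment of truth values, keep those that satisfy the premisses, and see what all the survivors agree on. A sketch, using the symbolization above:

```python
# Enumerate all assignments to A, B, R, S and keep those satisfying
# the premisses: B v A, B -> R, A -> S, -R.
from itertools import product

def implies(p, q):
    return (not p) or q

models = [(a, b, r, s)
          for a, b, r, s in product((True, False), repeat=4)
          if (b or a) and implies(b, r) and implies(a, s) and (not r)]

# Every surviving assignment makes A true: Alice paid the bill.
assert models and all(a for a, b, r, s in models)
```

Notice that the brute-force search, like the deduction, makes no use of the premiss A → S; the irrelevant premiss simply fails to narrow the set of models any further.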
Example: We are given the following facts: If either Paul or Peter did it then either
Randa helped or Michael wasn’t entirely innocent. If the weapon was in Paul’s office he did
it. On the other hand, if the weapon wasn’t in Paul’s office, then Michael wasn’t entirely
innocent. Can we determine whether Randa was involved?
Symbolization:

P := Paul did it
Q := Peter did it
R := Randa helped
M := Michael was innocent
N := The weapon was found in Paul’s office

Write down the premisses:

1 (1) (P ∨ Q) → (R ∨ −M ) A
2 (2) N →P A
3 (3) −N → −M A

There is no way we can draw a definite conclusion about the truth or falsity of R from this,
since there is no way to extract it from its conditional. But now suppose that independent
evidence shows that Michael has an alibi — he was out of town for a meeting at the time
of the incident. Then we can continue:

4 (4) M A
3 (5) M →N 3 Trans
2,3 (6) M →P 2,5 HS
2,3,4 (7) P 4,6 MP
1 (8) (P → (R ∨ −M )) & (Q → (R ∨ −M )) 1 SI (Exercise 4.7.1, 1(b))
1 (9) P → (R ∨ −M ) 8 &E
1,2,3,4 (10) R ∨ −M 7,9 MP
1,2,3,4 (11) R 4,10 DS
Chapter 5

Propositional Semantics: Truth Tables

5.1 Syntax versus Semantics


The methods of natural deduction that we have learned so far are instances of what are
called syntactic procedures.
combinations of symbols that may be written down starting from given wffs. For instance,
MT says that given P → Q and −Q, we may (if it suits our purposes) then correctly write
down −P . Such rules are concerned only with syntax . We commonly interpret the P s and
Qs of propositional logic as meaningful statements that can bear truth values, but these
symbols could be given other interpretations as well — or no interpretation at all.
If you make a mistake when typing in commands to a computer program, it may tell
you that you have committed a “syntax error.” This simply means that you have broken
the rules that say which combinations of symbols are allowed. The computer has no notion
whatsoever of the possible meanings of the symbols being used by the program.
In this chapter we introduce truth tables, the first of two main semantic techniques we
will learn in this course. When we do semantics, we are concerned with the interpretation
or meaning of the symbols we write down. In propositional logic, the most common inter-
pretation of the propositional symbols is that they represent truth values of propositions;
propositional logic, in effect, is a calculus of truth values.
In propositional logic we make the simplifying assumptions that there are only two
possible truth values, which we may represent as T (true) and F (false); this is called
bivalence. (These are often symbolized as > and ⊥, or 1 and 0.) Bivalence is an idealization,
since we know that in real life there are often various shades, modes, and degrees of truth.
We also, at this stage at least, assume that every proposition has one and only one of these
two possible truth values; i.e., that every proposition is either definitely T or definitely F.
We will see later that this assumption also breaks down under certain interesting conditions.
Nevertheless, the logic based on these two somewhat idealized assumptions turns out to be
extremely powerful and useful, and is the basis for most or all further and more nuanced
elaborations of logic.
Bivalence should not be confused with the theorem derived in the previous chapter
called the Law of Excluded Middle, ` P ∨ −P . See [7] for an illuminating discussion of
this point.


5.2 Truth Table Definitions of the Propositional Connectives


I reproduce here the basic truth tables from Chapter Two, for convenience:

Atomic Proposition:      Negation:
P                        − P
T                        F T
F                        T F

Conjunction:             Disjunction:
(P & Q)                  (P ∨ Q)
T T T                    T T T
T F F                    T T F
F F T                    F T T
F F F                    F F F

Material Implication:    Biconditional:
(P → Q)                  (P ↔ Q)
T T T                    T T T
T F F                    T F F
F T T                    F F T
F T F                    F T F

5.2.1 How Many Possible Truth Tables Are There?


The five familiar truth tables above are not the only possible truth tables for binary con-
nectives. Consider the general truth table for any binary connective ∗:

(P ∗ Q)
T ? T
T ? F
F ? T
F ? F

There are two ways to fill each space in the centre column, and four lines per table,
meaning that the total number of ways to fill in the centre column will be just 2 × 2 × 2 × 2 =
2⁴ = 16. In other words, there are 16 possible truth functions of two propositional variables.
While the ones we define above are the ones most commonly used in elementary logic, others
are possible, and some (such as exclusive OR) have practical applications in circuit theory.
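This counting argument is easy to reproduce mechanically. A sketch:

```python
# Each of the 16 binary truth functions corresponds to one way of
# filling the four-entry centre column of the general table.
from itertools import product

rows = list(product((True, False), repeat=2))     # inputs (P, Q), 4 rows
columns = list(product((True, False), repeat=4))  # possible output columns

assert len(columns) == 2 ** 4 == 16

# Each familiar connective picks out one column, e.g. conjunction:
assert tuple(p and q for p, q in rows) == (True, False, False, False)
```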

5.2.2 Expressive Completeness


Clearly, some of the truth functions we have defined are redundant, in the sense that they
can be written as combinations of other symbols. For instance, we can show that P → Q is
equivalent to −P ∨ Q; hence our use of the symbol → is largely for linguistic convenience.
Some sets of symbols would not be sufficient to produce in combination all possible truth
tables. However, it can be shown that combinations of {−, ∨ } and {−, & } are sufficient
to generate all possible tables. It is also possible to define a single connective that is
expressively complete in this sense, and for this purpose one can use either the Sheffer
stroke P |Q (defined as −P ∨ −Q) or the Nicod stroke P ↓ Q (defined as −P & − Q). We
will examine some of the properties of these symbols later on.
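The expressive completeness of the Sheffer stroke can be made plausible by checking that it reproduces − and ∨ on its own, since {−, ∨} has just been said to suffice. A sketch:

```python
# P|Q is defined as -P v -Q.  Check that the stroke alone expresses
# negation and disjunction, on all valuations.
from itertools import product

def sheffer(p, q):
    return (not p) or (not q)

for p, q in product((True, False), repeat=2):
    # -P  is  P|P
    assert sheffer(p, p) == (not p)
    # P v Q  is  (P|P)|(Q|Q)
    assert sheffer(sheffer(p, p), sheffer(q, q)) == (p or q)
print("Sheffer stroke expresses - and v")
```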

5.2.3 Evaluating Complex Wffs


Evaluate wffs as follows:
1. Be sure you have written the input values for all the occurrences of each variable in
the same order.

2. Evaluate all negations of atomic formulas first; then,

3. starting with the innermost brackets, work outward until you come to the main con-
nective.
Arbitrarily complicated wffs can be evaluated mechanically by truth tables. Truth
tables therefore constitute what computer scientists call an effective procedure — meaning
simply a recipe that can be followed in a completely unambiguous manner that is guaranteed
to produce a unique result. The practical drawback of truth tables is that the number of
lines in a table for a formula containing n propositional variables is 2ⁿ. This is a rapidly
increasing function; recall the old fable about the wise man who demanded 2¹ + 2² + · · · + 2⁶⁴
grains of rice from his king in return for a favour. (See [3]. A quick estimation will show why
the king may have been tempted to cut off the wise man’s head.) Large truth tables may
therefore take impractically long times to evaluate.
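A couple of one-line computations make this growth rate vivid (the rice total below is the sum quoted in the fable):

```python
# Table size doubles with each added propositional variable.
print(2 ** 6)    # 64 lines for six variables
print(2 ** 20)   # 1048576 lines for twenty

# The wise man's rice: 2**1 + 2**2 + ... + 2**64 grains.
grains = sum(2 ** i for i in range(1, 65))
assert grains == 2 ** 65 - 2
print(grains)    # 36893488147419103230
```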
Many of the truth table problems encountered in elementary propositional logic can be
solved by means of short-cuts, and we will consider these below. There is no guarantee that
a short cut exists for a given problem. On the other hand, if in an introductory course you
find yourself faced with the task of evaluating a formula that would contain 8, 16, 32, or
more lines if fully written out, take a careful look at it before beginning blindly to fill in
the whole table. There is probably a short-cut!
Example: do the full truth table for the wff −(P → (−Q ∨ R)) & P .
Write out the formula as neatly as possible, with fairly wide, even spacing between the
symbols. Then enter all the possible combinations of truth values for the three letters in
the formula, P , Q, and R. (You can think of these as “inputs”.) There is nothing sacred
about the order in which I have written the inputs below, but it is a standard order that
is commonly used in logic, and it would be helpful to stick to this order. Since P appears
twice in the formula you will have to enter its input values twice. Whenever a letter appears
more than once in a table, it is essential that you write the values in its column of input
values in the same order for each instance of the letter; otherwise, the truth table will be
meaningless. There will be 8 = 2³ lines in the table since there are three variables. Then
proceed to evaluate the various subformulas in the order indicated above, ending with the
main connective, which in this case is the & symbol.

− (P → (− Q ∨ R)) & P
F T T F T T T F T
T T F F T F F T T
F T T T F T T F T
F T T T F T F F T
F F T F T T T F F
F F T F T F F F F
F F T T F T T F F
F F T T F T F F F

We see that the formula is T only in the case in which P and Q are T and R is F.
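The same evaluation can be carried out mechanically, confirming the single T line found above. A sketch:

```python
# Evaluate -(P -> (-Q v R)) & P over all 2**3 = 8 lines of the table.
from itertools import product

def implies(p, q):
    return (not p) or q

def wff(p, q, r):
    return (not implies(p, (not q) or r)) and p

true_lines = [(p, q, r)
              for p, q, r in product((True, False), repeat=3)
              if wff(p, q, r)]

# True only when P and Q are T and R is F, as the table shows.
assert true_lines == [(True, True, False)]
```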

5.3 Semantic Classification of Single Wffs


Well-formed formulas of propositional logic can be classified according to their semantic
properties:

5.3.1 Tautology
A wff is said to be tautologous, or to be a tautology, if and only if it comes out T under all
possible combinations of input values. Tautologies can also be referred to as logical truths
or necessary truths.
Example: Law of Excluded Middle
In the previous chapter we proved the Law of Excluded Middle as a theorem. The table
shows that it is a tautology as well, as it should be.

P ∨ − P
T T F T
F T T F
Example: (P → (Q → P ))
In the previous chapter we gave several proofs of this formula as a theorem. One can easily
see that it is a tautology as well. The only way it could be F would be if P were T and
(Q → P ) were F. But if P is T, then (Q → P ) is T and the whole wff is T. Hence, it is not
possible for this wff to be false.

5.3.2 Inconsistency
A wff is said to be an inconsistency, or to be inconsistent, if it comes out F for all possible
combinations of input values. Such a wff is sometimes loosely called a contradiction, but
that term is properly reserved for an inconsistent wff such as P & − P containing only one
propositional variable.
Example: Law of Non-Contradiction
The Law of Non-Contradiction states that it is never the case that a proposition is both
true and false; i.e., that (P & − P ) is always false. This is supported by the truth table:

P & − P
T F F T
F F T F

5.3.3 One-to-One Relation Between Inconsistencies and Tautologies


It is easily seen by de Morgan’s Law that the Law of Non-Contradiction is simply the
negation of the Law of Excluded Middle. It can be shown in general that for every tautology
there is an inconsistency, generated by taking the negation of the tautology, and vice versa.

5.3.4 Contingency
A wff is said to be contingent or to be a contingency iff it has occurrences of both T and
F in its truth table. Its truth or falsity in a given situation therefore depends on the facts,
not on logical form. The simplest example of a contingency is just P ; other examples are
P & Q, P ∨ Q, and P → Q.

5.3.5 Consistency
A wff is said to be consistent if it is T for at least some of its input values; in other words, a
wff is consistent, or is a consistency, iff it is either a tautology or a contingency. Examples:
P , P → Q, P → P .

5.4 Relation of Syntax to Semantics


A logical system is said to be sound iff every theorem that it can generate is a tautology. A
logical system is said to be complete iff every tautology can be proven as a theorem. One
of the most important things done in metalogic (the logic of logic) is to investigate whether
various possible logical systems are sound or complete.
We do not attempt to prove the soundness or completeness of propositional logic in
this course. However, it can be shown that propositional logic is, in fact, both sound and
complete. (See Lemmon [17], Sect. 2.4 and 2.5, for one way of doing this.) This fact has
practical significance for us because it tells us that there is a nice one-to-one correspondence
between theorems and tautologies: every theorem we can prove using natural deduction will
turn out to be a tautology if we do truth table analysis on it, while every tautology can be
proven as a theorem.
It may seem too “obvious” to bear mentioning that propositional logic is both complete
and sound. In fact, it turns out that many important systems of logic that are more powerful
than propositional logic (in the sense that they can express and derive results about logical
or mathematical structure that is invisible to propositional logic) are incomplete. In 1930,
Kurt Gödel shocked the mathematical world by proving that any system of logic that
contains enough machinery to define the natural numbers is, if it is consistent, incomplete.
(See [10, 19].) This tells us that there is no finite set of formal rules from which one can
derive all the possible truths about the natural numbers; or to put it another way, there
could never be a computer program that could by itself generate all possible truths about
the numbers.

5.5 Semantic Classification of Pairs of Wffs


There are useful semantic properties that can be identified for pairs of wffs.

5.5.1 Implication
Proposition P is said to imply Q iff P → Q is a tautology.

5.5.2 Equivalence
One proposition P is equivalent to another, Q, iff P ↔ Q is a tautology.

5.5.3 Contrariety
A proposition P is contrary to Q if they are never both T. They could, however, both be
false. Two propositions P and Q are contraries iff −(P & Q) is a tautology.
Examples: The propositions P & Q and P & − Q are contraries, since they are both F
when P is F, but never both T. It is easily seen that the negation of their conjunction is a
tautology.

5.5.4 Subcontrariety
Two propositions P and Q are subcontraries if they are never both false. They could,
however, both be true. Formally, P and Q are subcontrary iff P ∨ Q is a tautology. The
concept of subcontrariety is not widely used these days, though it was important in medieval
logic.
Examples: The propositions P ∨ Q and P ∨ −Q are subcontraries since one of them
is T even if the other is F; however, they are both T if P is T. It is also easily seen that
their disjunction is a tautology.
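Both pairs of examples are easy to confirm mechanically. A sketch:

```python
# P & Q and P & -Q are contraries (never both true);
# P v Q and P v -Q are subcontraries (never both false).
from itertools import product

for p, q in product((True, False), repeat=2):
    assert not ((p and q) and (p and (not q)))   # contraries
    assert (p or q) or (p or (not q))            # subcontraries
print("contrariety and subcontrariety confirmed")
```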

5.5.5 Inconsistency
Two propositions P and Q are inconsistent iff they never agree in truth value. This is the
same thing as saying that −(P ↔ Q) is a tautology.

5.5.6 Consistency
Two propositions P and Q are consistent iff their conjunction P & Q is consistent.

5.5.7 Independence
Two propositions are said to be independent iff none of the above relations apply to them.
The simplest example of independent propositions is a pair of distinct atomic propositions,
say P and Q.

5.6 Relations Between Semantic Properties of Wffs


It is possible to use natural deduction to establish relationships between the semantic prop-
erties of pairs of wffs.
Example: Show that if P and Q are contrary then one implies the negation of the other.
The given propositions are contrary if and only if −(P & Q) is a tautology. This suggests
that to test the implications of contrariety, we assume that −(P & Q) is true and carry out
a simple natural deduction:

1 (1) −(P & Q) A


1 (2) −P ∨ −Q 1 deM
1 (3) P → −Q 2 Imp
1 (4) −Q ∨ −P 2 Comm
1 (5) Q → −P 4 Imp

Lines 3 and 5 express the required properties of P and Q.

5.7 Semantic Classification of Sets of Wffs


Some of the above properties of pairs of wffs can be usefully extended to sets of any number
of wffs.

5.7.1 Equivalence
A set of wffs is said to be equivalent iff all its members are pair-wise equivalent.

5.7.2 Inconsistency
A set of wffs is said to be inconsistent iff its conjunction is always false; or, equivalently, iff
the negation of its conjunction is a tautology. E.g.: {P, P → Q, −Q}.
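The example set can be checked by brute force. A sketch:

```python
# The set {P, P -> Q, -Q} is inconsistent: its conjunction is false
# on all four valuations of P and Q.
from itertools import product

def implies(p, q):
    return (not p) or q

conjunction_values = [p and implies(p, q) and (not q)
                      for p, q in product((True, False), repeat=2)]
assert not any(conjunction_values)
```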

5.7.3 Consistency
A set of wffs is said to be consistent iff it is not inconsistent; this is the same thing as saying
that its conjunction is consistent; i.e., either a tautology or a contingency.

5.7.4 Independence
A set of two or more propositions are independent of each other iff they are independent
pairwise.

5.8 Testing Validity of Sequents Using Truth Tables


Suppose you are asked to find a proof of a sequent such as P, Q, R ` S. If you can’t come
up with a proof, that doesn’t mean that the sequent is invalid; it may simply mean that
you are unable to think of a route from premisses to conclusion. If we can find a proof that
makes correct use of the rules, then the sequent is valid; but if we can’t find a proof, that
doesn’t mean that it is not valid. We need a general procedure for testing the validity of
sequents.
One of the most important uses of truth tables is in testing the validity of argument
forms. A sequent is said to be valid if and only if the conclusion is never false when the
premisses are true. To check the validity of a sequent, therefore, one thing we can do is set
up the truth table for the sequent, with separate columns for premisses and conclusion, and
see if there is any line in the truth table in which the premisses are all T and the conclusion
is F. If this is not the case, then the sequent is valid.
Here is an example of a valid sequent (MP):

P P → Q Q
T T T T T
T T F F F
F F T T T
F F T F F

(I use a double vertical line to separate the conclusion from the premisses.) In each line
in which the conclusion is F, at least one of the premisses is F as well. This is enough to
show that the sequent is valid.
By contrast, here is the table for Affirming the Consequent:

Q P → Q P
T T T T T
F T F F T
T F T T F
F F T F F

In the third line, both premisses are T while the conclusion is F. This proves that
Affirming the Consequent is invalid. You need only find one line in which this happens in
order to show invalidity. Note very carefully that we cannot conclude that the argument
is valid just because there are some lines (such as the first) in which everything comes up
true. That is just like an invalid computer program that occasionally outputs the correct
answer. The program is valid only if it never makes a mistake, not if it is sometimes but
not always right.
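The truth-table test of validity can itself be written out as a small effective procedure. The following sketch represents wffs as Python functions of a valuation; it confirms that MP is valid and Affirming the Consequent is not:

```python
# A sequent is valid iff no valuation makes all premisses true and
# the conclusion false.
from itertools import product

def valid(letters, premisses, conclusion):
    for values in product((True, False), repeat=len(letters)):
        v = dict(zip(letters, values))
        if all(f(v) for f in premisses) and not conclusion(v):
            return False   # counterexample line found
    return True

imp = lambda p, q: (not p) or q

# Modus ponens: P, P -> Q |- Q  is valid.
assert valid("PQ", [lambda v: v["P"], lambda v: imp(v["P"], v["Q"])],
             lambda v: v["Q"])

# Affirming the consequent: Q, P -> Q |- P  is invalid.
assert not valid("PQ", [lambda v: v["Q"], lambda v: imp(v["P"], v["Q"])],
                 lambda v: v["P"])
```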
Here is an example that may seem counter-intuitive. Is P, −P ` Q valid or invalid?
This is just ExF, which we already know how to prove using RAA. But the truth table is
instructive:

P − P Q
T F T T
T F T F
F T F T
F T F F

The sequent is valid since there is no line in which the conclusion is false and all the
premisses are T. But the sequent can never be sound , meaning valid with all premisses true.
Suppose now we have a sequent in which the conclusion is a tautology, such as P →
Q ` P → P . Here is the table:

P → Q P → P
T T T T T T
T F F T T T
F T T F T F
F T F F T F

As before, there is no line with the premiss T and the conclusion F, and the sequent is
valid. But we need hardly have done the whole table to see this, since the conclusion is a
tautology and must always come out T.


We can summarize this as follows: suppose F is a contradictory proposition, T is a
tautology, and X is any proposition whatsoever, true or false. Then any sequents of the
following form are valid:

F ` X (5.1)
X ` T (5.2)

5.8.1 Deduction Theorem from the Semantic Viewpoint


From the semantic viewpoint, the Deduction Metatheorem can be expressed as follows:

The sequent P1 , P2 , . . . , Pn ` Q is valid iff the conditional (P1 & P2 & . . . & Pn ) →
Q is a tautology.

Our method for checking for validity is designed to take advantage of this fact.
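To make the connection concrete, here is a small illustrative sketch in Python (the helper names are my own) that checks validity by brute force over all truth-table rows, and confirms the Deduction Metatheorem for the case of modus ponens:

```python
from itertools import product

def valid(premisses, conclusion, names):
    """A sequent is valid iff no truth-table row makes every premiss
    True while the conclusion is False."""
    for values in product([True, False], repeat=len(names)):
        row = dict(zip(names, values))
        if all(p(row) for p in premisses) and not conclusion(row):
            return False
    return True

def tautology(wff, names):
    """A wff is a tautology iff it comes out True on every row."""
    return valid([], wff, names)

# Modus ponens: P, P -> Q |- Q
P = lambda r: r["P"]
PimpQ = lambda r: (not r["P"]) or r["Q"]
Q = lambda r: r["Q"]
print(valid([P, PimpQ], Q, ["P", "Q"]))   # True

# Deduction Metatheorem: the sequent is valid iff the
# conditional (P & (P -> Q)) -> Q is a tautology.
cond = lambda r: not (P(r) and PimpQ(r)) or Q(r)
print(tautology(cond, ["P", "Q"]))        # True
```

Each formula is represented here as a Python function from a row of truth values to a truth value, which keeps the sketch short at the cost of some generality.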

5.8.2 Short-cuts
The truth-table check of validity is what logicians call an effective procedure, meaning that
it is always guaranteed to produce a definite answer. However, as noted, the size of a truth
table increases very rapidly as a function of the number of propositional variables involved;
the table for a problem with 6 letters, for instance, will have 2⁶ = 64 lines. It would be
very good, therefore, to have short-cuts that would make it unnecessary for us to write out
every line of the table in order to check the validity of a sequent.
The key idea of the short-cut method is to work backwards from the conclusion to the
premisses. Recall that to show that a sequent is valid we have to show that there are no
lines in the truth table where all the premisses are T and the conclusion F; conversely,
to show that a sequent is invalid it is sufficient to find just one line in its table with all
premisses T and the conclusion F. What we do, therefore, is set up the truth table for the
sequent in the manner suggested above, with the conclusion set off to the right. Then set
the conclusion to be F, and see whether this forces any or all of the premisses to be T. Some
sequents can be checked in one line; some require more than one.
There can be no guarantee that a short-cut can be found for every problem; some may
be irreducibly complex and will require a full or nearly full truth table for their analysis.
But it is always a good idea to see if a short-cut can be found before starting to write out
a huge truth table.
Here are a few examples to illustrate the method.
Example: show that MP is valid by the short-cut method.

P    P → Q    Q
T    T F F    F
F    F T F    F

The only lines that need concern us are the two lines in which the conclusion is F.
However, we immediately see that in both of these lines one of the premisses is F; the
sequent is therefore valid. Writing what I have shown here constitutes a complete solution
to the problem of testing the validity of MP; it is not necessary to write out the whole table.

Example: show that Affirming the Consequent is invalid by the short-cut method.

Q    P → Q    P
T    F T T    F

Set the purported conclusion P to be F; this forces the premiss P → Q to be T


regardless of the value of Q; and this means that we can choose the other premiss Q to
be T. Therefore, there is a line with all premisses T and the conclusion F; therefore, the
sequent is invalid.
Example: let’s check Constructive Dilemma, which is more complicated. This sequent
has 4 propositional variables, and so its full table would have 2⁴ = 16 lines.

P ∨ Q    P → R    Q → S    R ∨ S
F F F    F T F    F T F    F F F

This can be checked with just one line. Set the conclusion to be F. That forces both
R and S to be F. The values of P and Q remain to be determined. But any possible
invalidating line has to have P → R and Q → S come out T, which means that P and
Q would have to be F. However, this makes the first premiss P ∨ Q F as well. Hence, there
is no invalidating line, and the argument is valid.
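The backwards-working idea can also be phrased computationally. The following sketch (the helper name is my own) inspects only the rows on which the conclusion is false, exactly as the short-cut method prescribes:

```python
from itertools import product

def invalidating_rows(premisses, conclusion, names):
    """Work backwards from the conclusion: look only at rows on which
    the conclusion is False, and collect any that also make every
    premiss True.  One such row shows the sequent to be invalid."""
    found = []
    for values in product([True, False], repeat=len(names)):
        row = dict(zip(names, values))
        if not conclusion(row) and all(p(row) for p in premisses):
            found.append(row)
    return found

# Affirming the Consequent: Q, P -> Q |- P  (invalid)
ac = [lambda r: r["Q"], lambda r: (not r["P"]) or r["Q"]]
print(invalidating_rows(ac, lambda r: r["P"], ["P", "Q"]))
# [{'P': False, 'Q': True}] -- the invalidating line found above

# Modus ponens has no invalidating line:
mp = [lambda r: r["P"], lambda r: (not r["P"]) or r["Q"]]
print(invalidating_rows(mp, lambda r: r["Q"], ["P", "Q"]))   # []
```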
There will sometimes be sequents in which you have to check more than one line.
Example: check (−P ∨ −Q) → R, −R ` P & Q.

(− P ∨ − Q) → R    − R    P & Q
 F T T T F   F F   T F    T F F
 T F T F T   F F   T F    F F T
 T F T T F   F F   T F    F F F

It is only necessary to consider lines on which −R is T, which makes R false. However,


there are three ways in which P & Q can be false. Plugging R = F into the first premiss, we
see that all possible combinations of the truth values of P and Q which make the conclusion
F also make the first premiss F. Therefore, the argument is valid.

5.9 Using Short-Cuts in Testing for Other Semantic Properties
Short-cut truth tables can be used sometimes to test for semantic properties other than
validity as well.
It is impossible to run through all of the possible situations that might arise. Here is
an example which will suggest the sort of approaches that one could take.
Example: Determine whether (P & −P ) → (P → (Q ∨ R)) is contingent, contradictory,
or tautologous.
The antecedent (P & − P ) is always F, and therefore by the truth table for → the
formula is always T — i.e., it is a tautology. The table need only show enough information
to express this fact:

(P & − P ) → (P → (Q ∨ R))
F T

The general approach to this sort of problem is to see what it would take to make the
wff T and F. If it can be both T and F, then it is contingent; if it can only be F, it is
contradictory; and if it can only be T, it is a tautology.
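This three-way test is easy to mechanize. A minimal sketch, assuming the wff is given as a Python function of a row of truth values (the function names are my own):

```python
from itertools import product

def classify(wff, names):
    """Tautology if True on every row, contradiction if False on
    every row, contingent otherwise."""
    rows = [wff(dict(zip(names, vs)))
            for vs in product([True, False], repeat=len(names))]
    if all(rows):
        return "tautology"
    if not any(rows):
        return "contradiction"
    return "contingent"

# The example above: (P & -P) -> (P -> (Q v R))
wff = lambda r: (not (r["P"] and not r["P"])) or ((not r["P"]) or r["Q"] or r["R"])
print(classify(wff, ["P", "Q", "R"]))     # tautology
print(classify(lambda r: r["P"], ["P"]))  # contingent
```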

5.10 Evaluating Long Conjunctions or Disjunctions

By a “long” conjunction or disjunction I simply mean one containing more than two formu-
las. We accept conjunctions and disjunctions of more than two formulas as wffs. However,
truth table analysis as we have set it up here (and as it is usually done in most elementary
texts) deals only with & and ∨ as binary connectives. In order to evaluate a formula
containing a long conjunction or disjunction, therefore, we have to bracket off the terms
two at a time, and arbitrarily pick one of the connectives as the main connective. In fact,
which one we pick makes no difference to the outcome, although we cannot prove that fact
in all generality in this course.
Example: Evaluate P ∨ Q ∨ R when P and Q are T and R is F , and show that we get
the same result regardless of what order we associate the terms.

P ∨ (Q ∨ R)    (P ∨ Q) ∨ R
T T  T T F      T T T  T F

(I’ve only shown two of the possible ways to associate this formula; the others can
be found by applying commutation.) In the left-hand formula, the first ∨ is the main
connective; in the right-hand formula, the last ∨ is the main connective.
In this course you will rarely if ever have to evaluate a conjunction or disjunction with
more than two terms, so this is not something that you will frequently need to worry about.
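Although we cannot prove the general fact here, the three-variable case can be confirmed exhaustively; a quick sketch:

```python
from itertools import product

# Check every assignment of truth values to P, Q, R: the two
# bracketings of a triple disjunction always agree ...
assert all((p or (q or r)) == ((p or q) or r)
           for p, q, r in product([True, False], repeat=3))

# ... and likewise for a triple conjunction.
assert all((p and (q and r)) == ((p and q) and r)
           for p, q, r in product([True, False], repeat=3))
print("both bracketings agree on all eight rows")
```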

5.11 Sheffer and Nicod Strokes

At the beginning of this chapter I mentioned that there are other possible truth tables for
two propositional variables than the familiar ones we use in elementary logic. The latter are
chosen (i) because they are expressively complete, and (ii) because they come reasonably
close to capturing the sense of the logical connectives that we use in ordinary language.
It is, however, possible to do the logic of propositions with only one symbol, and for this
purpose one can use either the Sheffer or the Nicod strokes. As the practice examples at
the end of the chapter will convince you, these connectives are almost entirely useless for
practical calculations in logic. But their existence does tell us something about how logic
works, and they have important applications in circuit theory.
The Sheffer stroke is defined as “not both P and Q”, and has the following truth table:

P | Q
T F T
T T F
F T T
F T F

In circuit theory, this is called a NAND gate (“not-and”).

The Nicod stroke is defined as “neither P nor Q,” and has the following table:

P ↓ Q
T F T
T F F
F F T
F T F

In circuit theory, this is called a NOR gate (“not-or”).
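The two strokes are easy to experiment with. The sketch below defines both and recovers − and & from the Sheffer stroke alone; the definitions used are standard, but the helper names are mine, and writing ∨ and → this way is left for the C exercises:

```python
from itertools import product

def nand(p, q):
    """Sheffer stroke: not both p and q (a NAND gate)."""
    return not (p and q)

def nor(p, q):
    """Nicod stroke: neither p nor q (a NOR gate)."""
    return not (p or q)

# Negation and conjunction recovered from the Sheffer stroke alone:
neg  = lambda p: nand(p, p)                        # P | P        is  -P
conj = lambda p, q: nand(nand(p, q), nand(p, q))   # (P|Q)|(P|Q)  is  P & Q

for p, q in product([True, False], repeat=2):
    assert neg(p) == (not p)
    assert conj(p, q) == (p and q)
print("definitions check out on all four rows")
```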



5.12 Review Exercises on Chapter 5


A
1. Write the complete truth table for each of the following wffs:

(a) −(P ∨ Q) & − (P & Q)
(b) −(−(P → Q) & − (Q → P ))
(c) −(P & (Q & R))
(d) P → (Q → R)

2. Evaluate the following wffs, when P = T and Q = F .

(a) −(P → Q)
(b) −(P ↔ Q)
(c) (−P ↔ Q)
(d) (−P & − Q) ∨ (P & Q)

3. Use truth tables to characterize the following wffs as tautologous, inconsistent, or contingent.

(a) P → P
(b) −P → P
(c) P → (−P → P )
(d) (P ∨ −P ) → (P & − P )

4. Check the following sequents for validity. Use short-cuts where they are suitable.

(a) P ∨ Q, −Q ` P (DS)
(b) P → Q, −Q ` −P (MT)
(c) P → Q ` P → (P & Q) (Abs)
(d) P → Q, −P ` −Q (Denying the Antecedent)

B
1. Use truth tables to classify the following wffs as either tautologous, inconsistent, or
contingent:

(a) P ∨ (Q & R)
(b) P → (R & S)
(c) P → (Q → P )
(d) (P → R) → (P → (P & R))
(e) (P → (Q → P )) → R
(f) (P ↔ Q) ↔ (Q ↔ −P )
(g) P → (R & − R)

(h) (P → R) → −P
(i) (P → (Q & R)) & (P → (Q & − R))
(j) −(P → Q) & − (Q → P )

2. Use truth tables to check the following sequents for validity:

(a) P → Q, Q → R ` P → R (HS)
(b) P ↔ Q, Q ↔ R ` P ↔ R
(c) P → Q, R → Q ` P → R
(d) P → Q ` −Q → P
(e) P → Q, R → S, −Q ∨ −S ` −P ∨ −R (DD)
(f) P ∨ Q, P → R, Q → R ` R (Conv)
(g) P → Q ` P → (P ∨ Q)
(h) P → Q ` P → (P & Q)
(i) P ∨ R, P → (R ∨ S), −S ` −P
(j) P → (M & N ), M → R, N → S, −(R ∨ S) ` −P ∨ M

3. The following questions deal with the analysis of semantic properties.

(a) Show that if P and Q are contrary then −P and −Q are subcontrary.
(b) Show that P and Q are equivalent if and only if P and −Q are contradictory.
(c) Show that −P and Q are equivalent if and only if P and Q are equivalent.
(d) Show that two wffs are inconsistent iff they are both contraries and subcontraries.
(e) If P, Q ` R is a valid sequent, is it possible for R to be false? Explain your
answer.
(f) If P, Q ` R is invalid, is it impossible for P , Q, and R to all be true? Explain
your answer.

C
1. (a) Show how to write −P , P ∨ Q, P & Q, and P → Q in terms of the Sheffer stroke.
(b) Do the same for the Nicod stroke.
(c) Check the following sequent for validity by constructing its full truth table:
(P |P )|(Q|Q), P |(R|R) ` R.
(d) Translate the sequent in part (c) into ordinary propositional notation.
Chapter 6

Propositional Semantics: Truth Trees

Truth trees are one of the most efficient ways of checking the semantic properties of propositional formulas. In particular, they give a very easy way of checking the validity of sequents.
The basic idea of truth trees is that they give a graphic way of displaying whether or
not a set of formulas is inconsistent.

6.1 Branches and Stacks


Truth trees are graphic objects built up out of combinations of two basic structures, stacks
and branches. Stacks show the decomposition of conjunctions into their separate conjuncts,
and branches show the decomposition of disjunctions into their separate disjuncts.

P & Q ✓          P ∨ Q ✓
  P               /   \
  Q              P     Q

Basic Stack      Basic Branch

We place a checkmark beside every formula that has been decomposed.


Because any number of propositions can be conjoined, a stack can contain any number
of propositions. For instance, we could start with the formula (P & Q) & − R & (P ∨ Q),
and break it down as follows:


(P & Q) & − R & (P ∨ Q) ✓
        (P & Q) ✓
           −R
            P
            Q
         P ∨ Q ✓
          /   \
         P     Q

Figure 6.1: A Typical Stack

Similarly, because any number of propositions can be disjoined, we can have branches
with any number of stems.

   P ∨ Q ∨ R
   /   |   \
  P    Q    R

Figure 6.2: A Triple Branch

However, we will rarely need branches with more than two stems in this book.
Stacks and branches can be linked together into trees of arbitrary size. For instance,
the wff (P & Q) ∨ R can be decomposed as follows:

(P & Q) ∨ R ✓
   /     \
P & Q ✓   R
  P
  Q

Figure 6.3: A Typical Tree

Decomposition has gone as far as it can when all molecular formulas are broken down
into atomic formulas or their negations (which are collectively called literals).

6.2 Paths
A route going from the initial set of formulas at the top down to one of the atomic formulas
at the bottom, while choosing only one side of each branch as one goes, will be called here
a path. (In some logic books these are called branches, but this invites confusion with the
sense of branch as a forked structure splitting off from a disjunction, as noted above.) As
each formula is broken down into its corresponding branch or stack, it is checked off as
shown so that we don’t forget that we have broken it down.
Here is another simple example:
 P & Q ✓
P ∨ −Q ✓
    P
    Q
   / \
  P   −Q
       ×

To arrive at this we first broke down the conjunction (P & Q) into a stack, and then
broke down the disjunction (P ∨ −Q) into a branch. It would have been perfectly valid
to do the branch first, and then the stack. (Try it.) As a rule, though, it is usually more
efficient to do as many stacks as one can first, and then do the branches, since this will
minimize the number of branches to be drawn.
Truth trees are a kind of partially ordered set or poset. These terms will be explained
later in this book.
My use of the term path here is analogous to the way it is used in DOS and UNIX,
which possess hierarchical file structures. In the last tree above, the two paths could be
notated in a form similar to the way we note paths in UNIX. That is, the two paths in
the above tree could be written more abstractly as ((P & Q) & (P ∨ −Q)) \ (P & Q) \ P
and ((P & Q) & (P ∨ −Q)) \ (P & Q) \ Q. However, it will not be especially useful to do
it this way, since the main purpose of truth trees is to make the implications of a stack of
propositions graphically explicit.
Notice that the path ending in −Q contains contradictory formulas, namely −Q itself
and Q. Any path that contains a contradiction is marked off with an × at the end, and
declared to be closed. Any path that is not closed is open.
If all paths on a tree are closed, then the tree itself is said to be closed. If even one path remains open after decomposition has gone as far as it can go, the tree itself is open.
Although we will usually have to break formulas all the way down to the literal level,
we can stop decomposing formulas along a path as soon as a contradiction is found. For
example, in the above tree, replace Q with (R → S). (Try it.) You will obviously get a
closed path without having to break down (R → S) into its literal components, and you
can stop working on that path at that point. In general, once a path is closed, we do not
need to go any farther along it.
What we are doing, in effect, is treating all the formulas along a given path as if they
were true, and seeing whether or not this will lead to a contradiction. Testing the branches
that sprout from a disjunction is similar to vE in that we go exploring down each branch
of the disjunction, one at a time, to see where it will lead. We can start from any number
of initial formulas and break them down as far as we can. If all paths of the resulting tree are closed, the set of formulas is inconsistent.

6.3 Stacking and Branching Rules


We need to know how to decompose other formulas besides simple conjunctions and dis-
junctions. The rules for doing this can be divided into stacking rules and branching rules.

P & Q ✓     −−P ✓     −(P ∨ Q) ✓     −(P → Q) ✓
  P           P           −P              P
  Q                       −Q             −Q

Stacking Rules

P ∨ Q ✓    P → Q ✓    −(P & Q) ✓    P ↔ Q ✓      −(P ↔ Q) ✓
 /   \      /   \       /    \       /   \         /   \
P     Q   −P     Q    −P     −Q     P    −P       P    −P
                                    Q    −Q      −Q     Q

Branching Rules

These rules follow from equivalence formulas (such as ` (P → Q) ↔ (−P ∨ Q)) which
can be easily established by natural deduction, and checked by truth tables.

6.3.1 Substitution of Equivalents


To further simplify the use of trees, we can also make it a rule that we can replace formulas
with their equivalents, either piecewise or as a whole, in stacks. Thus, a valid stack could
look like this:

(P → Q) ✓
−P ∨ Q

Be sure to check off the formula you have replaced. It would also be a courtesy to our belea-
guered marker to jot down, off to the side, the rule you are using (in this case Implication)
that justified the replacement.
The stacking rules cover all possible cases you could encounter in propositional logic,
so strictly speaking replacement of equivalents like this is not necessary. However, in the
spirit of the approach in this text we will allow any method that makes the work easier —
which might be the case if, say, you have forgotten a stacking or branching rule!

6.4 Checking Semantic Properties of Formulas with Trees


Although the main purpose of truth trees is to reveal inconsistency, all of the major semantic
properties of sets of propositional formulas can be checked using truth trees.

6.4.1 Tautology, Contingency, and Consistency


To check whether or not a wff is a tautology, we run the tree for its negation. The formula
P is a tautology if and only if the tree for −P closes. For example, to show that (P →
(Q → P )) is a tautology, we can run the following tree:

−(P → (Q → P )) ✓
        P
   −(Q → P ) ✓
        Q
       −P
        ×
There is only one path, and since it is closed we see that the negation of the formula is
inconsistent, meaning that the formula itself is tautologous.
If we ran a tree on the formula (P → (Q → P )) itself, we would get this:

 (P → (Q → P )) ✓
    /        \
  −P       (Q → P ) ✓
             /   \
           −Q     P

All paths on this tree are open. However, this does not show that the formula is a
tautology, only that it is consistent; and we already know that if all we know about a
formula is that it is consistent then it could be either a tautology or a contingency.
A set of formulas {P1 , . . . , Pn } is inconsistent if and only if (P1 & . . . & Pn ) is always
false, which is the same thing as saying that −(P1 & . . . & Pn ) is a tautology. To check
whether a set of formulas is inconsistent, stack them up and run a tree. The set is incon-
sistent iff the tree closes.
To check whether a formula P is contingent, we have to run trees for both P and −P .
The formula is contingent if and only if neither tree closes. For instance, the following trees
show that (P → Q) is contingent:
−(P → Q) ✓          P → Q ✓
    P                /   \
   −Q              −P     Q

In summary: a set of formulas is inconsistent iff its stack has a closed tree, and is
consistent iff its stack has an open tree. A formula is contingent iff the trees for both the formula and its negation are open.

6.4.2 Equivalence
To check whether or not two formulas P and Q are equivalent, we use the fact that P and
Q are equivalent iff (P ↔ Q) is a tautology, and this is the same thing as saying that the
tree for −(P ↔ Q) must close. For instance, to demonstrate that −(P & Q) ↔ (−P ∨ −Q)
(one of de Morgan’s Laws), we run the following tree:

         −(−(P & Q) ↔ (−P ∨ −Q)) ✓
          /                  \
    −(P & Q) ✓            P & Q ✓
   −(−P ∨ −Q) ✓         −P ∨ −Q ✓
        P                    P
        Q                    Q
       / \                  / \
     −P   −Q              −P   −Q
      ×    ×               ×    ×

Since all paths close, the formula is a tautology. This tree looks relatively complex,
but it follows straightforwardly by the application of the branching and stacking rules given
above.
Of course, we could have also demonstrated the same result by doing two trees, one for
−(P & Q) ` (−P ∨ −Q), and one for (−P ∨ −Q) ` −(P & Q). (Try it.)
Checking a set of more than two formulas for equivalence can be a little more compli-
cated, and is in fact not something that one often has to do. But suppose we have to check
whether all formulas in the set {P1 , P2 , P3 } are equivalent. We could check each possible
pair for equivalence. However, we should keep in mind that equivalence is transitive, that
is, P1 ↔ P2 , P2 ↔ P3 ` P1 ↔ P3 . (This sequent can be proven by an easy application of
HS; make sure you try it!) Therefore, to check three formulas for equivalence, we only need
to check two formulas for inconsistency: −(P1 ↔ P2 ) and −(P2 ↔ P3 ).
In many cases the formulas will be simple enough in structure that it will be obvious
whether some pair of them is not equivalent. If that is the case, you are done, since to show that a whole set of formulas is inequivalent it is sufficient to show that some two members of the set are inequivalent.

6.5 Checking Validity of Sequents


The most important application of truth trees is in checking the validity of sequents.
A sequent P1 , . . . , Pn ` Q is valid if and only if (P1 & . . . & Pn & −Q) is inconsistent. This
simply expresses the notion of validity, which is that if the argument (or the sequent that
formally represents its propositional structure) is valid, then the premisses cannot lead to
the negation of the conclusion.
We can check the validity of any propositional sequent P1 , . . . , Pn ` Q simply by making
a stack consisting of the premisses and the negation of the conclusion, and breaking down
all of the formulas to see if we arrive at closed paths. The sequent is valid if and only if the
whole tree is closed. In other words, if even one path is open, the sequent is invalid, since that would express the fact that it is possible to get from the premisses to the negation of the purported conclusion without contradiction.
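The whole procedure can be sketched in code. The following illustrative Python (the function names are my own, and ↔ is omitted for brevity) implements the stacking and branching rules of section 6.3, and tests a sequent by stacking the premisses with the negated conclusion:

```python
def closes(stack, literals=frozenset()):
    """Truth-tree test: True iff every path of the tree for `stack`
    closes, i.e. the stacked formulas are jointly inconsistent.
    Formulas are nested tuples ('-', A), ('&', A, B), ('v', A, B),
    ('->', A, B); atoms are strings."""
    if not stack:
        return False                     # a fully decomposed open path
    f, rest = stack[0], stack[1:]
    if isinstance(f, str):               # positive literal
        if ('-', f) in literals:
            return True                  # contradiction: path closes
        return closes(rest, literals | {f})
    op = f[0]
    if op == '-':
        g = f[1]
        if isinstance(g, str):           # negative literal
            return True if g in literals else closes(rest, literals | {f})
        if g[0] == '-':                  # --A       stacks to A
            return closes([g[1]] + rest, literals)
        if g[0] == 'v':                  # -(A v B)  stacks to -A, -B
            return closes([('-', g[1]), ('-', g[2])] + rest, literals)
        if g[0] == '->':                 # -(A -> B) stacks to A, -B
            return closes([g[1], ('-', g[2])] + rest, literals)
        if g[0] == '&':                  # -(A & B)  branches to -A | -B
            return (closes([('-', g[1])] + rest, literals) and
                    closes([('-', g[2])] + rest, literals))
    if op == '&':                        # A & B     stacks
        return closes([f[1], f[2]] + rest, literals)
    if op == 'v':                        # A v B     branches
        return (closes([f[1]] + rest, literals) and
                closes([f[2]] + rest, literals))
    if op == '->':                       # A -> B    branches to -A | B
        return (closes([('-', f[1])] + rest, literals) and
                closes([f[2]] + rest, literals))

def tree_valid(premisses, conclusion):
    """A sequent is valid iff premisses plus negated conclusion close."""
    return closes(list(premisses) + [('-', conclusion)])

# MP is valid; Affirming the Consequent is not:
print(tree_valid([('->', 'P', 'Q'), 'P'], 'Q'))   # True
print(tree_valid([('->', 'P', 'Q'), 'Q'], 'P'))   # False
```

Note that, just as in the hand method, a path stops being explored as soon as it closes: the recursion returns True the moment a literal contradicts one already on the path.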
To check the validity of an interderivability result such as (P → Q) a` (−P ∨ Q) we
have to run two trees, one in each direction.
Here are some examples of trees for familiar valid sequents:

MP:                     HS:

P → Q ✓                   P → Q ✓
   P                      Q → R ✓
  −Q                    −(P → R) ✓
  / \                        P
−P    Q                     −R
 ×    ×                    /   \
                         −Q     R
                         / \    ×
                       −P   Q
                        ×   ×
These sequents are shown to be valid by the fact that all paths close.
Don’t forget that the order in which we choose to break down the formulas in our initial
stack does not make any difference to the validity of the result. However, one often arrives
at a simpler tree structure by doing as much stacking as possible first. Also, don’t forget
that as shown in the tree for HS, once a path is closed, no further breakdown is needed.
Here, for comparison, are the trees for some invalid sequents:

P → Q ✓           P ∨ Q ✓
   Q              Q ∨ R ✓
  −P                −R
  / \              /    \
−P    Q           P      Q
                 / \    / \
                Q   R  Q   R
                    ×      ×

In the first example (which is just Affirming the Consequent), neither path closes. In
the second example, two paths close, but two remain open, showing that the sequent is
invalid. Also, in the second example, you will see that if you decompose Q ∨ R first you
will get a slightly simpler tree structure.
To check whether or not a formula is a theorem is (by the soundness and completeness
of propositional logic) the same thing as checking whether it is a tautology. Thus, for
instance, the tree above that checks whether (P → (Q → P )) is a tautology shows that
` (P → (Q → P )).

6.6 Trees or Tables?

Truth trees are usually quicker and simpler to use than truth tables, especially when eval-
uating very complex arguments with several premisses. But there are cases in which truth
tables are simpler to use. For example, let us check the sequent P → Q, Q → R ` R → P.
It should be apparent that this sequent is invalid. To demonstrate this fact by means of a
truth tree, we would have to construct the following tree:

   P → Q ✓
   Q → R ✓
 −(R → P ) ✓
      R
     −P
    /     \
  −Q       R
  / \     / \
−P   Q  −P   Q
     ×
×

Since the tree is open, the sequent is invalid. Strictly speaking, one could stop drawing
the tree as soon as even one open path was revealed since that would be sufficient to
demonstrate invalidity, but it is still a moderately complex tree.
We could also demonstrate that the above sequent is invalid by using a short-cut truth
table, as follows:

P → Q    Q → R    R → P
F T        T T    T F F
While the full table for this sequent has eight lines, only one partial line is needed to
demonstrate invalidity: assume the conclusion is false, and work backwards. We can see
that it is possible to make both the premisses T when the conclusion is F, regardless of
the value of Q. This is clearly less work than the tree. The lesson is, therefore, that both
trees and tables should be in the logician’s toolkit, and with experience the logician will get
to know which is likely to be easier to try. If there are no more than (say) three or four
premisses, and if the tree will involve a lot of branching, it may be easier to use a table.

6.7 Exercises
B
1. Use truth trees to check the following sequents for validity:

(a) P → Q, −P ` −Q (Denying the Antecedent)
(b) P ∨ Q, −P ` Q (Disjunctive Syllogism)
(c) P ∨ Q, P → R, Q → S ` R ∨ S (Constructive Dilemma)
(d) P → R, Q → S, −R ∨ −S ` −P ∨ −Q (Destructive Dilemma)
(e) P → Q ` P → (P & Q) (Absorption)
(f) ` (P → Q) ∨ (Q → R)
(g) P ` (P → Q)
(h) ` −(P ∨ Q) ↔ (−P & − Q) (de Morgan)
(i) ` P ∨ −P (Excluded Middle)
(j) P → (Q & R), −Q ` −P

6.8 Questions for Further Research


1. Set up a general rule for checking the equivalence of a set of any number of wffs.
Chapter 7

Predicate Logic: Symbolization and Translation

7.1 Why We Need Predicate Logic


Logic can be compared to a computer game in which there is a hierarchy of levels, where
once you have conquered a level you move on to the next higher level. In this chapter we
move to a higher level. It builds upon what we have learned before but gives us tools with
which to study the validity of a very large class of widely-used arguments that so far we do
not know how to analyze.
Consider the following elementary arguments:
All philosophers are absent-minded.
Kent is a philosopher.
∴ Kent is absent-minded.
Intuitively, this argument seems valid. (Whether or not it is sound is another question.
I have, in my roughly twenty-year career, known two or three philosophers who were not
absent-minded, so I think that in fairness to the profession I would have to say that the first
premiss is false.) However, we have no way of representing the validity of this argument
using the tools of propositional logic we have learned so far, although it does seem as if it
might have something to do with MP.
Now consider a slight generalization of this argument:
All philosophers are absent-minded.
There are some philosophers.
∴ There are some who are absent-minded.
This again seems obviously valid, but, again, we have no tools with which to express its
validity in any precise way. Arguments like these seem to have a structure suggesting
some sort of generalization of propositional structures with which we are already familiar.
However, all that propositional logic as such could do for us would be to translate the
sentences in these arguments into logically independent symbols, and so both arguments
would have the generally invalid form P, Q ` R.
To represent the logical structure of these arguments we therefore need the following
tools:


1. A way to symbolize properties of individual persons or things (so that we can say
something like “Kent is a philosopher” or “Mars is a planet”);

2. Ways to express the logic of quantifiers such as “all” and “some.”

7.2 Predicate Logic Notation


In the following I introduce the main types of logical structures that occur in predicate
translation.

7.2.1 Universe of Discourse


Predicate notation is always assumed to refer to a domain or universe of discourse U. This
could be the set of all entities whatsoever, the set of all people, the set of all students in
Logic 2003A, the set of all natural numbers, or any other well-defined collection of objects,
persons, or entities. Sometimes the universe of discourse will be understood implicitly;
sometimes it will need to be specified explicitly; and sometimes it does not need to be
mentioned at all. Usually we will assume that U is not empty.

7.2.2 Terms and Variables


We first need to introduce symbols to denote elements of the universe of discourse.
Proper names are letters which serve the same function as proper names such as “Susan”
or “the planet Mars” in a natural language; that is, they denote individual entities within
U. By convention, proper names are usually written using lower-case letters chosen from
the beginning or middle of the alphabet. For instance, we could set k := “Kent” and
m := “the planet Mars”. If there are a large number of proper names, it may be useful to
subscript them, as n1 , n2 , . . ..
Arbitrary constants, also called arbitrary names, are letters that behave much like
proper names, or if you like as if they were proper names; but they are supposed to refer
to individuals picked from U at random, and about which we know nothing to distinguish
them from any other member of U. By contrast, if we use a letter as a proper name, it
is assumed that we know something that distinguishes the individual to which the letter
refers from all other members of U. Letters for arbitrary constants are, like proper names,
usually taken from the beginning to the middle of the alphabet, and, again like proper
names, can be subscripted like a1 , a2 , . . . if it is convenient to do so. Arbitrary constants
are very important in predicate natural deduction, and, as we shall see in the next chapter,
their use requires some care.
Proper names and arbitrary constants are collectively called terms.
Variables are letters which serve as place-holders where proper names or arbitrary
constants can be inserted in a formula. They serve a function very similar to that served by
variables in algebra. By convention, variables are usually written using lower-case letters
from the end of the alphabet, such as x, y, z, t . . .. As with constants and names, they can
be subscripted if it is convenient to do so. For instance, we could have a set of variables of
the form x1 , x2 , . . .. A variable is not a term, but rather a place-holder where a term can
be inserted.

7.2.3 Monadic Predicates


Suppose we want to translate “Mars is a planet” into predicate notation. We now know
that we can treat “Mars” as a proper name, and choose some letter for it such as, say, m.
We also need to know how to say that a term has a property, such as the property of being a
planet. To say things like this, we introduce predicate letters. These will usually be written
as upper-case letters like A, B, . . . , F, G, . . . , P and so forth; and they can, if necessary, be
subscripted as well. We define predicate letters by expressions such as

P x := “x is a planet”.

The terminology for properties that apply to single terms is diverse: they can be called
predicates, monadic predicates, qualities, properties, attributes, adjectives, unary relations,
or 1-place relations.

7.2.4 Relations
Predicates that apply to two or more terms, also called relations, polyadic relations, n-ary
relations, or n-place relations, can easily be accommodated in the notation of predicate
calculus.
Here are some examples:

T xy := “x is taller than y”
Bxyz := “y is between x and z”

Proper names or arbitrary constants can be substituted for variables in relation-expressions.


For instance, if we take (as before) k := “Kent”, and p := “Paul”, then we can say T pk,
meaning “Paul is taller than Kent”.
Note that the order in which the variables are expressed in a relation is usually crucial to
the meaning of the proposition. For example, while T pk happens to be true, the proposition
T kp is false.
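One convenient computational picture of a relation is a set of ordered pairs; a tiny sketch, using the individuals of the running example:

```python
# A two-place relation modelled as a set of ordered pairs, using the
# individuals from the running example:
taller_than = {("Paul", "Kent")}              # T pk holds

print(("Paul", "Kent") in taller_than)        # True:  T pk
print(("Kent", "Paul") in taller_than)        # False: T kp -- order matters
```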
In principle the notation can allow for relations with any number of arguments, although
in this course we will rarely consider relations with more than two arguments. Predicate
logic thus allows us to express much more complex relational concepts than could be ex-
pressed in any natural language. This way of expressing relational concepts seems to be
very obvious, but its introduction was a major step in the history of logic.
There is a detailed theory of relations, which studies the properties of special classes of
relations, such as symmetric relations, equivalence relations, and so forth. We will not do
relation theory in this course (except in some C exercises); Lemmon’s Beginning Logic [17]
contains an excellent introduction to relation theory.

Alternate Notation
Some books write two-place relations Rxy as xRy, which conforms more closely to standard
usage in English. (Unless we are Yoda of Star Wars, we would be more likely to say “x is
taller than y” than “Taller is x than y.”) However, we will stick with the notation in which
the variables are to the right of the relation letter (call this Yoda-notation, if you like), since
it is more easily generalized.

7.2.5 Propositions and Propositional Functions


Expressions such as P x or T xy are called open sentences or (less correctly) propositional
functions. In themselves they have no truth value since they are, in effect, incomplete
sentences. We can’t evaluate “x is a philosopher” until we know who or what x is supposed
to be. When we substitute proper names or arbitrary constants for the variables in open
sentences then we get propositions which can be presumed to be either T or F, and which
can be combined using the familiar connectives that we use in propositional logic. For
example, if Gx := “x is a game-theorist”, P x := “x is a philosopher”, k := “Kent”, and p
:= “Paul”, then we can write sentences such as P k & Gp (“Kent is a philosopher and Paul is
a game-theorist”), Gk → −P p (“Kent is a game-theorist only if Paul is not a philosopher”),
and so forth.
Note carefully that an expression like T xa is an open sentence, not a proposition, since it
contains one unassigned variable. Expressions like T xa or P x are often called propositional
functions, especially in older literature in analytical philosophy. Strictly speaking, this is a
misnomer, since a function is not an expression but a mapping from a domain to a range,
such that each input (or tuple of inputs) from the domain maps to one and only one element of
the range. Propositional functions therefore map elements of U into the set of all possible
propositions; since propositions, in turn, map into truth values, propositional functions in
effect map elements of U into truth values. Despite this complication, it is often convenient
to refer to expressions like P x or T xy as propositional functions, just as it is convenient in
ordinary mathematics to refer to an expression such as x2 as a function of x, even though
properly speaking it is just part of the function that takes real numbers into their squares.
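The analogy between propositional functions and ordinary functions can be made concrete: an open sentence behaves like a function from the domain to truth values. Here is a Python sketch (the extension of P is an invented assumption for illustration):

```python
# The extension of Px := "x is a philosopher" -- invented for illustration.
philosophers = {"Kent", "Paul"}

def P(x):
    # The open sentence Px, modelled as a function from the
    # domain to truth values; P by itself has no truth value,
    # but P applied to a name does.
    return x in philosophers

assert P("Kent") is True
assert P("Aristotle's cat") is False
```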

7.2.6 Quantifiers
Quantifiers are powerful symbolic tools that allow us to refer to collections where we do
not or cannot know all the members in advance. These could be large finite collections
where it would be impractical to name all members even if we knew them, or (especially
in mathematics) infinite collections where it would be impossible to name all the members
even if we knew what they were.
The introduction of quantifiers (principally by Gottlob Frege) toward the end of the
19th century was one of the major innovations in the history of logic.

The Universal Quantifier

We always quantify over a domain or universe of discourse U. Suppose that U is the set of
students in Logic 2003A, in the Spring Term of 2004. Define Sx := “x studies logic”. If we
had the class list, then we could express the proposition “Everyone (in U) studies logic” as

S(Lindsey) & S(Ben) & . . . & S(Craig) (7.1)

for all of the approximately 55 students in the class. A universal statement, which expresses
the concept that everything in the domain of discourse has a certain property, is therefore
equivalent to a conjunction over all members of the domain of discourse.
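For readers who know a little programming, this equivalence can be modelled directly: over a finite domain, a universal claim behaves exactly like Python's built-in all() applied to every member. The three-name class list below is an invented stand-in for the full roster:

```python
# A stand-in for the class list -- invented for illustration.
domain = ["Lindsey", "Ben", "Craig"]

def S(x):
    # Sx := "x studies logic"; in this toy model, everyone does.
    return True

# Over a finite domain, the universal "for all x, Sx"
# is just the conjunction over its members:
conjunction = S("Lindsey") and S("Ben") and S("Craig")
universal = all(S(x) for x in domain)

assert conjunction == universal
```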
For a large class it is quite inconvenient to write a universal claim this way, and impossible
if we do not know everyone's name, if the membership of the class is unknown or
undetermined, or if it is infinite. We can get around these difficulties by introducing the
universal quantifier ∀x . . ., and write
∀xSx. (7.2)
This can be read in several ways, the most common of which are the following:
Everyone studies logic.
For all x, Sx.
For any x, Sx.
It is important to remember that the meaning of a quantificational statement is always
relative to a universe of discourse, although U does not always need to be mentioned. If we
know in advance that U is a certain logic class, then we can write “Everyone studies logic”
as in 7.2 above. If U were a larger set, such as the set of all students at a certain university,
or the set of all people, then to express the same fact — that everyone in a certain class
studies logic — we would have to write
∀x(Lx → Sx), (7.3)
where Lx := “x is in Logic 2003A”. The latter would be read “Everyone in Logic 2003A
studies logic,” or “If anyone is in Logic 2003A, then that person studies logic.”

The Existential Quantifier


Suppose now we wanted to express the claim that someone in Logic 2003A is wearing blue
jeans. Let Bx := “x wears blue jeans”. If we know in advance that U is the membership of
that particular class, then we could write
B(Lindsey) ∨ B(Ben) ∨ . . . ∨ B(Craig) (7.4)
As before, if the class is large it would be quite inconvenient to write this all out, and
impossible if we do not know everyone's name, or if the membership of the class is
unknown, undetermined, or infinite. We get around these difficulties by introducing the
existential quantifier ∃x . . ., and write simply
∃xBx. (7.5)
This can be read in various ways:
There exists something such that it wears blue jeans.
Someone wears blue jeans.
At least one person wears blue jeans.
Note carefully: In logic, the word “some” means nothing more than “at least one” or
(in the case of continuous quantities such as liquids) “an amount greater than zero.” The
statement “Some mammals are warm-blooded” is therefore consistent with the possibilities
either that there is only one warm-blooded mammal or that all mammals are warm-blooded
(and in fact the latter statement happens to be true, although as logicians we do not concern
ourselves with the mere facts of biology).
If U were a larger set than the membership of Logic 2003A, then we would have to
write
∃x(Lx & Bx), (7.6)
which can be read as “Someone in Logic 2003A wears blue jeans.”
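Just as a universal over a finite domain is a conjunction, an existential is a disjunction, which corresponds to Python's built-in any(). In the sketch below, the class list and the jeans-wearing facts are invented assumptions for illustration:

```python
# A stand-in for the class list -- invented for illustration.
domain = ["Lindsey", "Ben", "Craig"]

# Bx := "x wears blue jeans" -- an invented assignment.
wears_jeans = {"Ben"}

def B(x):
    return x in wears_jeans

# Over a finite domain, the existential "there is an x such that Bx"
# is just the disjunction over its members:
disjunction = B("Lindsey") or B("Ben") or B("Craig")
existential = any(B(x) for x in domain)

assert disjunction == existential
```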

Alternate Notation
Quantifiers are often written in enclosing brackets, such as (∀x)F x. Many older logic books
write the universal quantifier without the ∀ symbol, so that (x)F x would be read as “for
all x, F x”.
A few books (such as the widely-read Metalogic by G. Hunter [11]) use so-called Stanford
notation, wherein universals are represented by ⋀ and existentials by ⋁. This is inspired
by the fact that universals and existentials are generalized “ands” and “ors” respectively.

Bound and Free Variables


Variables in quantificational formulas are either bound or free. Bound variables are those
that are linked to a quantifier, while free variables are those that are not linked to any
quantifier. If a quantificational expression contains a free variable, it is an open sentence;
it is a proposition only if all of its variables are bound.
Here are some examples:

In P x, the variable x is free, and the expression is an open sentence.


In ∀xP x and ∃yT y the variables are bound, and the expressions are propositions
(which say, respectively, “Everything is P ” and “Something is T ”).
In ∃yBayx, a is an arbitrary constant or name and y is bound; but x is unbound,
so that the expression is an open sentence with no truth value.

Scope of Quantifiers
It is possible to give a formal recursive definition of a well-formed formula of predicate logic
(see Lemmon [17], for instance), but we will not bother to do so here since it is generally
obvious which formulas are properly formed and which are not.
Here are the main things to keep in mind: quantifiers, like negation signs, apply to the
shortest possible string that follows them. Thus, for instance, in −(P a & Qb), the whole
expression (P a & Qb) is negated, while in −P a & Qb, only the sentence P a is negated.
Similarly, in ∃xF x & Gx, the quantifier binds only the x in F x; this expression, therefore,
is an open sentence. However, in ∃x(F x & Gx), the quantifier applies to all of (F x & Gx),
and the expression is a proposition.
Also, a quantifier only binds variables within its scope, and variables can be used over
again if there is no ambiguity about which quantifier they link with. Thus, ∃xF x & ∃xGx
means exactly the same thing as ∃xF x & ∃yGy.

7.2.7 Order
The predicate logic we study in this book is said to be first-order. This means that the
variables in the quantifiers apply only to elements of the universe of discourse U. In second-
order logic we quantify over predicates and relations. This introduces further complications
that are well beyond the scope of this course.

7.2.8 Categorical Forms


The four traditional categorical forms are symbolized in predicate notation as follows:

Universal affirmative (A): All A are B.

∀x(Ax → Bx) (7.7)



Universal negative (E): No A are B.

∀x(Ax → −Bx) (7.8)

Particular affirmative (I): Some A are B.

∃x(Ax & Bx) (7.9)

Particular negative (O): Some A are not B.

∃x(Ax & − Bx) (7.10)

Note carefully that a universal statement does not involve an existence claim. That
is, ∀x(Ax → Bx) does not claim that there are any A which are B. It says merely that
if there are any As, then they are B. (Thus, we can truly say “All orcs are nefarious”
without, fortunately, there being any orcs in the real world.) By the truth table for the
material conditional →, the conditional in the quantifier is true even if there are no As.
Such universals are said to be vacuously true.
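Vacuous truth can be checked directly on a small model. In this Python sketch (the orc-free domain is invented for illustration), the universal “All orcs are nefarious” comes out true precisely because its antecedent is never satisfied:

```python
# A world with no orcs -- the domain is invented for illustration.
domain = ["a sparrow", "a trout", "an oak"]

def is_orc(x):
    return False        # nothing in this domain is an orc

def is_nefarious(x):
    return False

# "All orcs are nefarious": for all x, Ox -> Nx,
# with -> read as the material conditional (not Ox) or Nx.
vacuously_true = all((not is_orc(x)) or is_nefarious(x) for x in domain)

assert vacuously_true   # true, since there are no orcs
```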
Existential claims such as “Some animals are mammals” do assert existence. The latter
statement can also be read “There are (or exist) some animals which are mammals.”
If we specifically wanted to express the true statement that “Some but not all animals
are mammals,” then we would have to write

∃x(Ax & M x) & − ∀x(Ax → M x), (7.11)

or equivalently

∃x(Ax & M x) & ∃x(Ax & − M x), (7.12)

where M x := “x is a mammal” and Ax := “x is an animal”.


A negative existential such as “Some animals are not reptiles” also asserts existence,
for it is translated as

∃x(Ax & − Rx). (7.13)

7.2.9 Don’t Be Fooled. . . !


Translation from natural languages such as English into predicate notation requires some
skill and attention. The translations we have described here so far are not inflexible rules
to be followed blindly, but rather guidelines to be used with good judgement. You have to
be aware of the nuances of your language and of the intended logic of the statements you
are translating.
Here is an example where “and” should not be translated as & : “All dogs and cats
are carnivorous.” Let Dx := “x is a dog”, Cx := “x is a cat”, and M x := “x is carnivorous”.
We might be tempted to translate this as

∀x((Dx & Cx) → M x). (7.14)

But this cannot be right, since it literally says, “If anything is both a dog and a cat, then
it is carnivorous.” And that is not quite what we wanted to say. In fact, we wanted to say
“All dogs are carnivorous and all cats are carnivorous.” This can be written as

∀x(Dx → M x) & ∀x(Cx → M x). (7.15)



In the next chapter we will learn techniques that will enable us to show that this is equivalent
to the following:

∀x((Dx ∨ Cx) → M x). (7.16)

This can be read as “If anything is either a dog or a cat, then it is carnivorous”; and that
is what we actually wanted to say.

7.2.10 Combinations of Properties


The basic categorical forms can be easily extended to describe situations involving combi-
nations of properties. It is impossible to anticipate all possible cases, but here are a few
typical examples that will serve as guides:

1. All philosophers at the University of Lethbridge are either brilliant or witty.


∀x((P x & Lx) → (Bx ∨ W x)).

2. Some philosophers at the University of Lethbridge are brilliant but not witty.
∃x(P x & Lx & Bx & − W x).

3. No philosopher at the University of Lethbridge is neither brilliant nor witty.


∀x((P x & Lx) → − − (Bx ∨ W x))
(By Double Negation, this is of course equivalent to
∀x((P x & Lx) → (Bx ∨ W x)).)

7.2.11 Use of “Except”


Propositions of the general form “all A except B are C” are called exceptives. Not all logic
texts agree on how they should be translated.
Suppose we want to translate into symbols the sentence “All dogs except chihuahuas
like the cold,” to borrow Lemmon’s example (see his Chapter 3, §1). Using obvious notation
for the predicates, he expresses this as

∀x((Dx & − Cx) → Lx) (7.17)

Literally, this says that if anything is a dog but not a chihuahua, then it likes the cold.
Unfortunately, this does not tell us what is the case if a dog is a chihuahua. Lemmon must
have been aware of this, of course, and so I assume that his translation reflected a preference
for logical minimalism — that is, translating into symbols in a way that requires as few
commitments as possible. This may not be a bad practice in general. However, most logic
texts now translate exceptives in such a way as to make the intention as explicit as possible.
For instance, Lemmon’s sentence above would now be translated as

∀x((Dx & − Cx) → Lx) & ∀x((Dx & Cx) → −Lx). (7.18)

It can be shown that this is equivalent to

∀x(Dx → (Cx ↔ −Lx)). (7.19)

In words, to say that all dogs except chihuahuas like the cold is the same thing as saying
that if anything is a dog, then it is a chihuahua if and only if it does not like the cold.
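The claimed equivalence of 7.18 and 7.19 can be confirmed by brute force on a small finite domain. The following Python sketch (the three-element domain and the enumeration of predicates are illustrative assumptions, not part of the text) compares the two translations under every possible assignment of D, C, and L:

```python
from itertools import product

domain = [0, 1, 2]
truth = [False, True]

def imp(p, q):
    # the material conditional p -> q
    return (not p) or q

def explicit_version(D, C, L):
    # (7.18): (Dx & -Cx) -> Lx, and (Dx & Cx) -> -Lx, for all x
    return (all(imp(D[x] and not C[x], L[x]) for x in domain)
            and all(imp(D[x] and C[x], not L[x]) for x in domain))

def biconditional_version(D, C, L):
    # (7.19): Dx -> (Cx <-> -Lx), for all x
    return all(imp(D[x], C[x] == (not L[x])) for x in domain)

# Check every assignment of the three predicates over the domain.
for Dv in product(truth, repeat=len(domain)):
    for Cv in product(truth, repeat=len(domain)):
        for Lv in product(truth, repeat=len(domain)):
            D = dict(zip(domain, Dv))
            C = dict(zip(domain, Cv))
            L = dict(zip(domain, Lv))
            assert explicit_version(D, C, L) == biconditional_version(D, C, L)
```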

Negative exceptives can be translated in a similar way. “No animal except the mongoose
likes to eat cobras” could be translated as

∀x(Ax → (M x ↔ Cx)). (7.20)

Examples such as exceptives illustrate the fact that the aim of symbolization is not
to translate the natural language expression literally into symbols, but to analyze and
express the implicit logical intention of the natural language expression — if one can be
found (since in natural language we are often not very logically precise at all). In order to
translate a sentence correctly, we have to figure out its logic, not merely transliterate words
into symbols without thinking about their meanings.

7.2.12 Use of “Only”


The key to these is to recall that P → Q can be translated not only as “If P then Q,”
but also as “P only if Q” or “Only if Q, P .” So we would therefore translate “Only the
mongoose likes eating cobras” as

∀x(Cx → M x). (7.21)

where Cx := “x likes eating cobras” and M x := “x is a mongoose”.

7.2.13 Variants of Categorical Forms Using Relations


We can construct innumerable variations of the basic categorical forms using relations.
These will often involve multiple quantification.
Here are a few illustrative examples, which are, again, in no way meant to exhaust all
the possibilities.

1. Every A has relation R to some B:

∀x(Ax → ∃y(By & Rxy)).

(Note that ∀x∃y((Ax & By) → Rxy) will not do: it comes out true whenever anything at all fails to be a B, no matter how R behaves.)

2. Any cat is smarter than any dog.


∀x∀y((Cx & Dy) → Sxy).

3. Some dogs are smarter than some people.


∃x∃y(Dx & P y & Sxy).

Note that order of quantifiers is crucial to the meaning. For instance, let U be the set of
all people, and P xy := “x is the parent of y”. Then ∀x∃yP xy says that everyone is the
parent of someone, while ∃x∀yP xy says that at least one person is the parent of everyone
— obviously a very different claim.
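The difference can be seen concretely on a toy model. In the Python sketch below, the three-person domain and the parent_of relation are invented purely for illustration; on this model the ∀∃ reading comes out true while the ∃∀ reading comes out false:

```python
# An invented domain and relation, chosen so the two readings differ.
people = ["ann", "bob", "cal"]
parent_of = {("ann", "bob"), ("ann", "cal"), ("bob", "ann"), ("cal", "bob")}

def P(x, y):
    # Pxy := "x is the parent of y" (facts invented for illustration)
    return (x, y) in parent_of

# ∀x∃yPxy: everyone is the parent of someone.
forall_exists = all(any(P(x, y) for y in people) for x in people)

# ∃x∀yPxy: at least one person is the parent of everyone.
exists_forall = any(all(P(x, y) for y in people) for x in people)

assert forall_exists        # true on this model
assert not exists_forall    # false on this model
```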

7.3 Exercises
In these exercises, use the following symbolizations:

Ax := “x is an aardvark”
V x := “x is from the Transvaal”
F x := “x is an ant”

Sx := “x likes to sleep in the sun”


Cx := “x is a cobra”
Dx := “x has a strange diet”
T x := “x is a termite”
Lx := “x is an animal”
M x := “x is a mongoose”
Exy := “x likes to eat y”
T xy := “x wins the Lou Marsh trophy in y”
m := “Mike Weir”
n := “2003”

1. Translate the following sentences into predicate notation:

(a) All aardvarks like to eat any ant.


(b) Only mongooses like to eat cobras.
(c) All aardvarks except those from the Transvaal like to sleep in the sun.
(d) No animals except aardvarks like to eat termites.
(e) No mongoose is from the Transvaal.
(f) Mike Weir won the Lou Marsh Trophy in 2003.

2. Translate the following formulas into sensible English:

(a) ∀x((M x & V x) → Dx)


(b) ∃x∃y(Cx & M y & Sx & Sy)
(c) ∀x((M x ∨ Ax) → ∀y(F y → Exy))
(d) ∀x((Cx ∨ M x) → −V x)
Chapter 8

Predicate Logic: Natural Deduction

Natural deduction for first-order predicate logic consists of ordinary natural deduction plus
four special rules that allow us to take off quantifiers and put them back on again.

8.1 Duality Relations


Before stating the introduction and elimination rules for quantifiers, however, it is useful to
state the so-called Duality Relations which relate existentials and universals. Later in this
chapter we will see how formally to prove these relations, but they can be seen intuitively
to follow from the basic meanings of the quantifiers.

∀xF x ↔ −∃x − F x
∃xF x ↔ −∀x − F x
∀x − F x ↔ −∃xF x
∃x − F x ↔ −∀xF x

When used to justify a move in a deduction, an instance of any one of these rules
can be referred to simply as Duality. Examples will be given below, once we review the
introduction and elimination rules for quantifiers.
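On a finite domain the Duality Relations can also be checked mechanically, by enumerating every possible one-place predicate. Here is a Python sketch (the three-element domain is an illustrative assumption):

```python
from itertools import product

domain = [0, 1, 2]

# Enumerate every possible one-place predicate F on the domain,
# represented as a dictionary from elements to truth values.
for Fv in product((False, True), repeat=len(domain)):
    F = dict(zip(domain, Fv))
    # ∀xFx ↔ −∃x−Fx
    assert all(F[x] for x in domain) == (not any(not F[x] for x in domain))
    # ∃xFx ↔ −∀x−Fx
    assert any(F[x] for x in domain) == (not all(not F[x] for x in domain))
    # ∀x−Fx ↔ −∃xFx
    assert all(not F[x] for x in domain) == (not any(F[x] for x in domain))
    # ∃x−Fx ↔ −∀xFx
    assert any(not F[x] for x in domain) == (not all(F[x] for x in domain))
```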

8.2 Introduction and Elimination Rules


We will introduce the rules for the introduction and elimination of quantifier symbols in
order of their difficulty.

8.2.1 Universal Elimination (UE)


From ∀xF x we may infer F a, where F x is any propositional function of x (possibly containing
other quantifiers and bound variables) and a is any term. This rule simply expresses what is naturally
implied by a universal claim, which is that it applies to anything in the universe of discourse.
(UE is sometimes called “Universal Instantiation,” because we use it to write an instance
of a universal claim.)


Example: From the statement “All philosophers are witty” and the statement “Paul is
a philosopher”, we can validly conclude that Paul is witty:
1 (1) ∀x(P x → W x) A
2 (2) Pp A
1 (3) Pp → Wp 1 UE
1,2 (4) Wp 2,3 MP

8.2.2 Universal Introduction (UI)


This rule allows us to move from a proposition of the form F a, where a is an arbitrary
constant, to a universal ∀xF x. This is valid, however, only if a is genuinely arbitrary. We
can assure ourselves of that if we make sure that a was not mentioned in the premisses.
(Remember, in deductive logic we have no access to any information that is not explicitly
given in the premisses.) UI is called “Universal Generalization” in many logic texts.
Here is a valid use of UI.
Example: Symbolize the following argument, and show that it is valid: All horses are
mammals, all mammals are warm-blooded; therefore, all horses are warm-blooded.
Using obvious definitions of the predicate symbols, we can symbolize this as follows:

46 ∀x(Hx → M x), ∀x(M x → W x) ` ∀x(Hx → W x)

1 (1) ∀x(Hx → M x) A
2 (2) ∀x(M x → W x) A
1 (3) Ha → M a 1 UE
2 (4) Ma → Wa 2 UE
1,2 (5) Ha → W a 3,4 HS
1,2 (6) ∀x(Hx → W x) 5 UI

The arbitrary constant a introduced in line 3 does not appear in the premisses. The
use of UI in moving from line 5 to 6 is therefore valid.
For those who have studied categorical logic, this sequent is the familiar syllogism in
Barbara: all A’s are B’s, all B’s are C’s; therefore, all A’s are C’s.

Restriction on UI
We cannot use UI on a term that was introduced in an assumption.

8.2.3 Fallacy Alert!


If we universalize on a constant that appears in a premiss, we risk
committing the fallacy of Hasty Generalization. For instance, from the fact that Paul is
egregiously bald, we cannot validly infer that everyone is egregiously bald:

1 (1) Bp A
1 (2) ∀yBy 1 UI (?!)

This is invalid because the term p appears in an assumption.



8.2.4 Existential Introduction (EI)


Existential Introduction, like Universal Elimination, is a very safe rule to use, in that we
do not have to worry about any special restrictions on the terms involved.
EI states that from F a, where a is any term (proper name or arbitrary constant), we
may infer ∃xF x.
For example, from the statement that Paul is a witty philosopher, we can validly con-
clude that something is a witty philosopher (or, in other words, that there exists a witty
philosopher):

1 (1) Pp&Wp A
1 (2) ∃x(P x & W x) 1 EI
There are no restrictions on EI.

8.2.5 Existential Elimination (EE)


The general pattern of EE is that we infer a conclusion P from an existential claim ∃xF x
(possibly together with other premisses). Just as an existentially quantified statement is a
generalization of an OR statement, Existential Elimination is a generalization of vE. Like
vE, as well, it is a bit tricky to use, but most of the difficulties are matters of bookkeeping.
In vE we examine each possible inferential path, temporarily assuming each disjunct
in turn and seeing whether it leads to the desired conclusion; in EE, we examine a single
typical inferential path, defined by introducing an arbitrary constant.
A typical example of the use of EE will make this clearer. Suppose we want to es-
tablish the validity of the following argument: All philosophers are wise; there are some
philosophers; therefore, something is wise. We proceed as follows:

47 ∀x(P x → W x), ∃xP x ` ∃xW x

1 (1) ∀x(P x → W x) A
2 (2) ∃xP x A
3 (3) Pa A (EE)
1 (4) Pa → Wa 1 UE
1,3 (5) Wa 3,4 MP
1,3 (6) ∃xW x 5 EI
1,2 (7) ∃xW x 2,3,6 EE

The object is to prove the desired conclusion from the existential in line 2, in combina-
tion with the assumption at line 1. At line 3 we start a subproof that is similar in function
to the subproofs used in vE. In 3 we introduce an arbitrary constant a. It is crucial (as
we shall see) that the arbitrary constant introduced here be a new constant that has not
appeared in any earlier lines (because that would, in effect, remove its arbitrariness). In
effect, we are saying that we will suppose that a is the name of the entity referred to in line
2.
In line 4 we can instantiate on line 1 with UE in terms of a because UE can be applied
to any term whatsoever. We then do a little elementary propositional logic, and then at
line 6 we get the desired conclusion by using EI to go from line 5. However, we can’t stop
at this point, because line 6 still depends upon the temporary or working assumption on

line 3. We therefore have to discharge line 3. EE allows us to do this; on the next line, we
finally arrive at the conclusion dependent only on lines 1 and 2, as desired.
Note carefully the form of the justification given at the end of line 7. The first number,
2, is the line number of the existential statement on which we run the EE. The next number,
3, is the line number of the working assumption; this assumption is discharged at this point.
The final number, 6, is the line number at which we arrive at the desired conclusion. This
is the general pattern of EE.
This notation may seem a little tricky, but what we are doing is actually very simple,
and is a pattern of reasoning that we often use in day-to-day discussions. Here is the same
argument cast into informal language:

Suppose we accept that all philosophers are wise and that there are some philoso-
phers. Suppose also that a is a typical philosopher. (Note that this is an as-
sumption; we are picking an arbitrary name and supposing that it is attached
to one of the philosophers whose existence is claimed in the premisses.) Then a
must be wise; therefore, someone is wise.

Restrictions on EE
There are two restrictions on the use of EE.

1. The arbitrary constant cannot appear in a premiss of the argument, or in any previous
assumption that has not been discharged.

2. The arbitrary constant cannot appear in the conclusion of the EE subproof.

8.2.6 Fallacy Alert!


There are several types of fallacies that we can fall into if we do not obey the restrictions
on the use of the arbitrary constant.

Fallacies from Constants in a Premiss


Here is a fallacy that results from running an EE with a constant that appears in a premiss:

1 (1) ∃xF x A
2 (2) Ga A
3 (3) Fa A (EE)
2,3 (4) F a & Ga 2,3 &I
2,3 (5) ∃x(F x & Gx) 4 EI
1,2 (6) ∃x(F x & Gx) (?!) 1,3,5 EE (?)

The mistake here is to use the arbitrary constant a from line 2 in the assumption in line 3.
The use of EI in going from 4 to 5 is in itself correct.
Intuitively, this “argument” says that from the fact that something is F , and that a is
G, we can conclude that the same thing is both F and G. It is as if we argued that because
there exists a man, and because our friend Susan is a woman, Susan must be both a woman
and a man.
By contrast, here is a slightly different sequent that is, in fact, valid:

48 ∃xF x, Ga ` ∃xF x & ∃xGx

1 (1) ∃xF x A
2 (2) Ga A
2 (3) ∃xGx 2 EI
1,2 (4) ∃xF x & ∃xGx 1,3 &I

Here is an instance of this sequent in words: from the fact that there exists a man, and
from the fact that Susan is a woman, we can validly conclude that there exists a man and
that there exists a woman.

Fallacies from Constants in the Conclusion of the Sub-Proof


Here is a fallacious example which shows that the arbitrary constant cannot appear in the
conclusion of the EE subproof:

1 (1) ∃xF x A
2 (2) Fa A (EE)
1 (3) Fa 1,2,2 EE
1 (4) ∀xF x 3 UI

The final move, from line 3 to 4 using UI, is valid. The mistake was to use a in the conclusion
of the subproof at line 3. This “argument” states that from the fact that something is F
we can conclude that everything is F , and this is clearly invalid.

Fallacies Combining EE with UI


The example given in the “Fallacy Alert!” for UI, involving Hasty Generalization, is obvi-
ously fallacious. But slightly more subtle fallacies are possible if we are not careful to obey
the prohibition against using UI on constants that appeared in an assumption. Consider
the following “derivation”:

1 (1) ∃xF x A
2 (2) Fa A (EE)
2 (3) ∀xF x 2 UI (?)
1 (4) ∀xF x (?!) 1,2,3 EE

From the assumption that something is F we seem again to have arrived at the conclusion
that everything is F ! In fact, the use of EE at line 4 is correct. The fallacy here lies in having
universalized on the assumption in line 2. This shows that the restriction on universalizing
on constants that appear in a premiss applies to the assumptions that start a subproof as
well as the main premisses of a sequent.

8.3 Summary of Introduction and Elimination Rules for Quan-


tifiers
Universal Elimination (UE):
From ∀xF x, we can infer F a, where a is any term.
Restrictions: none.

Universal Introduction (UI):


From F a, where a is an arbitrary constant, we can infer ∀xF x.
Restrictions: the arbitrary constant a cannot have been mentioned in a premiss.

Existential Introduction (EI):


From F a, where a is a term, we can infer ∃xF x.
Restrictions: none.

Existential Elimination (EE):


We derive a conclusion P from ∃xF x and possibly other premisses, using a subproof
in which we make an assumption of the form F a.
Restrictions:

• The arbitrary constant a cannot appear in a premiss or in any previous undischarged assumption.


• The arbitrary constant a cannot appear in the conclusion of the EE subproof.

8.4 Examples
In this section I develop a number of typical quantificational arguments to show the appli-
cation of the Introduction and Elimination Rules.

49 ∀x(Hx → M x), ∀x(M x → −Cx) ` ∀x(Hx → −Cx)

1 (1) ∀x(Hx → M x) A
2 (2) ∀x(M x → −Cx) A
1 (3) Ha → M a 1 UE
2 (4) M a → −Ca 2 UE
1,2 (5) Ha → −Ca 3,4 HS
1,2 (6) ∀x(Hx → −Cx) 5 UI

Here is an instance of this sequent in words:

All horses are mammals;


no mammals are chitinous;
∴ no horses are chitinous.

50 ∀x((Dx ∨ F x) → Cx), −Cb ` −Db

1 (1) ∀x((Dx ∨ F x) → Cx) A


2 (2) −Cb A
1 (3) (Db ∨ F b) → Cb 1 UE
1,2 (4) −(Db ∨ F b) 2,3 MT
1,2 (5) −Db & − F b 4 deM
1,2 (6) −Db 5 &E

An instance in words:

All dogs and cats are carnivorous;


Bessy is not carnivorous;
∴ Bessy is not a dog.

51 ∀x(Rx → V x), ∃xRx ` ∃xV x

1 (1) ∀x(Rx → V x) A
2 (2) ∃xRx A
3 (3) Ra A (EE)
1 (4) Ra → V a 1 UE
1,3 (5) Va 3,4 MP
1,3 (6) ∃xV x 5 EI
1,2 (7) ∃xV x 2,3,6 EE

Here is an instance in words:

Whoever wears red has a vivid colour sense;


someone is wearing red;
∴ someone has a vivid colour sense.

52 ∀x(Rx → (V x & Ix)), ∃xRx ` ∃xIx

1 (1) ∀x(Rx → (V x & Ix)) A


2 (2) ∃xRx A
3 (3) Ra A (EE)
1 (4) Ra → (V a & Ia) 1 UE
1,3 (5) V a & Ia 3,4 MP
1,3 (6) Ia 5 &E
1,3 (7) ∃xIx 6 EI
1,2 (8) ∃xIx 2,3,7 EE

Here is an instance in words:

Whoever wears red has a vivid colour sense and imagination;


someone here is wearing red;
∴ someone here has imagination.

53 ∀x(Rx → (V x & Ix)), ∀x(Ix → Gx), ∃xRx ` ∃xGx

1 (1) ∀x(Rx → (V x & Ix)) A


2 (2) ∀x(Ix → Gx) A
3 (3) ∃xRx A
4 (4) Ra A (EE)
1 (5) Ra → (V a & Ia) 1 UE
1,4 (6) V a & Ia 4,5 MP
1,4 (7) Ia 6 &E

2 (8) Ia → Ga 2 UE
1,2,4 (9) Ga 7,8 MP
1,2,4 (10) ∃xGx 9 EI
1,2,3 (11) ∃xGx 3,4,10 EE

Instance in words:

Whoever wears red has a vivid colour sense and imagination;


whoever has imagination will go far in life;
someone is wearing red;
∴ someone will go far in life.

8.5 Distribution Rules for Quantifiers


There are a number of sequents which govern the distribution of quantifiers over formulas.
Some require care in their application.

54 ∀x(F x & Gx) a` ∀xF x & ∀xGx


The proof of this interderivability result is fairly straightforward, and is left as an
exercise (see Review Exercises). Note, however, that the following, which is very similar, is
invalid:

∀x(F x ∨ Gx) ` ∀xF x ∨ ∀xGx

A simple example shows that this is invalid: it is true that all natural numbers are either
even or odd, but it is false that either all natural numbers are even or all natural numbers
are odd.
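The even/odd counterexample can be checked directly on a small finite model. In this Python sketch (the ten-element domain is an illustrative assumption), the premiss comes out true while the putative conclusion comes out false:

```python
domain = range(10)

def even(x):
    return x % 2 == 0

def odd(x):
    return x % 2 == 1

# Premiss: every number in the domain is either even or odd -- true.
premise = all(even(x) or odd(x) for x in domain)

# Putative conclusion: all are even, or all are odd -- false.
conclusion = all(even(x) for x in domain) or all(odd(x) for x in domain)

assert premise
assert not conclusion
```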
The converse, however, is valid:

55 ∀xF x ∨ ∀xGx ` ∀x(F x ∨ Gx)


If everything is red or everything is green, then everything is red or green. (See Review
Exercises.)

56 ∃x(F x ∨ Gx) a` ∃xF x ∨ ∃xGx


(See Review Exercises.)

57 ∃x(F x & Gx) ` ∃xF x & ∃xGx


(See Review Exercises.) The converse is invalid, however; from the fact that there are
some numbers which are even and some numbers which are odd, we cannot conclude that
there are numbers which are both even and odd.

8.6 Proofs of Duality Rules


The Duality Rules make intuitive sense from the meaning of quantificational symbols. But
they can be proven using the introduction and elimination rules for predicate logic. I will
prove the first one here; the remaining proofs are covered in the Review Exercises. Some

of these derivations, like many proofs of propositions that seem to be “obvious,” are rather
subtle.

58 ∀xF x a` −∃x − F x

(a) ∀xF x ` −∃x − F x


1 (1) ∀xF x A (RAA!)
2 (2) ∃x − F x A (RAA)
3 (3) −F a A (EE)
1 (4) Fa 1 UE
1,3 (5) Fa& − Fa 3,4 &I
3 (6) −∀xF x 1,5 RAA
2 (7) −∀xF x 2,3,6 EE
1,2 (8) ∀xF x & − ∀xF x 1,7 &I
1 (9) −∃x − F x 2,8 RAA

(b) −∃x − F x ` ∀xF x


1 (1) −∃x − F x A
2 (2) −F a A (RAA)
2 (3) ∃x − F x 2 EI
1,2 (4) ∃x − F x & − ∃x − F x 1,3 &I
1 (5) Fa 2,4 RAA
1 (6) ∀xF x 5 UI
These elegant proofs are due to Lemmon [17, p. 123–4]. Note the trick in the first proof
of using the starting assumption as the beginning of an RAA. In this proof there are, in
effect, two overlapping RAAs. In the second proof, the step from (5) to (6) may seem fishy,
since we are using a term a that was introduced in the assumption in line (2). However, we
are on safe ground, since (2) was discharged when we backed out of the RAA. Therefore,
there is nothing special about the constant a and we can safely universalize upon it. But
the way Lemmon emerges from the RAA with a still intact does look a lot like a magician
pulling a rabbit out of a hat!

59 ∃xF x a` −∀x − F x

60 ∀x − F x a` −∃xF x

61 ∃x − F x a` −∀xF x

8.7 Rules of Passage


There are a number of quantificational formulas that are known as the Rules of Passage.
In the following, P is any proposition whatsoever; it could, for instance, be a predicate
formula itself containing quantifiers and bound variables. The rules themselves are called
“Rules of Passage” because they allow us to pass P in or out of the scope of a quantifier.
∃x(P & F x) a` P & ∃xF x (8.1)

∀x(P & F x) a` P & ∀xF x (8.2)


∃x(P ∨ F x) a` P ∨ ∃xF x (8.3)
∀x(P ∨ F x) a` P ∨ ∀xF x (8.4)
∃x(P → F x) a` P → ∃xF x (8.5)
∀x(P → F x) a` P → ∀xF x (8.6)
∃x(F x → P ) a` ∀xF x → P (8.7)
∀x(F x → P ) a` ∃xF x → P (8.8)

Note the switch between universals and existentials in the last two formulas. I will give
proofs of the last formula here; the rest of the Rules of Passage are left as Review Exercises.

(a) ∀x(F x → P ) ` ∃xF x → P


1 (1) ∀x(F x → P ) A
2 (2) ∃xF x A (CP)
3 (3) Fa A (EE)
1 (4) Fa → P 1 UE
1,3 (5) P 3,4 MP
1,2 (6) P 2,3,5 EE
1 (7) ∃xF x → P 2,6 CP

(b) ∃xF x → P ` ∀x(F x → P )


1 (1) ∃xF x → P A
2 (2) −∀x(F x → P ) A (RAA)
2 (3) ∃x − (F x → P ) 2 Duality
2 (4) ∃x(F x & − P ) 3 Neg→
5 (5) Fa& − P A (EE)
5 (6) Fa 5 &E
5 (7) −P 5 &E
5 (8) ∃xF x 6 EI
1,5 (9) P 1,8 MP
1,5 (10) P & −P 7,9 &I
1,2 (11) P & −P 4,5,10 EE
1 (12) ∀x(F x → P ) 2,11 RAA, DN
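An interderivability like this can also be spot-checked semantically. The following Python sketch (our illustration, not part of the formal system; the function name is ours) tests 8.8 read as a biconditional, ∀x(F x → P ) ↔ (∃xF x → P ), in every model on a small domain:

```python
from itertools import product

def passage_8_8_holds(domain_size=3):
    """Check ∀x(Fx → P) vs (∃xFx → P) in every model on a small domain."""
    D = range(domain_size)
    for P in (True, False):
        # every possible extension of F: one truth value per individual
        for F in product((True, False), repeat=domain_size):
            lhs = all((not F[x]) or P for x in D)  # ∀x(Fx → P)
            rhs = (not any(F)) or P                # ∃xFx → P
            if lhs != rhs:
                return False                       # a countermodel would refute 8.8
    return True

print(passage_8_8_holds())  # prints True: the two sides agree in every model
```

No countermodel turns up, which is what the formal proofs above guarantee.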

8.8 Interderivability ⇔ Equivalence Theorem


Don’t forget that any interderivability result in predicate logic, such as the Duality Rules
or the Rules of Passage, is translatable into a theorem stating an equivalence, and any such
equivalences can be used for substitutions just as with propositional logic.
For instance, from the first Rule of Passage listed above, we have

62 ` ∃x(P & F x) ↔ P & ∃xF x



8.9 What to Memorize


I do not expect you to memorize the Rules of Passage for this course. You should know the
Duality Rules, however, since they depend on a very basic understanding of the meaning of
the quantificational symbols. And of course you should be able to use UE, UI, EE, and EI.

8.10 A Shortcut
It is essential to master the use of the four introduction and elimination rules, because
they allow for general procedures that can be applied to any first-order quantificational
argument. Under some circumstances to be specified below, however, we can skip the use of
these rules and do Boolean operations directly on propositional functions within the scope
of quantifiers. This can greatly simplify many derivations.
Here are a few simple examples:

∀x∃y − (F x → Rxy) ↔ ∀x∃y(F x & − Rxy) Neg→

∀x∀y(F x → (Gx → Hy)) ↔ ∀x∀y((F x & Gx) → Hy) Exp

∀x(F x → (Gx & Hx)) ↔ ∀x((F x → Gx) & (F x → Hx)) Dist→

In other words, propositional equivalence relations can be applied to propositional functions


so long as they are entirely within the scope of the quantifiers that govern them.
The proof of the general validity of this rule requires metalogical techniques that are
beyond the scope of this course, although individual examples can be proven using the
techniques in this chapter.
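As an illustration, the third example above (Dist→) can be confirmed semantically by enumerating every model on a small domain. This Python sketch (ours, not a replacement for a formal proof) does exactly that:

```python
from itertools import product

def dist_arrow_holds(domain_size=2):
    """Check ∀x(Fx → (Gx & Hx)) vs ∀x((Fx → Gx) & (Fx → Hx)) in all models."""
    D = range(domain_size)
    extensions = list(product((True, False), repeat=domain_size))
    for F, G, H in product(extensions, repeat=3):
        lhs = all((not F[x]) or (G[x] and H[x]) for x in D)
        rhs = all(((not F[x]) or G[x]) and ((not F[x]) or H[x]) for x in D)
        if lhs != rhs:
            return False
    return True

print(dist_arrow_holds())  # prints True
```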

8.11 Fallacy Alert!


A rather common error is the following:
(n) −∀xF x
(n+1) −F a n UE
Line (n) is (by Duality) equivalent to ∃x(−F x). So there is something that is −F , but
it might not be the individual a! If we want to use the statement −F a (as we might, for
instance, if we were going to try an EE), we would have to assume it, and then (we hope)
discharge it if the EE is successful. In other words, the statement −∀xF x is not really a
universal statement at all, and hence we cannot apply UE to it.
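A concrete countermodel shows why the step fails. In the two-element model below (the names and the extension of F are our own illustration), the premiss −∀xF x is true while the "conclusion" −F a is false:

```python
# Domain {a, b}; F holds of a but not of b.
F = {'a': True, 'b': False}

not_all_F = not all(F.values())  # −∀xFx : True, since Fb fails
not_Fa = not F['a']              # −Fa   : False, since Fa holds

print(not_all_F, not_Fa)  # prints: True False
```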

8.12 Fallacy Alert!


It is not possible to derive a particular existential from a universal. For instance, from the
statement that all hobbits live in the Shire, we cannot infer that there are some hobbits
living in the Shire. The following inference is invalid:
1 (1) ∀x(Hx → Sx) A
1 (2) ∃x(Hx & Sx) 1 ???

This is essentially a consequence of the fact that a universal claim is hypothetical and does
not assert existence, while an existential claim does.
By comparison, the following pattern of inference is valid:

1 (1) ∀x(Hx → Sx) A


1 (2) Ha → Sa 1 UE
1 (3) ∃y(Hy → Sy) 2 EI

However, line (3) does not assert the existence of hobbits, whether living in the Shire or
not. It merely says that there is something which if it is a hobbit, lives in the Shire. This
is like saying that if I were a hobbit, I would live in the Shire; and this in no way asserts
the existence of hobbits!
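A one-element model with no hobbits in it makes the gap concrete (the code and the name are our own illustration): the universal premiss comes out vacuously true while the existential conclusion is false:

```python
# A model in which nothing is a hobbit.
domain = ['gandalf']                # illustrative individual, our own choice
H = {x: False for x in domain}      # Hx: x is a hobbit
S = {x: False for x in domain}      # Sx: x lives in the Shire

universal = all((not H[x]) or S[x] for x in domain)  # ∀x(Hx → Sx): vacuously True
existential = any(H[x] and S[x] for x in domain)     # ∃x(Hx & Sx): False

print(universal, existential)  # prints: True False
```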

8.13 Summary of Rules for Predicate Natural Deduction


In this section I summarize the rules for predicate natural deduction. The derived rules,
such as the Rules of Passage, the Duality relations, and the Distribution rules, may be used
freely to help you solve more complicated problems.

8.13.1 Basic Rules


Here, first, in succinct form, are the Basic Rules of predicate natural deduction.

Universal Elimination (UE)


∀xF x ` F a, where a is any constant or proper name (term).
No restrictions on x.

Universal Introduction (UI)


F a ` ∀xF x.
Restrictions: a must be an arbitrary constant, which means that it must not have been
used in an undischarged assumption.

Existential Introduction (EI)


F a ` ∃xF x.
No restrictions on a.

Existential Elimination (EE)

EE is a generalized vE, and if you have any questions about how vE works you should
review it at this point. The object of EE is to derive a conclusion from an existential claim,
so the overall result will look like ∃xF x ` P .
Here is the general pattern of EE:

1 (1) ∃xF x A
2 (2) (other premisses, if any) A
3 (3) Fa A (EE)
..
.
2,3 (n) P
1,2 (n + 1) P 1,3,n EE

Lines 3 to n are the EE subproof. In the justification of line n + 1, the first number is the
line where the starting existential claim was introduced, and the second and third numbers
are the beginning and end lines of the subproof.
Restrictions on the choice of a:

1. The constant a must be arbitrary, which means that it cannot appear in any previous
assumption.

2. The constant a cannot appear in the conclusion of the EE subproof.

8.13.2 Duality Rules


I state the Duality Rules here as interderivability results:

∀xF x ⊣⊢ −∃x − F x
∃xF x ⊣⊢ −∀x − F x
∀x − F x ⊣⊢ −∃xF x
∃x − F x ⊣⊢ −∀xF x

Any of these can be cited in a justification simply as “Duality”. (See examples in the Review
Exercises.)
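Because the Duality Rules rest only on the meanings of the quantifiers, they can also be spot-checked semantically. This Python sketch (ours, illustrative only) verifies all four biconditionals in every model on a small domain:

```python
from itertools import product

def dualities_hold(domain_size=3):
    """Check all four Duality Rules as biconditionals in every model."""
    for F in product((True, False), repeat=domain_size):
        checks = [
            all(F) == (not any(not f for f in F)),  # ∀xFx  vs −∃x−Fx
            any(F) == (not all(not f for f in F)),  # ∃xFx  vs −∀x−Fx
            all(not f for f in F) == (not any(F)),  # ∀x−Fx vs −∃xFx
            any(not f for f in F) == (not all(F)),  # ∃x−Fx vs −∀xFx
        ]
        if not all(checks):
            return False
    return True

print(dualities_hold())  # prints True
```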

8.13.3 Rules of Passage


I won’t repeat here all of the Rules of Passage, which are listed in 8.7. Any of them can
be cited simply by saying “Passage” in the justification column. (See Review Exercises for
examples.)

8.13.4 Distribution Rules


As described in detail in 8.5, quantifiers can be distributed over their scopes sometimes,
but not in all cases; one has to be careful in using these rules. Here are the ones that are

valid, together with convenient short-form names that can be used when they are cited in
a justification line:

∀x(F x & Gx) ⊣⊢ ∀xF x & ∀xGx Dist∀


∀xF x ∨ ∀xGx ` ∀x(F x ∨ Gx) Dist∀
∃x(F x ∨ Gx) ⊣⊢ ∃xF x ∨ ∃xGx Dist∃
∃x(F x & Gx) ` ∃xF x & ∃xGx Dist∃

Note carefully that some of these results are interderivabilities, while some go only one way.
Quantifiers can in some cases be “distributed” over → but the results are more com-
plicated and are best dealt with on an individual basis. (See the Review Exercises.)
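The one-way character of these rules can likewise be confirmed semantically. The sketch below (ours, not part of the formal system) finds no countermodel to ∃x(F x & Gx) ` ∃xF x & ∃xGx, but does find one to its converse:

```python
from itertools import product

def dist_exists_directions(domain_size=2):
    """Test both directions of ∃x(Fx & Gx) vs ∃xFx & ∃xGx in all models."""
    forward, converse = True, True
    extensions = list(product((True, False), repeat=domain_size))
    for F, G in product(extensions, repeat=2):
        lhs = any(f and g for f, g in zip(F, G))  # ∃x(Fx & Gx)
        rhs = any(F) and any(G)                   # ∃xFx & ∃xGx
        forward = forward and (rhs or not lhs)    # lhs entails rhs?
        converse = converse and (lhs or not rhs)  # rhs entails lhs?
    return forward, converse

print(dist_exists_directions())  # prints: (True, False)
```

The converse fails in the model where F holds only of the first individual and G only of the second.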

8.13.5 Direct Manipulation of Propositional Functions


Don’t forget that propositional functions such as F x & Gy or Hx → Gxy can be manipulated
using standard propositional equivalence rules, so long as you stay within the scope of the
quantifier. For instance, the following transformation is valid:

∀x∃y − (P x → Qxy) ↔ ∀x∃y(P x & − Qxy) Neg → (8.9)



8.14 Review Exercises for Chapter 8


Use the following symbols for individuals, predicates, and relations:

p := Plato Bx := x is brown
s := Socrates U x := x is a bear
h := Heidegger Hx := x is a person
r := Rufus N x := x loves honey
k := The Yukon Ax := x loves asparagus
c := the Republic Rx := x loves broccoli
t := Thumper M x := x is a mammal
b := Being and Time W x := x is warm-blooded
Lx := x is a rabbit Lxy := x loves y
Dx := x is a dialogue Cxy := x came from y
Ox := x is bamboo T xy := x takes y seriously
Axy := x is the author of y

A
1. Translate the following arguments into symbols, and prove them valid:

(a) Rufus loves honey.


∴ something loves honey.
(b) Plato was the author of The Republic.
∴ Plato was the author of something.
(c) All bears love honey.
Rufus is a bear.
∴ Rufus loves honey.
(d) No bears love broccoli.
Thumper loves broccoli.
∴ Thumper is not a bear.
(e) All bears are mammals.
All mammals are warm-blooded.
Rufus is a bear.
∴ Rufus is warm-blooded.

2. Prove the validity of the following sequents:

(a) F a & F b ` ∃xF x


(b) ∀x(M x → W x), M a & Ra ` W a
(c) ∀x(F x → Gx), ∃xF x ` ∃xGx
(d) ∀x(F x → −Gx), ∃xGx ` ∃x − F x
(e) ∀x − (F x ∨ Gx) ` −∃x(F x ∨ Gx)

B
1. Translate the following arguments into symbols and prove them valid:

(a) No one takes anything written by Heidegger seriously.


Being and Time was written by Heidegger.
∴ Socrates does not take Being and Time seriously.
(b) All bears love honey.
Rufus is a brown bear.
∴ Something brown loves honey.
(c) Plato was the author of Republic.
Republic is a dialogue.
∴ Plato wrote some dialogues.
(d) All bears love honey.
Some bears are brown.
All bears are mammals.
∴ Some brown mammals love honey.
(e) All bears and rabbits are mammals.
Bears do not like asparagus.
Some bears come from the Yukon.
∴ some mammals do not like asparagus.

2. Prove the Rules of Passage, 8.1 through 8.7

3. Prove the validity of the following sequents:

(a) Dist∀ (all of them)


(b) Dist∃ (all of them)
(c) ∀x(F x → Gx) ` ∀xF x → ∀xGx
(d) ∃x(F x → Gx) ` ∀xF x → ∃xGx

4. It can be very instructive to try to make up your own examples as well!

C
1. Prove the Duality relations that were not proven in the Chapter; that is, Sequents
59, 60, and 61.
Chapter 9

Predicate Logic With Identity

In this chapter we introduce the use of two simple rules concerning the relation of identity.
These rules permit a powerful extension of predicate logic which enables us to express the
logical structure of a much larger class of quantificational propositions with a very minimal
increase in formal machinery.
By “identity” we simply mean a two-term relation which says that two proper names
or constants point to the same object in the universe of discourse. We could write this as
Ixy := x is identical to y
but it is easier, and more in keeping with standard mathematical usage, to express identity
as follows:
x = y := x and y point to the same member of U.
We will also allow the use of the following obvious notational convenience:

a ≠ b := −(a = b)
Defining identity is, in fact, a tricky philosophical problem. The definition I have given
here is close to being circular, since we probably would have to use the identity relation in
order to express the fact that two symbols single out the same individual. Defining identity
in a way that is clearly non-circular would probably require the resources of higher-order
logic (since we would have to quantify over predicates and relations); one would also likely
have to examine how identity is used in physics (where the notion of “identical particle” is
very difficult). For our purposes, we can define identity simply as a relation that obeys the
introduction and elimination rules, given below.
We now introduce introduction and elimination rules for identity:
Identity Introduction (=I):
For any term a the formula a = a is a theorem, and may be introduced at any
point in a proof. In symbols:
` a = a.

Identity Elimination (=E):


If s and t are terms and we have s = t, then given a proposition A(s) (that is, a proposition
containing s), we can derive A(t) by replacing one or more occurrences of
s in A(s) with t.


The second rule, =E, is the one we will use most often. It simply allows the substitution
of one term symbol for another, similarly to the way we can substitute equals for equals in
mathematics.

9.1 Examples
Here are a few elementary examples which illustrate the use of these rules, and demonstrate
some basic properties of the identity relation:

63 a=b`b=a

1 (1) a=b A
(2) a=a =I
1 (3) b=a 1,2 =E

In line (3) we cite two lines, one of which is the statement of the identity we are using, while
the other is the line into which we make the substitution: here we substitute b for the first occurrence of a in line (2).
This proof demonstrates that identity is a symmetric relation.

64 a = b&b = c ` a = c

1 (1) a = b&b = c A
1 (2) a=b 1 &E
1 (3) b=c 1 &E
1 (4) a=c 2,3 =E

This proof demonstrates that identity is a transitive relation.


The next proof demonstrates a triviality: to say that something is Paul, and that thing
loves curry, is equivalent to saying that Paul loves curry. The main interest in this proof is
that it demonstrates the use of =E.

65 F p ⊣⊢ ∃x(x = p & F x)
I’ll prove one part of it; the converse is left as an exercise.

1 (1) ∃x(x = p & F x) A


2 (2) b = p&Fb A (EE)
2 (3) b=p 2 &E
2 (4) Fb 2 &E
2 (5) Fp 3,4 =E
1 (6) Fp 1,2,5 EE

The next proof shows that the way we use identity here corresponds to our basic
notion of identity, which is that things that are presumed identical should have all the
same properties.

66 a = b ` Fa ↔ Fb

1 (1) a=b A
2 (2) Fa A (CP)
1,2 (3) Fb 1,2 =E
1 (4) Fa → Fb 2,3 CP
5 (5) Fb A (CP)
1,5 (6) Fa 1,5 =E
1 (7) Fb → Fa 5,6 CP
1 (8) (F a → F b) & (F b → F a) 4,7 &I
1 (9) Fa ↔ Fb 8 Def↔

9.2 Fallacy Alert!


As shown above, the following pattern of reasoning is correct:
a = b ` F a ↔ F b. (9.1)
This simply expresses what we mean by identity. But do not attempt the following move:
F a ↔ F b ` a = b. (????) (9.2)
This is invalid because the two individuals a and b do not necessarily have to be the same
in order for F a to be true iff F b is true. Suppose F x is defined as “x is a philosopher.” The
statement “F (Socrates) ↔ F (Plato)” is true, and yet Socrates was a distinct individual
from Plato.
Remember, to say a = b is to say that the symbols a and b point to the same individual
in the universe of discourse. To say F a ↔ F b is to say that the propositions F a and F b
have the same truth value.
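The Socrates/Plato countermodel can be stated concretely (the dictionary below is our own illustration):

```python
# Two distinct individuals that agree on F ("x is a philosopher").
F = {'socrates': True, 'plato': True}

biconditional = (F['socrates'] == F['plato'])  # Fa ↔ Fb : True
identical = ('socrates' == 'plato')            # a = b   : False

print(biconditional, identical)  # prints: True False
```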

9.3 Definite Descriptions


Phrases such as “the present King of France,” “the man in the Moon,” “the last person to
cross the finish line,” or “the first African woman to win the Nobel Peace Prize,” are called
definite descriptions. They are widely used in ordinary language, and yet it might not be
immediately obvious how they should be analyzed logically. To see why this is important,
consider the following sentences:
1. “France presently has a bald King.”
2. “The present King of France is bald.”
The first sentence is false. The second sentence seems very suspicious, but it is not imme-
diately obvious how to assign it a truth value. Some philosophers have gone so far as to
declare sentences such as this, in which a definite description points to a nonexistent ob-
ject, to be meaningless. But that doesn’t seem quite right either, since we make deductions
using definite descriptions all the time. Consider, for instance, the following elementary
argument:
Harper is the Prime Minister.
Layton is not Harper.
∴ Layton is not the Prime Minister.

Intuitively this seems to be valid, but we don’t yet know how to represent its validity
formally. As before, we motivate the need for new types of symbols by showing that
there are arguments that are clearly valid, but whose validity we do not yet know how to
demonstrate.
What makes this argument work is clearly the fact that (as of 2006) Harper is the Prime
Minister, not just any Prime Minister. In 1905, Bertrand Russell suggested that we should
analyse propositions using definite descriptions as combinations of existence claims with
uniqueness claims. Thus, for instance, we could express “Harper is the Prime Minister” as
P h & ∀y(P y → (y = h))
In words, “Harper is the Prime Minister and if anyone is the Prime Minister, that person is
Harper”. This statement needs to be distinguished carefully from the claim that “Harper is
a Prime Minister,” which would be symbolized simply as P h. This statement makes sense,
since there are, after all, several countries that have Prime Ministers.
To say “The Prime Minister is a former Finance Minister,” we can write
∃x(P x & F x & ∀y(P y → (y = x)))
And to say “The present King of France is bald,” we can write
∃x(Kx & Bx & ∀y(Ky → (y = x)))
In other words, this simply says, “There exists a unique bald King of France,” and, at the
time of this writing, this statement is false.
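Russell's translation can be evaluated mechanically in any finite model. The sketch below (the function and the toy universe are our own) computes the truth value of ∃x(Kx & Bx & ∀y(Ky → (y = x))); with no King of France in the domain, the sentence comes out false:

```python
def the_K_is_B(domain, K, B):
    """Russellian 'the K is B': ∃x(Kx & Bx & ∀y(Ky → y = x))."""
    return any(
        K(x) and B(x) and all((not K(y)) or (y == x) for y in domain)
        for x in domain
    )

domain = ['macron', 'charles']       # illustrative universe of discourse
is_king_of_france = lambda x: False  # nothing is presently King of France
is_bald = lambda x: False            # extension irrelevant when K is empty

print(the_K_is_B(domain, is_king_of_france, is_bald))  # prints False
```

The same function returns True only when K picks out exactly one individual and that individual is B.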
We can also express the identity of two descriptions. “The author of Hamlet is the
author of Lear ” can be expressed as
∃x(Hx & Lx & ∀y((Hy & Ly) → (y = x)))
As these examples suggest, a definite description such as “the Prime Minister of Canada”
functions grammatically very much like a proper name such as “Mr. Stephen Harper.”
What about negations of statements using definite descriptions? For instance, how
would we translate “Layton is not the Prime Minister?” Intuitively, it seems that to deny
this statement should amount to saying that either Layton is not a Prime Minister, or that
someone other than Layton is a Prime Minister; either or both of those possibilities would
be sufficient to negate the claim that Layton is the Prime Minister.
A quick bit of predicate natural deduction shows that this analysis is correct:
−(P l & ∀y(P y → (y = l)))
↔ −P l ∨ −∀y(P y → (y = l)) deM
↔ −P l ∨ ∃y − (P y → (y = l)) Duality
↔ −P l ∨ ∃y(P y & (y ≠ l)) Neg→
Russell’s method of analysing the logic of definite descriptions has generated a lot of
controversy in the literature on philosophy of language. However, it provides a workable
logical analysis of definite descriptive claims and, as we shall show next, allows us to demon-
strate the validity of arguments involving definite descriptions which should, indeed, turn
out to be valid. Russell’s method also illustrates a point that we noted earlier, which is that
the grammatical structure of a sentence in a particular natural language such as English is
not always a good indication of its underlying logical form.
In his later years Russell defended his approach against criticism by P. F. Strawson:

My theory of descriptions was never intended as an analysis of the state of mind


of those who utter sentences containing descriptions. Mr Strawson gives the
name ‘S’ to the sentence ‘The King of France is wise’, and he says of me ‘The
way in which he [Russell] arrived at the analysis was clearly by asking himself
what would be the circumstances in which we would say that anyone who uttered
the sentence S had made a true assertion’. This does not seem to me a correct
account of what I was doing. . . . I was concerned to find a more accurate and
analysed thought to replace the somewhat confused thoughts which most people
at most times have in their heads. [21, p. 243]

Russell, and indeed many of those who contributed to modern symbolic logic, believed that
the job of the logician is not only to describe the logic of ordinary reasoning in a precise
way, but to devise techniques that can help us to reason more effectively.

9.3.1 Example
We return to the argument introduced above:

Harper is the Prime Minister.


Layton is not Harper.
∴ Layton is not the Prime Minister.

Using obvious definitions for the term and predicate symbols, we can prove this argu-
ment valid as follows:

1 (1) P h & ∀x(P x → (x = h)) A


2 (2) l ≠ h A
1 (3) Ph 1 &E
1 (4) ∀x(P x → (x = h)) 1 &E
1 (5) P l → (l = h) 4 UE
1,2 (6) −P l 2,5 MT
1,2 (7) −P l ∨ ∃y(P y & (y ≠ l)) 6 vI

Note, again, that line (6) only says that Layton is not a Prime Minister. We need to “or”
on the extra clause that states that someone else might be Prime Minister as well as Layton.
(I thank Heather Rivera for insisting that I clarify this point.)

A Fine Point
Many logic texts write statements such as “Harper is the Prime Minister” in the following
more complicated form:

∃x(P x & ∀y(P y → (y = x)) & (x = h)) (9.3)

This is equivalent to the simpler form we use here, which can be used whenever we know
the proper name of “the such-and-such”. This can be proven from the fact that

∃x(P x & (x = h)) a` P h. (9.4)

This is proven in the Exercises, below.



9.3.2 Expressing Numerical Relations


It is also possible to express simple numerical relationships using identity.

At most one person knows the secret.


∀x∀y((Kx & Ky) → (x = y))

Here it is in words that are half-way between logic and ordinary English: For
all x and y, if x and y know the secret, then x and y are the same person.

Exactly (or only) one person knows the secret.


∃x(Kx & ∀y(Ky → (y = x))).

In half-logic: Someone knows the secret, and anyone else who knows the secret
is that person.

Exactly two people know the secret.


∃x∃y(Kx & Ky & (x ≠ y) & ∀z(Kz → ((z = x) ∨ (z = y)))).

In half-logic: there are two distinct people who know the secret, and anyone else
who knows the secret is identical to one or the other of those two people.
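These translations can be checked mechanically against models of known size. The sketch below (the function names and individuals are our own illustration) evaluates all three formulas over a three-person domain in which exactly two people know the secret:

```python
def at_most_one(domain, K):
    """∀x∀y((Kx & Ky) → x = y)"""
    return all((not (K(x) and K(y))) or x == y
               for x in domain for y in domain)

def exactly_one(domain, K):
    """∃x(Kx & ∀y(Ky → y = x))"""
    return any(K(x) and all((not K(y)) or y == x for y in domain)
               for x in domain)

def exactly_two(domain, K):
    """∃x∃y(Kx & Ky & x ≠ y & ∀z(Kz → (z = x ∨ z = y)))"""
    return any(
        K(x) and K(y) and x != y and
        all((not K(z)) or z == x or z == y for z in domain)
        for x in domain for y in domain
    )

people = ['alice', 'bob', 'carol']       # illustrative domain
knows = lambda x: x in {'alice', 'bob'}  # exactly two of them know the secret

print(at_most_one(people, knows), exactly_one(people, knows),
      exactly_two(people, knows))  # prints: False False True
```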

9.3.3 Superlatives
It is also possible to express simple superlatives using identity. Here’s an example, with
hopefully obvious notation:

Smith is the tallest person in Logic class.


Ls & ∀y((Ly & (y ≠ s)) → T sy)

In half-logic: Smith is in the Logic class and if there is anyone else in the Logic
class who is not Smith then Smith is taller than that person.

9.4 Review Exercises


Use the following symbol definitions:

a := Aristotle
s := The Stagirite¹
p := Plato
e := The Ethics
r := The Republic
t := The Timaeus
Kx := x knew (knows) The Good
Axy := x was (is) the author of y

A
1. Translate into symbols:

(a) Aristotle was not Plato.


(b) Aristotle was The Stagirite.
(c) Either Aristotle or Plato wrote the Republic.
(d) Plato wrote both the Republic and the Timaeus.

2. Translate into good, idiomatic English:

(a) p ≠ s
(b) r ≠ t
(c) Apr ∨ −Apr

3. Translate and prove:

(a) Aristotle wrote the Ethics.


Aristotle was The Stagirite.
∴ The Stagirite wrote the Ethics.
(b) If Aristotle wrote the Republic then Aristotle was Plato.
However, Aristotle was not Plato.
∴ Aristotle did not write the Republic.

4. Prove the following sequents:

(a) F a & Ga, (a = b) ` F a & Gb


(b) F a → Ga, (a = b) ` F b → Ga
(c) ∀x(F ax → Gbx), a = b ` ∀x(F ax → Gax)
(d) F aa → (a ≠ a) ` −F aa
¹ Aristotle came from a small town called Stagira, and so he was often called The Stagirite.

B
1. Translate into symbols:

(a) Plato was the author of the Republic.


(b) Aristotle was not the author of the Republic.

2. Translate into good, idiomatic English:

(a) ∃x((Axr & Axt) & ∀y(Ayr → (x = y)))


(b) Apr & Apt & ∀y(Ayr → (y = p))

3. Prove the following sequent:


∃x(P x & (x = l)) ⊣⊢ P l

4. Translate the following argument into symbols, and prove the resulting sequent:

Plato knew the Good.


Aristotle was not Plato.
At most one person knew the Good.
∴ Aristotle did not know the Good.
Solutions to Selected Exercises

Chapter 2
Exercises 2.2.8
A
1. (a) T & C or C & T

(c) T → −C

(e) −B → A

(g) B → C

(i) −C → −T

(k) A → −B

(m) T ∨ −C or −C ∨ T

2. (a) If Bob goes to the store then Alice does not.


or
Bob goes to the store only if Alice does not.

(c) Either Bob doesn’t go to the store or Cheryl goes to work.

(e) If Alice does not go to the store then Bob does.


or
Alice does not go to the store only if Bob does.

(g) Either Alice does not go to the store or Tad does not stay home.

(i) Tad stays home only if Bob goes to the store.

Exercises 2.3.1
A
1. (a) F

(c) T

(e) T


2. (a) T

(c) F

(e) T

Exercises 2.6.1
A
1. (a) (B & T ) → (A → C)

(c) C ↔ T

(e) −T → (A ∨ B)

2. (a) If Alice and Bob go to the store and Cheryl goes to work, then Tad doesn't stay home.

(c) If Alice goes to the store then if Bob goes to the store Cheryl goes to work.
or:
If Alice goes to the store then Bob goes to the store only if Cheryl goes to work.

(e) If Alice goes to the store only if Bob goes too, then Cheryl goes to work only if Tad
stays home.

Exercises 2.7.1
A
1. (b) −C ∨ −T

(d) −(C → T )

2. (b) Bob going to the store does not imply either Cheryl not going to work or Tad staying
home.

(d) Alice goes to the store if and only if Bob does not.

Solutions for Chapter 3


Exercises 3.3.1
A

1. (a) P, P → −Q ` −Q
1 (1) P A
2 (2) P → −Q A
1,2 (3) −Q 1,2 MP

(c) P & Q, P → R ` R

1 (1) P &Q A
2 (2) P →R A
1 (3) P 1 &E
1,2 (4) R 2,3 MP

(e) − − P ` P ∨ R
1 (1) −−P A
1 (2) P 1 DN
1 (3) P ∨R 2 vI

1. (a) P → (Q & R), P ` R ∨ S


1 (1) P → (Q & R) A
2 (2) P A
1,2 (3) Q&R 1,2 MP
1,2 (4) R 3 &E
1,2 (5) R∨S 4 vI

(c) P ∨ Q, Q → R ` P ∨ R
1 (1) P ∨Q A
2 (2) Q→R A
3 (3) P A (vE)
3 (4) P ∨R 3 vI
5 (5) Q A (vE)
2,5 (6) R 2,5 MP
2,5 (7) P ∨R 6 vI
1,2 (8) P ∨R 1,3,4,5,7 vE

(e) P → Q, Q → R, −R ` −P
1 (1) P →Q A
2 (2) Q→R A
3 (3) −R A
4 (4) P A (RAA)
1,4 (5) Q 1,4 MP
1,2,4 (6) R 2,5 MP
1,2,3,4 (7) R& −R 3,6 &I
1,2,3 (8) −P 4,7 RAA

Exercises 3.7

2. (a) P, Q ` P → Q
1 (1) P A
2 (2) Q A
2 (3) P →Q 1,2 CP

(c) P → (Q ∨ R), −(Q ∨ R) ` −P


1 (1) P → (Q ∨ R) A
2 (2) −(Q ∨ R) A
1,2 (3) −P 1,2 MT

(e) (Q → R), −(Q → R) ` R


1 (1) Q→R A
2 (2) −(Q → R) A
1,2 (3) R 1,2 ExF

(g) P → (Q → R), (Q → R) → T ` P → T
1 (1) P → (Q → R) A
2 (2) (Q → R) → T A
1,2 (3) P →T 1,2 HS

(h) −(P & − Q) ` −P ∨ Q


1 (1) −(P & − Q) A
1 (2) −P ∨ − − Q 1 deM
1 (3) −P ∨ Q 2 DN

(j) (X ∨ Y ) → Q ` (−X & − Y ) ∨ Q


1 (1) (X ∨ Y ) → Q A
1 (2) −(X ∨ Y ) ∨ Q 1 Imp
1 (3) (−X & − Y ) ∨ Q 2 deM

(l) −(X → Y ) → −P, P ` (X → Y )


1 (1) −(X → Y ) → −P A
2 (2) P A
1 (3) P → (X → Y ) 1 Trans
1,2 (4) (X → Y ) 2,3 MP

(n) P & (Q → R) ` (P & − Q) ∨ (P & R)


1 (1) P & (Q → R) A
1 (2) P & (−Q ∨ R) 1 Imp
1 (3) (P & − Q) ∨ (P & R) 2 Dist

1. (a) P → Q, R → S, −Q ∨ −S ` −P ∨ −R (DD)
1 (1) P →Q A
2 (2) R→S A
3 (3) −Q ∨ −S A
4 (4) −Q A (vE)
1,4 (5) −P 1,4 MT
1,4 (6) −P ∨ −R 5 vI
7 (7) −S A (vE)
2,7 (8) −R 2,7 MT
2,7 (9) −P ∨ −R 8 vI
1,2,3 (10) −P ∨ −R 3,4,6,7,9 vE

(c) P → Q ` −(P & − Q)


1 (1) P →Q A
1 (2) −P ∨ Q 1 Imp
1 (3) −P ∨ − − Q 2 DN
1 (4) −(P & − Q) 3 deM

(d) −(P & − Q) ` P → Q


1 (1) −(P & − Q) A
1 (2) −P ∨ − − Q 1 deM
1 (3) −P ∨ Q 2 DN
1 (4) P →Q 3 Imp

(g) P ∨ (Q & R) ` (P ∨ Q) & (P ∨ R)


1 (1) P ∨ (Q & R) A
2 (2) P A (vE)
2 (3) P ∨Q 2 vI
2 (4) P ∨R 2 vI
2 (5) (P ∨ Q) & (P ∨ R) 3,4 &I
6 (6) Q&R A (vE)
6 (7) Q 6 &E
6 (8) R 6 &E
6 (9) P ∨Q 7 vI

6 (10) P ∨R 8 vI
6 (11) (P ∨ Q) & (P ∨ R) 9,10 &I
1 (12) (P ∨ Q) & (P ∨ R) 1,2,5,6,11 vE

(i) P → (Q → R) ` (P & Q) → R (Exp)


1 (1) P → (Q → R) A
2 (2) P &Q A (CP)
2 (3) P 2 &E
1,2 (4) Q→R 1,3 MP
2 (5) Q 2 &E
1,2 (6) R 4,5 MP
1 (7) (P & Q) → R 2,6 CP

(k) (P → Q) & (P → R) ` P → (Q & R)


1 (1) (P → Q) & (P → R) A
2 (2) P A (CP)
1 (3) P →Q 1 &E
1 (4) P →R 1 &E
1,2 (5) Q 2,3 MP
1,2 (6) R 2,4 MP
1,2 (7) Q&R 5,6 &I
1 (8) P → (Q & R) 2,7 CP

(m) −(P & − Q) ` P → Q


1 (1) −(P & − Q) A
1 (2) −P ∨ − − Q 1 deM
1 (3) −P ∨ Q 2 DN
1 (4) P →Q 3 Imp

2. (a) P, Q ` P ↔ Q
1 (1) P A
2 (2) Q A
2 (3) P →Q 1,2 CP
1 (4) Q→P 2,1 CP
1,2 (5) (P → Q) & (Q → P ) 3,4 &I
1,2 (6) P ↔Q 5 Def↔

(c) (P → R), (Q → R) ` (P & Q) → R


1 (1) P →R A
2 (2) Q→R A

3 (3) P &Q A (CP)


3 (4) P 3 &E
1,3 (5) R 1,4 MP
1 (6) (P & Q) → R 3,5 CP

(e) R, R ↔ P, −Q → −P ` Q
1 (1) R A
2 (2) R↔P A
3 (3) −Q → −P A
3 (4) P →Q 3 Trans
2 (5) (R → P ) & (P → R) 2 Def↔
2 (6) R→P 5 &E
2,3 (7) R→Q 4,6 HS
1,2,3 (8) Q 1,7 MP

(g) P ↔ S, P ∨ Q, P ∨ R, −Q ∨ −R ` S
1 (1) P ↔S A
2 (2) P ∨Q A
3 (3) P ∨R A
4 (4) −Q ∨ −R A
5 (5) −Q A (vE)
2,5 (6) P 2,5 DS
7 (7) −R A (vE)
3,7 (8) P 3,7 DS
2,3,4 (9) P 4,5,6,7,8 vE
1 (10) (P → S) & (S → P ) 1 Def↔
1 (11) P →S 10 &E
1,2,3,4 (12) S 9,11 MP

(i) P → (Q ∨ R), (P → Q) → X, (P → R) → Y ` X ∨ Y
1 (1) P → (Q ∨ R) A
2 (2) (P → Q) → X A
3 (3) (P → R) → Y A
1 (4) (P → Q) ∨ (P → R) 1 Dist↔
5 (5) P →Q A (vE)
2,5 (6) X 2,5 MP
2,5 (7) X ∨Y 6 vI
8 (8) P →R A (vE)
3,8 (9) Y 3,8 MP
3,8 (10) X ∨Y 9 vI
1,2,3 (11) X ∨Y 1,5,7,8,10 vE

Solutions for Chapter 4

Exercise 4.1.1 (A)


1. Identify the rule used in each of the following deductive steps.
(a) MP
(b) MT
(c) vI
(d) HS
(e) Imp

Exercise 4.2.1 (A)


1. State the justifications for each of the following steps.
(a) n Imp(×2)
(b) n Imp, Dist
(c) n Imp, DN
(d) n Imp, Dist
(e) n, n+1 MT, DN
(f) n Assoc, Comm.

Exercise 4.5.1 (B)

1. (a) ` (P & − P ) → Q
1 (1) P & −P A (CP)
1 (2) Q 1 ExF
(3) (P & − P ) → Q 1,2 CP

(b) ` P → (−P → Q)
1 (1) P A (CP)
2 (2) −P A (CP)
1,2 (3) P & −P 1,2 &I
1,2 (4) Q 3 ExF
1 (5) −P → Q 2,4 CP
(6) P → (−P → Q) 1,5 CP
Alternate proof:
(1) P ∨ −P ExM
(2) (P ∨ −P ) ∨ Q 1 vI
(3) −P ∨ (P ∨ Q) 2 Assoc, Comm
(4) P → (− − P ∨ Q) 3 Imp, DN
(5) P → (−P → Q) 4 Imp

(c) ` P ∨ (P → Q)
(1) −P → (P → Q) TI (Ques. 1(a))
(2) − − P ∨ (P → Q) 1 Imp
(3) P ∨ (P → Q) 2 DN

Alternate proof:

(1) P ∨ −P ExM
(2) (P ∨ −P ) ∨ Q 1 vI
(3) P ∨ (−P ∨ Q) 2 Assoc
(4) P ∨ (P → Q) 3 Imp

(d) ` (P → Q) ∨ (Q → P )
(1) P ∨ −P ExM
(2) (P ∨ −P ) ∨ (Q ∨ −Q) 1 vI
(3) (−P ∨ Q) ∨ (−Q ∨ P ) 2 Assoc (×2), Comm (×3)
(4) (P → Q) ∨ (Q → P ) 3 Imp (×2)

(e) ` (P → Q) ∨ (Q → R)
(1) Q ∨ −Q ExM
(2) −P ∨ (Q ∨ −Q) 1 vI
(3) (−P ∨ (Q ∨ −Q)) ∨ R 2 vI
(4) (−P ∨ Q) ∨ (−Q ∨ R) 3 Assoc (×2)
(5) (P → Q) ∨ (Q → R) 4 Imp (×2)

(f) ` (−P → P ) → P
1 (1) −P → P A (CP)
2 (2) −P A (RAA)
1,2 (3) P 1,2 MP
1,2 (4) P & −P 2,3 &I
1 (5) P 2,4 RAA, DN
(6) (−P → P ) → P 1,5 CP

Alternate proof:

1 (1) −P → P A (CP)
1 (2) −−P ∨ P 1 Imp
1 (3) P ∨P 2 DN
1 (4) P 3 Idem
(5) (−P → P ) → P 1,4 CP

Exercise 4.5.1 (C)

1. (a) ` P ∨ −P

1 (1) −(P ∨ −P ) A (RAA)


2 (2) P A (RAA)
2 (3) P ∨ −P 2 vI
1,2 (4) (P ∨ −P ) & − (P ∨ −P ) 1,3 &I
1 (5) −P 2,4 RAA
1 (6) P ∨ −P 5 vI
1 (7) (P ∨ −P ) & − (P ∨ −P ) 1,6 &I
(8) P ∨ −P 1,7 RAA, DN

This elegant proof is due to Lemmon (his Sequent 44) [17].

Exercise 4.6.1 (B)

1. (a) Given P, Q ` R, show ` (P & Q) → R


1 (1) (P & Q) A (CP)
1 (2) P 1 &E
1 (3) Q 1 &E
1 (4) R 2,3 SI
(5) (P & Q) → R 1,4 CP

(b) Given ` (P & Q) → R, show P, Q ` R


1 (1) P A
2 (2) Q A
1,2 (3) P &Q 1,2 &I
(4) (P & Q) → R TI
1,2 (5) R 3,4 MP

Exercise 4.7.1 (B)

1. (a) ` (P ↔ Q) ↔ (−(P → −Q) ∨ −(−Q → P ))


(P ↔ Q) ↔ ((P & Q) ∨ (−P & − Q)) Equiv
↔ ((P & − −Q) ∨ (−Q & − P )) DN, Comm
↔ (−(P → −Q) ∨ −(−Q → P )) Neg→ (×2)

(b) ` ((Q ∨ R) → P ) ↔ ((Q → P ) & (R → P ))


((Q ∨ R) → P ) ↔ (−(Q ∨ R) ∨ P ) Imp
↔ ((−Q & − R) ∨ P ) deM
↔ ((−Q ∨ P ) & (−R ∨ P )) Dist
↔ ((Q → P ) & (R → P )) Imp(×2)

(c) ` (P → (Q ∨ R)) ↔ ((P → Q) ∨ (P → R))



(P → (Q ∨ R)) ↔ (−P ∨ (Q ∨ R)) Imp


↔ ((−P ∨ −P ) ∨ (Q ∨ R)) Idem
↔ ((−P ∨ Q) ∨ (−P ∨ R)) Assoc, Comm
↔ ((P → Q) ∨ (P → R)) Imp(×2)

(d) ` (P → (Q & R)) ↔ ((P → Q) & (P → R))


(P → (Q & R)) ↔ (−P ∨ (Q & R)) Imp
↔ ((−P ∨ Q) & (−P ∨ R)) Dist
↔ ((P → Q) & (P → R)) Imp(×2)

(e) ` (P ∨ (Q → R)) ↔ ((Q → P ) ∨ (−P → R))


(P ∨ (Q → R)) ↔ (P ∨ (−Q ∨ R)) Imp
↔ ((P ∨ P ) ∨ (−Q ∨ R)) Idem
↔ ((−Q ∨ P ) ∨ (− − P ∨ R)) Comm, Assoc, DN
↔ ((Q → P ) ∨ (−P → R)) Imp(×2)

Solutions for Chapter 5

Review Exercises on Chapter 5

1. Write the complete truth table for each of the following wffs:

(a)

− (P ∨ Q) & − (P & Q)
F T T T F F T T T
F T T F F T T F F
F F T T F T F F T
T F F F T T F F F

(b)

− (− (P → Q) & − (Q → P ))
T F T T T F F T T T
T T T F F F F F T T
T F F T T F T T F F
T F F T F F F F T F

(c)

− (P & (Q & R))


F T T T T T
T T F T F F
T T F F F T
T T F F F F
T F F T T T
T F F T F F
T F F F F T
T F F F F F
(d)

P → (Q → R)
T T T T T
T F T F F
T T F T T
T T F T F
F T T T T
F T T F F
F T F T T
F T F T F

2. Evaluate the following wffs, when P = T and Q = F .


(a)

− (P → Q)
T T F F
(b)

− (P ↔ Q)
T T F F
(c)

(− P ↔ Q)
F T T F
(d)

(− P & − Q) ∨ (P & Q)
F T F F T T T T T
F T F T F F T F F
T F F F T F F F T
T F T T F T F F F

3. Use truth tables to characterize the following wffs as tautologous, inconsistent, or


contingent.

(a)

P → P
T T T Tautology
F T F
(b)

− P → P
F T T T Contingent
T F F F
(c)

P → (− P → P )
T T F T T T Tautology
F T T F F F
(d)

(P ∨ −P ) → (P & − P )
Contradiction
T F F

4. Check the following sequents for validity. Use short-cuts where they are suitable.
(a)

P ∨ Q − Q P
F F F T F F
This is valid, since making the conclusion F forces a premiss
to be F.
(b)

P → Q − Q − P
T F F T F F T
Valid
(c)

P → Q P → (P & Q)
T F F T F T F F
Valid
(d)

P → Q − P − Q
F T T T F F T
Invalid, since there is a line in the truth table in which the
premisses are T and the conclusion F.

1. Use truth tables to classify the following wffs as either tautologous, inconsistent, or
contingent.

(a)

P ∨ (Q & R)
F F F F F
T T T T T
Contingent; to demonstrate this, all we have to do is show
that there is at least one line that comes out T, and one that
comes out F.

(b)

P → (R & S)
T F F F
T T T T T
Contingent

(c)

P → (Q → P )
T T T T
Tautology

(d)

(P → R) → (P → (P & R))
T F F T T F T F F
Tautology, since the only way to make the consequent F
makes the antecedent F as well.

(e)

(P → (Q → P )) → R
T T T
T F F

Contingent; we know by (c) that the antecedent is always T, so the value depends on R, which can be T or F.

(f)

(P ↔ Q) ↔ (Q ↔ − P)
T T T F T F F T
T F F F F T F T
F F T F T T T F
F T F F F F T F
Contradiction
(g)

P → (R & − R)
T F F
F T F
Contingent
(h)

(P → R) → − P
T T T F F T
F T T T F
Contingent
(i)

(P → (Q & R)) & (P → (Q & − R))


F T T F T
T T F F F
Contingent
(j)

− (P → Q) & − (Q → P)
F T T T F F T T T
T T F F F F F T T
F F T T F T T F F
F F T F F F F T F
Contradiction
2. Use truth tables to check the following sequents for validity.
(a)

P → Q, Q → R ` P → R
T F F F T F T F F
Valid

(b)

P ↔ Q, Q ↔ R ` P ↔ R
T F F F T F T F F
F F T T T T F F T
Valid

(c)

P → Q, R → Q ` P → R
T T T F T T T F F
Invalid

(d)

P → Q ` −Q → P
F T F T F F F
Invalid

(e)

P → Q, R → S, −Q ∨ −S ` −P ∨ −R
T T T T T T F T F F T F T F F T
Valid

(f)

P ∨ Q, P → R, Q → R ` R
F F F F T F F T F F
Valid

(g)

P → Q ` P → (P ∨ Q)
F T T T T T
Valid, since the conclusion is a theorem.

(h)

P → Q ` P → (P & Q)
T F F T F T F F
Valid

(i)

P ∨ R, P → (R ∨ S), −S ` −P
T T T T T T T F T F F T
Invalid

(j)

P → (M & N ), M → R, N → S, −(R ∨ S) ` −P ∨ M
T F F F F F T F F T F T F F F F T F F
Valid

Solutions for Chapter 6

Exercise 6.7

1(b) P ∨ QX
−P The tree closes; therefore the sequent is valid.
−Q
@
@
P Q
× ×

1(c) P ∨ QX
P → RX The tree closes; therefore the sequent is valid.
Q→SX
−(R ∨ S) X
−R
−S
@
@
−Q S
@ ×
@
−P R
@ ×
@
P Q
× ×

1(e) P → QX
−(P → (P & Q)) X The tree closes; therefore the sequent is valid.
P
−(P & Q) X
@
@
−P −Q
×
@
@
−P Q
× ×

1(g) P
−(P → Q) X The tree is open; therefore the sequent is invalid.
P
−Q

1(j) P → (Q & R) X
−Q The tree is closed; therefore the sequent is valid.
P
@
@
−P Q&RX
× Q
R
×

Solutions for Chapter 7


Section 7.3
1.(a) ∀x∀y((Ax & F y) → Exy)

(b) ∀x∀y(Cy → (Exy → M x))

OR

∀x∀y((Cy & Exy) → M x)

(c) ∀x(Ax → (Sx ↔ −V x))

(d) ∀x∀y((Lx & T y) → (Exy ↔ Ax))

(e) ∀x(M x → −V x)

(f) T mn

2.(a) Any mongoose from the Transvaal has a strange diet.

(b) Some cobras and mongooses like to sleep in the sun.

(c) All mongooses and aardvarks like to eat ants.

(d) No cobra or mongoose is from the Transvaal.



Solutions for Chapter 8


Section 8.14
A
1. (a) 1 (1) Nr A
1 (2) ∃xN x 1 EI
□
(b) 1 (1) Apc A
1 (2) ∃xApx 1 EI
□
(c) 1 (1) ∀x(U x → N x) A
2 (2) Ur A
1 (3) Ur → Nr 1 UE
1,2 (4) Nr 2,3 MP
□
(d) 1 (1) ∀x(U x → −Rx) A
2 (2) Rt A
1 (3) U t → −Rt 1 UE
1,2 (4) −U t 2,3 MT
□
(e) 1 (1) ∀x(U x → M x) A
2 (2) ∀x(M x → W x) A
3 (3) Ur A
1 (4) Ur → Mr 1 UE
2 (5) Mr → Wr 2 UE
1,2 (6) Ur → Wr 4,5 HS
1,2,3 (7) Wr 3,6 MP
□

2. (a) 1 (1) Fa&Fb A


1 (2) Fa 1 &E
1 (3) ∃xF x 2 EI
□
(b) 1 (1) ∀x(M x → W x) A
2 (2) M a & Ra A
1 (3) Ma → Wa 1 UE
2 (4) Ma 2 &E
1,2 (5) Wa 3,4 MP
□
(c) 1 (1) ∀x(F x → Gx) A
2 (2) ∃xF x A
3 (3) Fa A (EE)
1 (4) F a → Ga 1 UE
1,3 (5) Ga 3,4 MP

1,3 (6) ∃xGx 5 EI


1,2 (7) ∃xGx 2,3–6 EE
□
(d) 1 (1) ∀x(F x → −Gx) A
2 (2) ∃xGx A
3 (3) Ga A
1 (4) F a → −Ga 1 UE
1,3 (5) −F a 3,4 MT
1,3 (6) ∃x − F x 5 EI
1,2 (7) ∃x − F x 2,3–6 EE
□
(e) 1 (1) ∀x − (F x ∨ Gx) A
1 (2) −∃x(F x ∨ Gx) 1 Duality
□

B
1. (a) 1 (1) ∀x(Ahx → −∃yT yx) A
2 (2) Ahb A
1 (3) Ahb → −∃yT yb 1 UE
1,2 (4) −∃yT yb 2,3 MP
1,2 (5) ∀y − T yb 4 Duality
1,2 (6) −T sb 5 UE
□
(b) 1 (1) ∀x(U x → N x) A
2 (2) U r & Br A
2 (3) Ur 2 &E
2 (4) Br 2 &E
1 (5) Ur → Nr 1 UE
1,2 (6) Nr 3,5 MP
1,2 (7) Br & N r 4,6 &I
1,2 (8) ∃x(Bx & N x) 7 EI
□
(c) 1 (1) Apc A
2 (2) Dc A
1,2 (3) Apc & Dc 1,2 &I
1,2 (4) ∃x(Apx & Dx) 3 EI
□
(d) 1 (1) ∀x(U x → N x) A
2 (2) ∃x(Bx & U x) A
3 (3) ∀x(U x → M x) A
4 (4) Ba & U a A (EE)
4 (5) Ba 4 &E
4 (6) Ua 4 &E
3 (7) Ua → Ma 3 UE

3,4 (8) Ma 6,7 MP


1 (9) Ua → Na 1 UE
1,4 (10) Na 6,9 MP
1,3,4 (11) M a & Ba & N a 5,8,10 &I
1,3,4 (12) ∃x(M x & Bx & N x) 11 EI
1,2,3 (13) ∃x(M x & Bx & N x) 2,4–12 EE
□
(e) 1 (1) ∀x(U x → M x) & ∀x(Lx → M x) A
2 (2) ∀x(U x → −Ax) A
3 (3) ∃x(U x & Cxk) A
4 (4) U a & Cak A (EE)
4 (5) Ua 4 &E
2 (6) U a → −Aa 2 UE
2,4 (7) −Aa 5,6 MP
1 (8) ∀x(U x → M x) 1 &E
1 (9) Ua → Ma 8 UE
1,4 (10) Ma 5,9 MP
1,2,4 (11) M a & − Aa 7,10 &I
1,2,4 (12) ∃x(M x & − Ax) 11 EI
1,2,3 (13) ∃x(M x & − Ax) 3,4–12 EE
□
2. (a) i. ∃x(P & F x) ` P & ∃xF x
1 (1) ∃x(P & F x) A
2 (2) P &Fa A
2 (3) P 2 &E
2 (4) Fa 2 &E
2 (5) ∃xF x 4 EI
2 (6) P & ∃xF x 3,5 &I
1 (7) P & ∃xF x 1,2–6 EE
□
ii. This could be a good exam question!
(b) i. 1 (1) ∀x(P & F x) A
1 (2) P &Fa 1 UE
1 (3) P 2 &E
1 (4) Fa 2 &E
1 (5) ∀xF x 4 UI
1 (6) P & ∀xF x 3,5 &I
□
ii. This could be a good exam question!
(c) i. 1 (1) ∃x(P ∨ F x) A
2 (2) P ∨ Fa A
3 (3) P A
3 (4) P ∨ ∃xF x 3 vI
5 (5) Fa A
5 (6) ∃xF x 5 EI

5 (7) P ∨ ∃xF x 6 vI
2 (8) P ∨ ∃xF x 2,3–4,5–7 vE
1 (9) P ∨ ∃xF x 1,2–8 EE
□
ii. This could be a really good question for the exam! Hint: it is basically the
reverse of the above, with EE’s inside vE subproofs.
(d) i. 1 (1) ∀x(P ∨ F x) A
1 (2) P ∨ Fa 1 UE
3 (3) P A (vE)
3 (4) P ∨ ∀xF x 3 vI
5 (5) Fa A (vE)
5 (6) ∀xF x 5 UI
5 (7) P ∨ ∀xF x 6 vI
1 (8) P ∨ ∀xF x 2,3–4,5–7 vE
□
ii. Another possible exam question!
(e) i. 1 (1) ∃x(P → F x) A
2 (2) P A
3 (3) P → Fa A
2,3 (4) Fa 2,3 MP
2,3 (5) ∃xF x 4 EI
1,2 (6) ∃xF x 1,3–5 EE
1 (7) P → ∃xF x 2–6 CP
□
ii. 1 (1) P → ∃xF x A
2 (2) −∃x(P → F x) A (RAA)
2 (3) ∀x − (P → F x) 2 Duality
2 (4) −(P → F a) 3 UE
2 (5) P & − Fa 4 Neg→
2 (6) P 5 &E
2 (7) −F a 5 &E
2 (8) ∀x − F x 7 UI
2 (9) −∃xF x 8 Duality
1,2 (10) ∃xF x 1,6 MP
1,2 (11) ∃xF x & − ∃xF x 9,10 &I
1 (12) ∃x(P → F x) 2–11 RAA
□
(f) i. 1 (1) ∀x(P → F x) A
2 (2) P A (CP)
1 (3) P → F a 1 UE
1,2 (4) F a 2,3 MP
1,2 (5) ∀xF x 4 UI
1 (6) P → ∀xF x 2–5 CP
□
ii. This is left as an exercise. Hint: it’s an EE inside an RAA.

(g) i. 1 (1) ∃x(F x → P ) A


2 (2) ∀xF x A
3 (3) Fa → P A
2 (4) Fa 2 UE
2,3 (5) P 3,4 MP
1,2 (6) P 1,3–5 EE
1 (7) ∀xF x → P 2–6 CP
□
ii. This is left as an exercise. Hint: it’s a pretty straightforward RAA.

3. (a) Sequent 54 is a straightforward application of UI; it is left as an exercise.


(b) Sequent 55 is a fairly easy vE; use UE and UI within each subproof.
(c) Prove ∃x(F x ∨ Gx) a` ∃xF x ∨ ∃xGx
i. 1 (1) ∃x(F x ∨ Gx) A
2 (2) F a ∨ Ga A (EE)
3 (3) Fa A (vE)
3 (4) ∃xF x 3 EI
3 (5) ∃xF x ∨ ∃xGx 4 vI
6 (6) Ga A (vE)
6 (7) ∃xGx 6 EI
6 (8) ∃xF x ∨ ∃xGx 7 vI
2 (9) ∃xF x ∨ ∃xGx 2,3–5,6–8 vE
1 (10) ∃xF x ∨ ∃xGx 1,2–9 EE
□
ii. The converse proof is very similar, except that you do EE subproofs inside
each vE subproof.
(d) Prove ∃x(F x & Gx) ` ∃xF x & ∃xGx
Note: this is one that you really should be able to do for the exam.
1 (1) ∃x(F x & Gx) A
2 (2) F a & Ga A
2 (3) Fa 2 &E
2 (4) Ga 2 &E
2 (5) ∃xF x 3 EI
2 (6) ∃xGx 4 EI
2 (7) ∃xF x & ∃xGx 5,6 &I
1 (8) ∃xF x & ∃xGx 1,2–7 EE
□
(e) 1 (1) ∀x(F x → Gx) A
2 (2) ∀xF x A (CP)
1 (3) F a → Ga 1 UE
2 (4) Fa 2 UE
1,2 (5) Ga 3,4 MP
1,2 (6) ∀xGx 5 UI
1 (7) ∀xF x → ∀xGx 2–6 CP
□

Note: a question similar to this could easily be on the exam!


(f) 1 (1) ∃x(F x → Gx) A
1 (2) ∃x(−F x ∨ Gx) 1 Imp
1 (3) ∃x − F x ∨ ∃xGx 2 Dist∃ (Sequent 56)
1 (4) −∀xF x ∨ ∃xGx 3 Duality
1 (5) ∀xF x → ∃xGx 4 Imp
□

C
1. Prove ∃xF x a` −∀x − F x

(a) 1 (1) ∃xF x A


2 (2) ∀x − F x A
3 (3) Fa A
2 (4) −F a 2 UE
2,3 (5) Fa& − Fa 3,4 &I
1,2 (6) Fa& − Fa 1,3–5 EE
1 (7) −∀x − F x 2–6 RAA
□
(b) 1 (1) −∀x − F x A
2 (2) −∃xF x A
3 (3) F a A
3 (4) ∃xF x 3 EI
2,3 (5) ∃xF x & − ∃xF x 2,4 &I
2 (6) −F a 3–5 RAA
2 (7) ∀x − F x 6 UI
1,2 (8) ∀x − F x & − ∀x − F x 1,7 &I
1 (9) ∃xF x 2–8 RAA
□

2. Prove ∀x − F x a` −∃xF x

(a) 1 (1) ∀x − F x A
2 (2) ∃xF x A (RAA)
3 (3) F a A (EE)
1 (4) −F a 1 UE
1,3 (5) F a & − F a 3,4 &I
1,2 (6) F a & − F a 2,3–5 EE
1 (7) −∃xF x 2–6 RAA
□
(b) 1 (1) −∃xF x A
2 (2) F a A
2 (3) ∃xF x 2 EI
1,2 (4) ∃xF x & − ∃xF x 1,3 &I

1 (5) −F a 2–4 RAA


1 (6) ∀x − F x 5 UI
□

3. Prove ∃x − F x a` −∀xF x

(a) 1 (1) ∃x − F x A
2 (2) ∀xF x A (RAA)
3 (3) −F a A (EE)
2 (4) Fa 2 UE
2,3 (5) Fa& − Fa 3,4 &I
3 (6) −∀xF x 2–5 RAA
1 (7) −∀xF x 1,3–6 EE
□
(b) 1 (1) −∀xF x A
2 (2) −∃x − F x A
3 (3) −F a A
3 (4) ∃x − F x 3 EI
2,3 (5) ∃x − F x & − ∃x − F x 2,4 &I
2 (6) Fa 3–5 RAA
2 (7) ∀xF x 6 UI
1,2 (8) ∀xF x & − ∀xF x 1,7 &I
1 (9) ∃x − F x 2–8 RAA
□
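Note: the three duality (quantifier-negation) equivalences proved in exercises 1–3 can also be sanity-checked semantically. The Python sketch below is our own illustration, not a proof (it only inspects finite domains): it enumerates every interpretation of the predicate F over a small domain and confirms each biconditional.

```python
from itertools import product

def dualities_hold(domain_size):
    """Check the three duality equivalences on every interpretation of F
    over a finite domain of the given size."""
    domain = range(domain_size)
    for extension in product([True, False], repeat=domain_size):
        F = dict(zip(domain, extension))  # F's extension: which objects are F
        exists_F = any(F[x] for x in domain)
        forall_F = all(F[x] for x in domain)
        exists_not_F = any(not F[x] for x in domain)
        forall_not_F = all(not F[x] for x in domain)
        if exists_F != (not forall_not_F):    # 1: ExFx  -||- -Ax-Fx
            return False
        if forall_not_F != (not exists_F):    # 2: Ax-Fx -||- -ExFx
            return False
        if exists_not_F != (not forall_F):    # 3: Ex-Fx -||- -AxFx
            return False
    return True

print(dualities_hold(3))  # True: no counterexample on a 3-element domain
```

Passing the check on a finite domain is of course weaker than the proofs above, which establish the equivalences for every domain whatsoever.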

Solutions for Chapter 9


A
1. (a) p 6= a
(b) a = s
(c) Aar ∨ Apr
(d) Apr & Apt

2. (a) Plato was not The Stagirite.


(b) The Republic is not the Timaeus.
(c) Plato either did or did not write Republic.

3. (a) 1 (1) Aae A


2 (2) a=s A
1,2 (3) Ase 1,2 =E
(b) 1 (1) Aar → (a = p) A
2 (2) a 6= p A
1,2 (3) −Aar 1,2 MT

4. (a) 1 (1) F a & Ga A


2 (2) a=b A
1,2 (3) F a & Gb 1,2 =E
(b) 1 (1) F a → Ga A
2 (2) a=b A
1,2 (3) F a → Gb 1,2 =E
(c) 1 (1) ∀x(F ax → Gbx) A
2 (2) a=b A
1,2 (3) ∀x(F bx → Gbx) 1,2 =E
(d) 1 (1) F aa → (a 6= a) A
(2) a=a =I
1 (3) −F aa 1,2 MT

B
1. (a) Apr & ∀y(Ayr → (y = p))
(b) −(Aar & ∀y(Ayr → (y = a)))
This is equivalent to
−Aar ∨ −∀y(Ayr → (y = a))
and this is equivalent to
−Aar ∨ ∃y(Ayr & (y 6= a)).

2. (a) The author of the Republic was the author of the Timaeus.
(b) Plato was the author of the Republic and the Timaeus.

3. 1 (1) ∃x(P x & (x = l)) A


2 (2) P a & (a = l) A (EE)
2 (3) Pa 2 &E
2 (4) a=l 2 &E
2 (5) Pl 3,4 =E
1 (6) Pl 1,2–5 EE

1 (1) Pl A
(2) l=l =I
1 (3) P l & (l = l) 1,2 &I
1 (4) ∃x(P x & (x = l)) 3 EI

4. 1 (1) Kp A
2 (2) a 6= p A
3 (3) ∀x∀y((Kx & Ky) → (x = y)) A
3 (4) (Kp & Ka) → (p = a) 3 UE (× 2)
2,3 (5) −(Kp & Ka) 2,4 MT
2,3 (6) −Kp ∨ −Ka 5 deM
1,2,3 (7) −Ka 1,6 DS
