
TYPES OF LOGIC

Logic is primarily concerned with distinguishing correct reasoning from incorrect reasoning. It is most closely related to rhetoric, which also deals with the reasoning process. Rhetoric, however, unlike logic, is chiefly interested in the difference between persuasive reasoning and reasoning that is not persuasive. As we will soon discover, although persuasive reasoning is often correct, it is unfortunately quite common for reasoning that is persuasive to be incorrect. Moreover, reasoning that is correct may nevertheless not be very persuasive. Admittedly, a salesperson, whose primary concern probably lies with convincing his clients, might find rhetoric a more useful subject to study than logic. Most of us, however, are more interested in using reasoning to aid us in discovering truth and avoiding error. If we are, logic is undoubtedly an important subject for us to study.

There are several different ways of subdividing logic. First, there are two different types of reasoning processes and, as a result, two main branches of logic. Some reasoning processes are supposed to establish the claim being argued for with certainty (assuming the evidence is correct). Others are only supposed to establish the claim being argued for with a greater or lesser degree of likelihood. Deductive logic studies those reasoning processes in which the claim being reasoned to is supposed to follow with certainty from the evidence presented. Inductive logic, on the other hand, studies those reasoning processes in which the claim being reasoned to is only supposed to follow with likelihood from the evidence presented.

Besides this distinction between deductive logic and inductive logic, we can also distinguish between two different ways of doing logic: formal and informal. Formal logic always starts by translating the reasoning process from English into a symbolic language. It then manipulates those symbols in various ways to find out how good that reasoning process is. With informal logic, in contrast, no such translation process is involved. We simply examine and evaluate the reasoning process in ordinary English. Thus, it is possible to distinguish four different types of logic: (1) formal deductive logic, (2) informal deductive logic, (3) formal inductive logic, and (4) informal inductive logic.

One other distinction between different types of logic is also worth introducing here. We can distinguish between a kind of logic that is, perhaps, best called "standard logic," and a vast array of other kinds of logics. We'll collectively refer to all these other kinds of logics as "nonstandard logics." We can characterize standard logic, and any of the nonstandard logics we might want to examine, in terms of the basic assumptions it presupposes. Shortly we will consider what these assumptions are, at least for standard logic. For now, all we need to know is that we will be doing standard logic throughout this text.

By now it should be clear that logic has something to do with discovering truth. What, however, are these things that we call "true" and "false?" This is an old philosophical issue and one I at least briefly need to discuss. We say things like, "It is true that Bush is President" and "It's false that Quayle is President." What, however, is it that we are saying is true in the one case and false in the other? What, in other words, are the bearers of truth and falsity? Philosophers have proposed many different "answers" to this difficult philosophical question. Three of these are especially worth examining. Some philosophers have claimed that indicative sentences are the bearers of truth and falsity, while others have contended that propositions (i.e., the meanings of these sentences) are the bearers of truth and falsity. Finally, still others have suggested that statements are the bearers of truth and falsity. Before we can evaluate these views we first need to understand the difference between an indicative sentence, a proposition, and a statement. A sentence is always a part of a language and always consists in words. Thus, the sentence, "It is raining," is English and contains three words. Though it may mean the same

thing as "Es regnet," these are different sentences. For not only do they exist in different languages, one contains fewer words than the other. A proposition, on the other hand, is not a part of any language and doesn't contain words. Moreover, the same sentence is sometimes used to express two or more propositions. For example, the sentence, "The prospector didn't get any gold from a bank," could either express the proposition that he did not withdraw any gold from a financial institution, or the proposition that he did not find any gold at the edge of a river. On the other hand, two different sentences can sometimes express the same proposition. Thus, "John loves Mary," and "Mary is loved by John," though different sentences, express the same proposition.

A statement is a claim a person makes, regardless of how he makes it. Statements are different from both sentences and propositions. They are different from sentences because the same sentence sometimes expresses more than one statement (e.g., "The prospector didn't get any gold from a bank."), and because different sentences sometimes express the same statement (e.g., "John loves Mary," and "Mary is loved by John."). In these ways they resemble propositions. Statements, however, also differ from propositions in several respects. Sometimes we make statements without uttering any sentence, and so without expressing any proposition. Furthermore, sometimes a sentence that expresses only one proposition nevertheless expresses more than one statement. Thus, we use the sentence, "John loves Mary," to express many different statements, depending on whom we are referring to by "John" and "Mary." Usually, to understand a person's statement we must not only understand the meaning of his sentence, we also need to know when he said it and to whom he is referring. (This is the most important difference between statements and propositions.)
In a moment we will return to the philosophical question we have been talking about. We will try to explain why we believe statements are the bearers of truth and falsity, and not sentences or propositions. First, however, we need to return to another topic that was briefly set aside, namely, the assumptions made by standard logic. Of the fundamental assumptions made by standard logic, two stand out as especially important, viz., the law of the excluded middle and the law of non-contradiction. What, exactly, do these two laws say? The law of the excluded middle asserts that, whatever the bearers of truth and falsity are, every one is either true or false. The law of non-contradiction, in contrast, asserts that no bearers of truth and falsity are both true and false. While one might question these "laws," we are not going to do so. For they are the building blocks of standard logic, and, as we mentioned before, that is the logic we will be learning in this text.

Now that we have a better understanding of the assumptions on which standard logic rests, let's return to our philosophical question, "What are the bearers of truth and falsity?" Perhaps we can answer it. First, could sentences be the bearers of truth and falsity? The answer is that they could not be, because this view conflicts with the law of non-contradiction. To see that it conflicts with this law, consider the following case: Suppose there is a prospector who has recently returned from the bank of the Snake River, where he found some gold. He walked into his local financial institution where, although he deposited the gold he had, he didn't obtain any from the bank itself. Now consider the sentence "The prospector got some gold from a bank." Is this sentence true, or is it false? Evidently, we must say here that the sentence is simultaneously true and false. It is true because the prospector did get some gold from the bank of the Snake River.
However, it is also false, because he obtained no gold while he was at any financial institution. Yet this is exactly what the law of non-contradiction tells us cannot happen. Perhaps now you can see why some philosophers have suggested that propositions, instead of sentences, are the bearers of truth and falsity. For it is clear here that the sentence "The prospector got some gold from a bank" could mean two different things. The proponent of the view that a proposition is the bearer of truth and falsity will simply say that one of these meanings is true and not false, while the other is false and not true. Unfortunately, although the proponent of the propositional view can respond to our prospector example in this manner, there are other examples that show his view also conflicts with the law of non-contradiction. Imagine, for instance, two people, one in New York and the other in San Diego. Suppose both utter the sentence "It's raining," simultaneously. Finally, suppose that it is raining in New York but not in San Diego. If we ask how many propositions are involved here, we evidently must say one. (Surely the sentence, "It's raining," does not mean something different when said in New York than it means in San Diego.) If so, however, then it must be both true and false. Unfortunately, this violates the law of non-contradiction.

How does the view that statements are the bearers of truth and falsity help us here? To understand what claim a person is making when he utters the sentence, "It's raining," we need to know when and where that sentence was uttered. In other words, we need to know about the context of utterance. Given the way our language works, it is not even possible for us to utter the same sentence and make the same claim that the person in New York makes when he says, "It's raining," unless we are standing in approximately his vicinity at roughly the time he utters the sentence. Instead, to make this claim we need to say something like, "It's raining in New York," or "It was raining in New York yesterday." If you don't understand all of this, it really doesn't matter much. What does matter is that when we want to talk about the bearers of truth and falsity, we are going to call them "statements," instead of "sentences" or "propositions." Moreover, because we are doing standard logic, which presupposes the laws of the excluded middle and non-contradiction, we are committed to holding that every statement is either true (and if true, not false) or false (and if false, not true).

Strangely enough, however, in logic we do not ordinarily worry about whether a particular statement is true or false. (In fact, we often use the term "truth-value" when we want to speak of a statement's truth or falsity but don't care whether it is true or false.) Instead, we are usually concerned with whether the statement is logically true, logically false, or logically indeterminate. What do we mean when we say that a statement is "logically true?" What we mean is that it is true as a matter of logic alone. As we will see, it's best to view logic as a collection of methods. Sometimes, when we use one of these methods on a single statement, we get the result that the statement we are examining must be true. We refer to such statements as logically true.
As we will discover, the statement that either it's raining or it isn't raining provides an example. A statement is "logically false," on the other hand, when it is false as a matter of logic alone. The statement that it's both raining and it isn't is an example of a logically false statement. It cannot be true, and logic alone can show this. Finally, we call a statement "logically indeterminate" when logic alone cannot determine which of the two truth-values (true or false) that statement has. To be sure, the statement is either true or false. Logic alone is just not able to find out which it is. The statement that it is raining is an example. As we will see, when we come to a single statement the question we will usually be asking is, "Is it logically true, logically false, or logically indeterminate?"

There is only one other concept that applies to single statements that we will be interested in, and that is the concept of the negation of a statement. Every statement has a negation, and its negation is also a statement. Where S is any statement, the negation of S will be "It is not true that S." Thus, the negation of the statement that it is raining is the statement that it is not raining, while the negation of the statement that all men are mortal is that not all men are mortal, or in other words, that some men are not mortal.
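For readers who know a little programming, the three-way classification just described can be made concrete with a short sketch. The function below (the name `classify` and the whole setup are our own illustration, not part of the text's method) brute-forces every assignment of truth-values to a statement's atomic parts: if the statement comes out true under every assignment it is logically true, if false under every assignment it is logically false, and otherwise it is logically indeterminate.

```python
from itertools import product

def classify(formula, atoms):
    """Classify a truth-functional statement by checking every possible
    assignment of truth-values to its atomic statements."""
    values = [formula(dict(zip(atoms, combo)))
              for combo in product([True, False], repeat=len(atoms))]
    if all(values):
        return "logically true"
    if not any(values):
        return "logically false"
    return "logically indeterminate"

# Let R stand for "it is raining."
print(classify(lambda v: v["R"] or not v["R"], ["R"]))   # logically true
print(classify(lambda v: v["R"] and not v["R"], ["R"]))  # logically false
print(classify(lambda v: v["R"], ["R"]))                 # logically indeterminate
```

Note that this mechanical check only works for statements built up truth-functionally from atomic parts; it is meant purely as an aid to intuition.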

SETS OF STATEMENTS

It is sometimes useful to consider several claims as a unified whole. When we do, we are considering a set of statements. What, however, are these things we call "sets?" What are "sets of statements?" Finally, what are the terms we apply to sets of statements?

Sets in logic and mathematics and sets in everyday life differ in many important respects. First, sets in daily life at least frequently have a color. (Thus, we can intelligibly say things like, "My set of dishes is beige.") Logical and mathematical sets, in contrast, have no color. Second, sets in daily life have a spatial location. (We can, for example, ask, "Where is your set of dishes?") In contrast, sets in logic and mathematics have no spatial location. Third, sets in daily life can undergo changes in their membership. If we break a plate we can go to the store, buy another one, and still have the same set of dishes we had before. Logical and mathematical sets, however, cannot. A set in logic and math is completely determined by its members. Finally, sets in daily life must have at least several members. (Imagine asking to see our set of dishes and being told that it's in the cupboard. Yet when you look in the cupboard, all you see is one lonely plate. Would you be happy calling that a set of dishes?) In logic and math, however, a set can have as few as one member. In fact, there is a very special set called "the empty set" which has no members.

In this chapter we are going to be concerned with sets of statements. A set of statements is just some statements that we have decided to view together as one unit. There might be as few as one statement, or as many as you like, in a particular set. To show that we are considering a set of statements, we simply surround those sentences that express the statements we wish to include in the set with curly braces. For example, we represent the set consisting in

the statements that John loves Mary, Mary loves Bill, and Bill loves John, as: {John loves Mary; Mary loves Bill; Bill loves John}. Once we have specified exactly what set of statements we are talking about, the problem of evaluating that set can begin. Unlike single statements, we never evaluate sets of statements as true or false, or as logically true, logically false, or logically indeterminate. Instead, the only terms we ever apply are "consistent" and "inconsistent." Every set of statements will be either consistent or inconsistent, and no set will ever be both.

What do we mean when we say that a set of statements is consistent? What we mean, and all we mean, is that it is possible for all of the statements in that set to be true together. A set is inconsistent, on the other hand, if it is impossible for all of the statements in the set to be true together. The set {John loves Mary; Mary loves Bill; Bill loves John} is an example of a consistent set of statements, since it is possible for all three of the statements in the set to be true together. (Note that in saying that this set is consistent we are not saying that the statements in the set are true.) On the other hand, the set {John is taller than Mary; Mary is taller than Bill; Bill is taller than John} is inconsistent, since all of the statements in this set cannot be true simultaneously.

How is this notion of a set of statements, and the question whether a given set is consistent or inconsistent, useful? We sometimes want to know if all of the claims someone has made even could be true. If we discover that they could not all be true together, then we know that at least one claim he has made is false. In other words, if we discover that the set of statements someone has made is inconsistent, we know he has made a mistake somewhere. This might be worth discovering.
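The taller-than example can be made vivid with a small sketch. The idea: a set of "x is taller than y" statements is consistent just in case some ordering of the people by height makes every statement in the set true. (The function `consistent` below is our own special-purpose illustration for taller-than claims only, not a general consistency tester.)

```python
from itertools import permutations

def consistent(constraints, people):
    """A set of 'x is taller than y' claims is consistent iff some
    ordering of the people by height satisfies every claim.
    Each constraint (x, y) means 'x is taller than y'."""
    return any(all(order.index(x) < order.index(y) for x, y in constraints)
               for order in permutations(people))

people = ["John", "Mary", "Bill"]
# {John is taller than Mary; Mary is taller than Bill} -- consistent
print(consistent([("John", "Mary"), ("Mary", "Bill")], people))
# Add "Bill is taller than John" and no ordering works -- inconsistent
print(consistent([("John", "Mary"), ("Mary", "Bill"), ("Bill", "John")], people))
```

The second call fails because the three claims form a cycle: no one can be taller than someone who is (indirectly) taller than he is.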

ARGUMENTS

Although single statements and sets of statements are important objects of study, by far the most important entities that logicians study are arguments. As a working example of an argument, consider the following: Since all men are mortal, and Socrates is a man, it follows that Socrates is mortal. This argument resembles every other argument in one very important respect. Some statements are presented as evidence in an attempt to establish another statement. (Thus, the claims that all men are mortal and that Socrates is a man are presented to establish the claim that Socrates is mortal.)

ARGUMENTS AND EXPLANATIONS

Arguments are close relatives of, but are nonetheless distinct from, explanations. We argue to convince people of a claim. However much or little someone says, an argument is being presented if and only if the individual making the claims is trying to convince his audience of something. On the other hand, a person is providing an explanation when he is attempting to account for something he thinks his audience takes to be a fact. The author is not trying to convince them of this "fact." He takes it as obvious that they agree with him to this extent. Yet he also assumes that they find this "fact" surprising, and he wants to provide them with an understanding of why it has occurred. To see the difference between arguments and explanations, compare the following:

ARGUMENT: Since all men are mortal and Socrates is a man, Socrates is mortal.

EXPLANATION: Because he drank hemlock, and hemlock is a poison, Socrates died.

Although these passages resemble each other quite closely, you should view the second as an explanation, not an argument. In it the author is not trying to convince us that Socrates died. Instead, he is assuming that we know this but feel puzzled about why Socrates died.

Admittedly, the distinction between an argument and an explanation is not an easy one. Indeed, there may be occasions when it is virtually impossible for us to tell which we are dealing with. Yet it is important nevertheless, if only because we treat them quite differently in logic.

PREMISES AND CONCLUSIONS

Once we have decided we are dealing with an argument and not an explanation, our job as logicians can begin. Our first task in analyzing any argument is to figure out exactly how it goes. To do this, we need to distinguish its premises from its conclusion. In any argument we call the single statement being argued for "the conclusion of the argument." On the other hand, we call each statement that provides evidence to establish the conclusion "a premise of the argument." Every argument must have at least one premise and exactly one conclusion. The conclusion is always the claim being argued for, and each premise is a statement that is supposed to contribute something toward establishing that conclusion. Together, the total group of premises (i.e., the set of statements containing all the argument's premises) is the arguer's evidence for that conclusion.

Although the person arguing usually formulates the conclusion of his argument after he has stated its premises, this is not mandatory. Often enough the arguer begins by stating his conclusion first. So instead of saying, "Since all men are mortal and Socrates is a man, it follows that Socrates is mortal," he might say, "Socrates is mortal, because all men are mortal and he is a man." He might even sandwich the conclusion between premises, expressing the argument by saying, "Since all men are mortal, Socrates is mortal, because he is a man." Ordinarily the author of an argument uses words and phrases that help us distinguish the premises of his argument from its conclusion.
Words like "if," "for," "as," "since," and "because" commonly function as premise-indicators, and serve to inform us that the claim immediately following them is a premise of the argument. Words like "hence," "thus," "so," and "therefore" are often used as conclusion-indicators, and serve to inform us that the conclusion of the argument will be presented next. In the example we have been considering -- "Since all men are mortal, and Socrates is a man, it follows that Socrates is mortal." -- the word "since" is functioning as a premise-indicator, while the expression "it follows that" is serving as a conclusion-indicator.

In representing this argument, instead of including premise-indicators and conclusion-indicators, the normal procedure is simply to list the premises above the conclusion, and separate them with a line. Thus, however the argument is expressed in English, we formulate it as follows:

All men are mortal.
Socrates is a man.
__________________
Socrates is mortal.

Alternately, in this text, we will represent the argument on one line by separating its premises with semicolons, surrounding them with curly braces, and then writing '/' followed by the conclusion. So we will write the argument as follows: {All men are mortal; Socrates is a man}/Socrates is mortal.

The comments we have made so far may make it seem that locating the argument and distinguishing its premises from its conclusion is not an especially difficult task. It can, however, be extremely challenging. Not only do people sometimes not use any premise or conclusion indicators, they occasionally fail to state premises, or even the conclusion of the argument they are presenting. (We call these "suppressed premises" and "suppressed conclusions.") Thus, instead of saying, "Since all men are mortal, and Socrates is a man, it follows that Socrates is mortal," someone might simply say, "Since all men are mortal, Socrates is mortal."
When representing the argument we want to include any suppressed premises or conclusions in our formulation of it. The principle to use in deciding whether to include a particular premise as a suppressed premise is: We should include it if its inclusion would make the argument better than it would otherwise be and it seems likely that the author of the argument intended it to be a part of his argument. (This principle of charity -- try to make the other fellow's argument as good as you can -- is only common courtesy.) Another factor that frequently makes the task of locating an argument and identifying its premises and conclusion more difficult is that people often present several interrelated arguments in a single passage. When this happens, we not only need to understand what those arguments are, we also need to see exactly

how the various arguments relate to each other. The section of this chapter on Diagramming explains a technique you can use to represent these sorts of passages.

EVALUATING ARGUMENTS

Once we have clearly formulated the argument, we can begin evaluating it. To decide how to evaluate it, however, we first need to know whether the argument in question is inductive or deductive. This is so because we use different terms in evaluating deductive arguments than we use in evaluating inductive arguments. If the argument is deductive (i.e., if the arguer thinks that the conclusion of his argument follows with certainty from its premises), the evaluative terms used are "valid" and "invalid," or "sound" and "unsound." If the argument is inductive (i.e., if the arguer thinks that the conclusion of his argument follows only with likelihood from its premises), the terms used are "stronger" and "weaker."

Of all the terms we have discussed in this chapter, perhaps "valid" and "invalid" are the most important. What do we mean when we say that an argument is valid? There are two ways of defining this concept:

1) An argument is valid =DF. It cannot have all true premises and a false conclusion.

2) An argument is valid =DF. The set of statements consisting in the argument's premises and the negation of its conclusion is inconsistent.

To use the first definition, it is easiest to simply draw a little box in front of each statement in the argument. Place a T in the box directly left of each premise and an F left of the conclusion. Then decide whether the combination of all true (=T) premises and a false (=F) conclusion is possible. If it is not possible the argument is valid, while if it is possible the argument is invalid. Let's use this definition on our Socrates example to find out whether it is valid or invalid.

[T] All men are mortal.
[T] Socrates is a man.
[F] Socrates is mortal.

We see here that we cannot imagine it true that all men are mortal, and true that Socrates is a man, but false that Socrates is mortal. The combination of T's and F's in the box is not possible. Therefore, our definition tells us that this argument is valid.

Now let's consider our second definition of valid: an argument is valid =DF. The set of statements consisting in the argument's premises and the negation of its conclusion is inconsistent. To use this definition, we first must convert the argument into a set of statements. The set we need to consider consists in the premises of the argument and the negation of its conclusion. The set will be: {All men are mortal; Socrates is a man; Socrates is not mortal}. Even a brief glance at this set of statements should suffice to show that it is inconsistent. Therefore, the definition tells us the argument above is valid.

Usually when we say that an argument is valid this amounts to saying that it has a good structure. (One thing it does not say is that the premises of the argument are true. A valid argument can have false premises.) There are, however, examples of arguments we would in daily life evaluate as structurally defective, but which our definitions commit us to saying are valid. One such example is the following:

It's raining.
It's not raining.
_______________
The moon is made of green cheese.

Although most of us would count this a terrible argument, since its conclusion has nothing to do with its premises, it is nevertheless valid on both of our definitions of "valid." It cannot have all true premises and a false conclusion (since both of its premises cannot be true), and so on our first definition it is valid. Moreover, since the set consisting in the argument's premises and the negation of its conclusion, viz., {It's raining; It's not raining; The moon is not made of green cheese}, is inconsistent, the argument is also valid on our second definition. So although the word "valid" comes close to meaning good structure, there are cases where a valid argument does not have what we would normally think of as good structure.

The concept of soundness comes much closer than validity to what we would ordinarily think of as a good (deductive) argument. We call an argument "sound" if it is valid and has all true premises. On the other hand, an argument is unsound just in case it is not sound. So every invalid argument is unsound, and every argument that has any false premises is also unsound. Although the concept of soundness comes much closer to what we usually mean when we say an argument is good, it isn't a very important notion in logic. This is so for a very simple reason. Often, to tell whether an argument is sound, we must decide if its premises are really true. This, however, involves looking at the world, and that is not the business of the logician.

As we mentioned before, we evaluate inductive arguments differently than deductive ones. Instead of using the terms "valid" and "invalid," or "sound" and "unsound," we evaluate inductive arguments as "stronger" or "weaker." As these terms are clearly relational in nature, and imply a comparison between two arguments, we must here be comparing arguments with each other.
When we say one argument is stronger than another, we mean that its conclusion is more likely to be true, given its premises, than the other argument's conclusion is, given its premises.
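For truth-functional arguments, the first definition of validity can be checked mechanically: enumerate every assignment of truth-values to the atomic statements and look for one that makes all the premises true while the conclusion is false. The sketch below is our own illustration (the function name `valid` and the encoding are hypothetical, not the text's official method); it confirms the green-cheese argument is valid, and that a plainly invalid argument fails the test.

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """Definition 1: an argument is valid iff no assignment of
    truth-values makes every premise true and the conclusion false.
    (Equivalently, by definition 2, the premises plus the negation
    of the conclusion form an inconsistent set.)"""
    for combo in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, combo))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found the forbidden all-T-premises, F-conclusion row
    return True

# R = "it's raining", M = "the moon is made of green cheese"
# {It's raining; It's not raining}/The moon is made of green cheese
print(valid([lambda v: v["R"], lambda v: not v["R"]],
            lambda v: v["M"], ["R", "M"]))  # True: valid, despite its oddity
# {It's raining}/The moon is made of green cheese
print(valid([lambda v: v["R"]],
            lambda v: v["M"], ["R", "M"]))  # False: invalid
```

The first result illustrates exactly the point made above: since the premises can never both be true, the forbidden combination never arises, and the definition counts the argument as valid.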

PAIRS OF STATEMENTS

There is only one other concept that we will occasionally be using. This concept applies to pairs of statements. Two statements are said to be "logically equivalent" if and only if they must have the same truth-values. Thus, the statement that it is raining is logically equivalent to the statement that it isn't not raining, because one of these cannot be true while the other is false. Although these are not the only concepts used in logic, they are at least the most important ones. We will introduce other notions when we need them.
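For truth-functional statements, this definition too can be checked by brute force: two statements are logically equivalent just in case they agree in truth-value under every assignment. (As before, the function `equivalent` is our own illustrative sketch.)

```python
from itertools import product

def equivalent(f, g, atoms):
    """Two truth-functional statements are logically equivalent iff
    they have the same truth-value under every assignment."""
    for combo in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, combo))
        if f(v) != g(v):
            return False
    return True

# "It is raining" vs. "It isn't not raining" (double negation)
print(equivalent(lambda v: v["R"], lambda v: not (not v["R"]), ["R"]))  # True
# "It is raining" vs. "It is not raining"
print(equivalent(lambda v: v["R"], lambda v: not v["R"], ["R"]))        # False
```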

DIAGRAMMING

Frequently passages contain multiple arguments. When this happens we need to know how the various arguments in the passage interrelate. In this section we will learn one method that will help us understand how these arguments intertwine. Let's begin with a very simple passage so that we can illustrate several salient features of diagramming. Consider the following example:

I saw Bill looking at Sandy's paper during the exam. He must have cheated, because they got the same questions wrong. We cannot tolerate cheating. So someone should discipline him.

This passage contains five statements that function as either premises or conclusions. The diagramming process begins by simply identifying and numbering each of these claims. Doing this we get:

1. I saw Bill looking at Sandy's paper during the exam.
2. Bill must have cheated.
3. Bill and Sandy got the same questions wrong.
4. We cannot tolerate cheating.
5. Someone should discipline Bill.

What is the main point of this passage? Clearly, it is trying to establish that someone should discipline Bill. Although the passage contains more than one argument, this is the main conclusion of its main argument. We'll represent this by writing a "5" down and drawing an arrow to it. Thus, we get:

 |
 v
 5

Now why does the arguer think that someone should discipline Bill? He believes this because he believes that Bill cheated (which we have identified as claim 2), and because he thinks that we cannot tolerate cheating (which we have labeled claim 4). Both these claims combined are evidently required to get to the conclusion in question. So let's put them both above the arrow and draw a line under them. In this way we can represent the fact that the author of the argument we are diagramming believes both these premises are needed to obtain this conclusion.

2  4
____
 |
 v
 5

So far so good, but we still need to worry about claims 1 and 3. What role do they play in the passage? One thing at least is clear. Claim 3 must be a premise, since it is preceded by the word "because," which we know is a premise-indicator. However, which claim is claim 3 supposed to be supporting? Surely it is supposed to be supporting claim 2. The arguer thinks Bill must have cheated because Bill and Sandy got the same questions wrong. Claim 3 alone, however, is obviously not sufficient to establish claim 2, since Sandy might have been the one who was cheating, instead of Bill. Why does the author of the argument think it was Bill? Obviously he believes this because he saw Bill looking at Sandy's paper during the exam. So, claims 1 and 3 together are intended to support claim 2. We will represent this by putting 1 and 3 above 2, drawing a line under both of them, and then drawing an arrow from them to 2. The completed diagram will read:

1  3
____
 |
 v
 2  4
 ____
  |
  v
  5

Let's try a different example.
Consider the following passage: Sandy would not have cheated on the test, because she already knew the material, as she amply proved by tutoring other students in it last week. Moreover, she didn't need a good grade on it, since she already had a guarantee of an A in the course. This passage also contains five claims. They are:

1 Sandy would not have cheated on the test.
2 She already knew the material.
3 She was tutoring other students in it last week.
4 She didn't need a good grade on it.
5 She already had a guarantee of an A in the course.

Now what is the main claim the arguer is attempting to establish in this passage? Evidently it is that Sandy would not have cheated on the test. (Unlike the last example, where the main conclusion was presented at the end of the passage, here it comes at the very beginning.) This time we should put "1" at the bottom of our diagram. But it would be a mistake to draw a single arrow to it because the arguer has two independent reasons for thinking that claim 1 is true. First, she wouldn't have cheated because she already knew the material. Second, she would not have cheated since she didn't need a good grade on it. What we will do here, then, is to draw two separate arrows, one leading from 2 to 1, and the other going from 4 to 1. In this way we can illustrate how the passage in question has two distinct main arguments that just happen to have the same conclusion.

   2   4
    \ /
     v
     1

All we need to do now is to figure out the role played, in the passage, by claims 3 and 5. This, however, is easy. Claim 3 is preceded by the premise-indicator "as she amply proved by" and is clearly intended to support claim 2, while claim 5 is supposed to support claim 4. Thus, the completed diagram should read:

   3   5
   |   |
   v   v
   2   4
    \ /
     v
     1

Let's consider one more example:

Either Jill went to the beach or she went to the movie. However, she never goes to the beach on Sundays. Moreover, the only movie she hasn't seen is at the Strand Cinema. So she must be there.

If we number these claims in the order in which they occur, we get:

1 Either Jill went to the beach or she went to the movie.
2 Jill never goes to the beach on Sundays.
3 The only movie she hasn't seen is at the Strand Cinema.
4 Jill must be at the Strand Cinema.

Now clearly, the claim the person presenting the argument is attempting to establish is that Jill must be at the Strand Cinema (i.e., claim 4). Why does this person think that she must be at the Strand? Because that is the only movie she hasn't seen (viz., claim 3). But so what? If the arguer didn't think Jill went to a movie at all, or thought she might go to movies she had already seen, the claim that Jill must be at the Strand wouldn't be very plausible. In fact, these two added points are suppressed premises, and we really should include them in our representation of the argument. Let's label them "5" and "6." To make it clear, however, that they are suppressed premises we will place brackets around these numbers. Doing this we get:

[5] Jill went to the movie.
[6] Jill doesn't go to movies she has seen before.

From the combination of claims 3, [5], and [6], the arguer thinks claim 4 follows. Let's now represent what we have learned so far.

   3   [5]   [6]
   -------------
         |
         v
         4

This is all clear enough, but how then do claims 1 and 2 fit in? Surely they are designed to establish claim [5]. In other words, claim [5] is not only a suppressed premise of the main argument; it is also the suppressed conclusion of a secondary argument. At first glance, this argument proceeds: "Either Jill went to the beach, or she went to the movie. However, she never goes to the beach on Sundays. So, she went to the movie." There is a problem with this interpretation, however. The argument as thus formulated is invalid. Unless today is Sunday, claims 1 and 2 don't provide good support for claim [5]. It seems from the context, however, that the arguer is assuming this. So let's add a seventh claim to our list as a suppressed premise. The claim will simply read:

[7] Today is Sunday.
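Before finishing the diagram, it is worth checking that claims 1 and 2, together with the suppressed premise [7], really do entail [5]. A brute-force truth-table check can confirm this. The sketch below is our own illustration; the encoding of each claim as a Python function is just one convenient choice and is not part of the diagramming method itself.

```python
from itertools import product

def entails(premises, conclusion, n_vars):
    """True if every assignment making all premises true also makes the conclusion true."""
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # found a counterexample assignment
    return True

# Variables: beach = "Jill went to the beach", movie = "Jill went to the movie",
# sunday = "Today is Sunday".
premises = [
    lambda beach, movie, sunday: beach or movie,           # claim 1
    lambda beach, movie, sunday: not sunday or not beach,  # claim 2: no beach on Sundays
    lambda beach, movie, sunday: sunday,                   # suppressed premise [7]
]
conclusion = lambda beach, movie, sunday: movie            # suppressed claim [5]

print(entails(premises, conclusion, 3))  # -> True
```

Dropping the third premise (i.e., leaving [7] suppressed and unstated) makes the same check return False, which is exactly the invalidity worry raised above.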

We can then finish diagramming the argument as follows:

   1   2   [7]
   -----------
        |
        v
   3   [5]   [6]
   -------------
         |
         v
         4

Although the diagramming method we have been learning may be time consuming, and may even seem like a waste of time, it is not. Not only is it frequently useful in allowing us to find flaws in other people's arguments, it sometimes even helps us understand exactly what they are saying.
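A finished diagram can also be captured in a small data structure, which makes larger diagrams easier to manipulate. Here is a minimal sketch using the claim numbers from the cheating example earlier in this section; the representation itself is just one convenient choice, not part of the method.

```python
# Each entry maps a conclusion to the list of premise groups supporting it.
# Premises inside one group work together (one arrow); separate groups would
# represent independent arrows to the same conclusion.
diagram = {
    2: [[1, 3]],   # claims 1 and 3 jointly support claim 2
    5: [[2, 4]],   # claims 2 and 4 jointly support claim 5
}

def ultimate_conclusion(diagram):
    """The claim that appears at the bottom of the diagram: it supports nothing else."""
    supporting = {p for groups in diagram.values() for group in groups for p in group}
    candidates = [c for c in diagram if c not in supporting]
    return candidates[0] if candidates else None

print(ultimate_conclusion(diagram))  # -> 5
```

The two-independent-arrows pattern from the Sandy example would simply be `{1: [[2], [4]], ...}`: two premise groups, one per arrow.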

QUESTIONS (true/false)

1. Deductive logic studies those processes of reasoning in which the claims we are reasoning about follow with certainty from the evidence presented.
2. Rhetoric studies the principles of correct and incorrect reasoning.
3. Standard logic presupposes both the laws of non-contradiction and of the excluded middle.
4. The law of non-contradiction asserts that every bearer of truth and falsity (i.e., every statement) is either true or false.
5. The view that sentences are the bearers of truth and falsity conflicts with the law of non-contradiction.
6. The negation of the statement that some cows are brown is that some cows are not brown.
7. If a statement is logically true, then its negation will be logically false.
8. The following set of statements is consistent: {Bill was at the party if and only if Sarah wasn't; Sarah was at the party; Bill and Tom were both at the party.}
9. Every set of statements containing a statement that is logically false will be inconsistent.
10. Suppose that a set contains exactly two statements, one of which is logically true, and the other logically indeterminate: The set must be consistent.
11. Every set of statements containing exactly two logically indeterminate statements must be a consistent set.
12. Every argument has at least one premise and exactly one conclusion.
13. "McDuff isn't doing this tutorial because he's flying to Egypt for a LONG vacation." This is an example of an inductive argument.
14. "Since the sun always rises in the east, we'll be able to figure out where we are in the morning." This is an inductive argument.
15. All valid arguments are sound.
16. All sound arguments are valid.
17. Every sound argument has a true conclusion.
18. The following statements are logically equivalent: 1) Bush is President and Cheney is Vice President. 2) Cheney is Vice President and Bush is President.
19. The following statements are logically equivalent: 1) Bush is President. 2) Cheney is Vice President.

PROBLEMS

1. Instructions: Determine whether each of the following passages contains an argument or an explanation. If it contains an argument, add any suppressed premises or conclusions that are needed, identify its conclusion, and then state whether the argument is inductive or deductive.

A. The price of gold should rise. Worldwide production is down and the Chinese are purchasing more of it than they have in the past.
B. Bill didn't go to the dance because he had a broken leg at the time.
C. Either the butler or the maid committed the crime. But the butler couldn't have done it.
D. The teacher gave everyone in the class an A. So Mildred must have gotten one too.

E. Of course it was a well-acted movie with a weak plot. It was a Golan-Globus production.

2. Instructions: Diagram the following passages, each of which contains more than one argument.

A. 1Dick saw Spot if and only if Jane didn't. However, 2Dick saw Spot only if Spot used a fire hydrant. 3Spot didn't use a fire hydrant. 4He used the carpet. 5So Jane saw Spot. 6But if Jane saw Spot, she punished him. Moreover, 7if Spot used the carpet then Jane punished him. 8So Jane punished Spot.

B. 1Dracula must be a bat because 2he is a vampire and 3all vampires are bats. Moreover, 4he must be hungry because 5he has not eaten in three days, and 6anyone who has not eaten in three days is hungry. 7If Dracula is a hungry bat, tourists will wake up drained in the morning if they stay the night. 8The tourists are idiots because 9they are staying the night. 10So they are going to wake up drained in the morning. Unfortunately, 11anyone who wakes up drained in the morning will stay forever.

C. 1Bloodless Charity is afraid of vampires because 2she's a hemophiliac and 3all hemophiliacs are afraid of vampires. But, 4all vampires are bats. 5So she must be afraid of bats. 6Don't expect her to go in the cave. 7There are bats in it.

D. 1MADD gets mad at any organization that advocates the consumption of alcoholic beverages. But 2beer is an alcoholic beverage, and 3SETA advocates the consumption of beer over milk. 4So MADD gets mad at SETA. 5According to SETA, beer is less harmful to the health of the drinker than milk, and 6people ought to care about their health when selecting what to eat and drink. Moreover, 7milk harms other animals--namely cows--whereas beer does not, and 8SETA maintains that we ought to care about the health of other animals when selecting what to eat and drink. 9MADD won't get any money of mine because 10no organization that gets mad at SETA will get my money.

E. 1Everything old Mack Schnell did, he did in a hurry.
2One night he drove his car on Hairpin Alley. 3He must have driven it in a hurry. 4But those who drive their cars in a hurry on Hairpin Alley are not only dead meat, that isn't moot, they're fools to boot. 5So Mack Schnell died a fool. Unfortunately, 6dead fools, I'm told, are headed for hell. 7From this I surmise that old Mack Schnell is bound for hell. 8But my guess is that if he ever gets there, he's sure to get there in one quick hurry.

F. 1Rex is a miserable, pathetic, unloved dog. 2All of those at the party who were children and wanted donuts got them. 3Rex wanted donuts, and 4he was at the party, but 5he didn't get any. 6So Rex wasn't a child. 7Besides Momma Lina and the kids, the only one at the party was a dog. 8Rex must have been a dog because 9he sure wasn't Momma Lina. 10Dogs that don't get donuts are miserable, pathetic, and unloved.

G. 1The United States should forcibly remove Hussein from office because 2he is a cruel dictator and 3he is a menace not only to his neighbors and to other nations, but also, 4he is a clear and present danger to the United States. 5He used chemical weapons on the Kurds in Northern Iraq when they rose up against his dictatorship, and 6he had many members of his own parliament, some of whom were completely innocent, summarily shot when there was an attempted coup against him. 7He attacked other nations, notably Iran and Kuwait, and this shows that 3he is a menace not only to his neighbors but to other nations as well. Moreover, 8he used chemical and/or biological weapons during the Iran/Iraq War, and this indicates that 9he would not hesitate to use WMD on the United States if he has the opportunity. Also, 10he was attempting to develop nuclear weapons before the First Gulf War, and 11there is no reason to believe that he has stopped attempting to develop such weapons since then.
In fact, 12there is reason to believe that he is still pursuing this policy because 13he attempted to purchase aluminum tubes from Niger, and 14these tubes could only be used to make nuclear weapons. 15If we allow Hussein to continue his WMD programs there is no doubt that he will eventually have the opportunity to use them on the United States. 16For even if he cannot deliver the weapons himself, he has contacts with members of Al-Qaida, and he would not hesitate to provide members of


that organization with these weapons, which they surely would use. 17Eliminating Hussein will also be beneficial to the people of Iraq and it will allow us to transfer military bases from Saudi Arabia, where they engender local antagonism, to Iraq. 18A United States attack will also be cost effective because we can use Iraqi oil to pay for our war expenses.

3. Instructions: Determine who committed the crime, how it was done, and what the motive was.

THE SITUATION

Lord Mumbleton has been found dead at his desk in his study; the attending physician diagnoses arsenic poisoning. Two empty glasses of wine are on a tray on his desk, along with a half-eaten crumb cake. The only people on the estate were Lord and Lady Mumbleton, Doxy the maid, Flo Main the cook, Stodgson the butler, and Shiftless the chauffeur. The inspector has just told you that there are two entrances to Lord Mumbleton's study; one door leads to his and Lady M.'s bedroom and was locked from the study side, while the other door leads into the hallway. A large window looks out onto the garden. Because Scotland Yard is baffled as to motive, means, and opportunity, your assistance is requested. Under cross-examination the following facts become apparent:

THE EVIDENCE

1. Either Lady M. was locked in her bedroom by Lord M. to keep her from spying on him, or else she stole the butler's keys.
2. Only Lord M. eats crumb cake.
3. Either Lady M. is lying, or Lord M. hasn't seen his lawyer in years.
4. Flo Main killed Lord M. only if Doxy is her natural daughter or Lord M. made improper suggestions about her tarts.
5. Shiftless was walking in the garden at the time and he saw Stodgson enter the study.
6. If Lady M. went to a convent, she would never lie.
7. If the diamond necklace he gave her was a fake, Doxy killed Lord M. in a fit of pique.
8. Last night, Shiftless heard Doxy and Lord M. giggle about what Flo Main could do with her tarts.
9. Stodgson poisoned Lord M. if and only if Lord M. gave Stodgson two weeks' notice.
10. Doxy poisoned the cakes only if she helped bake them.
11. Lord M. didn't damage the 1947 Daimler because he canceled all of his insurance to save money.
12. Stodgson swears that his keys are always in his possession and that Doxy had a private meeting with Lord M. every night.
13. The cook is very proud of her work and never lets anyone help her bake.
14. Stodgson entered the study through the hallway and saw Lord M. open the bottle of wine and pour out two glasses of it.
15. Lord M. paid three thousand pounds for Doxy's necklace.
16. Lady M. went to school in a convent in Switzerland.
17. Doxy's natural mother is a pawnbroker in the village.
18. Either Stodgson poisoned the wine or the arsenic was in the crumb cake.
19. Shiftless killed him only if Lord M. damaged the 1947 Daimler.
20. Lord M. gave Stodgson two weeks' notice only if Lord M. had a new will drawn up at his lawyer's office.
21. Unless there is a lot of insurance money, Lady M. didn't kill her husband; but she wanted to.
22. The entire household heard Lord M. and Doxy giggling last night.

TYPES OF DISPUTES

Despite how we feel about them, disputes are a reality. Although there are occasions when they are undoubtedly both important and unavoidable, often we can eliminate them, reduce them, or at least significantly clarify them. Among other things, in this chapter we will provide some suggestions about how to minimize disputes, and how to handle various kinds of disputes once they arise. We will also discuss several important points about definitions and language, and the role definitions play in disputes.

DISPUTES IN BELIEF AND DISPUTES IN ATTITUDE

One way of classifying disputes is by determining whether they involve disagreements in belief or disagreements in attitude. (Roughly, a disagreement in belief is a disagreement about the way the world is, while a disagreement in attitude is a disagreement about how we feel about things.) Although some disputes involve disagreements in both belief and attitude, others involve disagreements in belief only, and still others involve disagreements in attitude only. When Jones says, "Unfortunately, Graham Chapman died of cancer," and Smith retorts, "No, he died of a heart attack, and besides, who cares?" clearly Jones and Smith are disagreeing in both belief and attitude. However, when Jones says, "Unfortunately, Graham Chapman died of cancer," and Smith responds, "No, he didn't, thank goodness," it is obvious that the two are disagreeing in belief only. Finally, if Jones says, "Unfortunately, Graham Chapman died of cancer," and Smith counters with, "I know, but so what?" the two are apparently disagreeing in attitude only. Disputes in attitude arise largely because of the use of emotionally charged words. Instead of using a word like "government official," which is emotionally neutral, we'll use the term "public servant," which expresses a favorable attitude toward a government official. Alternately, we'll use "bureaucrat," which expresses an unfavorable attitude toward the same individual.
Although it might be rhetorically effective to use emotionally charged words, when we are trying to be logical it is better to use terms with less emotive impact. By doing this we can often curtail disputes in attitude. Besides this, it is wise to remember not only that we frequently do have different attitudes toward things, but also that we are not contradicting each other when we do. When, for example, Jones says, "I like X," and Smith responds, "I dislike X," the two are not contradicting each other, since both of their claims can be true.

VERBAL DISPUTES

Let's call the disputes in belief we have been discussing "genuine disputes in belief." We can contrast them with another type of dispute in belief, which we will identify as "verbal disputes." Verbal disputes often arise when the disputants simply mean something different by a particular word or phrase. Unfortunately, however, each person fails to recognize that the other simply means something else. Thus, suppose Jones says, "The prospector got some gold from the bank," meaning that he found some gold at the edge of a river, while Smith responds, "No. He never goes to banks," by which he means that the prospector never visits financial institutions. When this type of verbal dispute in belief arises it does so because a word or phrase is ambiguous. (A word or phrase is "ambiguous" when it has two or more distinct meanings. Words like "bar," "bank," and "man" are ambiguous.) Once we have had an opportunity to examine lexical definitions we will see how to resolve these sorts of verbal disputes. A second type of verbal dispute arises when a word or phrase is vague. Words and phrases are "vague" when there are borderline cases where it is impossible to decide whether they apply, because their meanings do not have sharp boundaries. Words like "mountain," "tall," and even "bachelor" are vague.
(To see this, ask yourself how high a mound of earth must be to be a mountain, or how old an unmarried male needs to be before he is a bachelor.) After we have discussed precising definitions, we will briefly examine one way to avoid these sorts of verbal disputes. Once we begin noticing different types of disputes we are apt to misconstrue a certain type of genuine dispute as a verbal dispute, or as a dispute in attitude. This kind of dispute often involves evaluative terms. Suppose, for example, Jones claims that Robinson is a good father because he works hard for his

family and tries to provide them with what they need financially. Smith, on the other hand, disagrees because Robinson pays little attention to their emotional needs. In these sorts of cases the two disputants are using different sets of criteria. (In this instance, Jones' criteria for being a good father are quite different from Smith's.) While we should view these sorts of disputes as genuine disputes in belief, they are among the most intractable kinds of disputes there are.

INTENSION AND EXTENSION

In our language, not only do some words and phrases have a meaning, they also function as referring expressions. These include common nouns like "bachelor," "dog," and "unicorn," and proper names like "Joan" and "Bill." Definite descriptions (e.g., "the first President of the U.S.") and indefinite descriptions (e.g., "a happy camper") also function in this way. We say of such words and phrases that they have not only an intension (or a connotation), but also an extension (or a denotation). Their intension (or their connotation) is their meaning, and their extension (or their denotation) is the set of objects they refer to. Thus, the intension of "bachelor" is unmarried man, while its extension is the set of all bachelors in the world. Although it might seem odd to speak of proper names like "Joan" and "Bill" as having an intension (or meaning), they do play a role in our language. At least they are not meaningless in the way in which words like "snicker-snack" and "bandersnatch" are. Moreover, it may seem peculiar for us to speak of the extension of words like "unicorn" and "dragon." After all, they don't succeed in picking out any objects in our world. They are, nonetheless, referring expressions. (In these sorts of cases we will say that the extension of the term is the empty set. This is merely a fancy way of saying that the term is a referring expression, though it does not succeed in referring to any existing object.) Several points about intension and extension are, perhaps, worth noting here. First, the intension of a term determines its extension. In other words, the meaning of a word identifies which objects, if any, it picks out in the world. Second, the extension of a term does not determine its intension. In other words, terms which pick out the same objects in the world might still have different meanings (e.g., "George Washington" and "the first President of the U.S.").
Third, when we list words in order of increasing intension we are also normally listing them in order of decreasing extension. That is to say, as words get more complex in meaning, they tend to refer to fewer objects. Thus, the list of terms, "bachelor," "fat bachelor," and "fat happy bachelor," is a list both in order of increasing intension and decreasing extension.
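This inverse relationship between intension and extension can be illustrated with a toy "world" of a few individuals. All of the names and attributes below are invented for the example; the point is only that conjoining conditions (increasing intension) can never enlarge, and typically shrinks, the set of objects picked out (the extension).

```python
# A toy world: each individual is a record of attributes (all invented).
world = [
    {"name": "Al", "married": False, "male": True, "fat": True,  "happy": True},
    {"name": "Bo", "married": False, "male": True, "fat": False, "happy": True},
    {"name": "Cy", "married": True,  "male": True, "fat": True,  "happy": False},
]

def extension(pred):
    """The set of individuals in the world that a predicate picks out."""
    return {x["name"] for x in world if pred(x)}

bachelor = extension(lambda x: x["male"] and not x["married"])
fat_bachelor = extension(lambda x: x["male"] and not x["married"] and x["fat"])
fat_happy_bachelor = extension(
    lambda x: x["male"] and not x["married"] and x["fat"] and x["happy"])

# Increasing intension, decreasing (or at least non-increasing) extension:
print(bachelor >= fat_bachelor >= fat_happy_bachelor)  # -> True
```

Here `>=` on Python sets is the superset test, so the chained comparison states exactly the claim in the text: each richer term's extension is contained in the previous one's.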

KINDS OF DEFINITIONS

DENOTATIVE DEFINITIONS

We sometimes try to explain the meaning of a word by mentioning at least several objects it denotes. Although we might not view these strictly as definitions, they are, nevertheless, frequently called "denotative definitions." Among denotative definitions, ostensive definitions stand out as especially common and useful. Ostensive definitions are definitions by pointing. When a young child wants to know the meaning of the word "dog" we are apt to point to a dog and call out the word "dog." This is an example of an ostensive definition. A second type of denotative definition worth mentioning is a definition by partial enumeration. Definitions by partial enumeration are simply lists of objects, or types of objects, to which the word refers. The list "beagle," "cocker spaniel," "dachshund," "greyhound," "poodle" provides an example of a definition by partial enumeration. While denotative definitions might not really seem much like definitions, they do ultimately attempt to convey the meaning of a word, at least indirectly. For the hope is that by citing the objects the word refers to, the people we are talking with will come to see what that word means. However, let's turn now to definitions in the more ordinary sense of the term.

CONNOTATIVE DEFINITIONS

Connotative definitions are usually formulated in the following three ways:

1) X is Y. (Example: A bachelor is an unmarried man.)
2) [The word] "X" means Y. (Example: The word "bachelor" means unmarried man.)
3) X =DF. Y. (Example: Bachelor =DF. unmarried man.)

In all these cases the term on the left ("bachelor" in the above examples) is the one being defined, and we call it the "definiendum," while we refer to the terms used to define this word ("unmarried man" in our example) collectively as the "definiens." Among connotative definitions, perhaps five different kinds are worth mentioning: (1) persuasive definitions, (2) theoretical definitions, (3) precising definitions, (4) stipulative definitions, and (5) lexical definitions.

1. Persuasive Definitions: The purpose of a persuasive definition is to convince us to believe that something is the case and to get us to act accordingly. Frequently definitions of words like "freedom," "democracy," and "communism" are of this type. (E.g., taxation is the means by which bureaucrats rip off the people who have elected them.) While these sorts of definitions might be emotionally useful, we should avoid them when we are attempting to be logical.

2. Theoretical Definitions: Theoretical definitions are designed to explain a theory. Whether they are correct or not will depend, largely, on whether the theory they are an integral part of is correct. Newton's famous formula "F = ma" (i.e., force = mass x acceleration) provides a good example of such a definition.

3. Precising Definitions: Precising definitions attempt to reduce the vagueness of a term by sharpening its boundaries. For example, we might decide to reduce the vagueness in the term "bachelor" by defining a bachelor as an unmarried man who is at least 21 years old. We often encounter precising definitions in the law and in the sciences. Such definitions do alter the meaning of the word they define to some extent. This is acceptable, however, if the revised meaning they provide is not radically different from the original.
Sometimes by providing precising definitions we can reduce the potential for verbal disputes that are based on a term's vagueness. When Martha and McDuff begin arguing about whether a bicycle is a vehicle (cf. "Questions," below) we might try to get them to recognize that the term "vehicle" contains some vagueness. Once they have seen this, we might even get them to agree to reduce this vagueness by providing a precising definition.

4. Stipulative Definitions: Stipulative definitions are frequently provided when we need to refer to a complex idea, but there simply is no word for that idea. A word is selected and assigned a meaning without any pretense that this is what that word really means. (E.g., by "a blue number" we mean any number greater than 17 but less than 36.) While we cannot criticize stipulative definitions for being incorrect (and so the objection, "But that isn't what the word means," is inappropriate), we can criticize them as unnecessary, or too vague to be useful.

5. Lexical Definitions: Unlike stipulative definitions, lexical definitions do attempt to capture the real meaning of a word and so can be either correct or incorrect. When we tell someone that "intractable" means not easily governed, or obstinate, this is the kind of definition we are providing. Roughly, lexical definitions are the kinds of definitions found in dictionaries. (Here it needs to be borne in mind, though, that dictionaries are often concerned only with giving us an approximate meaning of the word.) Frequently, words that are first introduced into the language by stipulative definitions acquire, over time, lexical definitions. (Consider, for example, Winston Churchill's famous use of the expression "iron curtain.") Besides synonymous definitions, definitions by genus and difference are perhaps the most common type of lexical definition. The essential characteristic of these definitions is that we define the definiendum by using two terms in the definiens.
For example, in the definition, "a bachelor is an unmarried man," we are defining the word "bachelor" in terms of "unmarried" and "man." In this definition the term "unmarried" is the difference, while the term "man" is the genus. (The difference, or difference term, qualifies, or says what kind of thing, the genus is.)

a. Resolving Verbal Disputes Based on Ambiguity: As we noted earlier, sometimes disputes arise when the parties to the dispute use a word or phrase in different senses. We can settle these kinds of disputes by appealing to lexical definitions. Thus, in the example of the dispute between Martha and McDuff about whether Martha's 1985 Mercedes is a new car (cf. "Questions," below), all we need to do is to point out that the word "new" has two different lexical meanings: (1) recently purchased, and (2) this year's model. While Martha is using the term in sense (1), McDuff is using it in sense (2). If we are right about the word in

question the dispute should vanish. (If, however, we find only one party to the dispute using the term in a correct lexical sense, then we should recognize him as victorious.)

b. Rules for Evaluating Lexical Definitions: Lexical definitions can be faulted when they violate any of the following rules:

(1) The definition must be neither too broad nor too narrow. A definition is too broad when its definiens includes objects that its definiendum excludes. So if we define a bachelor as an unmarried person, our definition is too broad. On the other hand, a definition is too narrow when the definiens excludes objects included in the definiendum. If we define a bachelor as an unmarried man over 30, our definition is too narrow. Oddly enough, a definition can be both too broad and too narrow. (E.g., a bachelor is an unmarried person over 30.)

(2) The definition must not be circular. A definition is circular when we use, in the definiens, the term we are trying to define in the definiendum. (E.g., a bachelor is a bachelor.) Alternately, although we do not use this term directly in the definiens, we use it when we attempt to define a term employed in the definiens. (We are violating this rule if we first define a brother as any male who has a sibling and then define a sibling as anyone who has a brother or sister.) The rule that definitions should not be circular is of some philosophical interest, for it compels us to admit that we cannot adequately define every word in the language. Thus, if we define "P" in terms of "Q" and "R," the rule prohibits us from defining "Q" in terms of either "P" or "Q." If we then go on to define "Q" in terms of "R" and "S," our definition of "S" cannot contain either "P," "Q," or "S," and so on.

(3) A definition must state what is essential and not what is accidental. If we define "puberty" as the time in life when the two sexes first begin to get acquainted we are violating this condition.
For while this is typically so, it is not an essential characteristic of puberty.

(4) A definition should not be negative when it can be affirmative. Whenever possible, we should not define by saying what something isn't but by saying what it is. However, it is permissible to define a term in a negative way when it is the negative of a term that we have already defined in a positive way. Thus, we can define "invalid" as not valid, if we have already defined the term "valid" positively.

(5) A definition should be clear and precise, not obscure or ambiguous. This rule only makes sense. We define in order to provide others with an explanation of the meaning of a word. If our definition is ambiguous or obscure, those for whom we are defining the word won't find our definition very helpful.
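The circularity rule can even be checked mechanically. If we treat each definition as pointing from its definiendum to the terms in its definiens, a chain of definitions is circular exactly when that graph contains a cycle. Here is a minimal sketch, using the brother/sibling example from rule (2) above; the dictionary encoding is just one convenient choice.

```python
# Definitions as a graph: each word maps to the terms used in its definiens.
definitions = {
    "brother": ["male", "sibling"],
    "sibling": ["brother", "sister"],  # circular: "sibling" is defined via "brother"
}

def is_circular(definitions, word, seen=()):
    """True if following the definition chain from 'word' ever revisits a word."""
    if word in seen:
        return True
    return any(is_circular(definitions, term, seen + (word,))
               for term in definitions.get(word, []))

print(is_circular(definitions, "brother"))  # -> True
```

Terms with no entry in the dictionary ("male," "sister") act as undefined primitives, which is exactly the philosophical point above: a non-circular chain of definitions must bottom out in words that are not themselves defined.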

Often enough it seems all but impossible to find an adequate lexical definition for a term. When we are unable to construct such a definition, we frequently resort to looking for necessary and/or sufficient conditions for the application of the term. A condition X is a necessary condition for Y just in case the occurrence of Y requires the occurrence of X. Being a male, for example, is a necessary condition for being a bachelor. In other words, in order for a thing to be a bachelor it must be a male. In contrast to this, a condition X is a sufficient condition for Y just in case the occurrence of X guarantees the occurrence of Y. So, for example, being a whale is a sufficient condition for being a mammal. Except for the rule about a definition's being neither too broad nor too narrow, the same principles that apply to constructing lexical definitions also apply to formulating necessary and sufficient conditions: The condition should not be circular. It should state only what is essential and not what is accidental. It should not be negative when it can be affirmative. Finally, it should always be expressed in clear and precise language. There is an important connection between providing necessary and sufficient conditions and providing a lexical definition. We may form a set of conditions that is necessary but not sufficient, or vice versa; but if we should manage to construct a set of conditions that is both necessary and sufficient, that set will be a lexical definition. In this sense the task of searching for necessary and sufficient conditions is weaker than the task of constructing a lexical definition: we can succeed at it partially without yet having produced a definition.
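Over a small finite domain, these two relations come down to set inclusion: X is necessary for Y when every Y is an X, and X is sufficient for Y when every X is a Y. A minimal sketch (the individuals below are invented for the example):

```python
# Toy extensions over an invented domain of three people.
males = {"Al", "Bo", "Cy"}
bachelors = {"Al", "Bo"}   # Cy is married

def necessary(x, y):
    """X is necessary for Y: every Y is an X (Y is a subset of X)."""
    return y <= x

def sufficient(x, y):
    """X is sufficient for Y: every X is a Y (X is a subset of Y)."""
    return x <= y

print(necessary(males, bachelors))   # -> True: being male is required for being a bachelor
print(sufficient(males, bachelors))  # -> False: Cy is male but not a bachelor
```

A set of conditions that is both necessary and sufficient corresponds to the two extensions being equal, which matches the point above that such a set amounts to a lexical definition.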

Whenever anyone proposes either a necessary or a sufficient condition, or a lexical definition, they commit themselves to a claim that holds universally. Thus, the claim that "bachelor" means unmarried man implies that all bachelors are unmarried men. Such claims are open to the possibility of being refuted by a counterexample. A counterexample is simply an example that runs counter to a universal claim. Suppose someone asserts that all swans are white. A counterexample to this claim would be a swan that was not white. If a universal claim merely asserts that this is how our world is, a counterexample must be an existing object. We cannot, for example, refute the claim that all swans are white by pointing out that it is possible to imagine a nonwhite swan. An imaginary swan that isn't white will not do. The universal claims implied by lexical definitions, and by necessary and sufficient conditions, on the other hand, are supposed to hold necessarily. When we define a bachelor as an unmarried man we don't just mean that it is a fact that all bachelors are unmarried men. We mean that all bachelors must be unmarried men. Because of this, an imaginary object may suffice as a counterexample. To refute the definition all we need to do is imagine a bachelor who is married, or a bachelor who is not a man.

QUESTIONS

A. TRUE/FALSE:
1. While the intension of a term determines its extension, its extension does not determine its intension.
2. Although the terms "unicorn" and "dragon" have different intensions, they have the same extension.
3. An ostensive definition is a "definition" given by means of pointing.
4. In the definition, "bashful" means shy, "bashful" is the term identified as the definiens.
5. The definition, "light is a form of electromagnetic radiation," is an example of a theoretical definition.
6. The purpose of a persuasive definition is to influence attitudes.
7. The definition, "man is a featherless biped," is poor because it violates one of our five rules.
8. A necessary condition for an object's being a rectangle is that it has four equal angles and four equal sides.
9. A sufficient condition for an argument's being valid is that it is sound.
10. A sufficient condition for a set of statements being inconsistent is that it contains a statement that is logically false.
11. An office building is a counterexample to the claim that being a dwelling is a sufficient condition for being a house.
12. Being a male duck is not only a necessary and sufficient condition but also an adequate lexical definition for being a drake.

B. MULTIPLE CHOICE:
1. MARTHA: How do you like my new Mercedes, McDuff?
MCDUFF: That isn't a new Mercedes, Martha. It appears to me to be an '85 model.
a. A genuine dispute in belief only.
b. A dispute in attitude only.
c. Both a genuine dispute in belief and a dispute in attitude.
d. A dispute in belief that is verbal, and is based on an ambiguity.
e. A dispute in belief that is verbal, and is based on a vague term.
2. MARTHA: I wish those children wouldn't ride their vehicles on my property, McDuff.
MCDUFF: You mean their bicycles, Martha? They don't own any vehicles.
a. A genuine dispute in belief only.
b. A dispute in attitude only.
c. Both a genuine dispute in belief and a dispute in attitude.
d. A dispute in belief that is verbal, and is based on an ambiguity.
e. A dispute in belief that is verbal, and is based on a vague term.
3. MARTHA: That rat Bullfinch cheats at games. Last night we were playing hangman and he tried to use "syzygy."
MCDUFF: He doesn't cheat at all. "Syzygy" is a real word. He is just a good hangman player, Martha.
a. A genuine dispute in belief only.
b. A dispute in attitude only.
c. Both a genuine dispute in belief and a dispute in attitude.
d. A dispute in belief that is verbal, and is based on an ambiguity.
e. A dispute in belief that is verbal, and is based on a vague term.
4. MARTHA: Wasn't that a terrible film we saw at the Strand Cinema last night, McDuff?
MCDUFF: Well, admittedly it wasn't a good movie, but I really enjoyed it anyway.
a. A genuine dispute in belief only.
b. A dispute in attitude only.
c. Both a genuine dispute in belief and a dispute in attitude.
d. A dispute in belief that is verbal, and is based on an ambiguity.
e. A dispute in belief that is verbal, and is based on a vague term.
5. MARTHA: Your cat went to the bathroom on my new carpet again last night, McDuff.
MCDUFF: It wasn't my cat at all. It was your dog, and I saw him do it.
a. A genuine dispute in belief only.
b. A dispute in attitude only.
c. Both a genuine dispute in belief and a dispute in attitude.
d. A dispute in belief that is verbal, and is based on an ambiguity.
e. A dispute in belief that is verbal, and is based on a vague term.

EXERCISES

A. DISPUTES

Instructions: Identify the type of dispute involved in each of the cases below.

1. Bill: Barry Lyndon was a fine film. The scenery was breathtaking, the costumes were elegant, and the acting was superb.
Harry: I don't see how you can say that. I've seen poor corpses that had more of a plot.

2. "I don't know what you mean by 'glory,'" Alice said. Humpty Dumpty smiled contemptuously. "Of course you don't--till I tell you. I meant 'there's a nice knock-down argument for you!'" "But 'glory' doesn't mean 'a nice knock-down argument,'" Alice objected. "When I use a word," Humpty Dumpty said, in a rather scornful tone, "it means just what I choose it to mean--neither more nor less." "The question is," said Alice, "whether you can make words mean so many different things." "The question is," said Humpty Dumpty, "which is to be master--that's all." Alice was too much puzzled to say anything; so after a minute Humpty Dumpty began again. "They've a temper, some of them--particularly verbs: they're the proudest--adjectives you can do anything with, but not verbs--however, I can manage the whole lot of them! Impenetrability! That's what I say!" "Would you tell me please," said Alice, "what that means?" "Now you talk like a reasonable child," said Humpty Dumpty, looking very much pleased. "I meant by 'impenetrability' that we've had enough of that subject, and it would be just as well if you'd mention what you mean to do next, as I suppose you don't mean to stop here all the rest of your life." (Lewis Carroll, Through the Looking Glass)

3. Phil: Osama bin Laden is a patriot.
Larry: No he isn't. He is a terrorist.

B. DEFINITIONS

Instructions: Identify the rule(s) that the definitions below are violating.

1. A decanter is a container that holds liquids.
2. Living means not dead.
3. 'True' means something that is true, truly, in a true manner, truthfully.
4. A banana is something the incumbent party says that the economy slips on, during a recession.
5. 'Innocent' means not guilty.

CHAPTER 3 FALLACIES

A fallacy is a frequently committed mistake in reasoning. We can roughly classify fallacies into three main groups: Fallacies of Irrelevance, Fallacies of Presumption, and Fallacies of Ambiguity. Of these, the Fallacies of Irrelevance are the simplest to understand. They present evidence that is not really relevant in establishing the claim for which they are arguing. Before you have completed this chapter, you will have studied ten of the most common Fallacies of Irrelevance. The Fallacies of Presumption make unwarranted assumptions in their premises. With these fallacies the problem is not that the evidence has no bearing on the claim we are trying to establish. Instead, it is that we are presuming something we shouldn't be presuming. By the end of this chapter you will have studied fifteen Fallacies of Presumption. Our final group of fallacies, the Fallacies of Ambiguity, is the most difficult to recognize. They all involve a mistake in reasoning that is based on a misunderstanding about meaning. Given their difficulty, you can be thankful that you will only be studying five Fallacies of Ambiguity. The fallacies you will be studying in this chapter are:

FALLACIES OF IRRELEVANCE
1. Argumentum ad Hominem
2. Argumentum ad Baculum
3. Argumentum ad Populum
4. Tu Quoque Fallacy
5. Fallacy of Poisoning the Well
6. Argumentum ad Ignorantiam
7. Argumentum ad Misericordiam
8. Fallacy of Denying the Antecedent
9. Fallacy of Affirming the Consequent
10. Red Herring Fallacy

FALLACIES OF PRESUMPTION
11. Fallacy of Composition
12. Fallacy of Division
13. Fallacy of Hasty Generalization
14. Fallacy of Accident
15. Fallacy of Bifurcation
16. Argumentum ad Verecundiam
17. Masked Man Fallacy
18. Straw Man Fallacy
19. Begging the Question Fallacy
20. Fallacy of Complex Question
21. Slippery Slope Fallacy
22. Fallacy of False Analogy
23. False Cause Fallacy
24. Special Pleading Fallacy
25. Gambler's Fallacy

FALLACIES OF AMBIGUITY
26. Fallacy of Equivocation
27. Fallacy of Amphiboly
28. Fallacy of Accent
29. Fallacy of Hypostatization
30. Quantifier Fallacy

1. Argumentum ad Hominem: The Argumentum ad Hominem is an easy fallacy to recognize. It consists in an attack (i.e., an insult) on the person who disagrees with us. The Latin translates as "an argument to the man." We prefer, however, to call it an attack on the man, or against the man.

AN EXAMPLE
BULLFINCH: I believe logic is an extremely important and useful subject.
MARTHA: That is because you're just an idiot, Bullfinch.

NOTE: To see how irrelevant Martha's evidence is here, suppose the premise is true (i.e., suppose Bullfinch is an idiot). Does it follow from this that logic is not an important subject? Surely it does not.

2. Argumentum ad Baculum: Like the Argumentum ad Hominem, the Argumentum ad Baculum is an attack on an individual or group of individuals. Instead of verbal abuse, however, the appeal here is to force. The structure of the fallacy is:

Premise: A threat.
So, Conclusion: The claim I am arguing for is correct.

The force the person who is committing the fallacy is appealing to might be physical in nature, or it might be economic.

AN EXAMPLE
BULLFINCH: I think I could write a better book on birds than yours, Martha.
MARTHA: That is just bull, Finch! You want your right arm don't you? You want your head to stay attached to your shoulders don't you? For the sake of your own well-being, I really think you should worry about something else.

3. Argumentum ad Populum: The Argumentum ad Populum consists in an attempt to justify a claim by appealing to sentiments that large groups of people have in common. Three versions of this fallacy are especially important. The first we call "Flag Waving." It appeals to the sentiment of nationalism (or patriotism). The second version of this fallacy is "Snob Appeal." It plays on our desire to be a little superior to, or better than, others. Finally, the third version we call "Bandwagoning." It appeals to our feeling of wanting to belong to the crowd.

AN EXAMPLE
BULLFINCH: Did you really need such an expensive computer, Martha?
MARTHA: Of course. Everybody else in the neighborhood has one. [Bandwagoning] Besides, it's the American thing to do. [Flag waving]

4. Tu Quoque Fallacy: We usually commit the Tu Quoque (or You're Another) when we are trying to get ourselves off the moral hook. The form it takes is: You do it too. So it's okay for me to do it. In some ways it resembles the Argumentum ad Hominem, Poisoning the Well, and Bandwagoning. However, although many logicians are inclined to include it as a version of one of these other fallacies, we think it is important enough to label separately. Politicians whose hands were in the cookie jar often commit it. People also frequently use it in the workplace when they are engaging in minor theft.

AN EXAMPLE
BULLFINCH: I really thought you were a little rude to Mrs. Bainbright.
MARTHA: Why are you criticizing me? She was rude to me first.

5. Fallacy of Poisoning the Well: The Fallacy of Poisoning the Well occurs when we try to prevent another person from contributing anything to the discussion, because of the circumstances in which they find themselves. We reason, for example, that he is a general in the Army, so he will naturally have a certain bias. We can, therefore, discard his testimony.

This fallacy is often used along with the ad Hominem; for example, "You are just a stupid kid." Some logicians even describe it as a circumstantial ad Hominem. We believe, however, it is best to identify it as a separate fallacy.

AN EXAMPLE
BULLFINCH: I think logic is at least as useful, for most people, as mathematics.
MARTHA: You would say that. You've been listening to Master McFluff too long. Besides, you are a logician, and you need to make a living.

6. Argumentum ad Ignorantiam: The form of the Argumentum ad Ignorantiam, or Argument from Ignorance, is: No one has ever proven that it's this way. Therefore, it must be the other way. For example, "No one has ever established that Fermat's Last Theorem is really a theorem. So it must not be one." There may be one circumstance in which we can allow this kind of reasoning, though even that is open to debate. It might be permissible in reasoning to the nonexistence of something. For example, "No one has ever confirmed that there is an abominable snowman, so there isn't one," may be acceptable. On the other hand, it is a commission of this fallacy to reason the other way around. We cannot legitimately argue that no one has ever shown that there isn't an abominable snowman, so there must be one.

EXAMPLES
(1) In spite of all the investigating that reporters did during the Watergate scandal, no one has found any hard evidence showing that Nixon ordered the break-in. So he didn't.
(2) MARTHA: Do you want to know what I think, Bulldog? I think McDuff isn't in Egypt at all. I think he's just afraid to show his face around here after the last rotten tutorial he gave. After all, when we went to his apartment, did we see any signs of plane reservations, or any books about Egypt? Did any of the neighbors we talked to mention one thing about his going to Egypt?

7. Argumentum ad Misericordiam: The Argumentum ad Misericordiam, or Appeal to Pity, is surely an easily recognized fallacy. Its premises simply consist in verbal crying. From this we are supposed to conclude whatever the arguer asks us to.

AN EXAMPLE
BULLFINCH: You can't have a cigarette now, Martha. The hospital has a rule against smoking when you're in an oxygen tent.
MARTHA: You've just got to let me have one, Bullfinch. You can't believe what those doctors have done to me. My life the last three days has been a living nightmare.

8. Fallacy of Denying the Antecedent: A conditional statement is a statement that sets a condition (called the "antecedent") down, and then goes on (in the "consequent") to talk about what is the case if that condition is met. The Fallacy of Denying the Antecedent has the following form: One premise asserts a conditional statement. Another premise simply denies the antecedent of that conditional statement. From this the denial of the consequent is supposed to follow (i.e., "If P, then Q. Not-P. Therefore, not-Q.") AN EXAMPLE MARTHA: I guess I wasn't learning much from the course I was taking. The teacher said that if we did better on the second test than we did on the first, it meant we had learned something. I did worse on the second test, however.

9. Fallacy of Affirming the Consequent: Like the Fallacy of Denying the Antecedent, the Fallacy of Affirming the Consequent contains one premise that sets a condition (viz., the "antecedent") down, and then goes on (in the "consequent") to talk about what happens if the antecedent is met. Here, however, instead of the other premise denying the antecedent of the conditional, it affirms the consequent. Thus, the form of this fallacy is: "If P, then Q. Q. Therefore, P."

AN EXAMPLE
BULLFINCH: I'm really sorry about the ballooning accident, Martha.
MARTHA: You don't make a good liar, Bullfinch. When you lie, your face gets red. And it's red now.
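The invalidity of both conditional forms can be verified by brute force: enumerate every truth-value assignment and look for one that makes the premises true and the conclusion false. The sketch below is our own illustration, not part of the text:

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

def is_valid(premises, conclusion):
    """A form is valid if no assignment makes all premises true and the
    conclusion false."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Denying the antecedent: If P, then Q. Not-P. Therefore, not-Q.
deny_antecedent = is_valid(
    [lambda p, q: implies(p, q), lambda p, q: not p],
    lambda p, q: not q)

# Affirming the consequent: If P, then Q. Q. Therefore, P.
affirm_consequent = is_valid(
    [lambda p, q: implies(p, q), lambda p, q: q],
    lambda p, q: p)

# A valid form for contrast -- modus ponens: If P, then Q. P. Therefore, Q.
modus_ponens = is_valid(
    [lambda p, q: implies(p, q), lambda p, q: p],
    lambda p, q: q)

print(deny_antecedent, affirm_consequent, modus_ponens)  # False False True
```

The counterexample row for both fallacies is P false, Q true: the premises come out true while the conclusion comes out false.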

10. Red Herring Fallacy: The Red Herring Fallacy is often a difficult fallacy to spot. In order for it to be committed, the premises of the argument must really be doing some work toward establishing something. The trouble, however, is that the person presenting the argument does not draw that conclusion. Instead he comes up with a completely different one, and one for which the premises provide no support. The fallacy probably gets its name from the practice of escaped convicts smearing herring on their bodies in an attempt to throw tracking dogs off the scent.

AN EXAMPLE
MARTHA: The newspaper pointed out that he had been convicted of burglary on three earlier occasions. The editor must not like him very much.

11. Fallacy of Composition: The Fallacy of Composition is committed when we mistakenly reason that what is true of the parts must, therefore, be true of the whole. No doubt we can sometimes correctly reason in this way. For example, we may correctly reason that since the dog's head is brown, and his back and legs are brown, the dog is brown. In such cases, we are not committing any fallacy. It is, however, a presumption to think that this kind of reasoning is always correct. Frequently people have trouble identifying the Fallacy of Composition. They are likely to confuse it with the Fallacy of Division, or the Fallacy of Hasty Generalization. Composition and Division are alike in the sense that they are both concerned with reasoning about parts and wholes. While Division goes from whole to parts, however, Composition reasons from a thing's parts to its whole. Composition and Hasty Generalization both reason from something specific to something general. However, with Hasty Generalization the reasoning is not about parts and wholes. (Instead, Hasty Generalization involves faulty generalizations based on specific and uncharacteristic situations.)

AN EXAMPLE
BULLFINCH: What is that awful-looking sandwich you are eating, Martha?
MARTHA: Peanut butter, strawberry jelly, and bananas.
BULLFINCH: I think I'm going to be sick.
MARTHA: Why? You like peanut butter don't you? You like strawberry jelly don't you? Moreover, I happen to know you like bananas too. What are you complaining about?

12. Fallacy of Division: The Fallacy of Division is committed when we mistakenly reason that what is true of the whole must, therefore, be true of the parts. We sometimes can correctly reason in this way (e.g., "The building contains nothing but bricks, and since the whole building is red, the bricks must also be red."). Often, however, this type of reasoning is fallacious.

AN EXAMPLE
MARTHA: Why do you go to that club, instead of the one that is close?
BULLFINCH: It has been around since the turn of the century.
MARTHA: Really Bullfinch, you should spend more time with people who are your own age.

13. Fallacy of Hasty Generalization: The Fallacy of Hasty Generalization, as its name suggests, occurs when we reason that what was true in a weird special case must, therefore, be generally true. We believe this fallacy is among the reasons people develop prejudices. They meet an unpleasant person from one group, and then generalize in a hasty way about everyone in that group. No doubt there are instances where this fallacy overlaps with the Fallacy of Composition. It is important, however, to keep the two fallacies distinct.

AN EXAMPLE
MARTHA: Never again will I get in a balloon with you, Bullwench. That was the most harrowing experience I have ever had in my life.

14. Fallacy of Accident: The Fallacy of Accident is committed when we reason from a general principle to a weird special case. It resembles the Fallacy of Hasty Generalization. Instead of reasoning from specific to general, however, as the Fallacy of Hasty Generalization does, the Fallacy of Accident goes from something general to something specific. We often confuse this fallacy with the Fallacy of Division. Though there are cases where it overlaps with the Fallacy of Division, the Fallacy of Accident does not deal with reasoning from whole to parts.

AN EXAMPLE
BULLFINCH: What are you doing trying to walk around without crutches, Martha?
MARTHA: What is everybody making such a fuss about? I never needed crutches before.

15. Fallacy of Bifurcation: The Fallacy of Bifurcation -- also sometimes called "the Black and White Fallacy," "the Fallacy of False Dichotomy," or "the Either/Or Fallacy" -- is another of the very few fallacies that are valid arguments. It has the form: "Either P, or Q. Not-P. Therefore, Q." The fallacy is committed when the first premise is false because there is another alternative the arguer has failed to consider. Ordinarily this fallacy occurs when the "Either P, or Q" claim involved considers only extremes and fails to take a third (or the middle) choice into account. AN EXAMPLE MARTHA: Let me put it this way, Bullflinch. You're either a genius or an idiot. After the balloon incident, however, I know you're no genius.

16. Argumentum ad Verecundiam: In our complex world, we obviously can't be experts on every subject. As a result, we frequently resort to reasoning that something is true because someone said so. This is never an especially strong argument, since anyone, whatever his or her expertise, can make a mistake. It is not a fallacy, though, unless the person we appeal to as an authority is either not one at all, or not one in the appropriate area. When we do reason in this way, we are committing the fallacy known as Argumentum ad Verecundiam, or Appeal to (false) Authority.

AN EXAMPLE
MARTHA: I'm really worried about the "Greenhouse Effect," Bullfinch. Merv Griffin says that the polar icecaps will be melting soon.

17. Masked Man Fallacy: The Masked Man Fallacy is primarily a child's fallacy. It has the form: "I know who (or what) X is. I don't know who (or what) Y is. So X and Y must be different." AN EXAMPLE MARTHA: My neighbor can't be the person who wrote the nasty note to me. I know who my neighbor is, but I don't know who wrote the nasty note complaining about my dog barking all night.

18. Straw Man Fallacy: The Straw Man Fallacy is committed when we try to argue for our own view by attacking an opposing position that we have distorted, whether intentionally or not. It is often a favorite tactic of politicians during campaigns. Thus, Johnson used it successfully against Goldwater during the 1964 Presidential campaign by making it appear that a vote for Goldwater was a vote for nuclear war.

AN EXAMPLE
MARTHA: I'm opposed to consumer rights groups. I don't care how many air bags and seat belts you install in a car. If you drive it into a brick wall at 150 M.P.H., it isn't going to be a pleasant experience for the passengers.

19. Begging the Question Fallacy: The Fallacy of Begging the Question (also called "Petitio Principii," and frequently described as arguing in a circle) is, like Bifurcation, another example of a fallacy that is a valid argument. The arguer commits it when he presupposes exactly the claim he is arguing for.

AN EXAMPLE
MARTHA: Of course I know that the operation was successful.
BULLFINCH: How do you know that?
MARTHA: The doctor told me so, and he wouldn't have told me that if it wasn't successful.

20. Fallacy of Complex Question: The Fallacy of Complex Question occurs when, within the context of arguing, we raise a question that makes a presupposition that doesn't hold, and then reason based on this. The most famous example of a complex question is "Have you stopped beating your wife yet?" This question makes several assumptions, perhaps the most important of which is that you used to beat her. You cannot really answer the question unless the presupposition is correct. If you imagine this sort of question being asked within the context of an argument, then the Fallacy of Complex Question is being committed. AN EXAMPLE MARTHA: He must be guilty. When I asked him why he did it, he didn't answer me.

21. Slippery Slope Fallacy: There are two different versions of the Slippery Slope Fallacy. One version of it, sometimes called the "Domino Theory," consists in a sequence of unjustified causal claims of the sort, "P causes Q. Q causes R. R causes S." From these, the arguer concludes that things are going to go to heck in a hand basket, since P is going to cause S. The argument would be acceptable, if not for the fact that the causal claims in the premises are unsubstantiated. AN EXAMPLE MARTHA: If we let the communists take over El Salvador, the next thing you know they'll be in Mexico. Once they take over Mexico, however, they'll head for Texas and it is just a matter of time before we are all communists.

The other version of the fallacy is different. It reasons that since you can't make a sharp distinction between a pair of overlapping ideas (e.g., "mountain" and "foothill"), there is no difference between them.

EXAMPLES
(1) You can't make a sharp distinction between a mountain and a foothill. So Mt. Everest is just a gigantic foothill.
(2) BULLFINCH: Brother, am I upset. That thief just stole my wallet.
MARTHA: We're all thieves, Bullwrench. We've all stolen something at some time in our lives. What is the difference? One theft more, or less, can't make a difference between a thief and someone who isn't one.

22. Fallacy of False Analogy: The Fallacy of False Analogy proceeds by reasoning that since such and such applies to P, it will apply to Q as well, because Q is like P. The trouble is that P and Q are not analogous, as the arguer suggests.

AN EXAMPLE
BULLFINCH: I heard you finally quit smoking, Martha. Why?
MARTHA: I looked at the yellow stain on my finger yesterday and thought, if it does that to your fingers, imagine what it does to your lungs.

23. False Cause Fallacy: The False Cause Fallacy (also called "post hoc," or "post hoc, ergo propter hoc") has the form: "Phenomenon X has occurred, after which Y occurred. Therefore, X caused Y." As we will see, the fact that two phenomena have been found to occur together in nature is relevant in deciding that one of these is a cause of the other. It is a presumption, however, to believe that this is all the evidence we need to establish this claim.

AN EXAMPLE
BULLFINCH: How are you feeling today, Martha?
MARTHA: Every time you come by to see me, I feel worse. I might start feeling better if you stayed away for a few weeks.

24. Fallacy of Special Pleading: Like the Slippery Slope Fallacy, the Fallacy of Special Pleading can be viewed as two fallacies in one. One version of it is committed when a person argues for a view, or a course of action, while ignoring countervailing factors that he also needs to consider. The other version of the Fallacy of Special Pleading occurs when a person applies a different set of standards to himself than he applies to others. AN EXAMPLE MARTHA: You really shouldn't be drinking that stuff, Fullflinch. Why don't you leave it with me? I have a strong stomach and I can handle booze better than you can.

25. Gambler's Fallacy: The Gambler's Fallacy involves a mistake in reasoning about odds. More specifically, we commit this fallacy when we reason that what has happened in the past affects the odds in a way in which it doesn't. Though this is clearly a common fallacy among gamblers, others often commit it as well.

AN EXAMPLE
MARTHA: I've purchased eight lottery tickets in the last two weeks and I haven't had a winning ticket yet. Since the chances of winning are one in nine, my next ticket will most likely be a winner.
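Martha's reasoning can be tested with a short simulation (an illustration of ours; the one-in-nine odds are her own figure). If each ticket is an independent draw, the win rate of the next ticket stays at 1/9 even after eight straight losses:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable
WIN_PROB = 1 / 9  # Martha's stated odds for any single ticket

def ticket_wins():
    # Each ticket is an independent draw; earlier tickets have no effect.
    return random.random() < WIN_PROB

# Estimate the chance the NEXT ticket wins, given eight straight losses.
runs = wins = 0
for _ in range(200_000):
    if any(ticket_wins() for _ in range(8)):
        continue  # keep only histories of eight consecutive losses
    runs += 1
    wins += ticket_wins()

print(round(wins / runs, 2))  # stays near 1/9 (about 0.11), not "most likely"
```

The conditional win rate matches the unconditional one, which is exactly what "independent" means and exactly what Martha's reasoning denies.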

26. Fallacy of Equivocation: The Fallacy of Equivocation occurs when, within the context of an argument, we use a word or phrase that has two or more meanings first to mean one thing and then another. The argument will seem valid if we fail to notice the shift in meaning. Of all of the fallacies, this is the most difficult one to spot.

AN EXAMPLE
MARTHA: How did McDuff ever come up with the title, "Master McDuff"?
BULLFINCH: Have you ever heard of Master Bateing, Martha?
MARTHA: I'm ashamed of you, Bullfinch. Your mind is always in the gutter.

27. Fallacy of Amphiboly: This fallacy resembles the Fallacy of Equivocation in involving a misunderstanding about meaning. With Amphiboly, however, the problem is due to poor sentence construction. The classic example of this fallacy concerns Croesus, the ancient king of Lydia. He went to the Oracle of Delphi for advice about whether he should go to war. When he asked for the Oracle's advice the Oracle responded, "If you go to war, you will destroy a mighty kingdom." Based on this advice Croesus went to war, but unfortunately, lost. When he complained, the Oracle's response was, "We told you that you would destroy a mighty kingdom and you did, your own."

AN EXAMPLE
BULLFINCH: Why are you drinking in the middle of the afternoon? I thought you said you were going to do some gardening.
MARTHA: My book on gardening says that these flowers are to be planted only after being potted.

NOTE: One might also view this as a commission of the Fallacy of Equivocation, since a shift in the meaning of the word "potted" occurs. This kind of overlap between fallacies is very common. 28. Fallacy of Accent: Some sentences can be interpreted differently when certain words are accented, instead of others. When this occurs in an argument, the Fallacy of Accent is committed. So, like the Fallacies of Equivocation and Amphiboly, the Fallacy of Accent involves a shift in meaning within the context of arguing. AN EXAMPLE MARTHA: My brother must have been fooling around on his wife because in his letter he says, "I don't really love her now."

NOTE: Martha is accenting the word "her." Try reading it again, but this time accent the word "really." Then read it again, but accent the word "now." 29. Fallacy of Hypostatization: The Fallacy of Hypostatization occurs when we treat a common noun, or an abstract word, as if it referred to an existing object in the same way that a proper noun does. For example, we treat a word like "nature," as if it referred to a thing. ("Nature always looks out for the young. She also works miracles.") AN EXAMPLE MARTHA: Russia is an evil empire, playing off the hearts of the poor and unhappy. She has a thirst for conquest, but does not care at all about those she conquers. Fortunately, like the rest of us, she must also one day grow old and die.

30. Quantifier Fallacy: The Quantifier Fallacy is involved when we reason that because everything is related to at least one thing (or exactly one thing), there must be at least one thing (exactly one thing) that everything is related to. Thus, for example, from the claim that everyone has exactly one mother, we might mistakenly conclude that there must be exactly one mother of us all.

AN EXAMPLE
BULLFINCH: Why are you so sure that God exists, Martha?
MARTHA: Well, even you admit that everything that happens has a cause. Surely, however, it follows from this that there must be a cause of everything. What else could that be but God?
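The quantifier-scope shift can be checked over a toy finite domain (the family data below is a made-up illustration of ours): "everyone has a mother" is a for-all-exists claim, while "someone is everyone's mother" is an exists-for-all claim, and the first can be true while the second is false.

```python
# Quantifier scope: "everyone has a mother" (for all x, there is a y)
# does not entail "someone is everyone's mother" (there is a y such
# that, for all x, ...).

# Hypothetical family data: child -> that child's one mother.
mother_of = {"ann": "may", "bob": "may", "cal": "sue"}

people = set(mother_of)           # the children in our toy domain
mothers = set(mother_of.values())

# For all x, exists y: every person has exactly one mother. True here.
everyone_has_a_mother = all(child in mother_of for child in people)

# Exists y, for all x: one mother common to everyone. False here.
one_mother_of_all = any(all(mother_of[child] == m for child in people)
                        for m in mothers)

print(everyone_has_a_mother, one_mother_of_all)  # True False
```

Martha's argument makes exactly this swap: from "every event has a cause" to "there is a cause of every event."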

This provides at least a brief sketch of the thirty fallacies we are going to be concerned with. It may seem like a lot to you, but in a sense, we have barely scratched the surface. There are many more fallacies that we might have mentioned. There are even entire books devoted to the topic. Of necessity, our discussion of the fallacies chosen has been brief.

EXERCISES

Instructions: Identify any fallacy or fallacies committed in each of the following passages.

1. Kings and Queens are short because they are rulers and rulers are only twelve inches.
2. You shouldn't vote for Clinton because he is a bozo.
3. If you do poorly in Dr. Jacob's class, you will start doing poorly in other classes too. The first thing you know, you'll end up on probation, and then you will get kicked out of college. Without a college degree, you won't get a good job, and you'll starve to death. So you had better do well in Dr. Jacob's class.
4. I'm moving to Connecticut because it is the richest state in the nation and I'm tired of being poor.
5. Argument in favor of the California school voucher amendment: "Fifteen years ago, Californians spent nine billion dollars on public schools. Today, we spend nearly $29 billion. Can anyone claim that parents, kids, and taxpayers are $20 billion better off today?"
6. I've always reckoned that looking at the new moon over your left shoulder is one of the carelessest and foolishest things a body can do. Old Hank Bunker done it once, and bragged about it; and in less than two years he got drunk and fell off of the shot tower, and spread himself out so that he was just a kind of a layer, as you may say; and they slid him edgeways between two barn doors for a coffin, and buried him so, so they say, but I didn't see it. Pap told me. But anyway it all come of looking at the moon that way, like a fool. (Mark Twain, The Adventures of Huckleberry Finn)
7. If you want to lose your job and end up on the streets begging for work, just keep voting for Republicans.
8. Of course Mr. Sophisticate will like my new pink Chablot Merlis. He likes both red and white wines, doesn't he?
9. When I asked if I could have some tea they said, "You can't. You're just a pelican." When I asked if I could come and play they said, "You can't. You're just a pelican." So when they asked if I would come and fish, I said, "To hell I can. I'm just a pelican." Now their stomachs can't get food, but anytime it wants my belly can.
10. Turendot, your brain is not. It's turned to rot. You don't know squat, oh Turnedot.

CHAPTER 4 TRANSLATION

SIMPLE AND COMPOUND STATEMENTS

Let's begin with some distinctions that should, hopefully, help you learn to translate effectively. Then we will show you how to translate. We will classify statements into two groups. Some statements are compound and others are simple. A statement is compound if it is possible to view it as made up of another statement, or other statements. A statement is simple if it is not possible to view it as made up of any other statements. The statement, "Shakespeare wrote Romeo and Juliet," is a simple statement. We cannot view it as made up of any other statements. On the other hand, the statement, "Either Milton or Shakespeare wrote Romeo and Juliet," is compound. For we can view this statement as made up of the statements, "Milton wrote Romeo and Juliet," and "Shakespeare wrote Romeo and Juliet." We can further subdivide compound statements into two types. Some compound statements are truth-functionally compound, while others are non-truth-functionally compound. A statement is truth-functionally compound if the truth or falsity of its component statements is sufficient to determine whether it is true or false. Otherwise, it is non-truth-functionally compound. To see the difference between these, compare the following two statements:

1. It is not the case that Milton wrote Romeo and Juliet.
2. Martha thinks that Milton wrote Romeo and Juliet.

Both these statements are compound because both contain the simple statement, "Milton wrote Romeo and Juliet." While Statement 1 is truth-functionally compound, however, Statement 2 is non-truth-functionally compound. Statement 1 is truth-functionally compound because, to know that it is a true claim, all we need to know is that the component "Milton wrote Romeo and Juliet," is a false claim. Statement 2, on the other hand, is non-truth-functionally compound.
For even if we do know that the statement "Milton wrote Romeo and Juliet," is false, this will not suffice to tell us whether the statement, "Martha thinks that Milton wrote Romeo and Juliet" is true or false. To know this we would need to know what Martha thinks -- and it's doubtful that even Martha knows that. Let's try one more example. Compare the following two statements:

1. Although Shakespeare wrote Romeo and Juliet, Milton wrote Paradise Lost.
2. Milton wrote Paradise Lost after Shakespeare wrote Romeo and Juliet.

Each of these two statements is compound, but only the first one is truth-functionally compound. Statement 1 is truth-functionally compound because, if both of the component claims, "Shakespeare wrote Romeo and Juliet," and "Milton wrote Paradise Lost," are true, then it will be true. On the other hand, if either of the component claims is false, then Statement 1 will be false. In contrast, Statement 2 is non-truth-functionally compound. This is so because the fact that both component claims are true does not decide the truth or falsity of Statement 2. To figure out the truth or falsity of Statement 2, we also need to know which work was written later. As you will see, we are going to be focusing on those statements that are truth-functionally compound, and we will be treating them differently from both the simple and the non-truth-functionally compound statements. In fact, for purposes of translation it won't much matter whether a statement is simple or non-truth-functionally compound. So let's group both of these into one category and call them "atomic statements," and we will call all and only truth-functionally compound statements "molecular statements." Besides containing at least one component statement, every compound statement contains a word or an expression that is not itself a statement, but hooks onto statements to build more complex statements. Such expressions are called "connectives." Connectives come in two varieties.
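One way to picture the distinction is in code. This is our own illustration, not the text's, and the function name is invented: a truth-functional connective is, in effect, a function from the truth values of its components to the truth value of the whole.

```python
# A truth-functional connective behaves like a function from the truth
# values of the component statements to the truth value of the whole.

def it_is_not_the_case_that(p):
    """Statement 1's connective: the output depends only on p's truth value."""
    return not p

# If the component "Milton wrote Romeo and Juliet" is false...
milton_wrote_romeo_and_juliet = False

# ...then Statement 1 is settled: it comes out true.
print(it_is_not_the_case_that(milton_wrote_romeo_and_juliet))  # True

# No such function can be written for "Martha thinks that ...":
# knowing that the component is false fixes nothing about what Martha
# thinks, so the whole is not a function of the part's truth value alone.
```

That there is no analogous function for "Martha thinks that" is exactly what it means to call that connective non-truth-functional.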
Some are unary connectives, while others are binary connectives. A unary connective is a word or expression that hooks onto a single statement to form another statement. On the other hand, a binary connective connects two statements together to form a third statement.

In the examples we have been considering, the phrases "It is not the case that" and "Martha thinks that" function as unary connectives. When we attach them to the left of a statement, we get a larger statement. In our first example, we attached the connective, "It is not the case that" to the left of the atomic statement, "Milton wrote Romeo and Juliet," to form the molecular statement, "It is not the case that Milton wrote Romeo and Juliet." Obviously, we could just as easily have attached it to the left of any statement to form another statement. In our later examples we used the binary connectives "although" and "after" to glue two statements together. Thus, we used "although" to connect "Shakespeare wrote Romeo and Juliet," and "Milton wrote Paradise Lost," to form the statement, "Although Shakespeare wrote Romeo and Juliet, Milton wrote Paradise Lost." And we used the connective "after" to construct "Milton wrote Paradise Lost after Shakespeare wrote Romeo and Juliet" from these same two simple statements.

QUESTIONS

Consider the statement: In spite of the fact that Fred failed because he didn't study enough, he did learn a lot from his Ancient Greek History class.

1. Is it simple or compound?
2. Is the statement truth-functionally compound or is it non-truth-functionally compound?
3. How many connectives are contained in it?
4. How many simple statements are contained in it?
5. Is the statement, "Fred failed because he didn't study enough," truth-functionally compound?
6. Is the statement "He (Fred) didn't study enough," simple or compound?
7. Is the statement "He (Fred) didn't study enough," truth-functionally compound?

THE SYMBOLS

If you were going to teach someone how to play chess you might begin by picking each piece out of the box, holding it up, and labeling it. Although the person you were teaching the game to would not know how these pieces functioned, at least he or she would have an idea of what the game looked like. You might then begin to teach him or her what counted as a legal move in the game. Before we actually begin learning to translate, we want to do something very much like this. We want to name the various symbols we will be using in the game we will be playing, and tell you what counts as a legal move in that game. Although this may not be terribly exciting, and we suspect you want us to begin teaching you to translate as quickly as possible, we are convinced that this material is extremely important. One type of symbol we will be using in our game of logic is called "an atomic statement letter." An atomic statement letter is any capitalized letter in the alphabet. The letter "A" is, therefore, an atomic statement letter, and so is "B." A second type of symbol we will be using is called "a connective." In the system we will be working with there are five different connectives. Only one of these is a unary connective. The other four are all binary connectives. The only unary connective we will be using is called "a tilde" (pronounced "tilda"). In most books it is represented as ~, but we prefer to use the negative sign, -, instead. One of the binary connectives we will be using is called "a dot." While some logicians have recently begun using an ampersand, &, to represent this connective, we will use . to represent it, since this is both easy to type and write. Another of the four binary connectives we will be using is called "a wedge." This connective is virtually always represented as v, and we will also be representing it in this manner. We call the third of our four binary connectives "a horseshoe."
In many logic texts this connective is represented as a backward c, though recently, some logicians have begun using a right arrow. We have decided, however, to use the greater than symbol, >, because it is not only familiar but also both easy to type and write.

Our fourth binary connective has often been represented as a triple bar or a double arrow. We will, however, use an equal sign, =, to represent this connective. Besides atomic statement letters and connectives, our symbolic language also contains punctuation marks. Unlike English, however, which uses a variety of punctuation marks (e.g., commas, semicolons, colons, and periods), our symbolic language uses only two symbols of punctuation. One of these is a left parenthesis, (, while the other is a right parenthesis, ). The only other symbols we will be using can be referred to as special symbols. These are the left and right curly braces, viz., { and }, a slash, and a semicolon.

ATOMIC STATEMENT LETTERS:  A B C ... Z
CONNECTIVES:               -  .  v  >  =
PUNCTUATION:               (  )
SPECIAL SYMBOLS:           {  }  /  ;

WELL-FORMED FORMULAE

In English, certain groupings of symbols are meaningful, while others are not. Thus, "Milton wrote after Shakespeare died," makes sense, while "Wrote Milton Shakespeare after died," does not. The same is true of our symbolic language. No doubt the rules for constructing a meaningful grouping of symbols in English are incredibly complicated, but fortunately this is not true of our symbolic language. In fact, only four rules are required to explain what constitutes a meaningful grouping of symbols (called a well-formed formula). In stating them, we use the lowercase letters p and q to stand for any formulae whatsoever, not for particular atomic statement letters:

1. Any atomic statement letter is a well-formed formula.
2. If p is a well-formed formula, so is: -p
3. If p and q are well-formed formulae, so are: (a) (p.q)  (b) (pvq)  (c) (p>q)  (d) (p=q)

Note that, unlike the tilde, which never introduces parentheses, one pair of parentheses surrounds each of these binary connectives.

4. No other groupings of symbols are well-formed formulae.

Let's see how these rules work. Rule 1 tells us, for example, that A and C are both well-formed formulae. Moreover, since A is a well-formed formula, Rule 2 tells us that -A is also a well-formed formula. If we apply Rule 2 again, this time to the well-formed formula -A, we get the result that --A is also a well-formed formula. We have now discovered that --A and C are both well-formed formulae. Rule 3(b) implies that, since this is so, (--AvC) is also a well-formed formula. Because this is so, and since -A is a well-formed formula, Rule 3(c) informs us that ((--AvC)>-A) is also a well-formed formula. Finally, Rule 2 establishes that, since this is so, -((--AvC)>-A) is also a well-formed formula. Rule 4 simply tells us that no other strings of symbols are well-formed formulae. So it tells us, for example, that ((A.B.C)>-)D is meaningless. Before continuing, let's briefly review what we have learned about the symbolic language.
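Because the four rules are recursive, they can be turned directly into a short recursive checker. The sketch below is our own (the function and set names are not from the text), but each branch mirrors one of the rules above.

```python
# A recursive well-formed-formula checker mirroring the four rules.

ATOMS = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
BINARY = set(".v>=")

def is_wff(s):
    # Rule 1: any atomic statement letter is a well-formed formula.
    if len(s) == 1:
        return s in ATOMS
    # Rule 2: a tilde prefixed to a well-formed formula.
    if s.startswith("-"):
        return is_wff(s[1:])
    # Rule 3: a binary connective with exactly one pair of
    # parentheses surrounding it.
    if s.startswith("(") and s.endswith(")"):
        depth = 0
        for i in range(1, len(s) - 1):
            if s[i] == "(":
                depth += 1
            elif s[i] == ")":
                depth -= 1
            elif depth == 0 and s[i] in BINARY:
                return is_wff(s[1:i]) and is_wff(s[i + 1:-1])
    # Rule 4: nothing else is a well-formed formula.
    return False

print(is_wff("-((--AvC)>-A)"))  # True
print(is_wff("(B.J.H)"))        # False -- needs another pair of parentheses
```

One subtlety: the checker splits at the first binary connective it finds outside all inner parentheses. Since Rule 3 wraps parentheses around every binary connective, a genuine well-formed formula has exactly one such connective at the top level, so this is safe.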

QUESTIONS

Consider the following concatenation of symbols: -(-A=-(B.-(Cv--D)))

1. Is this a well-formed formula?
2. How many tildes occur in the formula?
3. How many binary connectives occur in it?

Consider the following collection of symbols: -(-(-G)v-H)

4. Is it a well-formed formula?

Now consider the following group of symbols: (L-M)

5. Is it a well-formed formula?

What about this group of symbols? (B.C.E)

6. Is it a well-formed formula?

We are finally ready to begin learning how to translate from English into our symbolic language. The translation process always begins with a translation key. In this key, each atomic statement in the passage we want to translate is assigned one atomic statement letter. Thus, suppose, for example, we want to translate the following passage into symbols:

Neither Bob nor Joe will go camping unless Diane goes too. But Diane won't go if either Bob or Helen goes. So Joe won't go camping.

The translation key for this passage would look like this:

Translation Key:
B: Bob will go camping.
D: Diane will go camping.
H: Helen will go camping.
J: Joe will go camping.

Once we have set up our translation key, all we need to do is to express the claims made in symbols. If the statement made is atomic it is expressed by simply writing down the appropriate atomic statement letter. So if we want to express the claim that Bob will go camping, we simply write B. On the other hand, if the claim we want to express is molecular, not only will it contain at least one connective in English, its symbolic representation must also contain at least one of our symbolic connectives. Suppose the claim we want to translate into symbols is, "Bob won't go camping." As we saw earlier, this claim can be rewritten as, "It is not the case that Bob will go camping." When it is rewritten in this way we can see not only that it is truth-functionally compound, but that it contains the atomic statement "Bob will go camping," plus the unary connective, "It is not the case that." Now unlike the atomic statement letters, which vary in meaning from context to context, our five connectives always mean the same thing. As you may already have guessed, our only unary connective, the tilde, is always used to mean "It is not the case that." Since this is so, to represent "Bob won't go camping," all we need to do is write -B. In ordinary English, double negatives are frowned on. But our symbolism permits them.
Even ----B is permitted. It might be translated, "It is not true that it isn't the case that Bob won't not go camping." (Quite a mouthful, isn't it?) The primary meaning of the dot is "and." So "Bob and Helen will go camping," is represented as (B.H). Notice that, unlike the tilde, but like all of the other connectives, the dot always has a formula on both its left and right sides, and is always surrounded by a pair of parentheses. Besides "and," the dot is also used to represent a host of other connectives in English. Of these connectives the following are among the more common ones: "but," "however," "although," "while," "moreover," "too," "also," "in addition to," and "as well as." The dot represents the idea of conjunction. We use it whenever we want to say this plus that.

Suppose we wanted to say that Bob will go camping, but Helen won't. We would represent this in symbols as (B.-H). The claim, "Bob won't go camping, but Helen will," would be represented: (-B.H). Compare this claim with the claim, "It is not true that both Bob and Helen will go camping," and you will see why the parentheses are important. This latter claim is not only different in English from the claim that "Bob won't go camping, but Helen will," it is also different in symbols. For it is represented: -(B.H). Suppose we wanted to represent the statement, "Bob, Joe, and Helen will all go camping." We might be tempted to express this in symbols as (B.J.H). Oddly enough, this is wrong. The mistake is a technical one, but it's very important. Since the formula contains two binary connectives, it must also have two left and two right parentheses. Unfortunately, (B.J.H) isn't a well-formed formula. We must represent the claim as either: ((B.J).H), or (B.(J.H)). Our next binary connective is the wedge. The primary meaning of this connective is "or." Unfortunately, however, in English there are two different senses of the word "or," and the conditions under which they are true vary slightly. In one of these senses, called "the exclusive sense of 'or'," the word "or" has the force of "either, or, but not both." When we go to a restaurant and see "soup or salad" on the menu, this is the sense of "or" that is being used. There is, however, also an inclusive sense of this word. Unlike the exclusive sense, the inclusive sense of "or" means to allow the possibility of both disjuncts being true. It has the force of "and/or." When an insurance policy states that the company pays in the event of death or disability, it is this sense that is being used. We cannot use the same connective to represent both of these senses of "or," since, unlike "and" and "but," for example, they vary in the conditions under which they are true.
In every logic text we are aware of, the wedge is always used to mean the inclusive sense of "or" only, and that is also the way we are going to use it. So "v" means "and/or." (We will see how to represent the exclusive sense of "or" shortly.) In everyday life it is not always entirely clear, when someone says "or," which of the two senses is intended, and this could create translating nightmares. For our purposes, however, this really won't be a problem at all. Whenever logicians say "or" you should assume that they are using the inclusive sense (unless, of course, they explicitly say, "or, but not both"), and just translate it as a wedge. The only other major expression in English that is translated as a wedge is "unless." This confuses many people because they think "unless" and "or" have quite different meanings. In fact, however, "unless" and "or" are quite similar. If this translation of "unless" puzzles you, for the present you might just try to remember that "unless" is a wedge. Later, once we have developed the symbolic system more, we will be able to justify this interpretation. Before going on to the last two connectives let's practice some translations using the three connectives we have discussed.
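The contrast between the two senses of "or" can be sketched as truth functions. This is our own illustration with invented names; the wedge's official truth conditions come later in such systems, but the point here is only that the two senses disagree in exactly one case.

```python
# The dot and the two senses of "or" as functions of truth values.

def dot(p, q):
    """Conjunction: true just when both conjuncts are true."""
    return p and q

def wedge(p, q):
    """The inclusive 'or' (i.e., 'and/or'): true when at least one
    disjunct is true -- including the case where both are."""
    return p or q

def exclusive_or(p, q):
    """'Either, or, but not both' -- NOT what the wedge means."""
    return p != q

# The two senses of "or" disagree only when both disjuncts are true:
print(wedge(True, True))         # True
print(exclusive_or(True, True))  # False
```

In the symbolism itself, the exclusive sense can be built out of the connectives we already have, along the lines of ((PvQ).-(P.Q)): P or Q, but not both.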

QUESTIONS

For this set of exercises we will provide the translation key. We will also give you the statement to be translated. You represent the statement in symbols. Translation Key: B: The butler committed the crime. G: The gardener committed the crime. M: The maid committed the crime. S: The secretary was asleep. 1. Both the butler and the gardener committed the crime. 2. The butler and the gardener committed the crime while the secretary was asleep. 3. In spite of the fact that the secretary wasn't asleep, the maid committed the crime. 4. Either the butler or the maid committed the crime. 5. Neither the butler nor the maid committed the crime. 6. Either the butler committed the crime, or both the maid and the gardener did it. 7. The crime was committed by either the butler or the maid, but not both. 8. Either the crime was committed by both the butler and the gardener or both the butler and the maid. 9. It isn't true that unless the secretary wasn't asleep both the maid and butler committed the crime.

We'll try some more translation exercises shortly. First, however, let's learn how to use our two remaining connectives. Of all the connectives, the horseshoe is the most difficult. It is used to represent a conditional statement. A conditional statement is a statement that sets down a condition, and then goes on to talk about what is the case if that condition is met. The condition is frequently referred to as "the antecedent," while the part of the statement that goes on to say what is true, given that the antecedent is met, is called "the consequent." With respect to the horseshoe, the claim that sets the condition down (i.e., the antecedent) is always put on its left side, while the consequent is always placed on its right side. This is extremely important. The primary meaning of the horseshoe is, "If . . . then . . . " Thus the statement, "If Shakespeare wrote Romeo and Juliet, then Milton wrote Paradise Lost," will be represented as (S>M). Note that the antecedent of this conditional statement is that Shakespeare wrote Romeo and Juliet, and it, therefore, belongs on the left side of the horseshoe. The symbolic claim, (M>S) would say, "If Milton wrote Paradise Lost, then Shakespeare wrote Romeo and Juliet." Unfortunately, all sorts of complications arise with the horseshoe. First, the word "then" may not occur in the statement. For it might read, "If Shakespeare wrote Romeo and Juliet, Milton wrote Paradise Lost." Insofar as the symbolic translation is concerned, this doesn't matter. The claim is still translated: (S>M). Second, the consequent might be expressed in the sentence before the antecedent. Thus, someone might say, "Milton wrote Paradise Lost, if Shakespeare wrote Romeo and Juliet." This is still translated as (S>M), because the antecedent is still that Shakespeare wrote Romeo and Juliet.
To make matters even worse, there are many other words besides "if" that set a condition down, words like: "since," "because," "when," "for," and "provided that," to mention at least a few. (You should start keeping a list of these words.) A third difficulty arises with two specialized cases. Many people have trouble with the expression, "only if." While they recognize that this connective in English needs to be translated as a horseshoe (since it sets down a condition), they choose the wrong side as the antecedent. They represent "Shakespeare wrote Romeo and Juliet only if Milton wrote Paradise Lost," as (M>S). While this would be a correct translation of "Shakespeare wrote Romeo and Juliet, if Milton wrote Paradise Lost," it is not correct for "only if." The claim should be represented: (S>M). (An easy way for you to remember how to represent "only if" is just to remember you should place the horseshoe where the connective "only if" occurs.) For some reason, the expression, "if and only if," may also cause you problems. Often enough, people try to represent it as a horseshoe. Thus, they translate "Shakespeare wrote Romeo and Juliet if and only if Milton wrote Paradise Lost," as (S>M). Actually, this is not a strong enough claim. "If and only if" expresses a condition going in both directions. If anything, it should be translated as: ((M>S).(S>M)). For we can see it as saying, "Shakespeare wrote Romeo and Juliet if Milton wrote Paradise Lost, and Shakespeare wrote Romeo and Juliet only if Milton wrote Paradise Lost." As you will see shortly, however, there is a much simpler way of treating "if and only if." One further problem with translating conditional statements might be worth at least briefly mentioning. There are, in fact, a number of different kinds of conditional statements in English, not all of which are translated with a horseshoe, because not all of them express truth-functionally compound statements. Frequently the expression, "if . . . then . . ."
has the force of, "if this, then afterwards that." Unfortunately, the system of logic we are constructing is not powerful enough to express connectives that suggest a temporal sequence. To make matters worse, there is a use of expressions like "if . . . then . . ." that suggests a causal connection (e.g., "If the bridge collapsed, then there was too much weight on it."). This kind of causal conditional is also too sophisticated for the system we are building. Technically, it should be treated as atomic and translated by assigning an atomic statement letter to it. Fortunately, from the point of view of this text, or any translation exercises you might work on in other logic texts, our discussion of the different kinds of conditional statements can be pretty well ignored. When the author of the exercises uses "if . . . then . . ." you will probably be on safe ground if you just translate the claim as a horseshoe. The last of our five connectives is, perhaps, the easiest one of all. The = represents the idea of a condition going in both directions. It is rarely used in English. When it is used, however, it is almost always expressed either as "if and only if," or as "just in case." When you see these expressions, just use =. Otherwise, it's probably a horseshoe you want.
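Under the standard truth-functional reading that systems like this one give the horseshoe (its official truth conditions are spelled out later), the point that "if and only if" is a condition running in both directions can be sketched as follows. The function names here are our own inventions.

```python
def horseshoe(p, q):
    """The truth-functional conditional: false only when the
    antecedent p is true and the consequent q is false."""
    return (not p) or q

def equals_sign(p, q):
    """'p if and only if q': a condition going in both directions,
    i.e., the conjunction of (q > p) and (p > q)."""
    return horseshoe(q, p) and horseshoe(p, q)

# "If and only if" is a stronger claim than a one-way conditional:
print(horseshoe(False, True))    # True  -- the one-way claim holds
print(equals_sign(False, True))  # False -- the two-way claim fails
```

This is why translating "if and only if" as a bare horseshoe, (S>M), is not a strong enough claim: it captures only one of the two directions.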

There are just two further points we want to make before we start practicing. The first concerns punctuation marks in the English sentence. You should pay very careful attention to any commas or semicolons that occur. They indicate major breaks in the sentence, and they are frequently immediately followed by a connective. Moreover, a semicolon is much more significant than a comma. If the word immediately following the semicolon is a connective, then the symbolic connective you use to represent it should have fewer parentheses around it than any other connective in the formula. The last point may not be one you want to hear. It is this: Translation is something of an art. It requires practice. As you practice translating into symbols more, you will become better at it. When you make a mistake try to see what you did wrong, and try to remember not to make that mistake again. Above all, be patient. Without more ado, let's try some more exercises. These involve the use of all five of our connectives.

QUESTIONS

As before, we will provide the translation key, and you translate each of the claims below into symbolic notation.

Translation Key:
B: The butler committed the crime.
G: The gardener committed the crime.
M: The maid committed the crime.
S: The secretary was asleep.

1. If the secretary was asleep, then both the butler and gardener committed the crime.
2. If both the maid and gardener committed the crime, then the secretary was asleep.
3. The maid committed the crime only if the gardener didn't.
4. The maid committed the crime if the gardener didn't.
5. The butler committed the crime if and only if the gardener didn't.
6. It is not true that if the butler committed the crime then the gardener didn't.
7. If not both the butler and gardener committed the crime, then the maid did it while the secretary was asleep.
8. Although the butler didn't commit the crime if the maid did, he did commit it if the gardener did.
9. If neither the maid nor the gardener did it, then the butler did it while the secretary was asleep.
10. It isn't true that the maid did it provided that both the butler and the gardener did it.
11. If the secretary was asleep, then the maid didn't commit the crime if the butler did it.
12. It's false that if the secretary wasn't asleep the maid committed the crime only if the butler didn't.

So far our only concern has been with translating single statements. But we also need to learn to translate sets of statements and arguments. Before concluding this chapter, perhaps we should at least briefly discuss these. Some time ago, when we were discussing the various symbols in the symbolic language, we mentioned several symbols, which were identified as special symbols. These included the left and right curly braces, { and }, the slash, /, and the semicolon, ;. These special symbols are used when we are representing sets of statements and arguments. A set of statements is, as you may recall, simply a collection of claims that we have decided to view as a unit. We need a way of representing these which will distinguish them from both single statements and arguments. We have decided to represent a set of statements by simply surrounding the statements in the set with a pair of curly braces and separating the statements with semicolons. So, for example, suppose the set we are interested in contains the statements: (P>-Q), (R=--S), and -(-TvG). We will express this in symbols as: {(P>-Q);(R=--S);-(-TvG)}.

We will represent an argument in a similar manner. We will glue the premises together with semicolons and surround them with a pair of curly braces, just as we did with a set of statements. We will then write a slash, and write the conclusion immediately after the slash. Thus, for example, an argument whose premises are (P>(Q.R)), (Q>-T), and T, and whose conclusion is -P, will be represented as: {(P>(Q.R));(Q>-T);T}/-P. That's all there is to it.
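The bookkeeping just described is mechanical enough to automate. Here is a small helper of our own (not part of the text) that joins the premises with semicolons, wraps them in curly braces, and appends the slash and conclusion:

```python
# Build the argument notation: {premise;premise;...}/conclusion

def format_argument(premises, conclusion):
    """Join premises with semicolons inside curly braces, then add
    a slash followed by the conclusion."""
    return "{" + ";".join(premises) + "}/" + conclusion

print(format_argument(["(P>(Q.R))", "(Q>-T)", "T"], "-P"))
# {(P>(Q.R));(Q>-T);T}/-P
```

Dropping the slash and conclusion gives the notation for a bare set of statements, so the same idea covers both cases.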

PROBLEMS

Instructions: Using the translation key provided, translate the argument below into symbolic notation.

Translation Key:
A: The attendant has a coronary.
F: The anti-gravity device fails.
G: Alien Bob gets gas.
M: Alien Bob has his MasterCard.
W: The attack on Washington will succeed.

Alien Bob gets gas assuming that he has his MasterCard and the attendant doesn't have a coronary. If Alien Bob gets gas, the attack on Washington will succeed if the anti-gravity device doesn't fail. So if Alien Bob has his MasterCard and the attendant doesn't have a coronary, the attack on Washington will succeed unless the anti-gravity device fails.

Instructions: For each of the arguments below construct a translation key and translate the argument into symbolic notation.

1. If Paula goes to the beach, she won't go to the movies. If she goes to the movies only if she doesn't go to the beach, Ronald will take her shopping if and only if she gets home early. But she won't get home early. So Ronald won't take her shopping.
2. Unless Fred joins the club, neither Harry nor Bill will join. But if Harry doesn't join, Paula will become the new president only if Jake resigns. However, Jake won't resign. So, if Fred doesn't join the club, Paula won't become the new president.
3. Sally will go to dinner with Bob just in case he gets his BMW out of the shop but doesn't charge the limit on his Visa card. If Bob doesn't get his BMW out of the shop, he won't charge the limit on his Visa card; and he will charge the limit on his Visa card if he does get the BMW out of the shop. Provided that Sally doesn't go out to dinner with Bob, she'll go out with Oscar; however, if she goes out with Oscar, Oscar will charge the limit on his MasterCard. So if Bob doesn't get the BMW out of the shop, Oscar will charge the limit on his MasterCard.
4. Bill made it to class on time provided that his car was working and he didn't stay up too late the night before. But if he made it to class on time, then unless he fell asleep he learned about Freud's life. It isn't true that he either fell asleep or didn't stay up too late the night before. So, Bill didn't learn about Freud's life only if his car wasn't working.
5. In spite of the fact that Lynn is a hard worker, her boss doesn't pay her well. But she can't get ahead unless she's both a hard worker and her boss pays her well. And she can't buy the new car she wants if she can't get ahead. So she's not going to be able to buy the new car she wants.
6. Dracula ate well only if the tourists visited the castle and stayed the night. However, the tourists visited the castle and stayed the night if and only if they were either idiots or their car broke down. Therefore, since the tourists were not idiots, Dracula did not eat well unless their car broke down.

7. The Reverend Mantis had time to start praying only if Mrs. Mantis didn't have dinner after sex. However, unless Mrs. Mantis had dinner after sex, neither the Black Widow nor Lady Bug had dessert. Therefore, since both Lady Bug and the Black Widow had dessert, the Reverend Mantis didn't have time to start praying.

LOCATING AND NUMBERING CONNECTIVES

In this section we will learn how to number connectives. Perhaps we should begin with a complicated formula like the following one: -(--(P.-Q)v(R=(T>--S))). There are ten connectives in this formula. When we are finished we will have a number from 1 to 10 listed above each of these connectives. There are several different schemes for numbering connectives. The one we prefer proceeds according to the following three simple rules:

1. We always start with those connectives that are inside the innermost set of parentheses, and proceed from inside out.
2. We always do tildes first.
3. We work from the right to the left.

Of these rules, the first is the most important. To follow it, all we need to know is how many pairs of left and right parentheses surround each connective. In the case we are examining, the tilde on the far left of the formula is inside no pairs of parentheses. The two tildes to its right are in one pair of parentheses, namely, the left and right ones at the far left and right ends of the formula. The next two connectives over -- the dot and the tilde -- are inside two pairs of parentheses. On the other hand, the wedge is inside only one pair of parentheses. The "=" is in two pairs of parentheses, and the three remaining connectives are all located inside three pairs of parentheses. Our first rule tells us that we should start with the three connectives located on the far right end of the formula. These are the horseshoe, ">," and the two tildes. After we have located these connectives our second rule comes into play. This rule, recall, tells us to do the tildes before doing the horseshoe, or any other binary connective. Which tilde should we do first, however? It is at this point that we use our third rule. It tells us that we should start at the right end of the formula and work left. So we should label the tilde to the direct left of S "1," and we should label the tilde to its immediate left "2."
                              2 1
- ( - - (P . - Q) v (R = (T > - - S)))

The only connective inside three pairs of parentheses that we have not yet done is the >. So we do it next, and we label it "3." We then get:

                            3 2 1
- ( - - (P . - Q) v (R = (T > - - S)))

Now that we've completed all the connectives embedded in three pairs of parentheses, we turn to the ones embedded in two pairs. These are the dot, the tilde next to Q, and the =. The second rule tells us to do tildes first, so the tilde to the immediate left of Q must be 4. As for the dot and the =, since our third rule instructs us to work from right to left, 5 is the =, and 6 is the dot:

           6 4         5    3 2 1
- ( - - (P . - Q) v (R = (T > - - S)))

We turn next to the connectives inside only one pair of parentheses. These are the two tildes to the left of (P.-Q) and the wedge in the middle of the formula. Here again our second rule tells us to do tildes before doing other connectives. Therefore, the two tildes in front of (P.-Q) need to be done before the wedge. Since our third rule tells us to work from right to left, the tilde to the immediate left of (P.-Q) should be 7. The tilde to its left will be 8, and the wedge will be 9.

    8 7    6 4    9    5    3 2 1
- ( - - (P . - Q) v (R = (T > - - S)))

The only connective we haven't numbered yet is the tilde at the far left end of the formula. All we have to do is label it 10, and we are finished:

10  8 7    6 4    9    5    3 2 1
- ( - - (P . - Q) v (R = (T > - - S)))

The last connective we number, i.e., the one with the highest number, we call "the main connective." It is the most important connective in the entire formula. As you have probably already noticed, what we have been doing is really no different from what is done in mathematics. For example, if we want to find the value of -(2+3)*(4/2), we first get the value of 2+3 (i.e., 5) and of 4/2 (i.e., 2). Then we multiply negative 5 by 2. We have been going through the same procedure here. So numbering the connectives should be easy from now on.

You might also note that what we are doing here makes good sense in English. Clearly there is a big difference in meaning between the following claims:

Either both Albert and Barbara are happy, or Charles is happy.
Albert is happy, and besides that, either Barbara or Charles is happy.

To decide whether the first claim is true, we must first obtain the values of the claims that Albert and Barbara are happy, and that Charles is happy. To decide whether the second claim is true, we first need to decide whether the claims that Albert is happy, and that Barbara or Charles is happy, are true. Notice also that we represent these two claims differently in symbols. The first claim we represent as ((A.B)vC), and we should view it as an or-claim. The second claim, on the other hand, is expressed in symbols as (A.(BvC)), and it is an and-claim.
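The three rules are mechanical enough to automate. Below is a small sketch (the function name and approach are mine, not the text's) that numbers the connectives of a formula written in the book's notation, assuming statement letters are uppercase and the wedge is a lowercase v:

```python
def number_connectives(formula):
    """Map each connective's string position to its number, following the
    three rules: innermost parentheses first, tildes before binary
    connectives at the same depth, and right-to-left within each group."""
    connectives = []
    depth = 0
    for pos, ch in enumerate(formula):
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        elif ch in '-.v>=':      # tilde, dot, wedge, horseshoe, triple bar
            connectives.append((depth, ch, pos))
    order = sorted(connectives,
                   key=lambda c: (-c[0],                    # rule 1
                                  0 if c[1] == '-' else 1,  # rule 2
                                  -c[2]))                   # rule 3
    return {pos: i + 1 for i, (_, _, pos) in enumerate(order)}

numbering = number_connectives("-(--(P.-Q)v(R=(T>--S)))")
```

On the example formula this reproduces the numbering above: the tilde next to S gets 1, the horseshoe gets 3, the wedge gets 9, and the far-left tilde, the main connective, gets 10.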

A PROBLEM

Instructions: Number the connectives in the formula below. ( - ( - P. (Q v - R)) = (S > T))

In setting up a truth table, the first thing we need to do is find out how many different letters occur in the formula. For example, in the formula ((P.-Q)>-(-R=-(P.R))), only three different letters occur, namely P, Q, and R. We list these letters in alphabetical order from left to right. Immediately after we have done this we list the formula we want to test. If that formula is a single statement we list it directly to the right of the letters. If, instead, it is a set of statements, we replace the first semicolon with a dot and surround the first and second set members with a pair of parentheses. Once we have done this, we take this unit and replace the semicolon to its immediate right (if there is one) with a dot. Then we surround it and the next set member with a pair of parentheses. We continue this process until we have conjoined all the set members with dots and surrounded them with parentheses. The last step is to delete the curly braces at the beginning and end of the formula. The formula that results goes directly to the right of the letters we have listed. Suppose, for example, we want to test the following set of statements: {(P>-Q); (-R=T); --S}. The process outlined has us go through the following steps:

1. {(P > - Q) . ( - R = T); - - S}
2. {((P > - Q) . ( - R = T)); - - S}
3. {((P > - Q) . ( - R = T)) . - - S}
4. {(((P > - Q) . ( - R = T)) . - - S)}
5. (((P > - Q) . ( - R = T)) . - - S)
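This bookkeeping can be sketched in a few lines of code. The helper below (my own illustration, with a hypothetical name) conjoins the members of a set left-associatively with dots, which yields the same single statement as the steps above:

```python
def set_to_statement(members):
    """Conjoin a list of statement strings, pairing from the left,
    as the semicolon-replacement procedure does."""
    statement = members[0]
    for member in members[1:]:
        statement = f"({statement} . {member})"
    return statement

result = set_to_statement(["(P > - Q)", "(- R = T)", "- - S"])
# result is "(((P > - Q) . (- R = T)) . - - S)", matching step 5
```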

The formula we obtain by going through this process will be a single statement. It is what we place to the right of the letters in our table. We treat arguments similarly to sets of statements. Thus, we convert the semicolon that separates the first two premises into a dot, and we surround these two premises with a pair of parentheses. We continue this process until all the premises are conjoined with dots and surrounded by parentheses. Then we remove the curly braces at the beginning and end of the premises. Once we have finished this, we replace the slash with a >, and we surround the entire formula with a pair of parentheses. This is the statement we place to the right of the letters and test. Suppose, for example, we want to test the following argument: {(P>(Q.R)); (Q=-T); --T} / (-R>-P). We do the following:

1. {(P > (Q . R)) . (Q = - T); - - T} / (- R > - P)
2. {((P > (Q . R)) . (Q = - T)); - - T} / (- R > - P)
3. {((P > (Q . R)) . (Q = - T)) . - - T} / (- R > - P)
4. {(((P > (Q . R)) . (Q = - T)) . - - T)} / (- R > - P)
5. (((P > (Q . R)) . (Q = - T)) . - - T) / (- R > - P)
6. (((P > (Q . R)) . (Q = - T)) . - - T) > (- R > - P)
7. ((((P > (Q . R)) . (Q = - T)) . - - T) > (- R > - P))
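For an argument, the only extra work is the final horseshoe. A sketch (again with a name of my own choosing): conjoin the premises as before, then join the result to the conclusion with > and wrap the whole thing:

```python
def argument_to_statement(premises, conclusion):
    """Build the single conditional statement tested in the table:
    (conjoined premises > conclusion)."""
    conjunction = premises[0]
    for premise in premises[1:]:
        conjunction = f"({conjunction} . {premise})"
    return f"({conjunction} > {conclusion})"

result = argument_to_statement(
    ["(P > (Q . R))", "(Q = - T)", "- - T"], "(- R > - P)")
# result matches step 7 above
```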

Why are we doing this? That question must wait until later for an answer. For the moment, we need only know that the statement we are going to test tells us something important about the set of statements or argument we are examining. Once we have listed the different letters and the formula, our next task is to decide how many rows the table needs. This depends entirely on how many different letters we have listed. To find out how many rows to build, all we need to do is use the formula 2^n, where n is the number of different letters. Thus, if three letters occur in the statement, the formula tells us to build 2^3 (= 8) rows. If it contains four letters, the formula tells us to build 2^4 (= 16) rows, etc. (Note: The number of rows doubles with every additional letter.)

We build the rows in the following way. In the rows directly under the leftmost letter, we begin by listing T's, and continue listing them until we have filled half the rows. We then switch to F's, and fill the remaining rows. We then turn to the column under the next letter to the right. In the rows under it we list half as many T's as we did before, followed by half as many F's, repeating until its rows are full. This process we then repeat for each remaining letter until we have completed all of the rows. Let's see a practical example of this. Suppose the formula we want to test is ((R v - G) > - ( - S . R)). The table should be set up as follows:

G R S   ((R v - G) > - (- S . R))
T T T
T T F
T F T
T F F
F T T
F T F
F F T
F F F

What have we done here, and why have we done it? Each row in our table represents one way the world might be. In the first row, for example, where G, R, and S are all true, we are representing the possibility that all three of our atomic statements are true. Suppose the atomic statement G represents the claim that Gina is happy, while R stands for Reginald is happy, and S for Sally is happy. This row represents the possibility that all three of these individuals are happy. The last row, on the other hand, represents the possibility that none of them is happy.
By building the table in the way we have, we will have shown every possible way the world might be with respect to these three individual statements.
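The row-building recipe above is exactly the ordering you get by iterating over 'TF' with the leftmost letter varying slowest. A quick sketch using the standard library (itertools.product is a convenient, though not obligatory, way to produce it):

```python
from itertools import product

def build_rows(letters):
    """Return one dict per row, mapping each letter to 'T' or 'F',
    in the textbook order: T's fill the top half under the leftmost
    letter, and each column to the right halves the run length."""
    return [dict(zip(letters, values))
            for values in product('TF', repeat=len(letters))]

rows = build_rows(['G', 'R', 'S'])
# 2**3 = 8 rows; the first is all T's, the last all F's
```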

QUESTIONS:

Consider the following set of statements: {- - P; (L > - (T . U)); (L v - T)}

1. How many different letters are contained in this set?
2. List those letters in alphabetical order, and translate the set into a single statement.
3. How many rows will we have to build in this table?
4. Beneath the letter L, how many rows of T's will occur before an F occurs?
5. But what are we to do about P? How many rows under it will contain T's before we find an F?

By now you can probably assign numbers to the connectives and set the truth table up. All you still need to learn is how to fill in the values for the various connectives, row by row, and how to interpret the result. In this section, we will try to teach you how to fill in the values for each connective. Once you have the basics down, doing more sophisticated problems should be easy. So let's begin with a simple case.

      7 6  4 3 2   5 1
P Q   - - (- - - P . - Q)
T T
T F
F T
F F

We have set the table up in the manner suggested earlier. Unfortunately, the hardest part of constructing it still lies ahead of us. Before we are finished we need to have a value listed in each row of the table under every connective. We begin with the first column. The connective in this column is a tilde. Tildes flip values. Moreover, it's clear that since this tilde is to the immediate left of Q it flips the value of Q. As we can see, however, in the first row Q has the value T. So we assign the value F to -Q. In the second row Q has the value F, so -Q has the value T. We therefore place a T in column 1, row 2. What value should we place in column 1, row 3, and what about the last row of column 1? If you answered "F" to the first question and "T" to the second you are right, and column 1 is complete:

      7 6  4 3 2   5 1
P Q   - - (- - - P . - Q)
T T                   F
T F                   T
F T                   F
F F                   T

Now it's your turn to complete a column. What values should we list in rows 1-4 of column 2? If you answered "FFTT," you got it, and the table will then look like this:

      7 6  4 3 2   5 1
P Q   - - (- - - P . - Q)
T T             F     F
T F             F     T
F T             T     F
F F             T     T

Now let's turn to column 3. It too is a tilde. Moreover, it is directly to the left of column 2. It therefore flips the values we just got under column 2, and so we get the same values in column 3 that we had under P when we started. A double negative is, as you might expect, the same thing as the affirmative.

      7 6  4 3 2   5 1
P Q   - - (- - - P . - Q)
T T           T F     F
T F           T F     T
F T           F T     F
F F           F T     T

How would you complete the values for column 4? You should have answered "FFTT," and your table will then look like this:

      7 6  4 3 2   5 1
P Q   - - (- - - P . - Q)
T T         F T F     F
T F         F T F     T
F T         T F T     F
F F         T F T     T

Next we come to column 5. It's a dot, not a tilde. Dots differ significantly from tildes. They are binary connectives, and so, unlike tildes, which have only a right side, they have both a left and a right side. Also, unlike tildes, dots always carry a pair of parentheses with them. In the formula we are considering, the one pair of parentheses belongs to the dot. Now dot claims are true only when both sides of those claims are true. (Just as in English, "Albert is happy and Barbara is happy" is true only if both the claim that Albert is happy and the claim that Barbara is happy are true.) What, however, is the claim on the left side of the dot, and what is the claim on its right side? Clearly the claim on the left side is ---P, while -Q is the claim on its right side. So the values of these two formulas are the values we must use in determining whether the dot claim is true or false. In both cases, the highest numbered connective in the formula provides the values for that formula. To find the value for the left side of the dot, locate the highest numbered column to the left of the dot before you come to a left parenthesis (viz., 4). To find the value for the right side of the dot, locate the highest numbered column to the right of the dot before you hit a right parenthesis (viz., 1). The dot claim will be true, in a row, only when both these sides are true.
If you look at the values of the two sides in row 1 you will see that they are both false in this row. Therefore, the value of the dot is F in this row. In row 2, the value of the left side of the dot is again F, but this time the value of its right side is T. Still, this is not a case where both of the sides are T. So the dot is F in row 2. In row 3, while the claim on the left side of the dot is T, its right side is F. Here too, then, the value of the dot is F. What about the last row, however? Which value does it have? I'll give you a hint: it's either a T or an F. You should have answered "T," and the table should now look like this:

      7 6  4 3 2   5 1
P Q   - - (- - - P . - Q)
T T         F T F   F F
T F         F T F   F T
F T         T F T   F F
F F         T F T   T T

All that remains to be done are the tildes at the left end of the formula. Tildes always flip values. It should be clear that 7 flips the value of 6, but which value does 6 flip? When a tilde occurs directly to the left of a left parenthesis it flips the highest numbered connective within that pair of parentheses. Since the dot is the highest numbered connective within the parentheses, column 6 flips the value of column 5 in each row. Consequently, the completed table should look like this:

      7 6  4 3 2   5 1
P Q   - - (- - - P . - Q)
T T   F T  F T F   F F
T F   F T  F T F   F T
F T   F T  T F T   F F
F F   T F  T F T   T T

We still need to consider the other connectives. First, however, let's try one more problem.

      5    1   2    4 3
P Q   - (( - P . Q) . - Q)
T T
T F
F T
F F

The connective under column 1 is a tilde, and tildes flip values. So the value listed under column 1, in each row, should be the opposite of the value of P in that row. Therefore, column 1's values should read:

      5    1   2    4 3
P Q   - (( - P . Q) . - Q)
T T        F
T F        F
F T        T
F F        T

The connective in column 2 is a dot. The formula on the left side of the dot is -P, and column 1 contains its values. Meanwhile, the formula on the right side of the dot is Q, and the column under the initial Q lists these values. Since dot claims are true only when both sides of the dot are true, the only row in which both sides of the dot are true is row 3. So the values listed in column 2 should be:

      5    1   2    4 3
P Q   - (( - P . Q) . - Q)
T T        F   F
T F        F   F
F T        T   T
F F        T   F

We now need to figure out the values in column 3. Like column 1, however, column 3 is a tilde, and tildes flip values. Obviously, then, in each row of column 3 the value listed should be the opposite of the value of Q in that row. Thus, we get:

      5    1   2    4 3
P Q   - (( - P . Q) . - Q)
T T        F   F      F
T F        F   F      T
F T        T   T      F
F F        T   F      T

Next, we need to do column 4. It's a dot, and dots are binary connectives, so the dot must have a left and a right side. The formula on the left side of the dot is (-P.Q), and the values of this formula are listed under the highest numbered connective in that formula, namely, column 2. The formula on the right side of the dot is -Q, and the values of this formula are listed under column 3. Now we know that dot claims are true only when both of their sides are true. So column 4 will be true only in the rows where columns 2 and 3 are both true. However, there are no rows in which columns 2 and 3 are both true. Therefore, all of the values in column 4 should be false. Once we fill those values in, our table will look like this:

      5    1   2    4 3
P Q   - (( - P . Q) . - Q)
T T        F   F    F F
T F        F   F    F T
F T        T   T    F F
F F        T   F    F T

Column 5 is a tilde, and tildes flip values. However, which column's values is column 5 supposed to flip? It flips the values of the highest numbered connective inside the parentheses, namely, 4. Thus, the completed table should read:

      5    1   2    4 3
P Q   - (( - P . Q) . - Q)
T T   T    F   F    F F
T F   T    F   F    F T
F T   T    T   T    F F
F F   T    T   F    F T

The next connective is the wedge. Like the dot, the wedge is a binary connective, and so it has a left and a right side. Wedge claims are false, however, only when both sides are false. In all other cases the wedge claim is true. We'll move on soon. First, however, let's look at one quick problem that involves this connective.

      1    3 2
A N   ((A . N) v - N)
T T
T F
F T
F F

Columns 1 and 2 should be easy. The connective in column 1 is a dot, and it glues the atomic letters A and N together. Since dots are true only when both sides are true, we should get the following values under column 1:

      1    3 2
A N   ((A . N) v - N)
T T      T
T F      F
F T      F
F F      F

We turn now to column 2, which is obviously a tilde. Since tildes flip values, the values in column 2 should read:

      1    3 2
A N   ((A . N) v - N)
T T      T      F
T F      F      T
F T      F      F
F F      F      T

We can now do column 3. It's a wedge, and wedges are false only when both of their sides are false. Since the only row in which this happens is row 3, the values under column 3 should read:

      1    3 2
A N   ((A . N) v - N)
T T      T    T F
T F      F    T T
F T      F    F F
F F      F    T T

The next connective is the =. Like the dot and the wedge, = is a binary connective and so has both a left and a right side. It is true, however, only when both of its sides have the same value. In those cases where its two sides have different values, the = gets the value F. This connective is easy. So let's try a quick example here.

      2    3    1
K L   ((L v K) = (K = L))
T T
T F
F T
F F

Clearly we need to do column 1 first. The values in it should be:

      2    3    1
K L   ((L v K) = (K = L))
T T                 T
T F                 F
F T                 F
F F                 T

Next we do column 2. When we have completed it our table will look like this:

      2    3    1
K L   ((L v K) = (K = L))
T T      T          T
T F      T          F
F T      T          F
F F      F          T

Finally, we can now figure out the values in column 3. Our completed table will then look like this:

      2    3    1
K L   ((L v K) = (K = L))
T T      T    T     T
T F      T    F     F
F T      T    F     F
F F      F    F     T

Now for the last, but trickiest, of the connectives. Like ., v, and =, > is a binary connective and so has a left and a right side. It is false in only one case, however, namely, when its left side is T and its right side is F. It is T in all other cases. (Check your text, or ask your teacher, if you want to know why we evaluate it in this way.) Let's look at an example that uses this connective. Then you will have all of the fundamentals for constructing tables.

      2    4 3    1
P Q   ((Q > P) > - (P > Q))
T T
T F
F T
F F

Here we begin with column 1. Remember, it is false only when the formula on its left side is true and the formula on its right side is false. Reading down the column, what values do we get?

      2    4 3    1
P Q   ((Q > P) > - (P > Q))
T T                   T
T F                   F
F T                   T
F F                   T

Now do column 2. Here we must be careful, however. Q occurs on the left side of this > and P on its right. So (Q>P) is false only when Q is true and P false, and that occurs only in the third row.

      2    4 3    1
P Q   ((Q > P) > - (P > Q))
T T      T            T
T F      T            F
F T      F            T
F F      T            T

We must now do column 3, and its values obviously reverse the values in column 1. So we should get:

      2    4 3    1
P Q   ((Q > P) > - (P > Q))
T T      T       F    T
T F      T       T    F
F T      F       F    T
F F      T       F    T

Finally, we can do column 4. It uses the values of column 2 on its left and column 3 on its right. Once we have done this column, the completed table should look like this:

      2    4 3    1
P Q   ((Q > P) > - (P > Q))
T T      T    F  F    T
T F      T    T  T    F
F T      F    T  F    T
F F      T    F  F    T
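All five connectives can be summed up as small truth functions. The sketch below (function names are mine; the book's symbols are -, ., v, =, and >) simply restates the evaluation rules just given:

```python
def tilde(a):
    """Tildes flip values."""
    return 'F' if a == 'T' else 'T'

def dot(a, b):
    """True only when both sides are true."""
    return 'T' if (a, b) == ('T', 'T') else 'F'

def wedge(a, b):
    """False only when both sides are false."""
    return 'F' if (a, b) == ('F', 'F') else 'T'

def triple_bar(a, b):
    """True only when both sides have the same value."""
    return 'T' if a == b else 'F'

def horseshoe(a, b):
    """False only when the left side is T and the right side is F."""
    return 'F' if (a, b) == ('T', 'F') else 'T'
```

For instance, horseshoe('T', 'F') is the one false case of >, and tilde(tilde('T')) illustrates that a double negative is the same as the affirmative.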

As far as the fundamentals of constructing truth tables are concerned that is it. The only thing you may not know yet is how to interpret them.

1. EVALUATING SINGLE STATEMENTS: Once you have completed the table, evaluating it is easy. If the formula is a single statement, it will be logically true if it contains only T's in its highest numbered column. If it has nothing but F's under this connective, it will be logically false. Finally, if there is a mixture of T's and F's, it will be logically indeterminate. The reason for this is simple enough to understand. Each row in the table represents one way the world might be. The value under the highest numbered connective tells us whether the formula is true or false in that possible world. In our table, however, we have considered all the ways the world could be. If the formula is true in all these cases then it must be true. On the other hand, if it is false in all these cases then it must be false. And if there is a mixture of T's and F's the statement could be true and it could be false. Logic alone cannot tell us which it is.

EXAMPLES

         2    1
P Q   (P > (Q > P))
T T      T    T
T F      T    T
F T      T    F
F F      T    T
LOGICALLY TRUE

          1    3 2
P Q   ((P . Q) . - Q)
T T      T    F F
T F      F    F T
F T      F    F F
F F      F    F T
LOGICALLY FALSE

         2 1
P Q   (P = - Q)
T T      F F
T F      T T
F T      T F
F F      F T
LOGICALLY INDETERMINATE

2. EVALUATING SETS OF STATEMENTS: If the formula contains curly braces it will be either a set of statements or an argument. If it contains a slash it is an argument, while if it does not it is a set of statements. Sets of statements are either consistent or inconsistent. A set of statements will be consistent if at least one T occurs under the formula's main connective (i.e., in its highest numbered column). On the other hand, if all the values under the highest numbered connective are F's, the set of statements will be inconsistent. The reason for this should be clear. When we say a set of statements is consistent, what we mean is that all of the statements in that set could be true together. When we say a set of statements is inconsistent, we mean that it isn't possible for all of the statements in that set to be true together.
What we have done here is to convert the set of statements into a single statement. That single statement asserts that all of the members of our original set of statements are true. Clearly, however, if this single statement even can be true, this will show that our set of statements is consistent; while if it cannot be true, this will show that our set is inconsistent.

EXAMPLE 1

This is the set: {(P > Q); - (Q > P)}

This is the single statement that asserts that all the members of the set are true:

          2    4 3    1
P Q   ((P > Q) . - (Q > P))
T T      T    F  F    T
T F      F    F  F    T
F T      T    T  T    F
F F      T    F  F    T

We find a T in row three of the indicated column (column 4). This shows that the statement asserting that all of the set members are true can be a true statement, and so it establishes that the set is consistent.

EXAMPLE 2

This is the set: {(P > Q); P; - Q}

This is the single statement that asserts that all the members of the set are true:

           1    2    4 3
P Q   (((P > Q) . P) . - Q)
T T        T    T    F F
T F        F    F    F T
F T        T    F    F F
F F        T    F    F T

Here we find an F in every row of the indicated column. This tells us that the statement asserting that all the members of the set are true cannot be true. It tells us, therefore, that the set is inconsistent.

3. EVALUATING ARGUMENTS: When we say that an argument is valid, what we mean is that it is not possible for it to have all true premises and a false conclusion. The statement we have constructed, however, asserts that if all of the premises are true then the conclusion will be true. If this claim is always true (i.e., true in every row of the table), the argument cannot have all true premises and a false conclusion. It will, therefore, be valid. Conversely, if there is even one false row in our table, this establishes that it is possible for the argument to have all true premises and a false conclusion. So it tells us that the argument is invalid.

EXAMPLE 1

This is an argument. This is a statement that asserts that if the argument's premises are true its conclusion will be true.

The value F occurs in the third row of the indicated column. So the claim that if the premises of the argument are true its conclusion will be true is sometimes false. Therefore it is possible for the argument to have all true premises and a false conclusion. So it is invalid.

EXAMPLE 2

This is the argument: {(P = - Q); Q} / - P

This is the statement that asserts that if the argument's premises are true its conclusion will be true:

           2 1    3    5 4
P Q   (((P = - Q) . Q) > - P)
T T        F F    F    T F
T F        T T    F    T F
F T        T F    T    T T
F F        F T    F    T T

All of the rows have the value T in the indicated column. So the claim that if the premises of the argument are true, its conclusion is true, has to be true. Therefore, the argument is valid.
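The three verdicts so far (logically true/false/indeterminate, consistent/inconsistent, valid/invalid) all come down to scanning the highest numbered column. Here is a compact sketch, with formulas expressed as Python functions over booleans rather than in the book's notation (an assumption made purely for brevity):

```python
from itertools import product

def rows(n):
    """Every way the world might be for n different letters."""
    return product([True, False], repeat=n)

def classify(statement, n):
    """Single statements: true, false, or indeterminate."""
    values = [statement(*row) for row in rows(n)]
    if all(values):
        return 'logically true'
    if not any(values):
        return 'logically false'
    return 'logically indeterminate'

def consistent(statements, n):
    """A set is consistent when some row makes every member true."""
    return any(all(s(*row) for s in statements) for row in rows(n))

def valid(premises, conclusion, n):
    """Valid when no row has all true premises and a false conclusion."""
    return not any(all(p(*row) for p in premises) and not conclusion(*row)
                   for row in rows(n))
```

Re-checking the worked examples above: {(P > Q); P; - Q} comes out inconsistent, and {(P = - Q); Q} / - P comes out valid.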

4. EVALUATING PAIRS OF STATEMENTS: You may recall that in chapter 1 we briefly introduced the idea of logical equivalence. We said two statements are logically equivalent just in case they must have the same truth-values. How can we use truth tables to decide whether a pair of statements is logically equivalent? The procedure here is simple. Just connect the two statements with =, and surround them with a pair of parentheses. Then test this statement for logical truth. If it is logically true, the pair of statements in question is logically equivalent. If the single statement we have constructed is not logically true, the pair of statements is not logically equivalent. Suppose, for example, that the two statements are (P = - Q) and - (P = Q). To test this pair of statements for logical equivalence, we connect them with = and surround them with a pair of parentheses. Doing this, we obtain:

((P = - Q) = - (P = Q))

We then test this statement for logical truth.

          3 1    5 4    2
P Q   ((P = - Q) = - (P = Q))
T T      F F    T F      T
T F      T T    T T      F
F T      T F    T T      F
F F      F T    T F      T

This statement is logically true. Therefore, the pair of statements is logically equivalent.
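Equivalence reduces to the same machinery: two statements are logically equivalent when they agree in every row, which is just the = test coming out T everywhere. A sketch in the same style as before (formulas as boolean functions, an assumption for brevity):

```python
from itertools import product

def equivalent(s1, s2, n):
    """True when the two statements have the same value in every row,
    i.e., when (s1 = s2) is logically true."""
    return all(s1(*row) == s2(*row)
               for row in product([True, False], repeat=n))

# The example above: (P = - Q) and - (P = Q)
same = equivalent(lambda p, q: p == (not q),
                  lambda p, q: not (p == q), 2)
```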

PROBLEMS

A. SINGLE STATEMENTS

Instructions: Determine whether the following statements are logically true, logically false, or logically indeterminate by using the Truth Table Method. If you want to check your answer, go to the "Truth Table" chapter of Logical Reasoning, enter the section of the chapter entitled "Original Problem," and type in the formula exactly as it appears below. The solution will appear on the screen.

1. (((P>Q).(-P>-Q))=((P.Q)v(-P.-Q)))
2. ((P>(-R>-Q)).-(-(P.Q)vR))
3. (((PvQ)>(R.S))>((P>R).(P>S)))
4. (((P=-Q).(Q=-R))>(P.(QvR)))
5. ((Pv(Q.R))>-((PvQ).(-P>R)))

B. PAIRS OF STATEMENTS

Instructions: Determine whether the following pairs of statements are logically equivalent by using the Truth Table Method. To use this method, glue the two statements together with = and enclose the result in a pair of parentheses. Thus, the first problem below should be written: (-(-P.-Q)=(PvQ)). If the statement you have thus constructed is logically true, the pair of statements will be logically equivalent. (As before, you can check your results with the computer by going to the "Original Problem" section of the chapter on "Truth Tables" and entering the formula.)

1. - (- P . - Q)
2. (P > Q)
3. (P . (Q = R))
4. (P > (Q > R))
5. ((P = Q) . (Q = R))

C. SETS OF STATEMENTS

Instructions: Determine whether the sets of statements below are consistent or inconsistent by using the Truth Table Method. As before, you can use the "Original Problem" section of the chapter on Truth Tables to check your results. Note: In the "Evaluating Truth Tables" portion of the Truth Table Tutorial you were told how to use the Truth Table Method to determine whether a set of statements is consistent or inconsistent. If you are using the "Original Problem" section of the program, you should simply type in the whole set of statements exactly as it appears below. The program will convert this set to the appropriate single statement and provide you with the result.

1. {(P > - Q); (Q > - P); (- P = Q)}
2. {(P > (Q . - R)); (- (P > Q) v - (P > - R))}
3. {(P > Q); (- R > - Q); (- R v P); - (P = R)}
4. {((P v Q) > R); (R = - S); (S > (- P v - Q))}
5. {- ((P v Q) v R); ((R . S) v (R . - S))}

D. ARGUMENTS

Instructions: Determine whether the arguments below are valid or invalid by using the Truth Table Method. Use the "Original Problem" section of the chapter on Truth Tables to check your results. Note: In the "Evaluating Truth Tables" portion of the Truth Table Tutorial you were told how to use the Truth Table Method to determine whether an argument is valid or invalid. If you are using the "Original Problem" section of the program, you should simply type the argument in exactly as it appears below. The program will convert this argument into the appropriate single statement and provide you with the result.

1. {((P . Q) > - R); (R > Q)} / - P
2. {- (P > (Q . R)); ((Q > - R) > - S)} / (P . - S)
3. {(P > (Q v R)); (Q > - S)} / (- P v (R . - S))
4. {((P . - Q) v (- P . Q)); (Q > - R); (- R > - Q)} / (P = R)
5. {((- P v Q) . (- Q = - R)); (P . - R)} / - S

E. INTRODUCING NEW CONNECTIVES

Instructions: Suppose we introduce two new binary connectives.
* is true only when its left side is true and its right side is false; while # is false only when both its left and right sides are true. Determine whether the

following single statements are logically true, logically false, or logically indeterminate by using the Truth Table Method.

1. ((P * Q) # (Q * P))
2. ((P # (Q * R)) * P)
3. ((P * (Q # R)) * P)
4. ((P > (Q * P)) # ((Q * P) = - P))
5. (((- P v Q) * - R) # (- P . - (R # Q)))
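Before building full tables for these, it may help to restate the two new connectives as truth functions, in the same style used for the standard connectives (a sketch; the names star and hash are mine):

```python
def star(a, b):
    """'*' is true only when its left side is true and its right side
    is false."""
    return 'T' if (a, b) == ('T', 'F') else 'F'

def hash_(a, b):
    """'#' is false only when both its left and right sides are true."""
    return 'F' if (a, b) == ('T', 'T') else 'T'
```

Notice that * is true in exactly the one row where > is false, and # is the flip of the dot.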

BRAINTEASER

THE PURPOSE OF TRUTH TREES

Although the truth tree technique we will be exploring in this chapter can ultimately be used to obtain answers to the same questions truth tables answer, the tree method is primarily designed to find out whether a set of statements is consistent or inconsistent. Once we have learned to construct trees to find out whether a set of statements is consistent or inconsistent we will see how to use them to answer other kinds of questions we might have. One great advantage the truth tree method has over truth tables is that trees frequently involve much less work. When we were learning to build tables, we saw that the more atomic statement letters there are in a problem the longer the table needs to be. With trees, as we will discover, this is not necessarily so.

A truth tree always begins with a set of statements. We start by listing each member of the set, one beneath the other. We then number the lines at the far left and justify each of them with "SM," which is an abbreviation for "set member," at the far right. Suppose, for example, we want to test: {(Pv-(Q.-R)); ----T; -(Av--B); (S>(U.W)); ---T}. Our tree will begin as follows:

1   (P v - (Q . - R))    SM
2   - - - - T            SM
3   - (A v - - B)        SM
4   (S > (U . W))        SM
5   - - - T              SM

Although this may not look much like a tree, we should view it as one. We should imagine it as upside down, with the trunk beginning on line 1, starting with the first set member, and extending until we have listed all the set members. As we list other formulas beneath the initial set members the tree will gradually grow. Though all of our early trees will only look like trunks coming out of the ground, later we will learn how to build branches on them until they finally really do begin to resemble trees. The trees we will be building won't ever look like Christmas trees. Instead, they will consist of connected two-pronged branching affairs, with or without flowers on the ends of their branches. To see how these trees will look, just imagine the following structure containing formulae along its branches:

[Diagram: an upside-down tree built from two-pronged forks, with formulas along its branches and asterisks ("flowers") at the ends of some branches.]

This tree contains six branches. Five of these end in flowers. Only the fifth branch from the left has no flower.

The flowering process is extremely important, for it decides whether a set of statements is consistent or inconsistent. What, however, causes a flower?

Let's call any atomic statement letter that either is not negated, or is singly negated, "a lintel" (so P, Q, R, -P, -Q, and -R are all examples of lintels). And let's refer to any pair of lintels, one of which is the negation of the other (e.g., P and -P), as "a conflicting pair of lintels." Whenever a conflicting pair of lintels occurs on a branch we will place a flower at the end of that branch. Thus, for example, in the following tree every branch has a flower on the end of it.

 1   ((P v - Q) v (R v (- - T . U)))            SM
 2   (- P . Q)                                  SM
 3   - (R v T)                                  SM
 4   - P                                        2, .D
 5   Q                                          2, .D
 6   - R                                        3, -vD
 7   - T                                        3, -vD
 8      (P v - Q)         (R v (- - T . U))     1, vD
 9     P       - Q                              8, vD
       *       *
10                        R      (- - T . U)    8, vD
                          *
11                               - - T          10, .D
12                               U              10, .D
13                               T              11, - -D
                                 *

The farthest branch to the left has a flower because of the conflicting pair of lintels P (on line 9) and -P (on line 4). The second branch from the left has a flower because of the conflicting pair of lintels -Q (on line 9) and Q (on line 5). The third branch from the left has a flower because of the conflicting pair of lintels R (on line 10) and -R (on line 6). Finally, the farthest branch to the right has a flower because of the conflicting pair of lintels T (on line 13) and -T (on line 7). If all of the branches of a tree end in flowers we will call that tree "a completely flowering tree." A completely flowering tree tells us that the set we are testing is inconsistent.
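The flower test itself is mechanical: collect the lintels along a branch and look for a conflicting pair. A sketch (my own helper, not from the text), treating a branch as the list of lintel strings that lie along it:

```python
def has_flower(branch):
    """True when the branch contains a conflicting pair of lintels,
    e.g., both 'P' and '-P'."""
    lintels = set(branch)
    return any('-' + lintel in lintels
               for lintel in lintels if not lintel.startswith('-'))

# The leftmost branch of the tree above carries -P, Q, -R, -T, and P:
flowered = has_flower(['-P', 'Q', '-R', '-T', 'P'])
```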


All of this may seem clear enough, but what if a formula is not a lintel and our tree is not completely flowering? It turns out that any formula that is not a lintel must be of one of the following nine types:

1. - -
2. ( . )
3. ( v )
4. ( > )
5. ( = )
6. - ( . )
7. - ( v )
8. - ( > )
9. - ( = )

Thus, -(---(P. -Q) v - (R = (T > B))), is of type 7, since its main connective is a tilde and its next main connective is a wedge. While - - - (P. -Q) is of type 1, since it is a doubly negated formula. And the formula, - (R = (T > B)) is of type 9, because its main connective is - and its next main connective is =. Corresponding to each of these nine types of formulas is a decomposition rule. Each decomposition rule tells us how to break a particular kind of formula up into simpler formulas, formulas that are equivalent to the original. When we are constructing a tree, and we find a formula that isn't a lintel, we simply identify and

apply the rule that is applicable to that type of formula. In this way, as our tree develops, we will get to simpler and simpler formulas, until we finally obtain lintels. In what follows we are going to be presenting the nine decomposition rules, one after the other. After we formulate a new rule, we will present an example that uses that rule, with the earlier ones. We will introduce the easier rules first, and you will find that it is a good idea to try to use these rules before using the more difficult ones. This creates trees that are not as complex as they would otherwise be. One other thing is worth mentioning. When a new rule is presented, you might want to write it down for future reference.
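The ninefold classification is mechanical enough to program. The sketch below assumes a made-up encoding, ours rather than the book's: atoms are strings, and compound formulas are tuples headed by their main connective.

```python
# Hypothetical encoding (not from the text): atoms are strings; compound
# formulas are tuples: ('-', f), ('.', a, b), ('v', a, b), ('>', a, b), ('=', a, b).
def formula_type(f):
    """Return the type number 1-9 for a non-lintel formula, or None for a lintel."""
    if isinstance(f, str):                       # an atom is a lintel
        return None
    if f[0] == '-':
        inner = f[1]
        if isinstance(inner, str):               # a singly negated atom is a lintel
            return None
        if inner[0] == '-':                      # type 1: doubly negated
            return 1
        return {'.': 6, 'v': 7, '>': 8, '=': 9}[inner[0]]
    return {'.': 2, 'v': 3, '>': 4, '=': 5}[f[0]]

# The book's three examples:
neg = lambda f: ('-', f)
left = neg(neg(neg(('.', 'P', neg('Q')))))       # ---(P . -Q)
right = neg(('=', 'R', ('>', 'T', 'B')))         # -(R = (T > B))
print(formula_type(neg(('v', left, right))))     # 7: tilde wedge
print(formula_type(left))                        # 1: double negation
print(formula_type(right))                       # 9: tilde triple bar
```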

TILDE TILDE DECOMPOSITION

n.  -- _     ✓
p.  _        n, --D

This rule tells us that if a formula is a doubly negated formula, we may write that formula down, without the two tildes, on any later line. The earlier formula is then checked off (n and p are just line numbers).

AN EXAMPLE

1  (P > -(Q = -R))    SM
2  ----R              SM
3  ---R               SM

Let's begin by decomposing line 2. When we apply the present rule to this line, we get:

1  (P > -(Q = -R))    SM
2  ----R              SM ✓
3  ---R               SM
4  --R                2, --D

In effect, we have now replaced the formula on line 2 with one simpler than it, namely, the one on line 4. Now, however, we can decompose line 4.

1  (P > -(Q = -R))    SM
2  ----R              SM ✓
3  ---R               SM
4  --R                2, --D ✓
5  R                  4, --D

Our first lintel! Unfortunately, we can't flower yet, because we don't have a -R to go with R. It's now time for us to decompose line 3.

1  (P > -(Q = -R))    SM
2  ----R              SM ✓
3  ---R               SM ✓
4  --R                2, --D ✓
5  R                  4, --D
6  -R                 3, --D
   *

The last application of the rule causes a flower. Moreover, since all of the branches now have flowers on them, our tree is completely flowering. Therefore, we have shown that the set we are testing is inconsistent.

DOT DECOMPOSITION

n.    ( _ . _ )    ✓
p.    _            n, .D
p+1.  _            n, .D

This rule tells us that if a formula is a dot claim, we may write its left side down, followed immediately, on the very next line, by its right side.

AN EXAMPLE

1  (P . --(Q . R))    SM
2  ---Q               SM

In this problem we should begin with line 2, because it involves an earlier rule than line 1 does. When we decompose this line we obtain the following:

1  (P . --(Q . R))    SM
2  ---Q               SM ✓
3  -Q                 2, --D

Next, we have to do line 1. Note that we can't use the Tilde Tilde rule on it, because the tildes are not its main connective. Whenever we select it, the only rule we can use on it is Dot Decomposition.

1  (P . --(Q . R))    SM ✓
2  ---Q               SM ✓
3  -Q                 2, --D
4  P                  1, .D
5  --(Q . R)          1, .D

Notice that the Dot Decomposition rule requires that we write down two later lines. (Many rules resemble Dot Decomposition in this respect.) We can now use our Tilde Tilde rule on line 5.

1  (P . --(Q . R))    SM ✓
2  ---Q               SM ✓
3  -Q                 2, --D
4  P                  1, .D
5  --(Q . R)          1, .D ✓
6  (Q . R)            5, --D

We can now finish the problem by decomposing line 6. (If you haven't already done so, note how we justify the steps to the far right.)

1  (P . --(Q . R))    SM ✓
2  ---Q               SM ✓
3  -Q                 2, --D
4  P                  1, .D
5  --(Q . R)          1, .D ✓
6  (Q . R)            5, --D ✓
7  Q                  6, .D
8  R                  6, .D
   *

The conflicting lintels on lines 7 and 3 cause a flower. The set is, therefore, inconsistent. Note that we finish using the rule (on line 8) before flowering.

TILDE HORSESHOE DECOMPOSITION

n.    -( _ > _ )    ✓
p.    _             n, ->D
p+1.  -_            n, ->D

This rule tells us that if a formula is a tilde horseshoe claim, we may write its left side down, followed immediately, on the next line, by a tilde and then its right side.

AN EXAMPLE

1  -(P > -(Q . R))    SM
2  -(R > S)           SM

Although we can choose to decompose either line 1 or line 2, whichever one we decide to do we have to use the Tilde Horseshoe rule on it. Let's begin with line 2.

1  -(P > -(Q . R))    SM
2  -(R > S)           SM ✓
3  R                  2, ->D
4  -S                 2, ->D

This rule works as it does because -(R > S) is equivalent to (R . -S). Shall we decompose line 1 now?

1  -(P > -(Q . R))    SM ✓
2  -(R > S)           SM ✓
3  R                  2, ->D
4  -S                 2, ->D
5  P                  1, ->D
6  --(Q . R)          1, ->D

Notice that on line 6 all we did was to bring down the right side of line 1 and place a tilde in front of it. We can now use our Tilde Tilde rule on line 6.

1  -(P > -(Q . R))    SM ✓
2  -(R > S)           SM ✓
3  R                  2, ->D
4  -S                 2, ->D
5  P                  1, ->D
6  --(Q . R)          1, ->D ✓
7  (Q . R)            6, --D

That was easy, wasn't it? Now all we can do is decompose line 7. Can you see what is going to happen after we have finished decomposing it?

1  -(P > -(Q . R))    SM ✓
2  -(R > S)           SM ✓
3  R                  2, ->D
4  -S                 2, ->D
5  P                  1, ->D
6  --(Q . R)          1, ->D ✓
7  (Q . R)            6, --D ✓
8  Q                  7, .D
9  R                  7, .D

At this point we are finished, because we have decomposed every formula that isn't a lintel, yet the tree has no flowers. Does this mean that the set is consistent? Yes! Whenever a branch of a tree contains only lintels and formulas that have already been checked off, and that branch has no flower, we have a result: the tree is not completely flowering, and the set we are testing is consistent.

TILDE WEDGE DECOMPOSITION

n.    -( _ v _ )    ✓
p.    -_            n, -vD
p+1.  -_            n, -vD

This rule tells us that if a formula is a tilde wedge claim, we may write down a tilde, then the formula on its left side, followed, on the very next line, by a tilde and then its right side.

AN EXAMPLE

1  P                        SM
2  -((P . Q) > (R v -S))    SM

In this problem, the only thing we can do is line 2, and the only rule we can use on it is Tilde Horseshoe Decomposition. Do you remember how that rule works?

1  P                        SM
2  -((P . Q) > (R v -S))    SM ✓
3  (P . Q)                  2, ->D
4  -(R v -S)                2, ->D

Now we need to do either line 3 or line 4. Since the rule for decomposing line 3 is an earlier rule than the one for decomposing line 4, let's do it next.

1  P                        SM
2  -((P . Q) > (R v -S))    SM ✓
3  (P . Q)                  2, ->D ✓
4  -(R v -S)                2, ->D
5  P                        3, .D
6  Q                        3, .D

It's now time for us to use our new rule and decompose line 4. (The fact that the formulas -(R v -S) and (-R . --S) are logically equivalent explains why this rule works the way it does, if you really wish to know.)

1  P                        SM
2  -((P . Q) > (R v -S))    SM ✓
3  (P . Q)                  2, ->D ✓
4  -(R v -S)                2, ->D ✓
5  P                        3, .D
6  Q                        3, .D
7  -R                       4, -vD
8  --S                      4, -vD

The only formula we haven't yet decomposed is the one on line 8. Though you may already see the result, we need to finish the branch anyway.

1  P                        SM
2  -((P . Q) > (R v -S))    SM ✓
3  (P . Q)                  2, ->D ✓
4  -(R v -S)                2, ->D ✓
5  P                        3, .D
6  Q                        3, .D
7  -R                       4, -vD
8  --S                      4, -vD ✓
9  S                        8, --D

We hope you evaluated the set as consistent, because it is. Let's turn now to a new kind of rule, and to one that is more complicated than the rules we have been examining.

WEDGE DECOMPOSITION

n.   ( _ v _ )     ✓
         /\
p.   _       _     n, vD

This rule tells us that if a formula is a wedge claim, to decompose it, we need to draw a fork (viz., /\), and place the left side of the wedge claim under the left branch, and the right side of the formula under the right branch.

FIRST EXAMPLE

1  (P . -Q)    SM
2  (Q v -P)    SM

Since this rule introduces some new twists, let's start with some very simple examples. Given the principle that we should use the earlier rules first, we should begin the decomposition process by selecting line 1 to decompose. When we do this we get:

1  (P . -Q)    SM ✓
2  (Q v -P)    SM
3  P           1, .D
4  -Q          1, .D

We now need to decompose line 2. The fork, you should note, is part of the rule. It needs to be drawn in, and it goes at the end of the branch. Notice also that this rule adds only one later line (viz., line 5). Under the leftmost branch of our fork we place the formula that occurred on the left side of the wedge claim, while under the rightmost branch we place the right side of that formula.

1  (P . -Q)    SM ✓
2  (Q v -P)    SM ✓
3  P           1, .D
4  -Q          1, .D
       /\
5   Q      -P      2, vD
    *      *

Clearly, the leftmost branch of the tree now flowers because of the conflicting lintels Q and -Q, while the rightmost branch flowers because of -P and P. Our tree is now completely flowering, and this establishes that the set is inconsistent.

SECOND EXAMPLE

1  (P v Q)     SM
2  (-Q v R)    SM

Since both these lines involve using our wedge rule, it really doesn't make a difference which one we start with. Let's do line 2 first. When we decompose this line, we get:

1  (P v Q)     SM
2  (-Q v R)    SM ✓
        /\
3   -Q      R      2, vD

Now we have a puzzle, however. We need to do line 1, but to decompose it we need to fork. Where should we fork? The answer is: we need to fork under every branch that doesn't already have a flower on it and on which we find the formula we want to decompose. In the present case this means under both -Q and under R. The two forks will look absolutely identical.

1  (P v Q)     SM ✓
2  (-Q v R)    SM ✓
         /\
3    -Q       R        2, vD
     /\       /\
4   P   Q   P   Q      1, vD
        *

Obviously the resulting tree has three branches that are finished but do not have flowers on the ends of them. All it takes is one such branch to establish that the set we are testing is consistent, however. So the set is consistent. Before we turn to the next rule, let's try one more problem -- a bit more complex one this time.

THIRD EXAMPLE

1  (-(P > Q) v (T v (-Q v R)))    SM
2  (Q . -T)                       SM

We'll decompose line 2 first, because the rule for breaking it up is much simpler than the rule for breaking up line 1. (As a general principle, do not use forking rules until you have to.)

1  (-(P > Q) v (T v (-Q v R)))    SM
2  (Q . -T)                       SM ✓
3  Q                              2, .D
4  -T                             2, .D

Now we have no choice. We have to break up the formula on line 1. The main connective in this formula is the first wedge, the one between -(P > Q) and (T v (-Q v R)), so we must use the Wedge Decomposition rule.

1  (-(P > Q) v (T v (-Q v R)))    SM ✓
2  (Q . -T)                       SM ✓
3  Q                              2, .D
4  -T                             2, .D
              /\
5   -(P > Q)      (T v (-Q v R))  1, vD

It is now best to decompose -(P > Q). Notice that when we decompose it we only break it up under the left branch, and we check off only the formula on that branch. We will turn to the formula on the right branch of line 5 soon.

1  (-(P > Q) v (T v (-Q v R)))    SM ✓
2  (Q . -T)                       SM ✓
3  Q                              2, .D
4  -T                             2, .D
               /\
5   -(P > Q) ✓     (T v (-Q v R))  1, vD
6   P                              5, ->D
7   -Q                             5, ->D
    *

The leftmost branch is now flowering because of the conflicting lintels -Q and Q. We still need to break up the formula on the right branch of line 5, however. It causes a fork. Watch how we do it.

1  (-(P > Q) v (T v (-Q v R)))    SM ✓
2  (Q . -T)                       SM ✓
3  Q                              2, .D
4  -T                             2, .D
               /\
5   -(P > Q) ✓     (T v (-Q v R)) ✓  1, vD
6   P                              5, ->D
7   -Q                             5, ->D
    *
                      /\
8                 T       (-Q v R)  5, vD
                  *

The leftmost branch on line 8 flowers because of the conflicting pair of lintels T and -T. There is still something else that needs to be done, however, namely, (-Q v R). Can you see how the tree is going to end?

1  (-(P > Q) v (T v (-Q v R)))    SM ✓
2  (Q . -T)                       SM ✓
3  Q                              2, .D
4  -T                             2, .D
               /\
5   -(P > Q) ✓     (T v (-Q v R)) ✓  1, vD
6   P                              5, ->D
7   -Q                             5, ->D
    *
                      /\
8                 T       (-Q v R) ✓  5, vD
                  *
                            /\
9                       -Q      R   8, vD
                        *

Although there is only one branch that hasn't flowered, our tree is finished. The set is consistent. You might want to study this example before continuing.

HORSESHOE DECOMPOSITION

n.   ( _ > _ )     ✓
          /\
p.   -_       _    n, >D

This rule tells us that if a formula is a horseshoe claim, to decompose it we need to draw a fork (viz., /\), and put a tilde and the left side of the claim under the leftmost branch and the right side of the formula under the rightmost branch.

AN EXAMPLE

1  ((P v Q) > (R v (S > T)))    SM
2  P                            SM

Because the main connective in the formula on line 1 is a horseshoe, we have to use the new rule first. (Note: to understand why the new rule works the way it does, it is necessary to realize that (P > Q) is equivalent to (-P v Q).)

1  ((P v Q) > (R v (S > T)))    SM ✓
2  P                            SM
            /\
3   -(P v Q)     (R v (S > T))  1, >D

The normal thing to do next would be to decompose -(P v Q), but we are going to do the right side of line 3 instead. (The reason for this will become clear shortly.)

1  ((P v Q) > (R v (S > T)))    SM ✓
2  P                            SM
            /\
3   -(P v Q)     (R v (S > T)) ✓  1, >D
                     /\
4                R       (S > T)  3, vD

What are we supposed to do now? Oddly enough, we can stop, because the middle branch is finished and does not have a flower; and so we know the set we are testing is consistent. We could go on, but we don't have to.

TILDE DOT DECOMPOSITION

n.   -( _ . _ )     ✓
          /\
p.   -_       -_    n, -.D

This rule tells us that if a formula is a tilde dot claim, to decompose it we need to draw a fork, and put a tilde and then the formula on the left side of the claim we are decomposing under the leftmost branch, and a tilde and the right side of that formula under the right branch.

The fact that the formulas -(P . Q) and (-P v -Q) are logically equivalent explains why the rule works the way it does.

AN EXAMPLE

1  (P . -(Q . (R > P)))    SM
2  Q                       SM

We obviously have to start by using Dot Decomposition on line 1.

1  (P . -(Q . (R > P)))    SM ✓
2  Q                       SM
3  P                       1, .D
4  -(Q . (R > P))          1, .D

We can now use the new rule on line 4.

1  (P . -(Q . (R > P)))    SM ✓
2  Q                       SM
3  P                       1, .D
4  -(Q . (R > P))          1, .D ✓
         /\
5    -Q      -(R > P) ✓    4, -.D
     *
6            R             5, ->D
7            -P            5, ->D
             *

The set is inconsistent.

TRIPLE BAR DECOMPOSITION

n.    ( _ = _ )     ✓
           /\
p.    _        -_   n, =D
p+1.  _        -_   n, =D

This rule tells us that if a formula is a triple bar claim, to decompose it we need to draw a fork and put first the left, and then, underneath that, the right side of the formula we are decomposing under the left branch. Then we need to put the left and right sides of this formula, but negated, under the right branch of the fork.

AN EXAMPLE

1  -P         SM
2  -Q         SM
3  (P = -Q)   SM

Notice that this rule requires not only a fork, but also two later lines. It, and the next rule, are the most complex ones of all.

1  -P          SM
2  -Q          SM
3  (P = -Q)    SM ✓
        /\
4    P      -P      3, =D
5    -Q     --Q     3, =D
     *

Now all we need to do is decompose the formula --Q on the right side of line 5, and we're finished. Once we do this we get:

1  -P          SM
2  -Q          SM
3  (P = -Q)    SM ✓
        /\
4    P      -P      3, =D
5    -Q     --Q ✓   3, =D
     *
6           Q       5, --D
            *

The set is inconsistent.

TILDE TRIPLE BAR DECOMPOSITION

n.    -( _ = _ )    ✓
           /\
p.    _        -_   n, -=D
p+1.  -_       _    n, -=D

This rule tells us that if a formula is a tilde triple bar claim, to decompose it we need to draw a fork and put first the left side of the formula we are decomposing, and then, underneath that, a tilde and then the right side of this formula, under the leftmost branch. Then, we need to put a tilde and the left side of this formula, and, below that, the right side, under the rightmost branch.

AN EXAMPLE

1  (P = Q)     SM
2  -(P = Q)    SM

Let's begin by decomposing line 1 with the Triple Bar rule.

1  (P = Q)     SM ✓
2  -(P = Q)    SM
        /\
3    P      -P      1, =D
4    Q      -Q      1, =D

Now it's time for us to decompose line 2 using our new rule.

1  (P = Q)     SM ✓
2  -(P = Q)    SM ✓
          /\
3     P        -P          1, =D
4     Q        -Q          1, =D
      /\        /\
5   P    -P   P    -P      2, -=D
6  -Q     Q  -Q     Q      2, -=D
    *     *   *     *

The set we are testing is inconsistent. The fact that (P = Q) is equivalent to ((P . Q) v (-P . -Q)) explains why the Triple Bar Decomposition rule works the way it does, while the fact that -(P = Q) is equivalent to ((P . -Q) v (-P . Q)) explains why the Tilde Triple Bar rule works as it does.

Now that we have examined all of the decomposition rules, all we need to see is how to use the truth tree method to find answers to the other kinds of questions we might have. More specifically, we need to know how to use the tree method to: (1) determine whether an argument is valid or invalid; (2) determine whether a single statement is logically true, logically false, or logically indeterminate; and (3) determine whether a pair of statements is, or is not, equivalent. Let's turn briefly to these topics.
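Since each decomposition rule is purely mechanical, the whole tree method can be sketched as a short recursive program. The encoding below (atoms as strings, compound formulas as tuples headed by their main connective) is our own illustration, not part of the text.

```python
# Illustrative sketch of the tree method (assumed tuple encoding, not the book's).
def neg(f):
    return ('-', f)

def is_lintel(f):
    return isinstance(f, str) or (f[0] == '-' and isinstance(f[1], str))

def consistent(branch):
    """True if some branch of the tree on these formulas fails to flower."""
    for i, f in enumerate(branch):
        if is_lintel(f):
            continue
        rest = branch[:i] + branch[i + 1:]
        if f[0] == '-':
            g = f[1]
            if g[0] == '-':                                     # Tilde Tilde
                return consistent(rest + [g[1]])
            op, a, b = g
            if op == 'v':                                       # Tilde Wedge
                return consistent(rest + [neg(a), neg(b)])
            if op == '>':                                       # Tilde Horseshoe
                return consistent(rest + [a, neg(b)])
            if op == '.':                                       # Tilde Dot: fork
                return consistent(rest + [neg(a)]) or consistent(rest + [neg(b)])
            # Tilde Triple Bar: fork
            return consistent(rest + [a, neg(b)]) or consistent(rest + [neg(a), b])
        op, a, b = f
        if op == '.':                                           # Dot
            return consistent(rest + [a, b])
        if op == 'v':                                           # Wedge: fork
            return consistent(rest + [a]) or consistent(rest + [b])
        if op == '>':                                           # Horseshoe: fork
            return consistent(rest + [neg(a)]) or consistent(rest + [b])
        # Triple Bar: fork
        return consistent(rest + [a, b]) or consistent(rest + [neg(a), neg(b)])
    # Only lintels remain: the branch flowers iff it has a conflicting pair.
    atoms = {f for f in branch if isinstance(f, str)}
    negated = {f[1] for f in branch if not isinstance(f, str)}
    return not (atoms & negated)

# {(P . -P)} is inconsistent; {P, -((P . Q) > (R v -S))} was found consistent above.
print(consistent([('.', 'P', neg('P'))]))                                     # False
print(consistent(['P', neg(('>', ('.', 'P', 'Q'), ('v', 'R', neg('S'))))]))   # True
```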

USING THE TREE METHOD TO DETERMINE WHETHER AN ARGUMENT IS VALID OR INVALID

Really, the tree method is only capable of determining whether a set of statements is consistent or inconsistent. To use this method to find out whether an argument is valid or invalid we must first transform the argument into a set of statements. In the chapter on Basic Concepts we said that an argument is valid just in case the set of statements consisting in that argument's premises and the negation of its conclusion is inconsistent. This definition provides the clue we need to be able to use the tree method on arguments. It suggests that we can find out whether an argument is valid or invalid in the following way:

1. Construct a set of statements consisting in the original argument's premises, together with the negation of its conclusion.
2. Use the tree method on this set of statements.
3. If the result is that the set is inconsistent, this tells us that the original argument is valid; while if the result is that the set is consistent, this tells us that the original argument is invalid.

AN EXAMPLE

ORIGINAL ARGUMENT TESTED

Premise: (P > (Q v R))
Premise: (P . -Q)
----------
Conclusion: R

SET OF STATEMENTS TO BE TESTED

Set Member: (P > (Q v R))
Set Member: (P . -Q)
Set Member: -R

TREE

1  (P > (Q v R))    SM ✓
2  (P . -Q)         SM ✓
3  -R               SM
4  P                2, .D
5  -Q               2, .D
        /\
6   -P      (Q v R) ✓    1, >D
    *
              /\
7          Q      R      6, vD
           *      *

EXPLANATION

The tree above tells us that the set we have tested is INCONSISTENT. This means that the argument we started with is VALID. If the tree had instead shown that the set was CONSISTENT, we would have concluded that the argument was INVALID.
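As a sanity check on the tree verdict, the same argument can be tested by brute-force truth tables. This is an illustration in our own tuple encoding, not the book's method:

```python
# Cross-check validity by exhausting truth-value assignments (illustration only).
from itertools import product

def value(f, row):
    """Truth value of formula f under an assignment (dict atom -> bool)."""
    if isinstance(f, str):
        return row[f]
    if f[0] == '-':
        return not value(f[1], row)
    op, a, b = f
    if op == '.':
        return value(a, row) and value(b, row)
    if op == 'v':
        return value(a, row) or value(b, row)
    if op == '>':
        return (not value(a, row)) or value(b, row)
    return value(a, row) == value(b, row)          # '='

def valid(premises, conclusion, atoms):
    """Valid iff no row makes every premise true and the conclusion false."""
    for bits in product([True, False], repeat=len(atoms)):
        row = dict(zip(atoms, bits))
        if all(value(p, row) for p in premises) and not value(conclusion, row):
            return False
    return True

# The argument tested above: (P > (Q v R)), (P . -Q), therefore R.
print(valid([('>', 'P', ('v', 'Q', 'R')), ('.', 'P', ('-', 'Q'))], 'R',
            ['P', 'Q', 'R']))                      # True
```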

USING THE TREE METHOD TO DETERMINE WHETHER A SINGLE STATEMENT IS LOGICALLY TRUE, LOGICALLY FALSE, OR LOGICALLY INDETERMINATE

If we want to understand how to use the tree method to test a single statement we need to recognize two things. First, a single statement is logically false if and only if the set consisting in that statement, and nothing else, is inconsistent. Second, a statement is logically true if and only if its negation is logically false. To test a single statement all we have to do is transform that statement into the appropriate set of statements and construct our tree on that set. The following principles tell us how to do this.

INSTRUCTIONS FOR DETERMINING IF A STATEMENT IS LOGICALLY FALSE

Where S is any statement, to find out whether or not S is logically false, test {S}. If the tree on this set is completely flowering, this means that S is logically false. On the other hand, if the tree on this set is not completely flowering, this means that S is not logically false.

EXAMPLE

1  (P . -P)    SM ✓
2  P           1, .D
3  -P          1, .D
   *

EXPLANATION

The tree above shows that the single statement (P . -P) is LOGICALLY FALSE. If the tree had not been completely flowering, this would have shown that the statement is NOT LOGICALLY FALSE.

INSTRUCTIONS FOR DETERMINING IF A STATEMENT IS LOGICALLY TRUE

Where S is any statement, to find out if S is logically true, test {-S}. If the tree on this set is completely flowering, it means that S is logically true. On the other hand, if the tree on this set is not completely flowering, it means that S is not logically true.

EXAMPLE

1  -(P v -P)   SM ✓
2  -P          1, -vD
3  --P         1, -vD ✓
4  P           3, --D
   *

EXPLANATION

The tree above shows that the single statement (P v -P) is LOGICALLY TRUE. If the tree had not been a completely flowering tree, this would have shown that the statement is NOT LOGICALLY TRUE.

To use the tree method to find out whether a single statement is logically indeterminate, we would have to construct a tree on both the statement and its negation and find both trees not completely flowering.
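The same threefold classification can be reproduced by brute-force truth tables (an illustration in our own tuple encoding, not the book's tree method):

```python
# Classify a single statement by exhausting truth-value assignments.
from itertools import product

def value(f, row):
    if isinstance(f, str):
        return row[f]
    if f[0] == '-':
        return not value(f[1], row)
    op, a, b = f
    if op == '.':
        return value(a, row) and value(b, row)
    if op == 'v':
        return value(a, row) or value(b, row)
    if op == '>':
        return (not value(a, row)) or value(b, row)
    return value(a, row) == value(b, row)          # '='

def classify_statement(statement, atoms):
    results = {value(statement, dict(zip(atoms, bits)))
               for bits in product([True, False], repeat=len(atoms))}
    if results == {True}:
        return "logically true"
    if results == {False}:
        return "logically false"
    return "logically indeterminate"

print(classify_statement(('v', 'P', ('-', 'P')), ['P']))   # logically true
print(classify_statement(('.', 'P', ('-', 'P')), ['P']))   # logically false
print(classify_statement(('>', 'P', 'Q'), ['P', 'Q']))     # logically indeterminate
```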

USING THE TREE METHOD TO DETERMINE WHETHER A PAIR OF STATEMENTS IS, OR IS NOT, EQUIVALENT

The key to understanding how to use the tree method to discover if two statements are, or are not, equivalent lies in recognizing that they will be equivalent if and only if the claim that they are equivalent is logically true. Put a little more clearly, if S1 and S2 are equivalent, the statement (S1 = S2) will be logically true; and if S1 and S2 are not equivalent, the statement (S1 = S2) will not be logically true.

Where S1 and S2 are the pair of statements in question, test {-(S1 = S2)}. If the tree on this set is completely flowering, the two statements are equivalent. On the other hand, if the tree is not completely flowering, the two statements are not equivalent.

EXAMPLE

1  -(P = --P)   SM ✓
         /\
2     P      -P      1, -=D
3   ---P     --P     1, -=D
4   -P       P       3, --D
    *        *

EXPLANATION

The tree above shows that the statements P and --P are EQUIVALENT. Had the tree not been a completely flowering tree, this would have told us that the statements are NOT EQUIVALENT.
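The equivalence test also lends itself to a brute-force cross-check: two statements are equivalent just in case their biconditional is a tautology. Again, the tuple encoding is our own illustration:

```python
# Equivalence check: (S1 = S2) must be true under every assignment.
from itertools import product

def value(f, row):
    if isinstance(f, str):
        return row[f]
    if f[0] == '-':
        return not value(f[1], row)
    op, a, b = f
    if op == '.':
        return value(a, row) and value(b, row)
    if op == 'v':
        return value(a, row) or value(b, row)
    if op == '>':
        return (not value(a, row)) or value(b, row)
    return value(a, row) == value(b, row)          # '='

def equivalent(s1, s2, atoms):
    return all(value(('=', s1, s2), dict(zip(atoms, bits)))
               for bits in product([True, False], repeat=len(atoms)))

print(equivalent('P', ('-', ('-', 'P')), ['P']))   # True: P and --P
print(equivalent('P', ('-', 'P'), ['P']))          # False
```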

PROBLEMS

A. Use the truth tree method to decide whether the following sets of statements are consistent or inconsistent.

1. {(P v (-Q > R)); -((R = Q) v (P v S))}
2. {(P = (Q . R)); (P v -R); -(P = Q)}
3. {(P > (Q . -R)); (-R > (Q . P)); -(P = R)}
4. {((A > B) > (C > D)); (-(C > A) . (D > E)); -E}
5. {((A > -D) v (A > C)); (D = -C); (C > (-A . -D))}
6. {((F > G) . (H > I)); (-(J . H) > -(-G > I)); (F v H); -H}
7. {(D v (E . -F)); (D = (G . -H)); -(F > -H)}

B. Use the truth tree method to decide whether the following arguments are valid or invalid.

1. {(P > Q); (Q > (R . S))} / (-R > -P)
2. {(P = Q); -(Q = R)} / (P > -R)
3. {(P > (Q . (R v S))); (R > --S)} / (-S > -R)
4. {(P > (Q v R)); -(Q . R); -(P = S)} / (P > (Q v -S))
5. {((-Q v -R) > -P); (R > -S)} / (P > (-S . T))
6. {(-A v -(-B . C)); -(A > D); (D = B)} / (-C v -B)
7. {(A = (B > F)); (F > (B v C))} / (B > (A > F))

C. Use the truth tree method to decide whether the following single statements are logically true, logically false, or logically indeterminate.

1. (P > (-P = (Q . -Q)))
2. ((P . Q) = (-P v -Q))
3. ((P > (P > Q)) > (P > (P . Q)))
4. (((P v Q) > R) . -(-R > -Q))
5. ((T > S) = (S . T))
6. ((P v (Q v R)) > (-P > (-Q > R)))
7. -((((P > Q) . (R > S)) . (P v R)) > (Q v S))

D. Use the truth tree method to decide whether the following pairs of statements are equivalent or are not equivalent.

1. (P > (Q > R))    and    ((Q . P) > R)
2. (P . Q)          and    (P v Q)
3. (P > (-Q v R))   and    ((Q . P) > R)
4. (P = Q)          and    (-Q > -P)
5. (-A v -(B . C))  and    -((A v B) . (C v A))
6. (P > (-Q > -R))  and    ((R . P) > Q)
7. (P > (Q > P))    and    (R v -R)

The system of derivations, or "proofs" as it is sometimes called, differs in several important ways from both tables and trees. First, unlike tables and trees, derivations use permissive rules rather than ordering rules. These rules permit us to write certain formulas down, provided that certain other formulas have already been listed. We are not forced to use any particular rule at any particular point. Second, the purpose of derivations is different from the purpose of tables and trees. While tables and trees are primarily designed to show us whether or not an argument is valid, derivations show us why an argument is valid (assuming that it is), and they help us explain to other people how the conclusion of that argument follows from its premises.

A derivation is best viewed as simply a list of formulas. Each formula listed is either a premise of the argument, or the result of applying a rule to one or two earlier lines. The derivation begins by listing the argument's premises. (The conclusion is often listed to the right of the final premise.) The objective is to get from the premises to the conclusion. Once this is accomplished, the derivation is finished. We have shown a way to arrive at the conclusion of the argument from its premises. (The list of formulas is said to be a derivation of the formula written on the last line -- the conclusion -- from the initial premises.) Once we have constructed a derivation we can use it to help explain to others how to get from the argument's premises to its conclusion by using only simple and obviously correct reasoning processes.

There are numerous different systems of derivations. The one we will be developing here contains two different types of rules. First, it contains Rules of Inference. (There are, as you will see, nine different Rules of Inference.) Though all the rules are required, the Rules of Inference are the primary rules in the system, and they need to be mastered first. In addition to these rules, however, the system also contains rules we will refer to as "Rules of Replacement." After we have explored all of the Rules of Inference we will examine the Rules of Replacement. Once we have included both sets of rules in the system, we will (theoretically, at least) be able to derive the conclusion of any valid argument from its premises.

MODUS PONENS

n.   ( _ > _ )
p.   _
     _______
     _           n, p, MP

The rule Modus Ponens, abbreviated MP, tells us that if a horseshoe claim has already been listed on any earlier line of a derivation, and the left side of that horseshoe claim has also been listed on another line, we may write its right side down on any later line we wish. When we write the new formula down, we justify it by citing both of the required earlier lines, followed by "MP."

AN EXAMPLE

1. ((P . Q) > -R)
2. (P . Q)
3. -R                      1, 2, MP

A PROBLEM

1. (M > ((-P v R) > Q))
2. M
3. (-P v R)                / Q
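The permissive character of MP is easy to see in code. The sketch below is our own illustration, using an assumed tuple encoding of formulas rather than anything from the text:

```python
# Illustrative sketch: Modus Ponens pulls the right side out of a horseshoe claim.
def modus_ponens(horseshoe, left):
    op, a, b = horseshoe
    assert op == '>' and left == a, "MP needs ( > ) and its left side"
    return b

# Lines 1 and 2 of the example yield line 3:
line1 = ('>', ('.', 'P', 'Q'), ('-', 'R'))         # ((P . Q) > -R)
line2 = ('.', 'P', 'Q')
print(modus_ponens(line1, line2))                  # ('-', 'R'), i.e. -R
```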

MODUS TOLLENS

n.   ( _ > _ )
p.   -_
     _______
     -_          n, p, MT

The rule Modus Tollens, abbreviated MT, tells us that if a horseshoe claim has already been listed on any earlier line of a derivation, and a tilde followed by its right side has also been listed on another line, we may write a tilde followed by its left side down on any later line we wish. Once we have listed the new formula, we justify it by citing both of the required earlier lines, followed by "MT."

AN EXAMPLE

1. ((T v -Q) > -R)
2. (T v -Q)
3. (S > R)
4. -R                1, 2, MP
5. -S                3, 4, MT

A PROBLEM

1. -S
2. (T > S)
3. (-T > (P > S))    / -P

HYPOTHETICAL SYLLOGISM

n.   ( _ > _ )
p.   ( _ > _ )
     _______
     ( _ > _ )       n, p, HS

The rule Hypothetical Syllogism, abbreviated HS, tells us that if a horseshoe claim has already been listed on any earlier line of a derivation, and a second horseshoe claim, whose right side exactly matches the left side of the other horseshoe claim, has also been listed, we may write down a new horseshoe claim whose left side is the non-matching left side of the one horseshoe claim, and whose right side is the non-matching right side of the other horseshoe claim. The new claim is justified by citing the two earlier lines, followed by "HS."

AN EXAMPLE

1. (P > (Q . R))
2. ((Q . R) > S)
3. (P > S)           1, 2, HS

A PROBLEM

1. (P > Q)
2. ((P > R) > -T)
3. (S > T)
4. (Q > R)           / -S

ABSORPTION

n.   ( _ > _ )
     _______
     ( _ > ( _ . _ ))    n, Abs

Unlike MP, MT, and HS, which require that two earlier lines be listed for the rule to be used, Absorption (Abs) needs only one earlier line. That line must be a horseshoe claim. The rule Abs then tells us that we may write down a new horseshoe claim. The formula on the left of this new horseshoe claim must match the left side of the earlier horseshoe claim, while its right side will be a dot claim. On the left of the dot we put the left side of the original horseshoe, while on the right side of the dot we put the right side of that horseshoe.

AN EXAMPLE

1. (P > Q)
2. ((P . Q) > R)
3. (P > (P . Q))    1, Abs
4. (P > R)          2, 3, HS

A PROBLEM

1. -(R . S)
2. (R > S)
3. (-R > T)         / T

If you compare the four rules we have just introduced, you might notice a couple of things about them. First, they all deal with claims that have a horseshoe as their main connective. Second, while two of these rules, viz., MP and MT, tell us how to break up a formula whose main connective is a horseshoe, the other two rules tell us how to build a formula that has a horseshoe as its main connective. This last point is, we think, especially helpful. Whenever you want to dig a part of a horseshoe formula out of that formula, you should consider the possibility of using either MP or MT. And whenever you want to build a formula that is not yet listed and has a horseshoe as its main connective, you should consider the possibility of using either HS or Abs.

SIMPLIFICATION

n.   ( _ . _ )
     _______
     _           n, Simp

Like both Modus Ponens and Modus Tollens, Simplification tells us how to use a claim that occurs earlier in a derivation. However, it deals with dots rather than horseshoes. It tells us that we can always write the left side of a dot claim down. (Oddly enough, it does not allow us to write the right side down.) When we use this rule, all we need to do is cite the line whose left side we are pulling down, followed by "Simp."

AN EXAMPLE

1. (P > Q)
2. (-Q . R)
3. (-P > S)
4. -Q            2, Simp
5. -P            1, 4, MT
6. S             3, 5, MP
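The example above can be replayed as small rule functions. This is a sketch of ours, not the book's notation; formulas are encoded as tuples headed by their main connective:

```python
# Illustrative sketch: the Simplification example as rule applications.
def simp(dot):
    """Simp: the left side of a dot claim."""
    op, a, b = dot
    assert op == '.'
    return a

def modus_tollens(horseshoe, tilde_right):
    """MT: from ( > ) and the tilde of its right side, the tilde of its left."""
    op, a, b = horseshoe
    assert op == '>' and tilde_right == ('-', b)
    return ('-', a)

def modus_ponens(horseshoe, left):
    """MP: from ( > ) and its left side, its right side."""
    op, a, b = horseshoe
    assert op == '>' and left == a
    return b

l1 = ('>', 'P', 'Q')                # 1. (P > Q)
l2 = ('.', ('-', 'Q'), 'R')         # 2. (-Q . R)
l3 = ('>', ('-', 'P'), 'S')         # 3. (-P > S)
l4 = simp(l2)                       # 4. -Q        2, Simp
l5 = modus_tollens(l1, l4)          # 5. -P        1, 4, MT
l6 = modus_ponens(l3, l5)           # 6. S         3, 5, MP
print(l4, l5, l6)
```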

CONJUNCTION

n.   _
p.   _
     _______
     ( _ . _ )   n, p, Conj

Like Simplification, Conjunction also deals with a dot claim. However, it tells you how to build one, rather than destroy it. All you need are both sides of the dot claim you want, and you can list it. You justify the new line you are writing down by citing the two earlier lines required and writing "Conj."

AN EXAMPLE

1. (P > Q)
2. (-Q . R)
3. ((-P . -Q) > S)
4. -Q                2, Simp
5. -P                1, 4, MT
6. (-P . -Q)         4, 5, Conj
7. S                 3, 6, MP

A PROBLEM

1. (P . R)
2. (P > S)
3. ((P . S) > (S > T))    / (P > T)

Conjunction is a very powerful rule. It permits you to build infinitely many different formulas, beginning with just two. Thus P and Q yield (P . Q), and this, together with P, gives (P . (P . Q)), and so on. As with other building rules, however, the idea is to build a formula that you can then use to do something with. Let's turn now to two rules that involve wedges, rather than dots.

ADDITION

n.   _
     _______
     ( _ v _ )   n, Add

Like Conjunction, Addition is a building rule; and also like Conjunction, it can be used to build infinitely many formulas. The rule tells us that we can add on, with a wedge, absolutely anything we want to any formula already listed. We sometimes like to call this rule "The Magic Rule," because it permits us to add on absolutely any formula. Like the rabbit, it comes out of the proverbial hat.

AN EXAMPLE

1. ((P v -Q) > R)
2. (P . -S)
3. P                 2, Simp
4. (P v -Q)          3, Add
5. R                 1, 4, MP
6. (R v (S = -T))    5, Add

A PROBLEM

1. ((P > -Q) > -R)
2. (P > -S)
3. ((-R v T) > -J)
4. (-S > -Q)         / (-J . -R)

DISJUNCTIVE SYLLOGISM

n.   ( _ v _ )
p.   -_
     _______
     _           n, p, DS

Our next rule, Disjunctive Syllogism (DS), tells us how to destroy a wedge claim. In effect, it says that if we already have a wedge claim listed on an earlier line, and a tilde followed by its left side listed on another earlier line, we may write down the right side of the wedge claim at any later point. The move is justified by citing the two required lines and "DS."

AN EXAMPLE

1. (P > Q)
2. (P v R)
3. -Q
4. -P            1, 3, MT
5. R             2, 4, DS
6. (R v -S)      5, Add

A PROBLEM

1. (-P . Q)
2. (P v (R > S))
3. (T > R)       / (T > S)

CONSTRUCTIVE DILEMMA

n.   (( _ > _ ) . ( _ > _ ))
p.   ( _ v _ )
     _______
     ( _ v _ )                n, p, CD

Constructive Dilemma, abbreviated CD, is a complicated rule. It says that if we already have a dot claim, both sides of which are horseshoes, and also a wedge claim, whose left side matches the left side of the horseshoe claim on the left of the dot, and whose right side matches the left side of the horseshoe claim on the right of the dot, we can build a wedge claim whose left side matches the right side of the former horseshoe and whose right side matches the right side of the latter horseshoe. (Examine the rule above carefully.)

AN EXAMPLE

1. ((P > Q) . (R > S))
2. (P v R)
3. (Q v S)       1, 2, CD

A PROBLEM

1. (P > Q)
2. (P v (R > S))
3. -Q
4. (T > W)
5. (Q v (T v R))    / (W v S)

The Rules of Replacement differ from the Rules of Inference in several ways. First, they do not require that you use them on the main connective. Instead, they can be applied to a part of a formula. Second, they require only one earlier line of the appropriate sort, and not, as most of the Rules of Inference do, two. Finally, you can work backwards with these rules. (All of this will, we hope, become clearer soon.) Each of the rules, in effect, asserts that a formula of one sort can either replace, or be replaced by, a formula of another sort. To use the rule, you simply make the exchange and cite the earlier line you are making the exchange on. If you are exchanging part of a formula on an earlier line with another unit, the portion of the earlier line not replaced is simply copied. DOUBLE NEGATION

-- _  =  _       n, DN

This rule tells us that we may either chop off two tildes from, or add two tildes to, any formula, or any part of any formula, which has already been listed. The justification consists in citing the line number of the formula on which the exchange has been made, followed by the abbreviation for the rule. Study the example below carefully.

AN EXAMPLE

1. (P > --Q)
2. ((P > R) > --S)
3. (Q > R)
4. (P > Q)           1, DN
5. (P > R)           3, 4, HS
6. --S               2, 5, MP
7. S                 6, DN
8. (S v -T)          7, Add
9. --(S v -T)        8, DN

A PROBLEM

1. --((--P v Q) . S)
2. -P                / Q
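Because DN, like every Rule of Replacement, may be applied to any part of a formula, it is naturally written as a recursive walk over the formula. The sketch below (our own illustration, in an assumed tuple encoding) applies the tilde-chopping direction everywhere at once:

```python
# Illustrative sketch: Double Negation applied to every part of a formula.
def remove_double_negations(f):
    if isinstance(f, str):
        return f
    if f[0] == '-':
        inner = f[1]
        if not isinstance(inner, str) and inner[0] == '-':
            return remove_double_negations(inner[1])     # chop off --
        return ('-', remove_double_negations(inner))
    op, a, b = f
    return (op, remove_double_negations(a), remove_double_negations(b))

# (P > --Q) becomes (P > Q), as on line 4 of the example:
print(remove_double_negations(('>', 'P', ('-', ('-', 'Q')))))   # ('>', 'P', 'Q')
```

In an actual derivation you would, of course, apply DN to one chosen part per line; the walk just shows that the exchange is legitimate anywhere inside a formula.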

COMMUTATION

( _ . _ )  =  ( _ . _ )     n, Com
( _ v _ )  =  ( _ v _ )     n, Com

The rule Commutation, abbreviated Com, is a flipping principle. It applies to both dots and wedges, and it allows us to reverse the two sides of the wedge or dot claim.

AN EXAMPLE

1. ((P v Q) > R)
2. ((S > (Q v P)) . -R)
3. ((Q v P) > R)           1, Com
4. (S > (Q v P))           2, Simp
5. (S > R)                 3, 4, HS
6. (-R . (S > (Q v P)))    2, Com
7. -R                      6, Simp
8. -S                      5, 7, MT

A PROBLEM

1. --(P v Q)
2. ((Q v P) > (R . T))     / T

TRANSPOSITION

(p > q) = ( - q > - p)          n, Trans

Like Commutation, Transposition is a flipping principle. However, it flips horseshoe claims rather than dots or wedges. One other important difference between Transposition and Commutation should also be noted. When the two sides of the horseshoe claim are flipped, they either both get a tilde or they both lose a tilde. Transposition is abbreviated Trans.

AN EXAMPLE
1. (P > - (Q > R))
2. (S > ( - R > - Q))
3. (S > (Q > R))                2, Trans
4. ( - - (Q > R) > - P)         1, Trans
5. ((Q > R) > - P)              4, DN
6. (S > - P)                    3, 5, HS

A PROBLEM
1. (( - Q > - R) . S)
2. - - ((R > Q) > T)            / (S . T)

ASSOCIATION

(p . (q . r)) = ((p . q) . r)   n, Assoc
(p v (q v r)) = ((p v q) v r)   n, Assoc

The rule Association, abbreviated Assoc, is a parenthesis-moving rule. It applies only to dots and wedges. Roughly, it says that when you have a complex claim which contains two wedges, or two dots, you can shift the innermost pair of parentheses one position to the left or right. (Look at the rule and the example below to see how this is done.)

AN EXAMPLE
1. (((P v Q) v R) > T)
2. ((S . (U . W)) > ((Q v R) v P))
3. ((P v (Q v R)) > T)          1, Assoc
4. (((Q v R) v P) > T)          3, Com
5. ((S . (U . W)) > T)          2, 4, HS
6. (((S . U) . W) > T)          5, Assoc

A PROBLEM
1. - (P v Q)
2. (P v (Q v R))
3. ( - S > - R)                 / S

EXPORTATION

((p . q) > r) = (p > (q > r))   n, Exp

The rule Exportation, abbreviated Exp, like Association, is a parenthesis-moving rule, but it applies to horseshoes. It says that if you have a claim which contains two horseshoes, and the right horseshoe is surrounded by a pair of parentheses, you can move the parentheses one unit to the left, but the first horseshoe converts into a dot. Alternately, if you have a pair of formulas surrounded by a dot, and then a horseshoe, followed by another formula, you can move the parentheses one unit to the right, but the dot converts into a horseshoe. (See the example below.)

AN EXAMPLE
1. (P > (Q > R))
2. (S > (P . Q))
3. ((P . Q) > R)                1, Exp
4. (S > R)                      2, 3, HS

A PROBLEM
1. (P > (Q > R))
2. ( - R . S)
3. (T > (P . Q))                / (S . - T)

DISTRIBUTION

(p v (q . r)) = ((p v q) . (p v r))   n, Dist
(p . (q v r)) = ((p . q) v (p . r))   n, Dist

Distribution, abbreviated Dist, is probably the most complex rule of all. It tells us how to manipulate claims that contain combinations of wedges and dots. One version of it tells us that if we have a wedge claim, the right side of which is a dot, we can replace this with a dot claim. This dot claim will consist in two wedge claims. On the left side of each of these wedge claims we list the formula that was on the left of the original wedge. On the right side of the leftmost wedge claim we put the formula that was on the left side of the original dot, and on the right side of the rightmost wedge claim we put the right side of the original dot. The rule, of course, also works in reverse. It tells us that if we have a dot claim, both sides of which are wedge formulas, and if the left sides of both of these wedged claims match, we can build a wedge claim. On the left of this wedge claim we put the common formula that appeared on the left side of both of the original wedge claims. On its right side we build a dot claim. On the left side of this dot we put the formula that appeared on the right side of the leftmost wedge, and on the right side of this dot we put the right side of the rightmost wedge. (See the top version of the rule above, and the example provided below.)

AN EXAMPLE
1. ((P v (Q . R)) > (S v (U . W)))
2. ( - T > ((P v Q) . (P v R)))
3. ( - T > (P v (Q . R)))       2, Dist
4. ( - T > (S v (U . W)))       1, 3, HS
5. ( - T > ((S v U) . (S v W))) 4, Dist

The other version of Distribution is exactly like the one just described, with dots and wedges exchanged throughout.

AN EXAMPLE
1. (( - P . Q) v ( - P . R))
2. (P v ((Q v R) > - S))
3. ( - P . (Q v R))             1, Dist
4. - P                          3, Simp
5. ((Q v R) > - S)              2, 4, DS
6. ((Q v R) . - P)              3, Com
7. (Q v R)                      6, Simp
8. - S                          5, 7, MP

A PROBLEM
1. (P v (Q . R))
2. - Q                          / P

DEMORGAN

- (p . q) = ( - p v - q)        n, DeM
- (p v q) = ( - p . - q)        n, DeM

One version of DeMorgan, abbreviated DeM, tells us that if we have a negated formula whose right side is a dot claim, we may eliminate this tilde, put a tilde in front of the formulas on both sides of the dot, and change the dot to a wedge. Alternately, if we have a wedge claim, both sides of which are negated, we may eliminate these two tildes, replace the wedge with a dot, and add a tilde in front of the resulting formula. The other version of DeM proceeds in exactly the same way, with wedges and dots replaced throughout.

AN EXAMPLE
1. (( - (P . Q) v R) > T)
2. ((( - P v - Q) v R) > T)     1, DeM
3. (( - P v ( - Q v R)) > T)    2, Assoc

A PROBLEM
/ (P . R)

TAUTOLOGY

p = (p v p)                     n, Taut
p = (p . p)                     n, Taut

Tautology, abbreviated Taut, is a very simple rule. It tells us that we may expand any formula into that formula wedge itself, or that formula dot itself. Alternately, if we have a claim that reads either a formula wedge itself, or a formula dot itself, we may reduce this to that formula alone.

AN EXAMPLE
1. (P > ( - Q > - P))
2. (P > (P > Q))                1, Trans
3. ((P . P) > Q)                2, Exp
4. (P > Q)                      3, Taut
5. ((P v P) > Q)                4, Taut

A PROBLEM
1. ((P > - Q) . (R > - Q))
2. (Q v T)
3. (P v R)                      / (T . (P > - Q))

IMPLICATION

(p > q) = ( - p v q)            n, Impl

Implication, abbreviated Impl, is a relatively easy rule, but an important one nonetheless. It tells us how to convert a horseshoe claim into a wedge claim, and vice versa. In effect, it says that if we have a horseshoe claim we can convert the horseshoe into a wedge, though the formula on the left side of this wedge must be negated as we do so. Alternately, it tells us we can convert a tilde wedge claim into a horseshoe claim by simply dropping the tilde from the formula on the left side of the wedge and changing the wedge into a horseshoe.

AN EXAMPLE
1. ((P > - Q) v R)
2. (( - P v - Q) v R)           1, Impl
3. ( - P v ( - Q v R))          2, Assoc
4. (P > ( - Q v R))             3, Impl

A PROBLEM
/ ((Q v S) . ( - R v S))

EQUIVALENCE

(p = q) = ((p > q) . (q > p))        n, Equiv
(p = q) = ((p . q) v ( - p . - q))   n, Equiv

Our last rule, Equivalence, abbreviated Equiv, tells us how to work with a triple bar claim. Virtually whenever such a claim occurs in either a premise or the conclusion, we need to use one of the two versions of this rule. One version changes the triple bar into a double horseshoe claim, which is glued together with a dot; the other version builds a wedge claim, both sides of which are dot claims. (Please examine the rule above, and the example below, carefully.)

AN EXAMPLE
1. ((P = Q) > (R = S))
2. - (R . S)
3. ((P > Q) . (Q > P))
4. (P = Q)                      3, Equiv
5. (R = S)                      1, 4, MP
6. ((R . S) v ( - R . - S))     5, Equiv
7. ( - R . - S)                 2, 6, DS

A PROBLEM
1. (P = - Q)
2. - - Q
3. (P v R)                      / ( - P . R)

Although we will normally use only one of the two versions of Equivalence in a given problem, it isn't a bad idea to begin by listing both, and then selecting the one that turns out to be most useful. Unless you are expected to solve the problem in as few steps as possible (which will be the case in both the Examination and the Practice Exercises in the computer program that accompanies this text), or you see how to solve the problem on the fly, you should write down both versions of Equivalence.
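Because each Rule of Replacement asserts a genuine logical equivalence, every one of them can be verified mechanically with a truth table: the two sides must agree on every assignment of truth values. The Python sketch below is our own illustration (the helper names `equivalent` and `imp` are invented here, and this is not the computer program that accompanies the text); it checks a representative form of each rule.

```python
from itertools import product

def equivalent(lhs, rhs, n):
    """True when lhs and rhs agree on all 2**n assignments of truth values."""
    return all(lhs(*v) == rhs(*v) for v in product([True, False], repeat=n))

# Material implication: the horseshoe.
imp = lambda a, b: (not a) or b

# Each rule as (left side, right side, number of variables).
rules = {
    "DN":    (lambda p: not (not p),           lambda p: p,                                 1),
    "Com":   (lambda p, q: p and q,            lambda p, q: q and p,                        2),
    "Trans": (lambda p, q: imp(p, q),          lambda p, q: imp(not q, not p),              2),
    "Assoc": (lambda p, q, r: p or (q or r),   lambda p, q, r: (p or q) or r,               3),
    "Exp":   (lambda p, q, r: imp(p and q, r), lambda p, q, r: imp(p, imp(q, r)),           3),
    "Dist":  (lambda p, q, r: p or (q and r),  lambda p, q, r: (p or q) and (p or r),       3),
    "DeM":   (lambda p, q: not (p and q),      lambda p, q: (not p) or (not q),             2),
    "Taut":  (lambda p: p,                     lambda p: p or p,                            1),
    "Impl":  (lambda p, q: imp(p, q),          lambda p, q: (not p) or q,                   2),
    "Equiv": (lambda p, q: p == q,             lambda p, q: (p and q) or (not p and not q), 2),
}

for name, (lhs, rhs, n) in rules.items():
    print(name, equivalent(lhs, rhs, n))   # each rule prints True
```

Running the loop confirms that every rule in the table is a tautological equivalence, which is exactly what licenses using it on a mere part of a formula.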

If you are going to succeed at solving difficult derivations you need to learn to think about them in the right sort of way. Though there is no substitute for practice, the following suggestions may prove useful.

1. Always look carefully at the conclusion before doing anything else.
   a. If it contains a letter not found in any of the premises, at some point you will have to use Addition. Either you will need to Add the letter without a tilde, or you will need to Add a tilde followed by that letter. If one way isn't working out, try the other.
   b. If the conclusion is a dot claim, try to figure out how to get one of its sides. Then try to get the other side, and use Conj.
   c. If the conclusion is a triple bar claim, you will almost certainly have to use Equiv. Try translating the conclusion into both of its forms, and then try obtaining one of them.

   d. If the conclusion is a horseshoe claim, there are several ways in which it might be obtained.
      (1) HS is the most likely. Look and see if any of the premises have parts that look like the left side of the conclusion. Look and see if any of the premises have parts that look like the right side of the conclusion. If you find premises of both sorts, and HS is not immediately applicable, think in terms of trying to use the Rules of Replacement in order to get the right side of one to match the left side of the other.
      (2) Another likely candidate here is Impl. Convert the conclusion into a tilde wedge. See if either it, or a part of it, resembles any of the premises.
      (3) If the right side of the conclusion is a dot claim, and one of the elements of that dot matches the formula located on the left side of the horseshoe, try to obtain the conclusion through Abs.
   e. If the conclusion is a wedge claim, look at each of the premises and see if any of them contain wedges or horseshoes. If so, CD may be worth thinking about. Alternately, Addition is a possibility, though it doesn't work too often.
   f. If the conclusion is complex and its main connective is a tilde, think in terms of using DeM.
      (1) If the conclusion has the form - (p . q), you might be able to get it by first getting - p, or - q, and then using Addition (and Commutation, if necessary) before using DeM.
      (2) If it has the form - (p v q), you might try getting - p and - q, and then conjoining these before using DeM.
      (3) If the conclusion has the form - (p > q), try using Impl to convert the horseshoe into a tilde wedge, and then use DeM.

2. Now look carefully at the premises. Ask yourself if there are any Rules of Inference you can use on the lines available.
   a. If any of the premises are dot claims, use Simp to pull out their left sides. Then use Com to flip the two sides around and pull out the other side.
   b. If any of the premises are triple bar claims, use both versions of Equiv on them.
   c. If two or more premises are horseshoe claims, see if you can't get the right side of one to match the left side of the other, and then use HS.
   d. If the same letters or formulas occur on the same sides of two horseshoe claims, try using Transposition on one of them.
   e. Whenever a tilde occurs on the outside of a formula, use DeM to push it in.

3. Once you have made some moves with the premises, go back to the bottom of the problem. The idea is to work from the bottom up, and then turn around and work from the top down, gradually working your way into the middle of the problem.

4. If you can't solve a problem, put it away. Go on to another problem. Or try doing something else. Sometimes our minds seem to work best on a problem when we aren't consciously thinking about it.

5. On those problems you cannot solve, if someone shows you a solution, try to see which rule you missed. (Most of us have a rule or two that we always seem to overlook.) Find your own problem rule, and then, whenever you are having difficulties with a derivation, think about using that rule.

6. Keep the list of rules below readily at hand. The Building Rules tell you what formulas you will need to have if you are trying to build a formula whose main connective is of the sort listed. These are the rules you should think in terms of when you are working up from the bottom of the problem. The Destroying Rules, on the other hand, tell you how to use formulas whose main connectives are of the sort listed. (They are the rules you want to think in terms of when you are working from the top of the problem down.)
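The "work from the top down" half of this advice can even be caricatured in code. The Python sketch below is purely our own toy illustration (not the computer program that accompanies the text): formulas are nested tuples, and four Rules of Inference (Simp, MP, MT, DS) are applied blindly until the goal appears or nothing new can be derived.

```python
# Formulas: ('>', p, q) horseshoe, ('.', p, q) dot, ('v', p, q) wedge,
# ('-', p) tilde; atoms are plain strings like 'P'.

def step(known):
    """One forward pass: everything derivable in a single rule application."""
    new = set()
    for f in known:
        if f[0] == '.':                            # Simp (both sides)
            new.update([f[1], f[2]])
        if f[0] == '>' and f[1] in known:          # MP
            new.add(f[2])
        if f[0] == '>' and ('-', f[2]) in known:   # MT
            new.add(('-', f[1]))
        if f[0] == 'v':                            # DS (either side)
            if ('-', f[1]) in known:
                new.add(f[2])
            if ('-', f[2]) in known:
                new.add(f[1])
    return new - known

def derive(premises, goal, limit=10):
    """Chain forward from the premises; True if the goal ever turns up."""
    known = set(premises)
    for _ in range(limit):
        if goal in known:
            return True
        fresh = step(known)
        if not fresh:
            break
        known |= fresh
    return goal in known

# (P > Q), (-R . P), (R v S)   therefore   S
premises = [('>', 'P', 'Q'), ('.', ('-', 'R'), 'P'), ('v', 'R', 'S')]
print(derive(premises, 'S'))   # True
```

Simp pulls out - R and P, MP yields Q, and DS then yields S: the same bottom-of-the-page bookkeeping tip 2 recommends, done mechanically.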

PROBLEMS

A. Construct proofs of each of the following arguments. You only need to use the Rules of Inference. If you cannot construct such a proof, the computer program can construct it for you.

1. ((P > Q) > ( - R . S))
   (P > (M . N))
   (((M . N) > Q) . S)
   (R v T)

2. P
   ((P v Q) > - R)
   (S > R)
   ((P . - S) > M)              / (M v N)

3. (P > Q)
   (R > S)
   (P v R)                      / ((P . Q) v S)

B. Construct proofs of each of the following arguments. You only need to use the Rules of Inference. Note: The computer program cannot solve these proofs.

1. (P > Q)
   (R > S)
   (P v R)
   (((P . Q) v (R . S)) > (Q > R))          / (P > S)

2. (P > Q)
   ( - Q . (R = S))
   (( - P . - Q) > (R > Q))
   (R v (Q > S))                            / ((P > (P . S)) v (L > M))

C. Construct proofs of each of the following arguments. You will need to use both the Rules of Inference and the Rules of Replacement. If you cannot construct such a proof, the computer program can construct it for you.

1. - T
   ((T > L) > (M > N))                      / ( - N > - M)

2. (R > P)
   (( - P . - Q) v R)                       / (P = R)

3. (P = Q)
   ( - P . (Q = M))                         / - M

4. ((P . Q) > R)
   ((Q > R) > S)                            / (S v - P)

THE SPECIAL RULES

PREMISES AND CONCLUSION

The rules Premise, Assumption, and Ass#2 let you make assumptions. Every problem must begin by using either Premise or Assumption. If you are trying to prove the validity of an argument, you must start the problem by writing down a vertical line with a horizontal stroke on it. You then list the premises to the immediate right of this vertical line and above the horizontal stroke. The goal is to get the conclusion of the argument listed to the direct right of this vertical line.

ASSUMPTION

The rule Assumption is both the easiest and hardest rule in the system. Using it is easy. Begin a new vertical line with a horizontal stroke. Place the formula you want to assume to the immediate right of this vertical line, and above the horizontal stroke. This rule is easy because you can assume any formula any time you want. It is hard because you will not have solved the problem until you have the goal or answer listed to the left of all the assumptions you have made. However, only a few of the rules allow you to move to the left of an assumption, thus discharging it. In other words, any time you make an assumption you must ultimately discharge it by using a left-moving rule. As a result it is essential to be extremely careful and selective when using the rule Assumption. You should never use Assumption unless you know which left-moving rule you will later be using to discharge that assumption.

ASSUMPTION #2

This is another type of assumption. It differs from the rule we call Assumption in one way, however. To use it, stop the last vertical line you drew and start a new vertical line with a horizontal stroke directly under it. Place the formula you want to assume to the right of this vertical line. You will only use the rule Ass#2 when you are constructing a problem that will eventually use the left-moving rule triple-bar introduction or, in some versions of this system, wedge elimination. Normally, this rule is viewed as a variant of Assumption, and so is justified Assume. We will sometimes follow this practice.

REITERATION

Reiteration is the last of the special rules we will be using. This rule permits us to repeat a formula we have obtained earlier. The rule tells us we can do this if we have not discharged any assumptions we were working under when we first obtained that formula. Compare the examples below to see how we can and cannot use this rule.

HORSESHOE ELIMINATION

If a derivation contains a formula with a horseshoe as its main connective, and it also contains the left side of that horseshoe claim, you are in luck. The rule, Horseshoe Elimination, permits you to write down the right side of the horseshoe claim. The example below provides illustrations of how this rule works.

HORSESHOE INTRODUCTION Horseshoe Introduction is a left-moving rule. It tells us that if we want to create a formula whose main connective is a horseshoe, we should assume its left side. Under this assumption we then need to get the right side of the horseshoe claim we want. Once we have done this we can discharge the assumption we made and write down the horseshoe claim. Study the example below carefully.

DOT ELIMINATION Both the introduction and elimination rules for the dot are very easy. The rule Dot Elimination says that if you have a claim whose main connective is a dot, you may write down either side of that claim. Unlike Horseshoe and Triple-Bar Elimination, you don't need to have the other side of the dot claim to do this. A quick example should suffice.

DOT INTRODUCTION Dot Introduction is only slightly more complex than Dot Elimination. The rule says you can create a dot claim if you have both of its sides already listed.

WEDGE INTRODUCTION

Wedge Introduction is the easiest rule in the system. To use it, a formula is all you need to have. The rule permits you to create that formula wedge any formula or, any formula wedge that formula. The example below should illustrate this.

WEDGE ELIMINATION Wedge Elimination is quite complex. To use this rule you must have a claim whose main connective is a wedge already listed. Under this formula you need to assume the left side of the wedge, and under this assumption, you need to derive the formula you want to obtain. You then stop the assumption you have been working under and directly under it you begin a second assumption, an assumption that consists in the right side of the wedge claim. You then need to obtain the formula you want to get a second time, but under the second assumption this time. The rule Wedge Elimination then says you can write down the formula you have obtained to the left of this assumption. Study the rule above and the example below carefully before continuing.

TRIPLE-BAR ELIMINATION

Triple-bar Elimination permits you to write down either side of a triple-bar claim. You can do this if you already have listed both the triple-bar claim and its other side. This rule obviously resembles the rule Horseshoe Elimination. The brief example below should suffice.

TRIPLE-BAR INTRODUCTION A more complex rule than Triple-bar Elimination, and another of our left-moving rules, is Triple-bar Introduction. To use this rule, we need to begin by assuming the left side of the triple-bar claim we ultimately want to obtain. Under this assumption we must derive the right side of the triple-bar claim. After we have done this we then stop the assumption we have made and start a second assumption directly under it. This time we assume the right side of the triple-bar claim, and under this assumption we derive the left side of the triple-bar claim. The rule then allows us to move to the left of our assumption and write down the triple-bar claim we have been looking for. Study the rule and example carefully.

TILDE INTRODUCTION The rule Tilde Introduction is another left-moving rule. It tells us that if we have made an assumption, and under this assumption we obtained any formula and its negation, we can stop the assumption, move left, and write down a tilde that assumption. This type of rule is sometimes called Reductio ad Absurdum.

TILDE ELIMINATION

Tilde Elimination works just like Tilde Introduction. The only difference is that the assumed formula must begin with a tilde, and the formula we move to the left deletes this tilde. This is another version of Reductio ad Absurdum. For our purposes, it is also important to note that it is a left-moving rule.

OVER EASY

((P > - Q) > (R > Q))
- Q
__________
- R

POACHED

(P . Q)
(((R > P) . (P = Q)) > S)
__________
((S v T) . ( - B v S))

FRIED

((P = - Q) . Q)
(P v R)
__________
(S > R)

HARD BOILED

(P = Q)
(( - P = - Q) > ( - P v - Q))
__________
( - P . - Q)

THE OMELET FROM HELL

(((P = Q) > ( - P v Q)) > (R > S))
__________
( - R v S)
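For checking your answers, validity itself (though not the derivation) can be tested by brute force: an argument is valid just in case no assignment of truth values makes all the premises true and the conclusion false. Here is a sketch, in Python and purely our own illustration, applied to the OVER EASY problem above.

```python
from itertools import product

imp = lambda a, b: (not a) or b   # material implication (the horseshoe)

def valid(premises, conclusion):
    """Valid iff every valuation making all premises true makes the conclusion true."""
    return all(imp(all(p(*v) for p in premises), conclusion(*v))
               for v in product([True, False], repeat=3))

# ((P > -Q) > (R > Q)),  -Q   therefore  -R
premises = [
    lambda P, Q, R: imp(imp(P, not Q), imp(R, Q)),
    lambda P, Q, R: not Q,
]
conclusion = lambda P, Q, R: not R
print(valid(premises, conclusion))   # True
```

The check confirms the argument is valid; finding the derivation that proves it is still your job.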

In this chapter we are going to be examining some arguments whose conclusions do not follow with certainty from their premises, but are only more or less likely, given those premises. Although these arguments cannot provide us with certainty, we would at least like them to be able to give us a degree of confidence in the claim they are reasoning about; and we would like to know how to strengthen these arguments, to the extent that it is possible for us to do so. As we noted earlier, such arguments are inductive rather than deductive. So we are going to be briefly exploring the realm of inductive logic. There are two types of inductive arguments that seem especially worth examining, if only because we use them so often in life: Arguments by Analogy, and Causal Arguments. Let's consider each of these, and then close the chapter with a brief discussion of classical probability theory.

ARGUMENTS BY ANALOGY

When we start running low on toothpaste many of us go to the grocery store and select the same brand we have often purchased in the past. Why? Perhaps it's mostly a matter of habit. However, there is a rational basis to our selection, and if we were to formulate it in terms of an argument, we might say something like: "I have purchased this brand frequently in the past and have always found it effective in preventing cavities. So I will probably find this package of toothpaste effective also." Notice that the conclusion of this argument is only likely. We could get the toothpaste home, start using it, and find that it actually encourages cavities. Unfortunately, that is exactly the problem with inductive arguments. They can't provide us with certainty. Sometimes though, they're all we've got. Let's develop a little terminology and explore our toothpaste example a bit more. In these cases, we are always reasoning about some object; and we are trying to establish that it has some characteristic, or property, like being effective in fighting cavities. We are also referring back to similar objects that we know had that property. Let's call the object we are trying to conclude something about "the object we're reasoning about," and the property we are reasoning that this object has "the key property." Further, let's call the objects we are referring back to "the objects in the comparison class," and the respects in which we know that these objects resemble the one we're reasoning about "the resembling properties." Clearly, the conclusion of our argument is that the object we're reasoning about has the key property. Why? Because the objects in the comparison class had it, and they resemble it in many other respects (i.e., there are many resembling properties between the two.) Armed with these terms we can now make several points, most of which are obvious yet are often overlooked, about these sorts of arguments. 
First, the more objects there are in the comparison class the stronger our argument is likely to be. Surely if we have had experience with only a few packages of a particular brand of toothpaste our conclusion is less likely to be true than it is if we have had experience with many such packages. Oddly enough, people frequently violate this principle and reason based on far too few objects in the comparison class. Indeed, sometimes they seem to think all that is required is one such object. (This is especially true when it comes to buying expensive items like cars. We have often heard people reason that they are going to buy a certain car because someone they know bought one and never had any trouble with theirs.) Second, the more resembling properties there are the stronger our argument is likely to be. Ideally, we would like the objects in the comparison class to resemble the object we're reasoning about in every respect. Yet this is surely not possible since, for example, the old packages of toothpaste were bought before the one we are now considering buying, and they may even have been manufactured in a different factory, etc. Still, we would like the object we are reasoning about to strongly resemble them. Apparently, consumers frequently overlook this point too. When they see "new and improved" on the label, they evidently reason that if the toothpaste was good before, it will be even better now. They fail to notice that the argument is actually weaker, precisely because there are fewer resembling properties. Third, for our argument to be at all effective, the resembling properties should have something to do with the production of the key property. That is why, in the toothpaste case for instance, we should be more

concerned with the contents of the package than we are with the packaging. Of course here too the consumer is often bamboozled.

Fourth, the weaker the conclusion of the argument is relative to its premises the stronger it is likely to be. Suppose we have used Frosty Fresh toothpaste the past ten years and have never had a cavity in that time. If we conclude based on this evidence that probably we won't have any cavities while this package of Frosty Fresh lasts, our argument will be much weaker than if we conclude that we probably won't have many cavities while we use this package of Frosty Fresh.

Our final point concerns a case where, although we know that our new object has one of several properties, we don't know exactly which of these it has. (Suppose, for example, we know Frosty Fresh is manufactured in four different locations. We might not know which of these locations the package on the shelf in front of us was manufactured in, but we do know that it was manufactured in one of the four.) In these kinds of cases we want the objects in the comparison class to vary among themselves within that range of properties as much as possible. The reason for this is simple. What we don't want is for the object we are reasoning about to have a property that is different from the objects in the comparison class and is causally significant in the production of the key property.

These are a few of the more important points that need to be borne in mind when we are dealing with Arguments by Analogy. Let's try some questions.
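The first two criteria, more objects in the comparison class and more resembling properties, can be caricatured in code. The sketch below is a made-up heuristic of our own (the weighting formula is entirely invented and nothing in the chapter endorses any particular numbers); it merely shows the two factors pulling the score in the directions the text describes.

```python
def analogy_strength(comparison_class, target):
    """comparison_class: list of property sets; target: property set.
    Score rises with class size and with properties shared by every
    object in the class and the target. Illustrative only."""
    if not comparison_class:
        return 0.0
    shared = set.intersection(*comparison_class) & target
    size_factor = 1 - 1 / (1 + len(comparison_class))   # more objects -> closer to 1
    resemblance = len(shared) / max(len(target), 1)     # more shared properties -> closer to 1
    return size_factor * resemblance

# Ten past tubes sharing every relevant property beat a single car anecdote.
old_tubes = [{"brand X", "fluoride", "same store"} for _ in range(10)]
new_tube = {"brand X", "fluoride", "same store"}
one_car = [{"same model"}]
print(analogy_strength(old_tubes, new_tube) > analogy_strength(one_car, {"same model"}))   # True
```

The point of the toy is only directional: shrinking the class or the set of resembling properties can only lower the score, which is the text's first two morals.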

PROBLEMS

How about horses? Does only one man harm them, while others do them good? Is not the exact opposite the truth? One man is able to do them good, or at least not many. Is it not true that the trainer of horses does them good, and others who have to do with them rather injure them? Is not that true, Meletus, of horses, or of any other animals? Most assuredly it is; whether you and Anytus say yes or no. Happy indeed would be the condition of youth if they had one corrupter only, and all the rest of the world were their improvers. (Plato's Apology.)

1. Answer a, b, c, d, or e. What is the object being reasoned about in the passage above?
   a. The horses
   b. The youth
   c. The trainer
   d. The corrupter
   e. The one who does them good

2. Answer a, b, c, d, or e. What are the objects in the comparison class?
   a. The horses
   b. The youth
   c. The trainer
   d. The corrupter
   e. The one who does them good

3. Answer s, w, or n. Is the Analogical argument in this passage stronger, or weaker, or neither stronger nor weaker, than our toothpaste example above?

4. Answer a, b, c, d, or e. Why did you answer question 3 in the way you did?
   a. There aren't enough objects in the comparison class.
   b. There aren't enough resembling properties.
   c. The resembling properties have little to do with the production of the key property.
   d. The conclusion is too strong relative to the premises.
   e. The objects in the comparison class don't vary among themselves enough.

5. Answer a, b, c, d, or e. The biggest problem with the Analogy above is that:
   a. There aren't enough objects in the comparison class.
   b. There aren't enough resembling properties.
   c. The resembling properties have little to do with the production of the key property.
   d. The conclusion is too strong relative to the premises.
   e. The objects in the comparison class don't vary among themselves enough.

CAUSAL ARGUMENTS

Suppose you are a health inspector. Several people have recently come down with ptomaine poisoning. Your job is to find out what is causing it and, if possible, correct the problem. Clearly what you need here is an argument whose conclusion is of the form, "A caused P," where P is the phenomenon you are trying to causally explain, namely, ptomaine poisoning, while A is its cause. What is missing is not only the premises, but also knowledge of precisely what A is. How are you going to find this out? You begin your investigations by talking with those people who have recently gotten ptomaine poisoning. You discover that they all came down with the illness during the sweltering last two weeks of August, and that they had all been swimming shortly before they became ill. Many of them had also shopped at the Big Chain Department Store, seen the new movie "Violent Affair" at the Strand Cinema, and watched television. You remember from the logic class you took in college many years ago that you cannot reason, for example, that the sweltering heat caused the illness. To do so would be to commit the False Cause Fallacy. How should you go about reasoning to discover the cause of the illness? In the 19th Century, John Stuart Mill proposed five methods he claimed could be used to provide inductive evidence to support a claim of the sort, "A caused P," where "P" is the phenomenon we want to causally explain (viz., ptomaine poisoning) and A, B, . . ., O are antecedent circumstances (i.e., the events that occurred before the phenomenon we are trying to explain: the sweltering heat, the fact that those people who got ptomaine poisoning had gone swimming shortly before they became ill, etc.). These methods can be summarized as follows:

1. THE METHOD OF AGREEMENT

To use this method we need several different cases where the phenomenon in question did occur. Suppose you have questioned five people who came down with the illness.
After a more thorough examination you discover that, while many antecedent circumstances differed from case to case, there were, in fact, four such circumstances shared by all. Not only had they become ill when it was hot (which we will represent by the letter "H"), and gone swimming (S), but they had also eaten at the Greasy Spoon (G), and had stopped by for dessert at a place called "Calorie Heaven" (C). Mill's Method of Agreement, in effect, now tells us that we have some justification for asserting that the ptomaine poisoning was probably caused by one of these four antecedent circumstances. Schematically, we can formulate this argument as follows:

CASE    ANTECEDENT CIRCUMSTANCES         PHENOMENON
1.      A, -, C, G, H, S, V, -, X        P
2.      -, B, C, G, H, S, V, W, X        P
3.      A, B, C, G, H, S, -, W, X        P
4.      -, -, C, G, H, S, V, W, X        P
5.      A, B, C, G, H, S, V, W, -        P
_____________________________________
Probably C, or G, or H, or S caused P.

2. THE METHOD OF DIFFERENCE

To use this method we need only two cases, but they must be extremely similar. Suppose, to continue our story, you discover that one of the people who came down with ptomaine poisoning had a brother with whom he spent the entire weekend. Oddly enough, his brother did not become ill. On questioning him you discover that the only differences between the two were that the brother who didn't become ill didn't go shopping at the Big Chain Department Store, didn't go swimming, and didn't eat at the Greasy Spoon. Schematically, you set the case up as follows:

CASE    ANTECEDENT CIRCUMSTANCES         PHENOMENON
1.      A, B, C, G, H, S, V, W, X        P
2.      A, -, C, -, H, -, V, W, X        -
_____________________________________
Probably B, or G, or S caused P.

3. THE JOINT METHOD OF AGREEMENT AND DIFFERENCE

This method is really just a combination of the two methods we have already considered. Suppose we include the brother who did not get ptomaine poisoning in the list of people. We can then conclude that probably either eating at the Greasy Spoon, or going swimming, is the cause of the illness. Schematically, the case can be formulated as follows:

CASE    ANTECEDENT CIRCUMSTANCES         PHENOMENON
1.      A, -, C, G, H, S, V, -, X        P
2.      -, B, C, G, H, S, V, W, X        P
3.      A, B, C, G, H, S, -, W, X        P
4.      -, -, C, G, H, S, V, W, X        P
5.      A, B, C, G, H, S, V, W, -        P
6.      A, -, C, -, H, -, V, W, X        -
_____________________________________
Probably G, or S, caused P.
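Since these first three methods are really operations on sets of antecedent circumstances, they can be mirrored directly in code. The case data below is our own illustrative rendering of the chapter's story (H = heat, S = swimming, B = Big Chain, G = Greasy Spoon, C = Calorie Heaven, V = the movie, T = television), not the textbook's exact schematic.

```python
ill = [  # circumstances of three people who got ptomaine poisoning
    {"H", "S", "B", "G", "C"},
    {"H", "S", "V", "G", "C"},
    {"H", "S", "T", "G", "C"},
]
well = {"H", "C", "V", "T"}   # the brother who stayed healthy

# Method of Agreement: circumstances common to every positive case.
agreement = set.intersection(*ill)

# Method of Difference: circumstances present in one positive case
# but absent from a closely matched negative case.
difference = ill[0] - well

# Joint Method: common to all positive cases AND absent from the negative case.
joint = agreement - well

print(sorted(agreement))   # ['C', 'G', 'H', 'S']
print(sorted(difference))  # ['B', 'G', 'S']
print(sorted(joint))       # ['G', 'S']
```

The Joint Method narrows Agreement's four candidates down to the Greasy Spoon and swimming, exactly as in the story.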

4. THE METHOD OF RESIDUES

This method tells us that if, from a list of antecedent circumstances and subsequent phenomena, we know the causal impact of all of these antecedent circumstances except one, the antecedent circumstance whose causal impact we do not know is probably the cause of that phenomenon whose cause we do not know. Thus, suppose we know that someone who shopped at the Big Chain, ate at the Greasy Spoon, and went swimming, became depressed, got sunburned, and came down with ptomaine poisoning. Suppose we also know that shopping at the Big Chain Department Store causes depression, and that swimming causes sunburns. Then we have some reason for thinking that eating at the Greasy Spoon causes ptomaine poisoning. Schematically, we can represent this as follows:

CASE    ANTECEDENT CIRCUMSTANCES         PHENOMENA
1.      B, S, G                          D, R, P
B caused D.
S caused R.
_____________________________________
Probably G caused P.
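The Method of Residues is likewise simple bookkeeping: subtract the circumstance-phenomenon pairs whose causal links are already established, and pair the leftover circumstance with the leftover phenomenon. A hypothetical sketch, using the schematic's letters (B = Big Chain, S = swimming, G = Greasy Spoon; D = depression, R = sunburn, P = ptomaine poisoning):

```python
circumstances = {"B", "S", "G"}
phenomena = {"D", "R", "P"}
known_causes = {"B": "D", "S": "R"}   # already-established causal links

# Whatever circumstance and phenomenon remain unexplained get paired.
residual_circumstance = circumstances - known_causes.keys()
residual_phenomenon = phenomena - set(known_causes.values())
print(residual_circumstance, residual_phenomenon)   # {'G'} {'P'}
```

With B and S accounted for, G is paired with P: probably the Greasy Spoon caused the poisoning.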

5. THE METHOD OF CONCOMITANT VARIATION

This method deals with cases which involve a variation in intensity, not only of antecedent circumstances, but also of phenomena. Suppose, in the example we have been discussing, we have discovered three sisters, all of whom shopped at the Big Chain, ate at the Greasy Spoon, and went swimming. Suppose further that they all became depressed, got sunburned, and suffered from a greater or lesser degree of ptomaine poisoning. In addition to this, suppose the sister who ate the most got the worst case of ptomaine poisoning, while the sister who ate the least got the mildest case of ptomaine poisoning. In this case we have some further evidence that eating at the Greasy Spoon causes ptomaine poisoning. Schematically we can represent this as follows:

CASE    ANTECEDENT CIRCUMSTANCES         PHENOMENA
1.      B, S, G+                         D, R, P+
2.      B, S, G                          D, R, P
3.      B, S, G-                         D, R, P-
_____________________________________
Probably G caused P.
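Concomitant Variation looks for the circumstance whose intensity rises and falls with the intensity of the phenomenon. In the sketch below (our own illustration, with made-up numeric intensities standing in for the +/- annotations of the schematic), a circumstance "covaries" when its values both vary across the cases and rise strictly with the severity of the poisoning.

```python
cases = [  # intensities per sister: B, S, G and poisoning severity P
    {"B": 1, "S": 1, "G": 3, "P": 3},
    {"B": 1, "S": 1, "G": 2, "P": 2},
    {"B": 1, "S": 1, "G": 1, "P": 1},
]

def covaries(key):
    """True when the circumstance varies and tracks P's severity exactly."""
    pairs = sorted((c[key], c["P"]) for c in cases)
    xs, ys = zip(*pairs)
    return len(set(xs)) > 1 and list(ys) == sorted(ys) and len(set(ys)) == len(ys)

print([k for k in ("B", "S", "G") if covaries(k)])   # ['G']
```

B and S are constant across the sisters, so only G varies concomitantly with P, which is the further evidence the method supplies.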

While these methods are not foolproof, and can at most provide some justification for a claim of the form "A caused P," people trying to determine the cause of a given phenomenon still use them frequently.
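These methods lend themselves to mechanical treatment. The following Python sketch is not from the text; it is a minimal, hypothetical illustration of how the Method of Agreement and the Joint Method might be applied to tabulated cases, with the letters (B, S, G, D, R, P) borrowed from the chapter's restaurant example.

```python
# A sketch of two of Mill's methods over tabulated cases. Each case pairs
# a set of antecedent circumstances with a set of observed phenomena.

def method_of_agreement(cases, phenomenon):
    """Circumstances common to every case exhibiting the phenomenon."""
    with_p = [c for c, p in cases if phenomenon in p]
    return set.intersection(*with_p) if with_p else set()

def joint_method(cases, phenomenon):
    """Agreement, narrowed by cases where the phenomenon is absent."""
    candidates = method_of_agreement(cases, phenomenon)
    for circumstances, phenomena in cases:
        if phenomenon not in phenomena:
            candidates -= circumstances    # the Method of Difference step
    return candidates

cases = [
    ({"B", "S", "G"}, {"D", "R", "P"}),    # shopped, swam, ate; all three ills
    ({"S", "G"},      {"R", "P"}),         # swam and ate; sunburn and poisoning
    ({"B", "G"},      {"D", "P"}),         # shopped and ate; depression, poisoning
    ({"B", "S"},      {"D", "R"}),         # the brother: no Greasy Spoon, no P
]

print(method_of_agreement(cases, "P"))     # candidates among the poisoning cases
print(joint_method(cases, "P"))            # narrowed by the brother's case
```

With this (invented) case table, both methods single out G, eating at the Greasy Spoon, as the probable cause of P.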

PROBLEMS

Instructions: Determine which method is being used in each of the passages below. Answer a, b, c, d, or e.

a. The Method of Agreement
b. The Method of Difference
c. The Joint Method of Agreement and Difference
d. The Method of Residues
e. The Method of Concomitant Variation

1. Jack: You're going to have to cut down on your smoking, Bill, because the more smoking you do, the worse your cough gets.

2. Since the only difference between the way in which you cared for your peach trees this year and last is that you didn't use dormant spray on them this year but you did last year, and since they have developed peach leaf curl this year, while they didn't last year, your failure to spray them is probably the cause of the problem.

3. All those students who turned their homework assignments in on time got A's or B's on the examination. So probably they did well on the exam because they did their homework and got it in on time.

4. If you don't think that a helmet works, try the following experiment: Put a helmet on, and ask someone to hit you over the head with a baseball bat. Now take the helmet off. Have him hit you over the head again. If you can't tell the difference, you're probably too hardheaded to need a helmet.

5. Harry: How do you know that a faulty fuel pump was causing the car to stall?
Larry: On the bill it said they had replaced the fan belt, the starter, and the fuel pump. But I knew the car was overheating because the fan belt was loose, and I knew that I frequently couldn't get the engine to turn over in the morning because the starter was broken. So I guess the faulty fuel pump must have been causing the car to stall.

6. Betty Noire: Whenever I hit my brother, and kick him in the stomach, he grabs his belly and runs screaming to mom. Moreover, the harder I hit him the louder he screams. If I don't hit him he stops screaming, and if I just kick him in the stomach he runs to mom, but he doesn't scream. Sera Phim: Maybe if you stopped hitting your brother he wouldn't be such a crybaby.

PROBABILITY

There are several senses in which we use the term "probable." We may, for example, say that it is probable that the U.S. will lose the war in Iraq, or that it is probable that a person will not live past his or her 100th birthday, or that it is probable that the next card drawn from a deck of cards will not be a face card. While our knowledge that it is probable that a person will not live past his or her 100th birthday is obtained by taking a large sample of people and determining what percentage of them live beyond their 100th birthday, our knowledge that it is probable that the next card drawn from a deck of cards will not be a face card is more likely to be obtained in an a priori manner. In this last case we assume that we know how many cards are involved and that each card in the deck is as likely to be selected as any other; and since there are fewer face cards in the deck than non-face cards, the likelihood is that a non-face card will be selected.

This a priori theory of probability is often called Classical Probability Theory. According to Classical Probability Theory, the probability of an event a occurring, represented as P(a), equals the number of favorable outcomes divided by the number of possible outcomes. So if we are dealing with a standard deck of 52 playing cards, the probability of drawing an ace equals the number of aces in the deck, viz., 4, divided by the total number of cards in the deck, viz., 52. It is also assumed that for any event a, 0 ≤ P(a) ≤ 1; if a is impossible then P(a) = 0, while if a is certain then P(a) = 1. Multiple events are then handled in terms of the following principles:

1. The Probability of Joint Occurrences: What is the probability of both of two events, a and b, occurring?

a. Where the two events are independent (i.e., where the occurrence of a has no effect on the occurrence of b, and vice versa)?

Multiplicative Law 1: P(a & b) = P(a) x P(b).
Example: What is the probability of getting two aces on two successive draws from a 52 card deck when the first card is replaced before the second card is drawn?

Solution: P(a) = 1/13, P(b) = 1/13. So P(a & b) = 1/13 x 1/13 = 1/169.

b. Where one of the events is dependent on the other (i.e., where the occurrence of a has an effect on the occurrence of b)?

Multiplicative Law 2: P(a & b) = P(a) x P(b if a).

Example: What is the probability of getting two aces on two successive draws from a standard 52 card deck when the first card is not replaced before the second card is drawn?

Solution: P(a) = 4/52 and P(b if a) = 3/51. So P(a & b) = 4/52 x 3/51 = 12/2652 = 1/221.

2. The Probability of Alternate Occurrences: What is the probability of either of two events, a or b, occurring?

a. In cases where the two events are mutually exclusive (i.e., where the two events cannot both occur)?

Additive Law 1: P(a or b) = P(a) + P(b).

Example: What is the probability of getting either an ace or a king from a 52 card deck?

Solution: P(a or b) = 1/13 + 1/13 = 2/13.

b. In cases where both of the two events can occur (i.e., where the two events are not mutually exclusive)?

Additive Law 2: P(a or b) = P(a) + P(b) - P(a & b).

Example 1: What is the probability of getting an ace or a spade from a 52 card deck?

Solution: P(a or b) = 1/13 + 1/4 - 1/52 = 4/52 + 13/52 - 1/52 = 16/52 = 4/13.

Example 2: What is the probability of getting an ace from a 52 card deck on two draws from the deck, when the first card is replaced before the second card is drawn?

Solution: P(a or b) = 1/13 + 1/13 - (1/13 x 1/13) = 13/169 + 13/169 - 1/169

= 25/169.

But in cases of this sort it is frequently more convenient to use the Negative Law. This rule relies on the fact that, where ~a represents the event of a not occurring, P(a) + P(~a) = 1.

Negative Law: P(a) = 1 - P(~a).

Using this rule in the example above we get:

Solution: P(a) = 1 - (12/13 x 12/13) = 1 - 144/169 = 25/169.
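The laws above can be checked with exact arithmetic. The following Python sketch, which is not part of the original text, encodes each law as a one-line function and reproduces the worked card examples using the standard-library Fraction type:

```python
from fractions import Fraction

# Each function below states one of the chapter's probability laws.

def p_and_independent(pa, pb):          # Multiplicative Law 1
    return pa * pb

def p_and_dependent(pa, pb_if_a):       # Multiplicative Law 2
    return pa * pb_if_a

def p_or_exclusive(pa, pb):             # Additive Law 1
    return pa + pb

def p_or(pa, pb, pab):                  # Additive Law 2
    return pa + pb - pab

def p_negative(p_not_a):                # Negative Law
    return 1 - p_not_a

ace = Fraction(4, 52)
print(p_and_independent(ace, ace))                    # two aces, with replacement: 1/169
print(p_and_dependent(ace, Fraction(3, 51)))          # two aces, no replacement: 1/221
print(p_or_exclusive(ace, Fraction(4, 52)))           # ace or king: 2/13
print(p_or(ace, Fraction(13, 52), Fraction(1, 52)))   # ace or spade: 4/13
print(p_negative(Fraction(12, 13) ** 2))              # ace on one of two draws: 25/169
```

Using Fraction rather than floating-point keeps results like 1/221 exact, which matches the hand calculations in the text.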

PROBLEMS

1. When a pair of dice is rolled twice, what is the probability of getting a 7 on both rolls?

2. When a pair of dice is rolled, what is the probability of getting either a 7 or an 11?

3. A die is rolled five times. What is the probability of getting two twos?

4. Three crazy Cossack officers are playing Russian roulette using one bullet and a six-chambered pistol. The officer who is to go third gets to decide whether or not the cylinder of the gun should be spun after each pull of the trigger. Assuming he wants to live, should he select the spin-the-cylinder version, and what are the probabilities that he will survive if he selects this version? Suppose a new round is started, but this time the person who goes third has to choose whether to select a gun with only five chambers and be permitted to select whether or not to play the spin-the-cylinder version, or select a gun with six chambers but where no cylinder spinning is allowed. Assuming this person wants to live, what should he do?

STANDARD-FORM CATEGORICAL STATEMENTS

In this chapter we are going to explore a special kind of argument known as a categorical syllogism. A categorical syllogism is a deductive argument that has exactly two premises and contains only categorical statements. A categorical statement is a statement that asserts that either a part of, or the whole of, one set of objects -- the set identified by the subject term in the sentence expressing that statement -- either is included in, or is excluded from, another set -- the set identified by the predicate term in that sentence. For a categorical statement to be in standard-form, the sentence expressing that statement must begin with the quantifier "all," "no," or "some." It must then present the subject term -- the term designating the set of objects the statement is about -- followed by the copula -- either "are" or "are not" -- followed, finally, by the predicate term. So, for a categorical statement to be in standard-form, the sentence that expresses it must have precisely the following structure:

Quantifier + Subject Term + Copula + Predicate Term

There are exactly four standard-form categorical statements, each of which is identified with a capitalized vowel of the alphabet. They are:

A: All S are P.          Example: All birds are mammals.
E: No S are P.           Example: No birds are reptiles.
I: Some S are P.         Example: Some birds are sparrows.
O: Some S are not P.     Example: Some birds are not carnivores.

1. THE QUANTITY AND QUALITY OF STANDARD-FORM STATEMENTS

The words all, no, and some are called quantifiers because they specify a quantity. All and no are universal quantifiers because they refer to every object in a certain set, while the quantifier some is a particular, or existential, quantifier because it refers to at least one existing object in a certain set. A categorical statement is said to have a universal quantity when the sentence that expresses it begins with a universal quantifier. It is said to have a particular quantity, on the other hand, when the sentence that expresses it begins with a particular quantifier. Thus, both A- and E-statements have a universal quantity, while both I- and O-statements have a particular quantity.

Besides having either a universal or particular quantity, standard-form categorical statements also have either an affirmative or negative quality. The statements, all birds are mammals, and some birds are sparrows, have affirmative quality because they assert something about the inclusion of the set of birds in the set of mammals or sparrows. The statements, no birds are reptiles, and some birds are not carnivores, on the other hand, have negative quality: the former denies that any members of the set of birds are included in the set of reptiles, while the latter denies that some members of the set of birds are included in the set of carnivores.

2. THE DISTRIBUTION OF TERMS IN SENTENCES EXPRESSING CATEGORICAL STATEMENTS

Another important pair of concepts concerns the subject and predicate terms of sentences that express categorical statements. A term is distributed in such a sentence if it refers to all members of the set of objects denoted by that term. Otherwise, it is said to be undistributed. In a sentence expressing an A-statement, e.g., "All birds are mammals," the subject term "birds" is distributed, while the predicate term "mammals" is undistributed.
On the other hand, in a sentence expressing an E-statement, e.g., "No birds are reptiles," both the subject and predicate terms are distributed. In a sentence expressing an I-statement, e.g., "Some birds are sparrows," neither the subject nor the predicate term is distributed. Finally, in a sentence expressing an O-statement, e.g., "Some birds are not carnivores," only the predicate term is distributed. We can summarize these points as follows:

STATEMENT                 SUBJECT TERM      PREDICATE TERM
A: All S are P.           distributed       undistributed
E: No S are P.            distributed       distributed
I: Some S are P.          undistributed     undistributed
O: Some S are not P.      undistributed     distributed
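Since the distribution of a term depends only on the form of the statement and the term's position, the summary can be captured in a small lookup table. The Python fragment below is an illustrative sketch, not part of the text:

```python
# Distribution of terms by statement form: True means the term is
# distributed, i.e., it refers to all members of the set it denotes.
DISTRIBUTION = {
    #       subject  predicate
    "A": (True,  False),   # All S are P.
    "E": (True,  True),    # No S are P.
    "I": (False, False),   # Some S are P.
    "O": (False, True),    # Some S are not P.
}

# E.g., in "No birds are reptiles" (an E-statement) both terms are distributed.
subject_dist, predicate_dist = DISTRIBUTION["E"]
print(subject_dist, predicate_dist)   # True True
```

This table is all that is needed later to state the syllogism rules about distribution.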

MANIPULATING SENTENCES

In the next section we are going to develop some concepts that involve manipulating sentences in various ways. The present section helps prepare the way for this. Suppose, for whatever reason, we want to change the quantity of a statement without altering its quality. If the sentence which expresses this statement begins with the universal quantifier "all," the only thing we need to do is replace that word with the particular quantifier "some." For example, we just replace, "All birds are mammals," with "Some birds are mammals." However, suppose the sentence begins with the universal quantifier "no." Suppose, for example, it reads, "No birds are reptiles." If we simply replace the word "no" with "some," and thus obtain, "Some birds are reptiles," we have not only altered the quantity, but the quality as well. For, although the statement this sentence expresses does have a particular quantity, it also has an affirmative quality, whereas the original statement had a negative quality. What we must do instead is to write, "Some birds are not reptiles."

An important concept we need to be familiar with is the complement of a set. The complement of a set, A, is the set of all those things that are not As. Thus, the complement of the set of birds is the set of things that are not birds. To express this set in a sentence we will replace the term "birds" with the term "non-birds." For example, when we are instructed to replace the predicate term in the sentence, "Some birds are sparrows," with its complement, we will write, "Some birds are non-sparrows."

1. CONVERSION, OBVERSION, AND CONTRAPOSITION

One categorical statement, S1, is the converse of another, S2, if and only if the sentence expressing S1 is the result of switching the subject and predicate terms in the sentence expressing S2.
So, for example, the statement that all mammals are birds is the converse of the statement that all birds are mammals; while the statement that some carnivores are not birds is the converse of the statement that some birds are not carnivores.

Obversion is a more complicated operation than conversion. To obtain the obverse of a categorical statement we must first replace the predicate term in the sentence expressing that statement with its complement, and then change the quality. So the obverse of the statement that all birds are mammals is the statement that no birds are non-mammals; while the obverse of the statement that some birds are sparrows is the statement that some birds are not non-sparrows.

Finally, contraposition is an even more involved operation than obversion. To obtain the contrapositive of a statement, we first switch the subject and predicate terms in the sentence that expresses the statement, and then replace both terms with their complements. So, the contrapositive of the statement, "All birds are mammals," is the statement, "All non-mammals are non-birds"; while the contrapositive of the statement, some birds are mammals, is the statement, some non-mammals are non-birds.

2. CONTRADICTORY, CONTRARY, SUBCONTRARY, AND SUBALTERNATION

Typically the notions we are going to discuss in this section are developed in terms of truth-values. For the moment, however, it will prove useful to introduce them in a different way. Two statements are contradictory if and only if the sentences expressing them have the same subjects and predicates, but differ in both quantity and quality. Thus, the statements that all birds are mammals and that some birds are not mammals are contradictory; and so also are the statements that no birds are reptiles and that some birds are reptiles.
Two statements are contrary if and only if the sentences expressing them have the same subjects and predicates, and both begin with universal quantifiers, but they differ in quality. Thus, the statements, all birds are mammals, and no birds are mammals, are contraries.

Two statements are subcontraries if and only if the sentences expressing them have the same subject and predicate terms, and both begin with existential quantifiers, but they differ in quality. So, for example, the statements that some birds are sparrows and that some birds are not sparrows are subcontraries. Finally, the relation of subalternation obtains between two statements when the only respect in which they differ is their quantity. Here the statement that has a universal quantity is called the "superaltern," while the statement that has a particular quantity is called the "subaltern." Thus, the relation of subalternation obtains between the statement that all birds are mammals and the statement that some birds are mammals; in this relation, the statement, all birds are mammals, is the superaltern, while the statement, some birds are mammals, is the subaltern. Similarly, the relation of subalternation obtains between the statement that no birds are reptiles and the statement that some birds are not reptiles; here the superaltern is the statement that no birds are reptiles, while the subaltern is the statement that some birds are not reptiles.
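Because conversion, obversion, and contraposition are purely mechanical operations on a sentence, they are easy to express as functions. The following Python sketch is a hypothetical illustration, not from the text; it represents a statement as a (form, subject, predicate) triple:

```python
# Conversion, obversion, and contraposition as operations on a statement
# represented as (form, subject, predicate), e.g. ("A", "birds", "mammals").

def complement(term):
    """'birds' <-> 'non-birds'."""
    return term[4:] if term.startswith("non-") else "non-" + term

def converse(stmt):
    form, s, p = stmt
    return (form, p, s)                        # switch subject and predicate

OBVERSE_FORM = {"A": "E", "E": "A", "I": "O", "O": "I"}   # change the quality

def obverse(stmt):
    form, s, p = stmt                          # complement the predicate,
    return (OBVERSE_FORM[form], s, complement(p))   # then change the quality

def contrapositive(stmt):
    form, s, p = stmt                          # switch the terms, then
    return (form, complement(p), complement(s))     # complement both

print(obverse(("A", "birds", "mammals")))          # ('E', 'birds', 'non-mammals')
print(contrapositive(("A", "birds", "mammals")))   # ('A', 'non-mammals', 'non-birds')
```

Note that these functions only perform the manipulations; which of the resulting statements are logically equivalent to the originals is a separate question, taken up below.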

MULTIPLE-CHOICE QUESTIONS

Instructions: Answer a, b, c, d, e, f, or g.

a. All savings and loans are insolvent institutions.
b. No savings and loans are insolvent institutions.
c. Some savings and loans are not insolvent institutions.
d. Some insolvent institutions are savings and loans.
e. No savings and loans are non-insolvent institutions.
f. All savings and loans are non-insolvent institutions.
g. Some non-insolvent institutions are non-savings and loans.

1. The statement that results from changing both the quantity and quality of the statement that some savings and loans are insolvent institutions is:
2. The statement that results from changing only the quality of the statement that some savings and loans are insolvent institutions is:
3. The contrapositive of the statement that some savings and loans are insolvent institutions is:
4. The superaltern of the statement that some savings and loans are insolvent institutions is:
5. The subcontrary of the statement that some savings and loans are insolvent institutions is:
6. The converse of the statement that some savings and loans are insolvent institutions is:

So far all we have been doing is introducing terminology that deals with manipulating certain sentences to obtain new statements from ones originally given. We are now ready to begin discussing some principles of reasoning that were believed to hold in classical logic. (Aristotle invented classical logic over 2,000 years ago.) We will begin with some principles that involve inferences from one statement to another.

The classical view of contradiction maintained that two statements were contradictory just in case they had to have different truth-values. Thus, A- and O-statements were contradictories, and so also were E- and I-statements. Two statements were said to be contraries if and only if at least one of them was false, and they could not both be true. Thus, A- and E-statements were said to be contraries. Two statements were said to be subcontraries just in case at least one was true and they could not both be false. Thus, I- and O-statements were held to be subcontraries. Finally, one statement was said to be the superaltern of another if and only if its truth entailed the truth of the other, while the other's falsity entailed its falsity. Thus, A-statements were held to be the superalterns of I-statements, and E-statements were held to be the superalterns of O-statements. These principles were illustrated graphically by means of the Square of Opposition.

[The Square of Opposition: the A- and E-statements occupy the top corners, joined by the relation of contrariety; the I- and O-statements occupy the bottom corners, joined by the relation of subcontrariety; the diagonals connect the contradictories (A with O, and E with I); and the vertical sides mark subalternation, running from A down to I and from E down to O.]
To use this diagram, suppose we know the claim that all whales are mammals is true. Since this is an A-statement, we know that its contradictory, some whales are not mammals, is false; and since this O-statement is false, and it is the subaltern of the E-statement that no whales are mammals, this latter statement is also false. Finally, since this E-statement is false, its contradictory is true. Therefore, the statement that some whales are mammals is true. So, by using the Square of Opposition we have identified the truth-values of all four of the standard-form categorical statements.

Unfortunately, the Square of Opposition does not always provide us with truth-values for all of the standard-form statements. Suppose, for example, the A-statement had been false instead of true. Suppose it had been that all whales are fish. The Square of Opposition now tells us that the contradictory O-statement, some whales are not fish, is true. Unfortunately, that is all the information the Square will give us here. In this instance the other two statements are therefore said to have undetermined truth-values.
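The bookkeeping in the whale example can be automated. The Python sketch below, an illustration rather than part of the text, propagates one known truth-value around the classical Square; forms it cannot reach remain undetermined, exactly as in the all-whales-are-fish case:

```python
# Classical Square of Opposition: propagate one known truth-value.
CONTRADICTORY = {"A": "O", "O": "A", "E": "I", "I": "E"}
SUBALTERN = {"A": "I", "E": "O"}   # each universal form and its subaltern

def propagate(form, value):
    """Return every truth-value the classical Square determines."""
    known = {form: value}
    changed = True
    while changed:
        changed = False
        for f, v in list(known.items()):
            derived = [(CONTRADICTORY[f], not v)]   # contradictories flip
            if f in SUBALTERN and v:                # truth flows downward
                derived.append((SUBALTERN[f], True))
            if f == "I" and not v:                  # falsity flows upward
                derived.append(("A", False))
            if f == "O" and not v:
                derived.append(("E", False))
            for g, w in derived:
                if g not in known:
                    known[g] = w
                    changed = True
    return known

print(propagate("A", True))    # all four truth-values are determined
print(propagate("A", False))   # only the contradictory O is determined
```

Starting from a true A-statement fixes all four values; starting from a false one fixes only O, mirroring the undetermined cases described above.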

QUESTIONS

Now let's see if you can use the Square of Opposition to answer the following questions.

1. Which of the four forms is the statement that some magicians are illusionists (answer A, E, I, or O)?
2. If the statement that some magicians are illusionists is true, can you determine the truth-value of the corresponding E-statement (answer Y or N)?
3. If you answered Y to question 2, then is the corresponding E-statement true or is it false (answer T or F)?
4. Can you determine the truth-value of the corresponding O-statement (answer Y or N)?
5. If you answered Y to question 4, then is the corresponding O-statement true, or is it false (answer T or F)?

If we add the following principles concerning conversion, obversion, and contraposition, to the above account, the Square of Opposition can help us obtain answers to many other questions.

The Principle of Conversion: The converse of an E- or an I-statement is logically equivalent to the original.

Thus, for example, the statement that some birds are sparrows is logically equivalent to the statement that some sparrows are birds. Note, however, that the principle does not apply to either A- or O-statements. For example, the statement that all birds are mammals is not equivalent to the statement that all mammals are birds; nor is the statement, some birds are not sparrows, equivalent to the statement, some sparrows are not birds.

The Principle of Obversion: The obverse of any of the four types of statements is logically equivalent to the original.

Thus, the statements that all birds are mammals and that no birds are non-mammals are equivalent; and so also are the statements that some birds are sparrows and that some birds are not non-sparrows.

The Principle of Contraposition: The contrapositive of an A- or an O-statement is logically equivalent to the original.

For example, the statement that all birds are mammals is logically equivalent to the statement that all non-mammals are non-birds; and so also are the statements that some birds are not carnivores and that some non-carnivores are not non-birds. Notice, however, that the principle does not hold for either E- or I-statements. Thus, for example, the statement that no birds are reptiles is not equivalent to the statement that no non-reptiles are non-birds; and the statement that some birds are reptiles is not equivalent to the statement that some non-reptiles are non-birds.

To see how these principles can be used with the traditional Square of Opposition to obtain results about some immediate inferences, consider the following argument:

All birds are mammals.
______________________
Some non-mammals are not non-birds.
We know that the statement that all birds are mammals is an A-statement, and the Principle of Contraposition informs us that this statement is logically equivalent to the statement, "All non-mammals are non-birds." The contradictory of the statement, all non-mammals are non-birds, however, is the statement, some non-mammals are not non-birds; and the contradictory of a given statement always has the opposite truth-value of that statement. So the argument is invalid, since it can have a true premise and a false conclusion.

Or, consider the following argument:

No birds are reptiles.
___________________
Some non-reptiles are birds.

The Square of Opposition tells us that if the premise is true then its subaltern, viz., some birds are not reptiles, is also true. The Principle of Contraposition, however, informs us that this O-statement is equivalent to the statement, some non-reptiles are not non-birds, and this is the obverse of the statement, some non-reptiles are birds. Since the Principle of Obversion holds for all four types of categorical statements, it follows that the conclusion of the argument must be true if its premise is true. Therefore the argument is valid.

Now let's see if you can do it. Is the following a valid argument (Y/N)?

Some ships are aircraft carriers.
_____________________________
It isn't true that no aircraft carriers are ships.

The reasoning here goes as follows: Since the statement that some ships are aircraft carriers is an I-statement, the Principle of Conversion tells us it is equivalent to the statement that some aircraft carriers are

ships. If this is a true statement, however, then its contradictory, viz., no aircraft carriers are ships, is false. And this latter statement is simply the negation of the conclusion. So, the argument is valid.
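On the modern, set-theoretic reading of the four forms, the equivalence principles above can be verified by brute force: enumerate every way of choosing sets S and P from a small universe and check that the transformed statement always agrees with the original. The following Python sketch is an illustration under that modern reading, not part of the text:

```python
from itertools import combinations

def truth(form, s, p, u):
    """Truth-value of a categorical form over sets s, p in universe u."""
    if form == "A": return s <= p           # every S is a P
    if form == "E": return not (s & p)      # no S is a P
    if form == "I": return bool(s & p)      # some S is a P
    return bool(s - p)                      # O: some S is not a P

def subsets(u):
    return [frozenset(c) for r in range(len(u) + 1)
            for c in combinations(u, r)]

U = frozenset({1, 2, 3})
MODELS = [(s, p) for s in subsets(U) for p in subsets(U)]

def equivalent(form, transform):
    """Does the transform preserve truth-value in every model?"""
    return all(truth(form, s, p, U) == truth(form, *transform(s, p, U), U)
               for s, p in MODELS)

converse = lambda s, p, u: (p, s)                  # switch the terms
contrapositive = lambda s, p, u: (u - p, u - s)    # switch and complement

for form in "AEIO":
    print(form, equivalent(form, converse), equivalent(form, contrapositive))
```

The search confirms the principles as stated: conversion preserves equivalence exactly for E and I, and contraposition exactly for A and O.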

CATEGORICAL SYLLOGISMS

So far all we have done is investigate categorical statements and some immediate inferences that classical logic claimed could be derived from them. (An immediate inference is an argument that has only one premise.) We turn now to the classical account of categorical syllogisms. As was mentioned earlier, a categorical syllogism is an argument that contains only categorical statements and has exactly two premises. Though the statements need not be in standard-form, we will only deal with syllogisms that contain such statements in the present discussion.

Categorical syllogisms contain exactly three terms: the major term, the minor term, and the middle term. The major term occurs as the predicate term in the conclusion of the syllogism, while the minor term occurs as the subject term in this statement. Each of these terms must also occur in one of the two premises. The premise in which the major term of the conclusion occurs is called the "major premise," while the premise in which the minor term of the conclusion occurs is referred to as the "minor premise." Besides this, the two premises must share a term, viz., the middle term. In representing the argument, we list the major premise first, and the minor premise directly beneath it. A line is then drawn and the conclusion is listed beneath it. Thus, a syllogism is represented as follows:

Major Premise (i.e., the premise containing the major term).
Minor Premise (i.e., the premise containing the minor term).
_____________________________________
Conclusion (i.e., quantifier + minor term + copula + major term).

Together, the mood and figure identify categorical syllogisms. The mood is determined by simply listing the standard-forms of the major premise, minor premise, and conclusion (in that order). Thus, the mood of the syllogism,

Major Premise: Some birds are sparrows.
Minor Premise: All birds are mammals.
___________________________________
Conclusion: Some mammals are sparrows.

is IAI.
The figure of the syllogism, on the other hand, is determined by the position of the middle term in the two premises. There are four figures:

FIGURE 1        FIGURE 2        FIGURE 3        FIGURE 4
M - P           P - M           M - P           P - M
S - M           S - M           M - S           M - S
_______         _______         _______         _______
S - P           S - P           S - P           S - P

The argument's structure depends on its figure and mood. To represent its structure all we need to do is list both its mood and figure. We do this by simply indicating its mood first, then a hyphen, followed by its figure. So the structure of the argument,

Major Premise: Some birds are sparrows.
Minor Premise: All birds are mammals.
_____________________________________
Conclusion: Some mammals are sparrows.

is represented as IAI-3. All we need to do now is to decide whether this form of argument is valid.
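Reading off the mood and figure is mechanical enough to code. The helper below is a hypothetical Python sketch, not from the text, that takes each statement as a (form, subject, predicate) triple:

```python
# Identify a syllogism's mood and figure. Each statement is a triple
# (form, subject term, predicate term), e.g. ("I", "birds", "sparrows").

def mood_and_figure(major, minor, conclusion):
    mood = major[0] + minor[0] + conclusion[0]
    s, p = conclusion[1], conclusion[2]        # minor and major terms
    # The middle term is shared by the premises, absent from the conclusion.
    m = (({major[1], major[2]} & {minor[1], minor[2]}) - {s, p}).pop()
    if major[1] == m and minor[2] == m:
        figure = 1                             # M-P / S-M
    elif major[2] == m and minor[2] == m:
        figure = 2                             # P-M / S-M
    elif major[1] == m and minor[1] == m:
        figure = 3                             # M-P / M-S
    else:
        figure = 4                             # P-M / M-S
    return f"{mood}-{figure}"

print(mood_and_figure(("I", "birds", "sparrows"),
                      ("A", "birds", "mammals"),
                      ("I", "mammals", "sparrows")))   # IAI-3
```

Run on the sparrows example above, the helper reproduces the structure IAI-3.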

1. DETERMINING VALIDITY

To decide whether a categorical syllogism is valid or invalid, according to the classical view, the following four rules can be used:

1. The middle term must be distributed at least once.
2. Any term distributed in the conclusion must be distributed in a premise.
3. At least one premise must be affirmative.
4. A negative conclusion requires at least one negative premise, and vice versa.

Any argument that violates one or more of these rules is invalid. Consider, for example, the following argument:

Some naval vessels are aircraft carriers.
Some ships are naval vessels.
_________________________________
Some ships are aircraft carriers.

This argument (and any other argument like it that has the form III-1) is invalid because the middle term (viz., "naval vessels") is not distributed in either of the premises. (Recall that a term is distributed when it refers to all objects in a set, and that in I-statements neither the subject nor predicate term is distributed.) For this reason it is said to commit the Fallacy of the Undistributed Middle.

The argument,

All birds are mammals.
Some birds are not sparrows.
___________________________
Some sparrows are not mammals.

(or any other argument that has the form AOO-3) violates rule 2, because the term "mammals" is distributed in the conclusion but not in the premise. Here the argument is said to commit the Fallacy of the Illicit Major, because the major premise contains the term "mammals," which is distributed in the conclusion but not in the premise. (If the term at fault had been the minor term, the fallacy committed would have been called the Fallacy of the Illicit Minor.)

The following argument (which has the form EEE-1) violates rule 3, since both of its premises are negative:

No reptiles are mammals.
No birds are reptiles.
_______________________
No birds are mammals.

In such cases the Fallacy of Exclusive Premises is being committed.
Finally, both of the following arguments violate rule 4:

All birds are mammals.             Some birds are sparrows.
Some reptiles are not birds.       All birds are mammals.
________________________           ______________________
Some reptiles are mammals.         Some mammals are not sparrows.

The one on the left, which has the form AOI-1, has a negative premise but no negative conclusion; while the one on the right, which has the form IAO-3, has a negative conclusion but no negative premise.

We hope this provides at least a brief introduction to the central points of classical logic. We now turn to a critique of this system of logic, and an introduction to the modern symbolic logic that began in the 19th century with George Boole (1815-1864).

2. CLASSICAL VS. MODERN LOGIC

On the classical account, both A- and E-statements are said to have existential import, since the corresponding I- and O-statements can validly be inferred from them. Thus, for example, from the statement, all birds are mammals, it follows that some birds are mammals; and from the statement, no birds are reptiles, it follows that some birds are not reptiles. This seems quite reasonable. But what if the subject term in the sentence that expresses an A-statement refers to objects that don't exist? What if, for example, the statement asserts that all unicorns are beautiful animals? If we evaluate this statement as true, then its subaltern, that some unicorns are beautiful animals, must also be true. Yet this latter statement clearly commits us to the existence of unicorns. On the other hand, if we contend that the statement that all unicorns are beautiful animals is false, because there are no unicorns, it follows that its contradictory, some unicorns are not beautiful animals, must be true; and this is also wrong, again, because there are no unicorns.

Because of these and other considerations, George Boole and his contemporaries were led to revise the assumptions on which classical logic was based. The original assumption that A- and E-statements had existential import was abandoned. Instead, A-statements, like all birds are mammals and all unicorns are beautiful animals, were taken as asserting that if there are any birds (or unicorns) they are mammals (or beautiful animals); while E-statements, like no birds are reptiles, were viewed as asserting that if there are any birds they are not reptiles.
Unfortunately, these changes forced logicians to make several other revisions in classical logic and the traditional Square of Opposition. For example, the view that the subaltern of a true A- or E-statement is also true had to be abandoned; and the theory that contraries cannot both be true had to be rejected, since statements like, all unicorns are beautiful animals, and no unicorns are beautiful animals, are both evaluated as true on the modern view, precisely because there are no unicorns. Moreover, the view that subcontraries cannot both be false proved wrong as well, since the statements that some unicorns are beautiful animals and that some unicorns are not beautiful animals are both false, once again, because there are no unicorns.

Besides this, a fifth rule was added to the four rules for evaluating standard-form categorical syllogisms mentioned above: "If both premises are universal, the conclusion must also be universal." (Any syllogism that violates this principle is said to commit the Fallacy of Existential Import.) Accordingly, any standard-form syllogism that does not violate any of the following rules is valid on the principles of modern logic (provided, of course, that the three terms are being used univocally):

1. The middle term must be distributed at least once.
2. Any term distributed in the conclusion must be distributed in a premise.
3. At least one premise must be affirmative.
4. A negative conclusion requires at least one negative premise, and vice versa.
5. If both premises are universal, the conclusion must also be universal.

Any syllogism that violates only the fifth rule will be valid on the classical view, but invalid on the modern view.
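The five rules can be combined into a single validity test. The following Python sketch is an illustration, not the text's own procedure; it checks a standard-form syllogism on either the classical view (rules 1-4) or the modern view (rules 1-5):

```python
# Statements are (form, subject, predicate) triples; the premises share
# the middle term. True in DIST means distributed in that position.
DIST = {"A": (True, False), "E": (True, True),
        "I": (False, False), "O": (False, True)}
NEGATIVE = {"E", "O"}
UNIVERSAL = {"A", "E"}

def distributed(stmt, term):
    form, s, p = stmt
    return (term == s and DIST[form][0]) or (term == p and DIST[form][1])

def valid(major, minor, conclusion, modern=True):
    s, p = conclusion[1], conclusion[2]
    m = (({major[1], major[2]} & {minor[1], minor[2]}) - {s, p}).pop()
    if not (distributed(major, m) or distributed(minor, m)):
        return False                  # rule 1: undistributed middle
    for term, premise in ((p, major), (s, minor)):
        if distributed(conclusion, term) and not distributed(premise, term):
            return False              # rule 2: illicit major / illicit minor
    if major[0] in NEGATIVE and minor[0] in NEGATIVE:
        return False                  # rule 3: exclusive premises
    negative_premise = major[0] in NEGATIVE or minor[0] in NEGATIVE
    if (conclusion[0] in NEGATIVE) != negative_premise:
        return False                  # rule 4, in both directions
    if modern and major[0] in UNIVERSAL and minor[0] in UNIVERSAL \
            and conclusion[0] not in UNIVERSAL:
        return False                  # rule 5: existential import
    return True

aai_1 = (("A", "M", "P"), ("A", "S", "M"), ("I", "S", "P"))   # form AAI-1
print(valid(*aai_1, modern=False))    # True: valid on the classical view
print(valid(*aai_1))                  # False: it violates only rule 5
```

The AAI-1 example at the end is exactly the kind of syllogism the two views disagree about: it passes rules 1-4 but draws a particular conclusion from two universal premises.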

CONCLUDING REMARKS

While it may be a bit disconcerting to study a theory and then be told it has been replaced, it is useful to keep a couple of points in mind. First, though several principles of classical logic have been abandoned or replaced, the theory works quite well unless we are talking about objects that don't exist. Second, it often helps us understand a revised theory better if we know what revisions it made to the theory it supplanted, and why it made those revisions.

EXERCISES

A. IMMEDIATE INFERENCES

Instructions: Use the square of opposition, together with the principles of conversion, obversion, and contraposition, to determine whether the following immediate inferences are valid or invalid.

1. All mummies are embalmed bodies. So some embalmed bodies are not non-mummies.
2. No stoics are epicureans. So some epicureans are non-stoics.
3. Some students are procrastinators. So it is false that no procrastinators are students.
4. It is not the case that all crooks are bunco artists. So some non-bunco artists are not non-crooks.
5. Some boars are not wild animals. So it isn't true that all wild animals are boars.

B. SYLLOGISMS

Instructions: Identify the mood and figure of each argument below. Then determine whether it is valid or invalid on both the classical and modern views. If it is invalid on either of the two theories, explain which rule or rules it violates.

1. Some arguments are valid. No sets of statements are arguments. So some sets of statements are not valid.
2. All werewolves are nocturnal beasts. No nocturnal beasts are sun-struck beachcombers. So some sun-struck beachcombers are not werewolves.
3. All bungling buglers are cacophonous clamorers. Some cacophonous clamorers are blasphemous blasters. So some blasphemous blasters are bungling buglers.
4. All amigos are friends indeed. Some friends indeed are not friends in need. So some friends in need are not amigos.
5. Some rebels are not idealists. No idealists are realists. So some realists are rebels.

Instructions: Using "liars" as the major term, "honest injuns" as the minor term, and "truth tellers" as the middle term, formulate the categorical syllogisms which have the mood and figure indicated below, and determine whether they are valid or invalid on both the classical and modern interpretations.

a. EIO-2 b. EAO-3 c. AAI-1 d. EAE-1 e. EIO-4

Instructions: Using "jungles" as the major term, "funny farms" as the minor term, and "cities" as the middle term, formulate the categorical syllogisms that have the mood and figure shown below. Then decide whether they are valid or invalid on both the classical and modern interpretations. a. AII-3 b. OAE-4 c. AAI-1 d. IOO-2 e. AIE-3

SORITES

Arguments containing more than two premises can be handled by inferring a suppressed conclusion from two of the premises and then using the suppressed conclusion as a premise, together with another premise, to infer yet another conclusion, until the ultimate conclusion is finally reached. For example, the argument

All orcas are whales.
All whales are mammals.
All mammals are warm-blooded.
_____________________________
Some orcas are warm-blooded.

can be conceived as containing the suppressed claim that all orcas are mammals, and can then be reformulated as the following two separate syllogisms:

All orcas are whales.
All whales are mammals.
_______________________
All orcas are mammals.

All orcas are mammals.
All mammals are warm-blooded.
_____________________________
Some orcas are warm-blooded.

The three-premise argument will then be evaluated as valid in classical logic but invalid in modern logic because, although the first (AAA-1) syllogism is valid, the second (AAI-1) syllogism violates the rule that if both premises are universal the conclusion must also be universal.

INTRODUCTION

In the nineteenth century, John Venn developed a technique for determining whether a categorical syllogism is valid or invalid. Although the method he constructed relied on the modern interpretation of universal statements, we can easily modify it for use with the older classical view of such statements. In the present chapter we will begin by explaining the technique as Venn originally developed it. Later we will show you how to use it to determine validity on the older classical theory of categorical syllogisms.

[Diagram: two overlapping rectangles labeled S and P]

The two overlapping rectangles above should be construed as representing two sets. The one on the left represents the set consisting in all of the S things there are in the universe; while the one on the right represents all of the P things there are in the universe. That part of the S rectangle which overlaps with the P rectangle represents those objects in the universe that are members of both sets. That portion of the S rectangle that does not overlap with the P rectangle represents all of the objects in the universe that are S but not P. While that portion of the P rectangle that does not overlap with the S rectangle represents all of the things in the universe that are P but not S.

SHADING

If we want to show that nothing is in a certain area we do this by shading that area. So, for example, if we want to say there are no Ss that are not Ps, we shade the area of the S rectangle that does not overlap the area of the P rectangle. We represent the claim that all S are P in this way.

[Diagram: two overlapping rectangles labeled S and P, with the part of S outside P shaded]

All S are P.

On the other hand, if we want to represent the claim that no Ss are Ps, we can do this by shading the area where the S rectangle and P rectangle overlap.

[Diagram: two overlapping rectangles labeled S and P, with the overlap shaded]

No S are P.

Finally, if we want to show that all P are S, or, in other words, that there are no Ps that are not also Ss, we can shade the rightmost part of P (the part that does not overlap S).

[Diagram: two overlapping rectangles labeled S and P, with the part of P outside S shaded]

All P are S.
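The shading convention above can be modeled with sets: name each region of the two-rectangle diagram by the rectangles it lies inside, and each shading instruction becomes a simple set filter. This is an illustrative sketch only; the region encoding is mine, not the text's.

```python
# Each region of the two-rectangle diagram, named by the rectangles it is in.
regions = [{"S"}, {"S", "P"}, {"P"}]

# "All S are P": shade the S-regions outside P.
shade_all_S_are_P = [r for r in regions if "S" in r and "P" not in r]
# "No S are P": shade the overlap.
shade_no_S_are_P = [r for r in regions if {"S", "P"} <= r]
# "All P are S": shade the P-regions outside S.
shade_all_P_are_S = [r for r in regions if "P" in r and "S" not in r]

print(shade_all_S_are_P)   # [{'S'}]
print(shade_all_P_are_S)   # [{'P'}]
```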

To show that something is in a certain area we place a capital X in that area. Thus, to say that there is something that is a member of S, but not a member of P, we place an X in the section of the S rectangle that does not overlap with the P rectangle.

[Diagram: two overlapping rectangles labeled S and P, with an X in the part of S outside P]

To say that something exists that belongs to both sets we place a capital X in the area where the two rectangles overlap.

[Diagram: two overlapping rectangles labeled S and P, with an X in the overlap]

Some S are P.

To represent that something is a member of P, but not of S, we place an X in the area of the P rectangle that doesn't overlap the S rectangle.

[Diagram: two overlapping rectangles labeled S and P, with an X in the part of P outside S]

Some P are not S.

Finally, if we know that the S set has something in it, but we don't know whether that thing is or is not also a member of P, we place a lowercase x in both areas and connect them with a line. Thus, the diagram below tells us that there is at least one member of S, but it does not tell us whether that thing is, or is not, also a member of P.

[Diagram: two overlapping rectangles labeled S and P, with linked x-x marks in the part of S outside P and in the overlap]

So far we have been concerned with representing single statements in Venn diagrams. An argument, however, is not one statement, and a categorical syllogism is a type of argument. More specifically, a categorical syllogism is an argument that contains exactly two premises, both of which are categorical statements. The problem is how do we use Venn diagrams to represent categorical syllogisms? And how do we use them to decide whether such syllogisms are valid? Perhaps the first thing to notice is that a categorical syllogism refers to three sets, rather than two. So, instead of two overlapping rectangles we will need three such rectangles.

EXAMPLE 1

[Diagram: three overlapping rectangles labeled S, I, and C]

All schools are educational institutions.
Some schools are colleges.
_________________________________________
Some colleges are educational institutions.

To decide whether the argument above is valid or invalid we must begin by representing both of the premises in the diagram. We represent the first premise by shading the area of schools that are not educational institutions.

[Diagram: three overlapping rectangles labeled S, I, and C, with the part of S outside I shaded]

Then we need to represent the second premise. This premise tells us that there is at least one school that is also a college. To express this in the diagram, we need to place an x in those areas where schools and colleges overlap. The only such area that has not been shaded, however, is the area where the three rectangles

overlap. So we know that something exists in this area. We must, therefore, place a capital X in this area, and our diagram will then look like this:

[Diagram: three overlapping rectangles labeled S, I, and C, with a capital X in the central area where all three overlap]

All that remains is to evaluate whether the conclusion of the argument follows from its premises. If it does, the argument is valid; otherwise it is invalid. Clearly the argument in question is valid. Let's try one more example.

EXAMPLE 2

[Diagram: three overlapping rectangles labeled S, I, and C]

Some schools are educational institutions.
Some schools are colleges.
_________________________________________
Some colleges are educational institutions.

To represent the first premise of this argument, we need to show that something is in the area of schools that are educational institutions. In the diagram there are, however, two areas that represent schools that are educational institutions and, unfortunately, the first premise doesn't tell us whether the schools that are educational institutions are, or are not, colleges. So we need to place a lowercase x in both areas and draw a line between them.

[Diagram: three overlapping rectangles labeled S, I, and C, with x-x between the two areas where S and I overlap]

Clearly we have to represent the second premise in a similar way, except that we need to x-x the areas where schools and colleges overlap. Once we have done this our diagram will look like this:

[Diagram: three overlapping rectangles labeled S, I, and C, with x-x marks between the areas where S and I overlap and between the areas where S and C overlap]

Evidently the conclusion of this argument does not follow from its premises, since the conclusion informs us that something is definitely in the area where colleges and educational institutions overlap, but the diagram doesn't show this.

To use Venn diagrams to determine validity on the classical view we need to alter the above account. Classical logic evidently assumes that in sentences which express universal categorical statements, both the subject and predicate terms refer to existing objects. So on the classical view, the statement that all S are P entails that there exists an S that is a P, while the statement that no S are P entails not only that there exists an S that is not a P, but also that there exists a P that is not an S.

If we want to use Venn diagrams to determine validity on the classical view we need to add these assumptions to the diagram. So if the statement is an A-statement, besides shading the area of S that is not P, we must also add an X to the area where S and P overlap. Doing this, we obtain:

[Diagram: two overlapping rectangles labeled S and P, with the part of S outside P shaded and an X in the overlap]

All S are P.

On the other hand, if we want to represent the statement that no S are P, besides shading the area where the S and P rectangles overlap, we also need to place an X in the area of the S rectangle that is not in the P rectangle, and in the area of the P rectangle that is not in the S rectangle. Doing this we get:

[Diagram: two overlapping rectangles labeled S and P, with the overlap shaded and an X in each of the two outer areas]

No S are P.

Let's see how this will work with some actual syllogisms.

EXAMPLE 1

[Diagram: three overlapping rectangles labeled S, I, and C]

All colleges are schools.
All schools are educational institutions.
_________________________________________
Some colleges are educational institutions.

The easiest way to handle this is to begin by representing the premises in the same way we would if we were adopting the modern perspective. Accordingly, we shade all of those areas of schools that are not educational institutions and of colleges that are not schools. Thus, we will shade the diagram as suggested below. (Notice that if this were the end of the matter we would have to evaluate the argument as invalid, since the conclusion does not follow from the premises.)

[Diagram: three overlapping rectangles labeled S, I, and C, with the areas of S outside I and the areas of C outside S shaded]

However, we are not finished yet. We must now add the assumptions that the premises make about the sets referred to in their subject terms. Here this means that we must place a capital X in the area that represents colleges that are schools, since this is implied by the first premise.

[Diagram: three overlapping rectangles labeled S, I, and C, with a capital X in the unshaded area where C and S overlap]

Normally, we would also have to place a lowercase x-x in the two areas where schools and educational institutions overlap, since this is an assumption that the second premise commits us to. Here, however, since the earlier premise already commits us not only to the existence of some colleges, but also to the existence of some schools, this is unnecessary. Clearly, our diagram shows that the argument is valid on the classical view.

NOTE: Any argument that is valid on the modern view will also be valid on the classical view; and any argument that is invalid on the classical view will also be invalid on the modern view. (The converses of these principles are, however, not true.)

EXAMPLE 2

[Diagram: three overlapping rectangles labeled S, I, and B]

All schools are educational institutions.
No schools are bars.
___________________________________________
Some bars are not educational institutions.

As we suggested in the last example, we begin by representing the premises just as we would if we were adopting the modern view of syllogisms. Doing this, we get:

[Diagram: three overlapping rectangles labeled S, I, and B, with the areas of S outside I and the areas where S and B overlap shaded]

Now, however, we need to add the additional assumptions that the classical view makes about the premises. Since the first premise commits us to the existence of at least one school that is an educational institution, we need to represent this by placing a capital X in the unshaded area where schools and educational institutions overlap.

[Diagram: three overlapping rectangles labeled S, I, and B, with a capital X in the unshaded area where S and I overlap]

To represent the presuppositions that classical logic makes about the second premise we must also add in our diagram that there exist some schools that are not bars and some bars that are not schools. However, we have already represented the first of these assumptions in the diagram. So all we need to do is represent the assumption that some bars exist that are not schools. We do this by x-ing the area of bars that are educational institutions and the area of bars that are not educational institutions, and drawing a line between them.

[Diagram: three overlapping rectangles labeled S, I, and B, with x-x between the two areas of B outside S]

Inspecting the diagram we have now completed, we see that the conclusion of the argument does not follow from its premises. The argument is, therefore, invalid.

A WORD OF WARNING

All of this may seem clear enough, and so long as the things we are reasoning about exist, the principles of classical logic are acceptable. When we begin reasoning about objects that don't exist, however, problems arise. First, the principle that the contradictory of any A-statement must have the opposite truth-value of that statement does not work. Suppose, for example, the statement is that all unicorns are beautiful animals. This statement is false on the classical view, because there are no unicorns; but its contradictory, "Some unicorns are not beautiful animals," is also false for precisely the same reason. Likewise, both the I-statement "Some unicorns are beautiful animals" and its contradictory "No unicorns are beautiful animals" will be false for the same reason. Second, the principle that subcontraries cannot both be false must also be abandoned, for both the I-statement that some unicorns are beautiful animals and the O-statement that some unicorns are not beautiful animals are false. Whenever we are dealing with arguments that refer to nonexistent objects, we must represent them from the modern perspective. This is a severe limitation of classical logic. Many logicians believe that logic should be independent of the way the world actually is: whether an argument is valid or invalid should not depend on whether or not its terms refer to existing objects.

1. Whenever a premise is a universal statement you should always shade; and whenever it is an existential statement you should always x.
2. With any kind of premise, always shade or x in the rectangle that represents the statement's subject term.
   a.) If the premise reads, "All S are P," always shade the areas of the S rectangle that are outside the P rectangle.
   b.) If the premise reads, "No S are P," always shade those areas where the S and P rectangles overlap.
   c.) If the premise reads, "Some S are P," find the two areas of the S rectangle that overlap the P rectangle. If neither of these areas is shaded, place an x-x between them. If one of the two areas is shaded, place a capital X in the other area.
   d.) If the premise reads, "Some S are not P," find the two areas of the S rectangle that do not overlap with the P rectangle. If neither of these areas is shaded, place an x-x between them. If one of the two areas is shaded, place a capital X in the other area.
3. If you are working within the classical interpretation, after proceeding as indicated above, do the following for each premise that is a universal statement:
   a.) If the premise reads, "All S are P," you must indicate that there is at least one thing in the areas where the S and P rectangles overlap. If one of the two areas where these rectangles overlap is shaded, place a capital X in the other area. If neither of these areas is shaded, place an x-x between them.
   b.) If the premise reads, "No S are P," you must show that there is at least one thing in the area of the S rectangle that does not overlap the P rectangle, and that there is at least one thing in the area of the P rectangle that does not overlap the S rectangle. If either of the two areas where the S rectangle doesn't overlap the P rectangle is shaded, place a capital X in the other area. If neither of these areas is shaded, place an x-x between them. Then do the same for the area of the P rectangle that does not overlap the S rectangle.
4. When an area is shaded, it means nothing is in that area. When a capital X is in an area, it means that something is definitely in that area. When an x-x exists between two areas, it means something exists in one of the two areas. An empty area tells you nothing.
5. An argument is valid if the diagram forces you to admit that its conclusion is true. Otherwise, it is invalid.
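The summary rules above can be turned into a small working checker. In the sketch below (the class name and design are my own, not the text's) each region of the three-rectangle diagram is a frozenset of the rectangle labels it lies inside; shading marks regions empty, and a "mark" is a set of regions one of which must be occupied (a one-region mark is a capital X, a two-region mark an x-x pair). Recomputing each mark's unshaded regions at query time takes care of the shade-before-x ordering in rule 1.

```python
from itertools import combinations

class VennTest:
    """Venn-diagram validity test for syllogisms, modern or classical view."""

    def __init__(self, names, classical=False):
        # All seven regions of the three-rectangle diagram.
        self.regions = [frozenset(c) for n in (1, 2, 3)
                        for c in combinations(names, n)]
        self.classical = classical
        self.shaded = set()    # regions known to be empty
        self.marks = []        # frozensets of regions; one of each is occupied

    def _areas(self, inside, outside=""):
        return {r for r in self.regions
                if set(inside) <= r and not (set(outside) & r)}

    def _mark(self, area):
        live = frozenset(r for r in area if r not in self.shaded)
        if live:
            self.marks.append(live)

    def premise(self, form, a, b):
        if form == "A":                      # All a are b
            self.shaded |= self._areas(a, b)
            if self.classical:               # classical: some a exist
                self._mark(self._areas(a + b))
        elif form == "E":                    # No a are b
            self.shaded |= self._areas(a + b)
            if self.classical:               # classical: some a, some b exist
                self._mark(self._areas(a, b))
                self._mark(self._areas(b, a))
        elif form == "I":                    # Some a are b
            self._mark(self._areas(a + b))
        else:                                # "O": Some a are not b
            self._mark(self._areas(a, b))

    def entails(self, form, a, b):
        if form == "A":
            return self._areas(a, b) <= self.shaded
        if form == "E":
            return self._areas(a + b) <= self.shaded
        target = self._areas(a + b) if form == "I" else self._areas(a, b)
        for mark in self.marks:
            live = {r for r in mark if r not in self.shaded}
            if live and live <= target:      # the X is forced into the target
                return True
        return False

# Example 1 above (s = schools, e = educational institutions, c = colleges):
v = VennTest("sec")
v.premise("A", "s", "e"); v.premise("I", "s", "c")
print(v.entails("I", "c", "e"))     # True -- valid on the modern view

# AAI-1: invalid on the modern view, valid on the classical view.
mod, cla = VennTest("cse"), VennTest("cse", classical=True)
for d in (mod, cla):
    d.premise("A", "c", "s"); d.premise("A", "s", "e")
print(mod.entails("I", "c", "e"), cla.entails("I", "c", "e"))   # False True
```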

HELP

If you have not understood the discussion above, then proceed as follows. Number the three rectangles as indicated below. Then identify the four numbers that constitute the rectangle of the statement's subject and write them down. Underneath these four numbers write the four numbers that constitute the rectangle of the statement's predicate. Then use the chart below to either shade or x-x two areas.

[Diagram: three overlapping rectangles with the seven regions numbered 1-7. The A rectangle covers regions 1, 2, 4, and 5; the B rectangle covers 2, 3, 5, and 6; the C rectangle covers 4, 5, 6, and 7.]

                                                    Shade    x-x
The two common numbers between the
subject and the predicate                             E       I
The two numbers that are different in
the top set of numbers                                A       O

HOW TO HANDLE AN A-STATEMENT

Suppose the claim reads "All B are C." The four numbers that represent the subject's rectangle (i.e., the B rectangle, since "B" occupies the subject position in the sentence) are 2, 3, 5, 6. The four numbers that represent the predicate's rectangle (i.e., the C rectangle, since "C" occupies the predicate position in the sentence) are 4, 5, 6, 7. Since the statement is an A-statement, the chart above tells us to find and shade the two numbers that are different in the subject's rectangle. Those numbers are 2 and 3, so we shade 2 and 3. (Read across from the second row to the "Shade" column in the chart above.)

If we are representing things from the classical perspective we have additional work to do. We need to represent Aristotle's claim that a true A-statement implies a corresponding true I-statement. So after we have proceeded as indicated in the paragraph above, the chart tells us to x-x the two common numbers between the subject and predicate. Those numbers are 5 and 6.

HOW TO HANDLE AN E-STATEMENT

Suppose the claim reads "No A are C." The four numbers that represent the subject's rectangle (i.e., the A rectangle, since "A" occupies the subject position in the sentence) are 1, 2, 4, 5. The four numbers that represent the predicate's rectangle (i.e., the C rectangle) are 4, 5, 6, 7. Since the statement is an E-statement, the chart above tells us to find and shade the two numbers that are common between these two sets of numbers. Those numbers are 4 and 5, so we shade 4 and 5.

If we are representing things from the classical perspective, here again we have additional work to do. We need to represent Aristotle's claim that a true E-statement implies a corresponding true O-statement. The chart tells us to x-x the two numbers that are different in the top set of numbers: 1 and 2. But on Aristotle's view the converse of a true E-statement has the same truth-value as the original. So the claim that no A are C implies that no C are A, and this in turn commits us to the O-statement that some C are not A. To represent this claim in the diagram we must find and x-x the two numbers in the bottom set that are different from the numbers in the top set. Those numbers are 6 and 7, so we should also x-x these two areas.

HOW TO HANDLE I-STATEMENTS AND O-STATEMENTS

If the claim reads "Some C are B," the chart tells us to x-x areas 5 and 6; while if it reads "Some C are not B," we need to x-x areas 4 and 7. No extra work is required if we are constructing the diagram on the classical perspective.

EXERCISES

Instructions: Construct two Venn diagrams for the syllogism below, one of which represents the argument on the modern view, and the other of which represents it on the classical view. Decide whether the argument is valid or invalid on each view.

All vampires are bats.
No vampires are hemophiliacs.
_____________________________
Some hemophiliacs are not bats.

Instructions: Construct two Venn diagrams for each of the arguments below, one of which represents the argument on the modern view, and the other of which represents it on the classical view. Determine whether the argument is valid or invalid on each of the two perspectives.

1. No diamonds are opals. No diamonds are sapphires. So no sapphires are opals.
2. Some islands are vacation resorts. All islands are paradises. So some paradises are vacation resorts.
3. All teachers are alcoholics. No alcoholics are politicians. So some politicians are not teachers.
4. No pleasurable experiences are headaches. All IRS audits are headaches. So no IRS audits are pleasurable experiences.
5. All dragons are fire hazards. No endangered species are fire hazards. So some endangered species are not dragons.
6. Only funny people are clowns. Funny people are never tedious oafs. So, all tedious oafs are non-clowns.

A BRAINTEASER

Instructions: Construct Venn-like Diagrams on the argument below and determine whether it is valid or invalid on both the classical and modern interpretations. Since all vampires are bats and some vampires are bloodsuckers, but no hemophiliacs are bats, it follows that some bloodsuckers are not hemophiliacs.

Although they are effective in determining the validity of many arguments containing quantifiers, both the syllogistic and diagrammatic approaches we have been exploring in the last two chapters are somewhat limited, most especially because they are unable to effectively evaluate arguments with premises involving two or more quantifiers. They cannot, for example, determine the validity of the argument that since everyone loves a lover and someone loves someone, it follows that everyone loves everyone. How these sorts of arguments are to be dealt with was not discovered until the development of quantification theory in the late nineteenth and early twentieth centuries.

INTRODUCTION

In our earlier chapters on Translation, Tables, Trees, and Proofs, we were primarily concerned with building a symbolic system that would allow us to decide the validity of a certain, simple kind of argument. We then noted that this system was not sophisticated enough to adequately capture the structure of many arguments. It could not, for example, be used to establish the validity of the argument: "Since all priests are men and all men are rational, all priests are rational." In the chapters on Syllogisms and Venn Diagrams, we developed some techniques for testing these more complex arguments. We are now going to begin constructing a symbolic system that will not only allow us to test these arguments, but will also allow us to test arguments that are vastly more sophisticated than they are. In the present chapter we will learn how to translate them into symbols and how to construct trees on them to determine their validity. Then, in the next chapter, we will learn how to construct proofs of the ones that are valid. The basic idea here is to build a system that is an extension of the one we constructed earlier. As a result, it presupposes an understanding of the material contained in the earlier chapters.

Consider the claim that Bill is tall. This claim ascribes a property, viz., the property of being tall, to an individual. In our earlier chapter on Translation we might have assigned the letter "T" to this claim, and we could still do so, but, for reasons that will become clear shortly, we may now want to represent a bit more of the claim's structure than we did before. Instead of using capital letters to represent entire statements we can also use them to represent only a property. We will use "T," for example, to stand for the property of being tall; and we will use lowercase letters from "a" to "u" to represent individuals. (The letters from "a" to "u" are called "constants.") In the present case, we might select the letter "b" to represent Bill. We can then express the claim that Bill is tall by writing: Tb. Of course, for the symbols "Tb" to be used to represent the claim that Bill is tall, in our translation key we need to specify that "T" represents the property of being tall, and that "b" represents Bill. We do this as follows:

Tx: x is tall.
b: Bill

Notice the use of "x" here. It functions as a placeholder. Unlike "Tb," the expression "Tx" is not a claim. Nor, for that matter, is "x is tall." What the key tells us, in effect, is that we can construct a statement by placing a constant after "T." More specifically, it tells us that we get the claim that Bill is tall by writing: Tb. Besides ascribing properties to individuals we also sometimes assert that certain relations obtain between individuals. For example, we say that Bill loves Carol. Here we are talking about two individuals, Bill and Carol, and we are asserting that a certain relation, the loving relation, obtains between them. In the system we are constructing, besides using capital letters to represent properties, we also use them to represent relations. Thus, we might use "L" to represent the loving relation. In our key, we set this up as follows:

Lxy: x loves y.

Once we have constructed the key in this way, we can then say that Bill loves Carol by writing: Lbc. On the other hand, we can make the different claim that Carol loves Bill, by writing: Lcb. (Note that we always place the property or relation first, and then put the constants after it.) With our five connectives and these symbols we can make some complex claims. We can say, for example, that if Bill loves Carol, she loves him, by writing: (Lbc>Lcb). Or, we might say that if Bill doesn't love himself, Carol doesn't love him either, by writing: (-Lbb>-Lcb).

If this were all the system we are building could do it wouldn't be very useful. However, it can also represent claims like: "Everybody is tall," and "Someone loves Carol." To be able to represent claims of this sort we need to include a domain in our translation key. The domain tells us what kinds of things are to be included in an "all" or "some" claim. In the sorts of cases we have been discussing our domain would probably be the set of people. However, if we also wanted to say things like, "Fido loves Carol," we would need to widen it to include dogs as well. (If we were to widen the domain in this way, we would then need to include two more letters in our key, one for the property of being a person and another for the property of being a dog. Let's assume, however, that we are only talking about people for now.) In the translation key, we specify the domain before we identify any properties, relations, or individuals. Thus, our translation key might read:

DOMAIN: {People}
Lxy: x loves y.
Tx: x is tall.
b: Bill
c: Carol

Suppose we want to say that everybody loves Carol. In symbols we represent this as (x)Lxc. The initial part of this formula, namely, (x), is called a "universal quantifier." It really says, "Everything in the domain is such that." However, since our domain has been identified as people, it says, "All people are such that." The second occurrence of "x" -- its occurrence in "Lxc" -- functions just like a pronoun. "Lxc" can be read "he/she/it loves Carol." So the entire formula reads: "Every person is such that he/she loves Carol." In other words, it says that everybody loves Carol.

Instead of saying that everybody loves Carol we might want to say that someone loves her. We do this by using an existential quantifier instead of a universal one. (Ex) is an existential quantifier. It should be read, "there is at least one thing in the domain that is such that," or, since our domain here is {people}, in the present case it says: "Someone is such that." So the formula (Ex)Lxc says, "Someone is such that he/she loves Carol" or, more pleasantly expressed, it says that somebody loves Carol.

However, now suppose we wanted to say that Carol loves someone who is tall. We would express this in symbols by writing: (Ex)(Lcx.Tx), i.e., there is someone who is such that Carol loves him/her and he/she is tall. We would represent the claim that Carol loves everyone who is tall as (x)(Tx>Lcx), or, in other words, every person is such that, if they are tall then Carol loves them. We could represent the claim that Carol doesn't love anyone who is tall in either of two ways. We might express it in symbols by writing: -(Ex)(Lcx.Tx), i.e., it is not true that there is someone Carol loves and who is tall. Or, we might express it by writing: (x)(Tx>-Lcx), i.e., everyone is such that, if they are tall then Carol doesn't love them.
From our discussion, it should be clear that variables function in three different ways: They occur in the translation key, where they are used as place-holders; they occur as parts of both universal and existential quantifiers; and they function as pronouns in expressions like "Lxc."

It isn't important that we selected "x" as our variable. We could have chosen any variable. (The letters "w," "x," "y," and "z" are all variables.) The claims (w)Lwc and (x)Lxc, say the same thing. What we

cannot do, however, is to write (x)Lwc. It violates a fundamental principle about constructing well-formed formulas in our system. Roughly, the problem here is that the "w" that occurs after "L" is supposed to be functioning like a pronoun, and as such, it must refer back to an antecedent noun or noun phrase. It doesn't refer back to the noun phrase, (x), however, because that expression contains a different variable. There is a simple principle to use here. Every occurrence of a variable that is not a part of a quantifier must lie within the scope of a quantifier that contains that variable. In the example above, the variable "w," in "Lwc," doesn't lie within the scope of a quantifier which contains "w." To identify the scope of a quantifier begin looking, immediately after that quantifier, for a binary connective or a left parenthesis. If a binary connective occurs before you see a left parenthesis, all and only those variables in that portion of the expression from the quantifier up to the connective, which are identical with the variable in that quantifier, are within its scope. If the left parenthesis is a part of another quantifier, skip it. Otherwise, continue until you find the right parenthesis that is the mate to the first left one you hit. All and only those variables that are the same as the one inside the quantifier are within that quantifier's scope. Consider the following examples: (1) (x)(y)(Pxy>Qzy) (2) (Ex)-((y)Pxy>Pyx) (3) ((Ex)(y)Pxy>(z)Paz) In (1) after the two quantifiers, (x) and (y), we notice that there are several variables that occur, viz., the "x" and "y" after "P," and the "z" and "y" after "Q." Now the scope of the (x) quantifier begins immediately after that quantifier. The first thing we see after this quantifier is another quantifier, (y). Skip it. After this, we see a left parenthesis. The mate to this is the right parenthesis at the end of the expression. 
Since the only "x" in this portion of the expression occurs between this set of parentheses, it is in the scope of the (x) quantifier. Clearly also, after the (y) quantifier, both of the occurrences of "y" are within the scope of the (y) quantifier. What about the "z" immediately following "Q" however? There is no (z) or (Ez) quantifier whose scope it is within. So (1) is not a legal well-formed formula. In (2) the first thing we see is an existential quantifier, (Ex). Its scope starts after it. Immediately after it, we see "-." Skip it. We then see a left parenthesis. The right mate to this occurs at the far right end of the expression, after "Pyx." The scope of the (Ex) quantifier goes all the way to this right parenthesis. So both occurrences of x after P are within the scope of this quantifier. What about y? The scope of the (y) quantifier begins right after it and extends only until we hit the ">." So the "y" in "Pyx" is not within the scope of (y) and therefore (2) is also not a well-formed formula. In (3) all of the occurrences of "x," "y," and "z" that are not parts of any quantifiers are within the scope of the appropriate quantifiers. So (3) is a well-formed formula.
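One way to make the scope principle concrete is to compute, for a formula, the variables that fail to lie within the scope of a matching quantifier. The sketch below is our own illustration, not part of the text or its computer exercises; the tuple encoding and the function name are assumptions we introduce here. A formula passes the test at issue only if the set it returns is empty.

```python
# A minimal sketch (ours, not the book's): formulas as nested tuples, with a
# recursive check for variables outside the scope of any matching quantifier.

def free_variables(formula, bound=frozenset()):
    """Return the variables in `formula` not bound by an enclosing quantifier."""
    op = formula[0]
    if op in ("all", "exists"):            # ("all", "x", body)
        _, var, body = formula
        return free_variables(body, bound | {var})
    if op == "not":                        # ("not", body)
        return free_variables(formula[1], bound)
    if op in (">", "&", "v"):              # (">", left, right)
        _, left, right = formula
        return free_variables(left, bound) | free_variables(right, bound)
    # atomic: ("P", ["x", "y"]); constants (a-u) are never variables (w-z)
    _, terms = formula
    return {t for t in terms if t in "wxyz"} - bound

# (1) (x)(y)(Pxy>Qzy): the "z" lies in the scope of no quantifier.
f1 = ("all", "x", ("all", "y", (">", ("P", ["x", "y"]), ("Q", ["z", "y"]))))
# (2) (Ex)-((y)Pxy>Pyx): the "y" in "Pyx" lies outside the scope of (y).
f2 = ("exists", "x", ("not", (">", ("all", "y", ("P", ["x", "y"])), ("P", ["y", "x"]))))
# (3) ((Ex)(y)Pxy>(z)Paz): every variable occurrence is properly bound.
f3 = (">", ("exists", "x", ("all", "y", ("P", ["x", "y"]))), ("all", "z", ("P", ["a", "z"])))

print(free_variables(f1))  # {'z'}  -> not well-formed
print(free_variables(f2))  # {'y'}  -> not well-formed
print(free_variables(f3))  # set()  -> well-formed
```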

By now it should be clear that we could represent all the different kinds of categorical statements discussed in the chapters on Syllogisms and Venn Diagrams. The statement, "All P are Q" can be represented as, (x)(Px>Qx). While the statement, "No P are Q" can be translated as either -(Ex)(Px.Qx), or (x)(Px>-Qx). Our old friend, "Some P are Q" can be expressed, (Ex)(Px.Qx); while "Some P are not Q" is either, (Ex)(Px.-Qx) or -(x)(Px>Qx). However, we can also use our symbols to represent claims that involve multiple quantifiers. We can say things like, "Everybody loves everybody," (x)(y)Lxy; "Nobody loves anybody," (x)(y)-Lxy; "There is someone who loves everybody," (Ex)(y)Lxy; and "Everybody loves someone," (x)(Ey)Lxy. Using the translation key provided earlier how do you think we would say, "Everybody, who loves anyone, is loved by him or her?" If you said, (x)(y)(Lxy>Lyx), you were right; but now compare this claim with the claim, "Everyone who loves anybody is loved by somebody." This we can represent as either: (x)((Ey)Lxy>(Ez)Lzx) or (x)(y)(Lxy>(Ez)Lzx) Notice the difference between these last two cases. The statement, (x)(y)(Lxy>Lyx) tells us that anyone who loves is loved by whomever they love. The claim, (x)((Ey)Lxy>(Ez)Lzx), on the other hand, commits us only to their being loved by someone or other, and not necessarily by the person whom they love. (So the second claim is weaker than the first.)

The trouble with all of this is that it gets very difficult very quickly. We need to understand exactly what is being said in English and then construct a formula that makes the same claim. Typically there are many such formulas and any one of them will do. Suppose, for example, we wanted to say, "Only those who are tall, are loved." We could express this by writing either: (x)((Ey)Lyx>Tx), or (y)((Ex)Lxy>Ty), or (x)(y)(Lxy>Ty), or (x)(-Tx>(y)-Lyx), or (x)(-Tx>-(Ey)Lyx). Just as there are many ways of translating it correctly, however, there are also many ways of expressing it incorrectly. It would be wrong, for example, to translate it: (x)(Tx>(Ey)Lyx). Although it makes no difference which variables we use, as the first two cases hopefully illustrate, where they occur in the formula and how those that aren't part of a quantifier are related to those that are, frequently matters a lot. Thus, (x)((Ey)Lxy>Tx) is a very different claim than, (x)((Ey)Lyx>Tx). The former asserts that all of those who love someone are tall, while the latter makes a different claim. It asserts that all of those who are loved by someone are tall. When we were dealing with translation in the earlier chapter, we pointed out that practice is essential. The same is true here to an even greater extent. Unfortunately, because there are so many different ways of expressing the same claim in symbols, and space is severely limited, we cannot give you a lot of examples. On the computer disks, the practice exercises in this chapter contain only six arguments, each of which has three premises. Each time through, you will be asked to translate two of these arguments. You should try going through the exercises until you have seen all of the problems. While a correct translation is provided, however, even when you know what that answer is you might try formulating the same claim in other ways. 
Although copying down the answer you are given and entering it when you see that problem again will result in a high score, it will be of only limited value when it comes to understanding how to translate.
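Because it is so easy to mistranslate, it can help to check candidate translations by brute force over a small domain. The following sketch is our own illustration (not part of the text or its computer exercises; all names are ours). It evaluates two of the correct renderings of "Only those who are tall, are loved," and the incorrect (x)(Tx>(Ey)Lyx), on every model with a two-element domain.

```python
from itertools import product

D = [0, 1]  # a two-element domain

def f_a(T, L):      # (x)((Ey)Lyx > Tx)
    return all((not any(L[y][x] for y in D)) or T[x] for x in D)

def f_b(T, L):      # (x)(y)(Lxy > Ty)
    return all((not L[x][y]) or T[y] for x in D for y in D)

def f_bad(T, L):    # (x)(Tx > (Ey)Lyx) -- the incorrect translation
    return all((not T[x]) or any(L[y][x] for y in D) for x in D)

always_agree = True   # do f_a and f_b agree on every model?
ever_differ = False   # does f_bad come apart from f_a on some model?
for T_bits in product([False, True], repeat=2):
    T = list(T_bits)
    for L_bits in product([False, True], repeat=4):
        L = [list(L_bits[0:2]), list(L_bits[2:4])]
        if f_a(T, L) != f_b(T, L):
            always_agree = False
        if f_a(T, L) != f_bad(T, L):
            ever_differ = True

print(always_agree)  # True: the two correct translations never disagree
print(ever_differ)   # True: the incorrect one says something different
```

A model where everyone is tall but no one is loved already separates the correct translations (vacuously true) from the incorrect one (false).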

EXERCISES

Instructions: Using the translation key provided, translate each of the following statements into symbolic notation.

DOMAIN: {Roadrunners & Coyotes}
Cx: x is a coyote.
Rx: x is a roadrunner.
Fxy: x runs faster than y.
c: Wily Coyote

1. Every roadrunner runs faster than some coyote.
2. No coyote runs faster than any roadrunner.
3. If Wily Coyote runs faster than any roadrunner, then he runs faster than any coyote.

In this section we will expand on the tree method developed in Chapter 6 so we can use it to determine whether single statements are quantificationally true, false, or indeterminate, whether a pair of statements is quantificationally equivalent, whether sets of statements are quantificationally consistent or inconsistent, and whether arguments are quantificationally valid or invalid. The rules for decomposing statements that are negated existentially quantified and negated universally quantified formulas are straightforward. We simply replace the existential quantifier with a universal quantifier, or replace the universal quantifier with an existential quantifier, move the tilde to the other side of this quantifier, and then check off the original statement. This is obviously legitimate since a statement that asserts that it is not the case that something in the domain has a certain characteristic is

equivalent to the statement that everything in the domain lacks that characteristic, while the statement that not everything in the domain has a certain characteristic amounts to the statement that something in the domain lacks that characteristic. Thus,

NEGATED EXISTENTIAL QUANTIFIER DECOMPOSITION (-ED)

n.  -(E variable) . . .
p.  (variable)- . . .            n, -ED

NEGATED UNIVERSAL QUANTIFIER DECOMPOSITION (-UD)

n.  -(variable) . . .
p.  (E variable)- . . .          n, -UD
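The two equivalences behind -ED and -UD can be checked mechanically on any finite domain. The sketch below is our own illustration, not part of the text: it enumerates every possible extension of a predicate P over a three-element domain and confirms that -(Ex)Px always matches (x)-Px, and -(x)Px always matches (Ex)-Px.

```python
from itertools import product

D = range(3)  # a three-element domain

for P_bits in product([False, True], repeat=3):
    P = list(P_bits)
    # -(Ex)Px is equivalent to (x)-Px
    assert (not any(P[x] for x in D)) == all(not P[x] for x in D)
    # -(x)Px is equivalent to (Ex)-Px
    assert (not all(P[x] for x in D)) == any(not P[x] for x in D)

print("both equivalences hold on all 8 interpretations")
```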

Once we have used all the old rules from Chapter 6 on any statements these rules can be used on, and used these two new rules on any statements they can be used on, we will be left with only existentially and/or universally quantified formulas. We can then begin decomposing these remaining statements using two additional new rules. An existentially quantified statement can be decomposed by first eliminating the existential quantifier and then replacing every occurrence of the variable that occurred in the quantifier in the original statement with a new constant that has never occurred on any branch before. Once we have done this we can check the original statement off. Thus,

EXISTENTIAL QUANTIFIER DECOMPOSITION (ED)

n.  (E variable) . . .
p.  . . .                        n, ED

where every occurrence of the variable in (E variable) has been replaced by a new constant.

The following example may help illustrate this.

1.  (Pa > -(x)(y)Qxy)                SM
2.  Pa                               SM
                  /\
3.  -Pa               -(x)(y)Qxy     1, >D
     *
4.                    (Ex)-(y)Qxy    3, -ED
5.                    -(y)Qby        4, ED
6.                    (Ey)-Qby       5, -ED
7.                    -Qbc           6, ED

Why do we require that a new constant be selected when using this rule? If we didn't require this, the following tree would show that the set consisting of the formulas on lines 1 and 2 below is inconsistent.

1.  (Ex)Px           SM
2.  (Ex)-Px          SM
3.  Pa               1, ED
4.  -Pa              2, ED   ILLEGAL
     *

But clearly it is possible for something in the domain to have a property which something else lacks. The requirement that we pick a new constant makes the move on line 4 above illegal.

UNIVERSAL QUANTIFIER DECOMPOSITION (UD)

Our final rule, Universal Quantifier Decomposition, functions much like the rule Existential Quantifier Decomposition, with two very important exceptions. First, once we have eliminated the quantifier, instead of replacing the variable with a constant that has not occurred on that branch before, we need to replace it with a constant that has occurred before, and we need to do this for each constant that has occurred on that branch before; and second, we cannot check the original statement off. The reason for this is simple. Our original statement asserts that every object in the domain has the characteristic in question. So the claim implies that this characteristic applies to each and every object in the domain. Thus, the rule can be formulated as follows:

n.  (variable) . . .
p.  . . .                        n, UD

where the variable occurring in the quantifier is replaced by a constant that has occurred on that branch before, for each such constant.

Study the following example carefully.

1.  (-Pa v -(Ex)(Px & (Ey)Qxy))                      SM
2.  (Pa & Qab)                                       SM
3.  Pb                                               SM
4.  Pa                                               2, &D
5.  Qab                                              2, &D
                    /\
6.   -Pa                -(Ex)(Px & (Ey)Qxy)          1, vD
      *
7.                      (x)-(Px & (Ey)Qxy)           6, -ED
8.                      -(Pa & (Ey)Qay)              7, UD
9.                      -(Pb & (Ey)Qby)              7, UD
                              /\
10.             -Pa               -(Ey)Qay           8, -&D
                 *                     /\
11.                         -Pb            -(Ey)Qby  9, -&D
                             *
12.                                        (y)-Qay   10, -ED
13.                                        (y)-Qby   11, -ED
14.                                        -Qaa      12, UD
15.                                        -Qab      12, UD
                                             *

Though the formula on line 13 has not been decomposed, the set being tested is quantificationally inconsistent because every branch has been flowered. Unfortunately there is a problem with all of this in cases where our domain contains infinitely many constants. Suppose we try to decompose a set that contains only the following very simple formula:

1.  (x)(Ey)Pxy        SM
2.  (Ey)Pay           1, UD
3.  Pab               2, ED

Since a new constant, b, has now occurred and we have not yet decomposed line 1 using that constant, we need to go back and decompose it using this constant. We then get,

4.  (Ey)Pby           1, UD

followed by,

5.  Pbc               4, ED

and so on. This process will never end. Branches of this sort we will refer to as non-terminating branches. Unfortunately, whether a branch is a non-terminating branch or a branch that will eventually terminate is not always easy to determine, and there is no method for constructing a tree that will always give us a result in a determinate number of steps, because quantification theory is undecidable. Nevertheless, if we proceed according to the following procedure we will at least have a result that will tell us in a great many cases whether the set we are dealing with is quantificationally consistent or inconsistent.
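Note that a non-terminating branch does not mean the set is unsatisfiable: (x)(Ey)Pxy has a finite model even though the naive tree above grows forever. The following sketch, our own illustration and not part of the text, searches small domains by brute force for one.

```python
from itertools import product

def satisfies(P, D):
    """Evaluate (x)(Ey)Pxy in a model with domain D and binary extension P."""
    return all(any(P[x][y] for y in D) for x in D)

found = None
for n in (1, 2):                       # try one- and two-element domains
    D = range(n)
    for bits in product([False, True], repeat=n * n):
        P = [[bits[i * n + j] for j in range(n)] for i in range(n)]
        if satisfies(P, D):
            found = (n, P)
            break
    if found:
        break

print(found)  # a one-element model where Paa holds is all it takes
```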

THE PROCEDURE

If at any point in the construction of the tree:

1. All the branches end in flowers, then the set being tested is quantificationally inconsistent.
2. Any branch is either:
   (a) A non-terminating branch, or
   (b) A branch on which every formula either:
       (i) Is a literal, or
       (ii) Has been checked off, or
       (iii) Is a universally quantified formula such that:
            (1) At least one instance of that formula occurs on that branch, and
            (2) For each constant occurring on that branch, an instance of that universally quantified formula containing that constant occurs on that branch,
   then the set being tested is quantificationally consistent.

Step 1: Decompose all formulas that can be decomposed using the rules in Chapter 6, and any formula that results from using those rules which itself can be decomposed using them.
Step 2: Decompose all formulas that can be decomposed using -ED and -UD.
Step 3: Decompose all existentially quantified formulas, and then, if any new formulas are listed, return to step 1.
Step 4: Decompose all universally quantified formulas that have not been decomposed using every new constant that has appeared on that branch. If no constants have appeared on the branch in question, select an instance of the universally quantified formula using the constant a.
Step 5: Return to Step 1.

EXAMPLE

1.  (Ex)(y)Pxy                           SM
2.  ((x)(Ey)Pxy > Qa)                    SM
3.  -(Ex)Qx                              SM
                    /\
4.  -(x)(Ey)Pxy         Qa               2, >D
5.  (x)-Qx              (x)-Qx           3, -ED
6.  (Ex)-(Ey)Pxy                         4, -UD
7.  (y)Pby              (y)Pby           1, ED
8.  -(Ey)Pcy                             6, ED
9.  (y)-Pcy                              8, -ED
10. -Qa                 -Qa              5, UD
                         *
11. -Qb                                  5, UD
12. -Qc                                  5, UD
13. Pba                                  7, UD
14. Pbb                                  7, UD
15. Pbc                                  7, UD
16. -Pca                                 9, UD
17. -Pcb                                 9, UD
18. -Pcc                                 9, UD

The set being tested is quantificationally consistent. Notice how we have proceeded. Step 1 told us to use the rules from Chapter 6 on any formulas we could. Since the formula on line 2 was the only one we could use one of these rules on, we did so. Then we proceeded to step 2 and applied the -ED rule on line 3. This created line 5. Next we used the -UD rule on the formula on the left branch of line 4. We then proceeded to step 3 and decomposed the formula on line 1 on both branches at line 7, and the formula on line 6 on the left branch only at line 8. Step 3 then instructed us to return to step 1. Nothing was relevant here, so we moved on to step 2. This caused line 9. Step 3 was now irrelevant, so we proceeded to step 4, which led to all the remaining formulas. Although the right branch flowered at line 10, the left branch continued to grow until it finally met condition 2(b) above.

EXERCISES

Instructions: Use the tree method to determine whether the following sets of statements are quantificationally consistent or inconsistent.

A. {(x)(y)Pxy, ((Ex)-Pxx v (x)-Qx), Qb}
B. {(x)(Px > Qx), (x)(-Qx v Rx), ((Ex)Px > -(Ex)Rx)}

Instructions: Use the tree method to determine whether the following arguments are quantificationally valid or invalid. Note: An argument is quantificationally valid just in case the set of statements consisting in its premises together with the negation of its conclusion is quantificationally inconsistent.

A. (x)(Px > Qx)
   (x)(Qx > (Rx . Sx))
   ______________
   (x)(Rx v -Px)

B. (Ex)(y)Pxy
   ((Ex)Pxx > (y)Qyy)
   ______________
   (Ex)(Ey)Qxy

C. (x)(y)(-Pxy v Qxy)
   -(Ex)(Ey)Qxy
   ______________
   (Ex)(Ey)Pxy

THE RULES

In this chapter we will introduce eight new rules. These rules, when used in conjunction with those presented in our earlier chapter on Proofs, will permit us to construct proofs in quantification theory. Henceforth, any argument whose conclusion follows from its premises using both the new and old rules will be called quantificationally valid; while any argument whose conclusion follows from its premises using only the earlier rules will be called truth-functionally valid. So, every truth-functionally valid argument will also be quantificationally valid, but not every quantificationally valid argument will also be a truth-functionally valid one. Two of our eight new rules -- Assumption and Reiteration -- might best be labeled special rules. We will discuss them first although the reason for having them will not be entirely clear until we have explored some of the remaining rules. Our third rule -- Conditional Proof -- could have been presented in our earlier chapter on proofs. It functions as a rule of inference. Rules four through seven are all Rules of Inference that deal with quantifiers. The fourth rule -- Universal Quantifier Generalization -- tells us how to introduce a universally quantified formula. The fifth rule -- Universal Quantifier Instantiation -- tells us how to use a universally quantified formula. The sixth rule -- Existential Quantifier Generalization -- tells us how to create an existentially quantified formula. And the seventh -- Existential Quantifier Instantiation -- tells us how to use an existentially quantified formula. Finally, our eighth rule -- Quantifier Equivalence -- is a Replacement Rule for exchanging quantifiers. ASSUMPTION We may use the rule Assumption, abbreviated Ass, any time, and we can assume any formula we wish. When we use this rule, however, it blocks off all future formulas until the assumption is discharged. The proof is only completed when the conclusion is obtained after all assumptions have been discharged.
(On the computer disks, in both the practice exercises and the examination, no points are given for using this rule.) We will see how to discharge assumptions shortly.

To use the rule Assumption, we type "Ass" when we are asked which rule we want to use. Then, we simply type in the formula we want to assume. REITERATION The rule Reiteration, abbreviated Reit, permits us to repeat a formula into an assumed block. Once an assumption has been discharged, however, no formula listed under that assumption can be reiterated elsewhere. To use the rule, we type "Reit" when we are asked which rule we want to use, and then we type the formula we want to reiterate. (As with Assumption, no credit is given for using Reit.) Study the examples below.

In the example on the right, on line 3 we discharged the assumption on line 2, so we cannot reiterate the formula (PvR) on line 6. CONDITIONAL PROOF Conditional Proof, abbreviated CP, is one of the two rules in our system that permits us to discharge an assumption. To use this rule, we must already have made an assumption and obtained a formula under that assumption. CP permits us to end that assumption and build a horseshoe claim whose left side is the assumption we made, and whose right side is the formula we obtained under that assumption. After the assumption has been discharged, neither it nor any formula obtained under it, can be reiterated or otherwise used. Examine the examples below.

EXAMPLES

UNIVERSAL QUANTIFIER GENERALIZATION So far, although the rules we have added to our old system may make some of our proofs less lengthy than they otherwise would be, they will not allow us to derive the conclusions of any valid arguments from their premises that we could not have derived before. However, with Universal Quantifier Generalization, abbreviated UG, everything changes. The rule UG is a building rule. It permits us to create a universally quantified formula. To create such a formula, all we need to do is go through the following process:

1. Select a formula that has already occurred in the proof and that we want to use UG on.
2. Select a constant (viz., a-u) that has not occurred in any premise or in any un-discharged assumption. (Normally, this constant will occur in the formula we have selected, but it need not have.)
3. Select a variable (viz., w-z).
4. Replace every occurrence of the constant selected in the formula we have decided to use UG on with the variable we have chosen.
5. Precede the result with (, followed by the chosen variable, followed by ).

Thus, suppose Paba is the formula we want to use UG on, suppose the constant we have selected is "a," and that this constant doesn't occur in any premise or un-discharged assumption. Suppose, further, that the variable we have chosen is "x." Using UG on Paba, we will get (x)Pxbx. Study the examples below.

Although these proofs are very similar, the one on the right makes the mistake of trying to generalize on a constant (viz., "a") that occurs in the premise on line 1. Why do we have this restriction? Because, if we didn't, we could go from a premise that said, for example, Albert is happy, to the claim that everyone is happy. In the example on the left, it didn't matter that "b" was selected. We could have chosen any constant. So our reasoning is true of everyone. UNIVERSAL QUANTIFIER INSTANTIATION This rule, abbreviated UI, is quite simple. To use it, all we need to have is a universally quantified formula occurring on an earlier line. We simply delete the quantifier, and replace every occurrence of the variable that occurred in that quantifier, with a constant of our choice. (Normally, the constant we choose will have occurred before, but this is not necessary.) Consider the example below.

EXISTENTIAL QUANTIFIER GENERALIZATION The rule Existential Quantifier Generalization, abbreviated EG, is even easier than UI. It builds an existentially quantified formula. To use it, we simply select an earlier line in the proof, replace any number of occurrences of a constant of our choice with whatever variable we want, and then precede the result with (E, followed by that variable, and then ). Thus, using EG on Paba, we can obtain, for example, (Ex)Pxbx, or (Ex)Pxba, or (Ey)Pyba, or (Ez)Pabz, or even (Ex)Paba. The idea of this rule is that since a particular individual has a certain property, it follows that someone or something has that property. Study the following proof.

Note the way in which (Ex)(Ey)Pxy gets built. We build it from Paa by adding existential quantifiers from right to left. We could, of course, have chosen variables other than x and y, but we selected the ones we did so we could use MP on line 1. EXISTENTIAL QUANTIFIER INSTANTIATION This is the toughest rule in the whole system. Like CP, it is a rule that permits us to discharge an assumption, and like UG, it contains restrictions. However, the restrictions here are enough to drive one crazy. To use the rule Existential Quantifier Instantiation, abbreviated EI, we must have an existentially quantified formula already listed. (It is the line number of this formula which we will enter when we are asked what line we want to use the rule on.) Moreover, we must have made an assumption that is an instance of the existentially quantified formula we are going to use EI on. To obtain an instance of the existentially quantified formula, delete the quantifier and replace every occurrence of the variable that was in that quantifier with a constant. Make sure the constant selected does not occur in any premise or un-discharged assumption, and that it does not occur in the existentially quantified formula, or we will be violating one of the restrictions on using the rule. Once we have derived a formula under the assumption we have made, we may discharge that assumption and enter, one line to the left, the same formula we obtained under that assumption. However, there is a restriction here. The formula we are moving left must not contain any occurrences of the constant that we selected in the assumption we are discharging. (This is the formula we are obtaining when we use EI.) Let's look at some examples. We'll see how not to do it first. Then we will see how it should be done, and why.

The use of EI here is illegal because the constant selected on line 4 occurs in the premise on line 3.

The use of EI here is illegal because the constant selected on line 4 occurs in the formula you are pulling out of the assumed block.

Now let's redo this problem. However, let's do it correctly this time.

Note that the line number we are using EI on is 1. It is this line number that we cite, when we are asked, "Which earlier line do you want to use this rule on?" The block starting with line 4 is also cited. The assumed formula on line 4 is an instance of the existentially quantified formula, and the constant selected, "b," doesn't occur in line 1, or in any premise or un-discharged assumption, or in line 8. Why all the restrictions on EI? They are, of course, designed to prevent us from deriving the conclusions of invalid arguments from their premises. Suppose, for example, that when you are creating the assumed instance of the existentially quantified formula, we permitted you to select a constant that had already occurred in a premise, or an un-discharged assumption, or in the existentially quantified formula. From a claim like, "Someone is shorter than Albert," you could then easily derive the claim that "Someone is shorter than he himself is." The proof might proceed as follows:

While we can't prevent you from assuming Saa on line 2, because the rule Assumption permits you to assume anything, our restrictions do prohibit the use of EI on line 4. Or, suppose we allowed you to pull a formula out of the assumed block that contained the constant selected when you created the assumed instance of the existentially quantified formula. Then, from "Someone is happy," you could derive the claim that "Albert is happy." The proof would develop as follows:
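To see concretely why these restrictions matter, here is a small countermodel for the first bad inference, from "Someone is shorter than Albert" to "Someone is shorter than he himself is." The domain and extension below are our own illustration, not from the text.

```python
# Domain {0, 1}; let Albert be individual 1; S[x][y] reads "x is shorter than y".
D = [0, 1]
albert = 1
S = [[False, True],     # 0 is shorter than Albert...
     [False, False]]    # ...and nothing is shorter than itself

premise = any(S[x][albert] for x in D)    # (Ex)Sxa -- true in this model
conclusion = any(S[x][x] for x in D)      # (Ex)Sxx -- false in this model

print(premise, conclusion)  # True False : the inference is invalid
```

Since the premise is true and the conclusion false in this model, no correct set of rules can license the derivation, which is exactly what the restrictions on EI guarantee.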

Unfortunately, it is all the restrictions that make EI such a difficult rule. Before you use the rule, please check to see that you haven't violated them. QUANTIFIER EQUIVALENCE As the name suggests, Quantifier Equivalence, abbreviated QEQ, is an equivalence or replacement rule, and so we can use it on a part of a formula. It tells us that a tilde followed by an existential quantifier is the same thing as a universal quantifier followed by a tilde; or, in ordinary English, it tells us that "It is not true that something has such-and-such" is the same thing as "Everything lacks such-and-such." Alternately, it tells us that a tilde followed by a universal quantifier is the same thing as an existential quantifier followed by a tilde. In other words, it tells us, "It is not true that all things have such-and-such" is the same thing as, "Some things lack such-and-such." Symbolically: -(Ex)Px = (x)-Px and -(x)Px = (Ex)-Px The following example uses QEQ:

STRATEGIES

The following ideas may help some in your construction of proofs:

1. Look first at the premises and see if any of them are either negated existentially quantified formulas, or negated universally quantified formulas. If so, use QEQ to push the tilde in. Keep using this rule until the tilde is to the right of all the quantifiers. Do the same for the conclusion.
2. Now carefully examine all the premises you didn't use QEQ on, and all the claims you got when you used QEQ. If any of these claims are existentially quantified formulas, you need to assume an instance of them, and work toward using EI. If any contain two existential quantifiers in succession (e.g., (Ex)(Ey)Pxy), you'll need to make two assumptions. (In the example above, first assume (Ey)Pay, and then assume Pab.)
3. Now look at the conclusion. If it's a universally quantified formula, imagine it without the quantifier. At each point where the variable that was in that quantifier occurs you need to have the same constant, and that constant must not occur in the initial list of formulas.
4. If you are looking for a horseshoe claim, assume its left side and try to get its right side. Then use CP.
5. Now look back at your initial list of formulas. If any of them are universally quantified formulas, use UI on them. Select constants here that have occurred before.
6. At this point, try to use the rules and the strategies for them that we examined in the earlier chapter on Proofs.
7. Work from the bottom of the problem up, and then start at the top and work down. After you have made some moves, go back to where you left off at the bottom, and work up some more. Then try to make more moves from the top down.
8. If you can't solve the problem, put it away for a few minutes.

Let's see how these ideas might work in practice. We know we have succeeded when we get the conclusion listed. So let's start by writing it down at the bottom of the page.

The formula on line 1 is a negated-universally quantified formula, so let's use QEQ to push the tilde to the right of the quantifiers.

Now let's start working from the top of the problem down. Since the formula we have written on line 4 is an existentially quantified formula, we need to assume an instance of it. We must not use the constant "a," however, since it already occurs in the second premise. So let's use "b."

Since the assumption we just made is also an existentially quantified formula, we need to assume an instance of it. Here again however, we must select a new constant. Let's choose "c." Our problem will then look like this:

The idea now is to get the conclusion under this assumption and then pull this conclusion over twice using EI. Let's work on the end of the proof some.

Our goal is obviously to get a universally quantified formula. To get it we need to obtain an instance of it where the constant we are generalizing on doesn't occur before. Let's assume this constant is "d." Then the formula we have to obtain is (Rad>(Ey)Sy).

This formula is a horseshoe claim. We know we can get it by using CP if we assume its left side and obtain its right side under that assumption. So let's assume Rad, and try to get (Ey)Sy under this assumption. We now need to get (Ey)Sy, and we are going to get this by using EG after we have obtained an

instance of it. The rule here is that the constant should have occurred before. But are we getting (Ey)Sy from Sa, or from Sb, or Sc, or Sd? Before we decide this, let's look at line 2. It is a universally quantified formula, and from it we can get, for example, (-Pa>(y)(Ray>Sa)), or (-Pb>(y)(Ray>Sb)), or . . . Wait a second. Hold the phone. Line 6 contains the constant "b" after the predicate "P." So we now know we need to instantiate on line 2 with "b." Let's do it. The remainder of the problem should now be a piece of cake. We are going to use DeM on line 6, and then pull -Pb out of that. Then we'll use MP on -Pb and line 8. This will give us (y)(Ray>Sb). We can then use UI on this to get (Rad>Sb). That will yield Sb, etc. If you thought proofs were difficult before, you had no idea! But with a bit of practice you should become proficient at them.

EXERCISES

A. (x)(Px>(y)Qxy)
   (x)(y)(Qxy>Ryx)
   / (x)(Px>(Ey)Ryx)

B. (Ex)(y)(Pxy>Qxy)
   (x)(y)(Qxy>Rxy)
   / ((x)Pxx>(Ey)Ryy)

C. -(x)(y)Pxy
   (x)(y)(Pxy=Qxy)
   / (Ex)(Ey)-Qxy

D. (x)(y)(Qxy>Rxy)
   -(x)Pxx
   / (Ex)(Ey)Qxy

E. (Ex)(Ey)Pxy
   ((x)(Ey)Pyx>(z)Qz)
   / (x)(Rx>Qx)

This system expands on the one developed in Chapter 7B. All the rules used there are applicable here, together with the following new ones.

The rule Universal Quantifier Elimination tells us how to use a universally quantified formula. We should view the rule as a two-step process. It tells us first to drop the quantifier, and then select a constant--a lower-case letter from a to u--and replace every remaining occurrence of the variable that appeared in that quantifier with that constant. Study the example below.
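The two-step process can also be sketched in code. The function below is our own illustration, not part of the book's software; it assumes the book's notation (variables w-z, constants a-u) and that the outer quantifier's variable is not reused by an inner quantifier.

```python
def instantiate(formula, constant):
    """Drop a leading universal quantifier "(v)" and replace every remaining
    occurrence of its variable v with the chosen constant."""
    variable = formula[1]
    # sanity checks: the formula must start with "(v)" for some variable v
    assert formula[0] == "(" and formula[2] == ")" and variable in "wxyz"
    return formula[3:].replace(variable, constant)

print(instantiate("(x)(Px>Qx)", "a"))   # (Pa>Qa)
print(instantiate("(y)Pyby", "c"))      # Pcbc
```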

Universal Quantifier Introduction tells us how to create a universally quantified formula. It tells us that we can select a formula, a constant, and a variable, and replace every occurrence of that constant in the formula with that variable. We then precede this with a left parenthesis plus that variable plus a right parenthesis. Unfortunately, there is an important restriction on using this rule. The constant we choose cannot occur in any un-discharged assumption or in any premise. Examine the examples below carefully.

Existential Quantifier Introduction allows us to create a formula whose main operator is an existential quantifier. The rule is very easy. To use it all we need to do is select a variable, a constant, and a formula. We then replace any number of occurrences of the constant we have chosen with that variable in that formula. We then precede the formula with a left parenthesis, then the existential quantifier E, followed by the variable we chose, and then a right parenthesis.

Of all the rules in this system, Existential Quantifier Elimination is the most complex. To use the rule we must have an existentially quantified formula already listed. We then need to assume an instance of this formula. (To obtain an instance of the formula, chop off the quantifier, and replace every occurrence of the variable that occurred in that quantifier with a constant.) Under this assumption we then need to derive a formula. The rule tells us we can stop the assumption we made. Moreover, we can move the formula we derived under it to the left of that assumption, if we have not violated the following restrictions:

1. The constant we selected when we assumed an instance of the existentially quantified formula cannot occur in the formula we are moving over.
2. This constant also cannot occur in any un-discharged assumption.
3. The constant cannot occur in the original existentially quantified formula.

EXERCISES

- What is Grammatical GenderTransféré parthangdaotao
- Analysis on Sexism in Contemporary American AdvertisementsTransféré parthangdaotao
- Causation and External Arguments_PylkkanenTransféré parthangdaotao
- A Finer Look at the Causative-Inchoative Alternation_PlnonTransféré parthangdaotao
- Foregrounding in Poetic DiscourseTransféré parthangdaotao
- Munro - Probabilistic Representation SFGTransféré parlinusalbertus
- Historical and Dialectal Variants of Chinese General Classifiers2222Transféré parthangdaotao
- 10.1.1.105Transféré parTô Trọng Danh
- List of Chinese classiﬁers2222Transféré parthangdaotao
- Classifier in Kam-Tai LanguagesTransféré parthangdaotao
- Classifier Assignment by Corpus-based Approach123Transféré parthangdaotao
- Pre Course Reading Teaching Skills 1Transféré parthangdaotao
- Educating SchoolteachersTransféré parummahzy
- Schools for the 21st Century Resource DocumentTransféré parthangdaotao
- 2011 Peer-To-Peer Violence and BullyingTransféré partruongkhoilenguyen
- Promoting Action Research in Singapore SchoolsTransféré parthangdaotao
- Pragmatics of Classifier Use in Chinese Discourse123Transféré parthangdaotao
- thesisTransféré parnguyenquang
- Classifier Systems and Noun Categorization Devices in BurmeseTransféré parthangdaotao
- Comparing Classifier Use in Chinese and Japanese123Transféré parthangdaotao
- Vietnamese and the Structure of NPTransféré parthangdaotao
- Action Research in Education- A Review and a Case ReportTransféré parthangdaotao

- Zalamea – Peirce's Continuum - Part 1Transféré parcsp-peirce
- solution1-10Transféré partntxp256
- Van Fraassen, 1966 - Singular Terms, Truth-Value Gaps, And Free LogicTransféré parAnderson Luis Nakano
- cu 1.pdfTransféré parAditi Agarwal
- Proof TheoryTransféré paredutakeo
- The Schellingian Alternative to Hegel.. From Bulletin of the Hegel Society of Great BritainTransféré parandrewbowie
- Paradoxes and Their ResolutionsTransféré parAvi Sion
- Konstruktivizam i Nauka -Transféré pardakintaur
- Fuzzy Logic With Engineering Application_Timothy J RossTransféré parVishwanath Ketkar
- Martin-Lof, Verificationism then and nowTransféré parPatoMoyaM
- Logic Made Easy (2004)Transféré parapi-3830039
- John McDowell Meaning, Knowledge, And RealityTransféré paranastacia33
- Martin Löf Verificationism Then and NowTransféré parapplicative
- Conceptions of Truth in Intuitionism[1]Transféré parCamila Jourdan
- Nagatomo_logic of Diamond-SūtraTransféré parMigellango
- Bourbakis Destructive Influence on the Mathematisation of EconomicsTransféré parvikky90
- Coursang1.PDFTransféré parAbel Ganz
- Virno Angls, Genl Intellct IndividuationTransféré parJohannes Knesl
- Elk an i Eee DiscussionTransféré parSupraja Sundaresan
- Intuitionism Essay PDFTransféré parSallie Barnes
- Fuzzy Logic With Engineering ApplicationsTransféré parburhanseker
- Feminist Interpretations AristotleTransféré parsayonee73
- The Theory of Thought 1000108153 (1)Transféré parAbdul Sami Abdul Latif
- A Relationship Between Lacanian Theory of Sexuation and Brouwerian IntuitionismTransféré parAnonymous uFZHfqpB
- Ultra ViresTransféré parNoor Emilia
- AssertionTransféré parCasey Wiley
- Beziau - Bivalence, Excluded Middle and Non ContradictionTransféré parpolity2
- Louis O. Kattsoff - Logic and the Nature of RealityTransféré parunperrofumador
- goedelTransféré parAbram Demski
- 40a.10-Unanswered-questions-piya.pdfTransféré parAshish Thapa

## Bien plus que des documents.

Découvrez tout ce que Scribd a à offrir, dont les livres et les livres audio des principaux éditeurs.

Annulez à tout moment.