
Logic for Linguists

Christopher Potts
UMass Amherst

LSA Institute 2007, Stanford, July 1–3

June 14, 2007

This work is licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.
Contents

1 About this course 1


1.1 Ambitions and limitations . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 The language–logic connection . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Biases of this mini-course . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Useful texts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Logic elsewhere in linguistics . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 Course requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2 Foundational concepts 5
2.1 Truth conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Compositionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Models and talking about models . . . . . . . . . . . . . . . . . . . . . . 10
2.4 Direct and indirect interpretation . . . . . . . . . . . . . . . . . . . . . . 10

3 Technical preliminaries 12
3.1 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.2 Ordered tuples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3 Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.4 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

4 Propositional logic 21
4.1 The usual presentation of PL . . . . . . . . . . . . . . . . . . . . . . . . 21
4.2 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.3 PLf : A functional perspective on PL . . . . . . . . . . . . . . . . . . . . 22
4.4 Comparing PLf with PL . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.5 A linguistic theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.6 Assessment of the linguistic theory . . . . . . . . . . . . . . . . . . . . . 27
4.7 The intensionality of propositional logic . . . . . . . . . . . . . . . . . . 29

5 Extensional lambda calculus 30


5.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.2 Lambda calculi: General definition . . . . . . . . . . . . . . . . . . . . . 30
5.3 Defining specific lambda calculi in your work . . . . . . . . . . . . . . . 32
5.4 Commentary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.5 Linguistic theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.6 Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.7 Type mismatches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

6 The axioms of the lambda calculus 41
6.1 A general note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.2 Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6.3 Alpha conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6.4 Beta reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.5 Eta reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

7 Intensions 44
7.1 The limits of extensional models . . . . . . . . . . . . . . . . . . . . . . 44
7.2 An intensional logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.3 Linguistic theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.4 Commentary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.5 Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

8 Building a suitable machine 51

9 Quantifiers 52
9.1 The view from first-order logic . . . . . . . . . . . . . . . . . . . . . . . 52
9.2 The view from generalized quantifier theory . . . . . . . . . . . . . . . . 52
9.3 Conservativity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
9.4 Two other properties of determiners . . . . . . . . . . . . . . . . . . . . 55
9.5 The terrain not covered . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

10 Pragmatic connections 58
10.1 Indexicality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
10.2 Deictic pronouns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
10.3 Propositions and probabilities . . . . . . . . . . . . . . . . . . . . . . . . 62

References 68

A Problems 69
A.1 Relative truth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
A.2 Tarski’s hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
A.3 Idioms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
A.4 Nondeterministic translation . . . . . . . . . . . . . . . . . . . . . . . . 70
A.5 A subtlety of predicate notation . . . . . . . . . . . . . . . . . . . . . . . 70
A.6 Exclusive union . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
A.7 Is it a function? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
A.8 Characteristic sets and functions . . . . . . . . . . . . . . . . . . . . . . 72
A.9 Some counting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
A.10 Schönfinkelization and implications . . . . . . . . . . . . . . . . . . . . . 73

A.11 nor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
A.12 The type definition for PLf . . . . . . . . . . . . . . . . . . . . . . . . . 74
A.13 A more readable PLf . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
A.14 Interdefinability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
A.15 Relating PL and PLf . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
A.16 PLf and negation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
A.17 PLf and implication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
A.18 Exclusive disjunction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
A.19 PLf and compositionality . . . . . . . . . . . . . . . . . . . . . . . . . . 77
A.20 Conjunctions and constituency . . . . . . . . . . . . . . . . . . . . . . . 77
A.21 Coordination and function composition . . . . . . . . . . . . . . . . . . 77
A.22 PL intensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
A.23 Alternative type definition . . . . . . . . . . . . . . . . . . . . . . . . . 78
A.24 Possible types given assumptions . . . . . . . . . . . . . . . . . . . . . . 79
A.25 Vacuous abstraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
A.26 Partiality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
A.27 Novel types and meanings . . . . . . . . . . . . . . . . . . . . . . . . . 80
A.28 Types, expressions, and domains . . . . . . . . . . . . . . . . . . . . . . 81
A.29 Recursive interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
A.30 An alternative mode of composition . . . . . . . . . . . . . . . . . . . . 81
A.31 Assignments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
A.32 Variable names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
A.33 Cross-categorial and . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
A.34 A relational reinterpretation . . . . . . . . . . . . . . . . . . . . . . . . . 83
A.35 What’s the source of the ill-formedness? . . . . . . . . . . . . . . . . . . 84
A.36 Building a fragment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
A.37 Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
A.38 Beta reductions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
A.39 Eta conversion and distinguishable meanings . . . . . . . . . . . . . . . 86
A.40 Extensional beliefs? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.41 Modals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.42 Hintikka’s believe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.43 Individual concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.44 How many worlds are there? . . . . . . . . . . . . . . . . . . . . . . . . 88
A.45 Finding common ground . . . . . . . . . . . . . . . . . . . . . . . . . . 88
A.46 Definites and semantic composition . . . . . . . . . . . . . . . . . . . . 88
A.47 Contradictory beliefs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
A.48 Degree constructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
A.49 Singular and plural . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
A.50 Exactly 1 in first-order logic? . . . . . . . . . . . . . . . . . . . . . . . . 90

A.51 A closer look at the universal . . . . . . . . . . . . . . . . . . . . . . . . 90
A.52 All and only Lisa’s properties . . . . . . . . . . . . . . . . . . . . . . . . 90
A.53 Intensional quantifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
A.54 Nonconservative determiners? . . . . . . . . . . . . . . . . . . . . . . . 91
A.55 Coordination and monotonicity . . . . . . . . . . . . . . . . . . . . . . . 92
A.56 Indexicals as proper names? . . . . . . . . . . . . . . . . . . . . . . . . 92
A.57 Indexicals and constants: A crucial difference . . . . . . . . . . . . . . . 92
A.58 Denotations as sets of assignments . . . . . . . . . . . . . . . . . . . . . 92
A.59 Dynamic indefinites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
A.60 Probabilities and sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93


Handout 1: About this course


This handout provides general information about this course: its goals, its
limitations, and its biases. It suggests textbooks and papers that you can use
during the course or after it to deepen your knowledge of the material. And it
seeks to give you a sense for the connection between language and logic that is
central to all this material (and, arguably, to all of formally-inclined linguistic
meaning studies).

1.1 Ambitions and limitations


This course is not an introduction to logic, nor is it an introduction to semantics. Rather,
it’s an introduction to logic as it is used in linguistic semantics. I hope to clarify the
connection between the two, and I want to try to convince you that the connection, though
exceedingly difficult to make, is capable of yielding insights into how natural languages
express meaning.
I will try not to presuppose anything beyond a basic familiarity with linguistic argu-
mentation. If you’ve had an undergraduate introduction to linguistics or something com-
parable, then you should be in good shape, and I will rely on you to let me know if I’ve
moved too fast or presupposed too much.
Chris Barker has the following quotation at his website:

“In mathematics you don’t understand things, you just get used to them.”
—John von Neumann

This is a deep insight into the way in which humans come to understand semantic theory as
well. It’s here as a kind of warning: if this is your first time through material like this, then
you can’t expect to fully understand all the hows and whys. It takes repeated exposure,
and it takes time for these things to sink in. Thus:

• Don’t give up if you feel like you don’t quite see what’s happening.

• Expect to feel confused at times, as we settle into this web of interdependent con-
cepts.

• Raise your hand often — it’s the only way to get your needs met.

• Expect this to take a lifetime.


1.2 The language–logic connection


This section takes us rapidly from empirical predictions to the construction of a semantic
theory, pausing only to identify where the linguistic (empirical) work happens and where
the logical (non-empirical) work happens, and how the two are connected.

1.2.1 Empirical predictions


This is the heart of it. You’ve gathered some data and worked through it. You believe
you’ve identified a pattern. And then you take the leap: from the finite data set, you
generalize over the infinitude of objects in the language(s), and, in doing so, you make
some predictions about how things will work in areas you’ve not explored.
It’s a risky business, but it’s the lifeblood of our field. Whatever you do in the area of
linguistic meaning, these descriptive generalizations should be at center-stage, and every-
thing you do should be geared towards illuminating them.

1.2.2 Logics and their entailments


A well-defined logical system will validate some claims and invalidate others, and this will
be reflected (at least to some extent) in the intended models for that logic. For instance,
classical propositional logic validates the inference from a single occurrence of p to two
(or more) occurrences of p. But not all logics validate that inference. Whether a logic
validates an inference is not an empirical matter. It is a definitional matter. The logician
might design his logical system with some facts about our world in mind, but this does not
actually inject an empirical component into the logic.

1.2.3 Translation
In formal linguistic semantics, analysis always involves translating from the descriptive
generalizations into a formal system. Papers vary in their explicitness about the pieces
(the natural language, the formal system, the bridge between), but they’re always present.

1.2.4 A welcome loss of control


Why translate? I see three central, intimately related reasons:
• A more precise statement of the original generalization.
• A deeper understanding of the original generalization.
• A chance to see the generalization interact with other assumptions and, in turn,
explore the consequences of those interactions.


1.2.5 A cautionary note about the connection


It is crucial, going forward, to keep sight of the fact that any connection you make between
natural language and logic is empirical. For instance, the propositional logic connective ∧
is often glossed as and, and this word is historically part of the motivation for defining ∧.
But if we, as linguists, want to connect and with ∧, we should make that step explicit, and
we should keep in mind that, if incorrect, it doesn’t tell us about ∧ (the meaning of which
is just stipulated), but rather that we need to look to another logical system.

1.3 Biases of this mini-course


I’ve made some overarching decisions about how to explore the connection between lan-
guage and logic and, as a result, a lot of things will remain largely unexplored here.
• Compositional interpretation means systematically mapping the constituents of syn-
tactic structures to meanings.
• Meanings are model-theoretic objects, and semantic analysis is in turn largely about
exploring the models for whatever logical language we have set up. As a result,
proof theory will play only a minor role (but see handout 6).
• Intensions are based in possible worlds. This is largely a pedagogical move. It’s not
meant to downplay the importance of events and situations. It just reflects the fact
that the best path towards an understanding of those small objects is through these
absurdly large ones.
Please don’t mistake these biases for fundamental discoveries. In truth, at this point,
it’s all up for grabs. (For quite different views of how semantics can or should work:
Barwise and Perry 1983; Ginzburg and Sag 2001; Jackendoff 1996; Kracht 2007.)

1.4 Useful texts


1.4.1 Textbooks
• Heim and Kratzer (1998): Precise linguistic analysis is the main concern. Contains
excellent examples of how to construct a semantic analysis. Intensions appear only
at the end, but Kai von Fintel and Irene Heim are working on an intensional follow-
up, and Kai often makes drafts available at his website.
• Gamut (1991a,b): Detailed presentations of the logics involved. Covers not only
the core of semantics, but also many of the surrounding areas. If you want to study
Montague in the original, then Volume II is a great place to start towards that goal.


• Carpenter (1997): A wonderfully rich toolkit. If you pair it with Heim and Kratzer
(1998), you get a great introduction to analysis and the tools you need to do it.

1.4.2 Foundational work


• Lewis (1976): A classic general statement about how a linguistic semantics can, or
should, look. It is worth returning to this article again and again. (Note also how
few symbols it contains!)

• Portner and Partee (2002): This contains many classic articles in linguistic seman-
tics and pragmatics. Just one caveat: for some, the 1970s notation (adapted from
Montague’s original papers) makes the work somewhat inaccessible.

• Halvorsen and Ladusaw (1979): A great way into Montague (1974). A good next
step after Gamut (1991b) but before Montague semantics itself.

• van Benthem (1991) is a wonderful little guide to, among other things, the ways in
which we can interpret types and lambdas.

1.5 Logic elsewhere in linguistics


This course is focused on linguistic meaning, especially the conventionalized sound–
meaning connections of semantics. But the tools can be applied throughout linguistics.
Every theory has its models. In semantics, these can be intensional models. In pragmatics,
they can be discourse situations. In phonology, morphology, and syntax, they can be trees
of various dimensions. In phonetics, they can be the vocal apparatus or sounds themselves.
And so forth. The kinds of logic discussed here can be used to talk about and characterize
all of these objects.

1.6 Course requirements


Attendance is mandatory. At the end of this packet is a large set of problems. They are
referenced in the margins of the handouts where they become appropriate to try. Please do
at least eight of them. They are marked variously as ‘practice’, ‘open-ended’, and ‘hard’.
Try to do a mix. The answers will be available at the course website at the end of the
course. I will end up doing a random assortment of them during the lectures, but don’t let
this stop you from doing them as well.


Handout 2: Foundational concepts


Our semantics will be geared towards getting at truth-conditions, we will abide
throughout by (context-free) compositionality, and we will do mostly indirect
interpretation. This handout explicates these foundational ideas, and it ad-
dresses the role that model theory plays in present-day semantic investigation.

2.1 Truth conditions


Linguistic semantics tends to be about truth-conditions. The central goal is to obtain a
systematic procedure for determining, for each sentence, what conditions would have to
be like for that sentence to express a truth.
This assumption is so deeply embedded in semanticists’ minds that even non-truth-
conditional things like questions and imperatives are often reduced to things with truth
conditions.

2.1.1 Flexibility on the topic of truth and falsity


As linguists, we should be wary of direct claims about the nature of reality, truth, or falsity.
We’re in the business of studying language, not physics, metaphysics, etc. For this reason,
we’ll use T for truth and F for falsity. And we’ll remain largely silent on whether truth
conditions are determined relative to an external reality, or mental representations, or the
mutual beliefs of the participants in a discourse. Our theory should be compatible with all
these views. There might, after all, be a place for all of them.
Ex. A.1

2.1.2 An illustration and commentary


In truth-conditional semantics, we often arrive at equations like the following:

(2.1) The sentence Lisa is a linguist is interpreted as T if Lisa is a linguist, else it is interpreted as F.

In the simple systems we’ll study initially, there are just two values for sentences, T and
F. This is a simplification; we’ll need more and different values before long (handout 7).
But, for now, it means that we can state truth conditions in a way that makes them more
obviously like definitions (equality statements):

(2.2) The sentence Lisa is a linguist is interpreted as T if and only if (iff) Lisa is a
linguist.


The phrase ‘if and only if’ has the same force as an equal sign. In logic and linguistics, it
is often abbreviated to ‘iff’. Some equivalent formulations are ‘just in case’ and ‘exactly
when’.

2.1.3 Object language and metalanguage


The above equations, (2.1) and (2.2), have an air of circularity about them. After all, the
string ‘Lisa is a linguist’ appears on both sides of ‘iff’ (our equal sign). But the definition
is not circular. Let’s look closely at it to see why.
On the left side of ‘iff’, we have a sentence of a natural language. By convention, in
these notes, those are given in italic font. The sentence is one of our objects of study —
something from our object language. On the right side of ‘iff’, we have a statement in
the language that we are currently using to make theoretical statements and talk about the
properties of our object language. This is our metalanguage. We don’t inquire into its
properties. We have to rely on our mutual understanding of its terms and structures.
We needn’t use English as our metalanguage. (In fact, we will generally resort to
formal languages for this purpose, for the reasons discussed in handout 1, section 1.2.) A
disadvantage of using English is that its complexity is formidable — enough so that we
can spend our working lives as linguists studying it! It is often easier to have a simpler,
more obviously regulated language for stating truth conditions. Set theory (handout 3,
section 3.1) is a natural choice; (2.3) is equivalent to (2.1) and (2.2).

(2.3) Lisa is a linguist is interpreted as T iff Lisa ∈ {x | x is a linguist}

Statements of set theory have a rigorously defined interpretation. We needn't rely on
mutual understanding to figure out what the metalanguage statement means. We can just
look to the definitions. For this reason, it is extremely useful — assuming one’s audience
is also versed in set theory. But many audiences aren’t, making English (or some other
natural language) a better choice on many occasions.
The bottom line should be this: pick the vehicle for your ideas that will best communicate your proposal to your audience.
Ex. A.2

2.1.4 Not our field


Even with the charge of circularity dropped, our definitions often seem lacking in insight.
For instance, (2.3) has different languages on the left and right of ‘iff’, but it still doesn’t
tell us what a linguist is.
And this is as it should be. After all, though linguists might get to define linguist, they
have no business telling us what it takes to be a planet, or a tree, or what it means for two


people to be in love, or married, or employed by someone else. These topics are clearly
nonlinguistic. So we compromise by writing informal things like ‘. . . is a linguist’ and ‘x
loves y’, on the assumption that the relevant expert could fill out the truth conditions.
This means that, in the end, linguists have very little to say about the meanings them-
selves. In fact, our theories are deliberately defined so as to make as few commitments
as possible about what meanings are. We are more interested in how meanings (whatever
they are) interact with each other to produce ever richer meanings.

2.1.5 Why it can’t be syntax alone


Prior to and during the rise of Montague Grammar (Montague 1974), the generative se-
manticists were doing pioneering work in uncovering subtle semantic distinctions and gen-
eralizations. But, from our perspective, their approach had one strange property: it never
got to the point of interpretation. Syntactic structures mapped to semantic structures —
expressions of a language called Markerese. But the semantic structures were just highly
abstract, feature-laden syntactic structures. They appeared not to be models of anything
but themselves. David Lewis gets to the heart of it at the start of ‘General semantics’:

Semantic markers are symbols: items in the vocabulary of an artificial language we may call Semantic Markerese. Semantic interpretation by means of them amounts merely to a translation algorithm from the object language to an auxiliary language Markerese. But we can know the Markerese translation of an English sentence without knowing the first thing about the meaning of the English sentence: namely, the conditions under which it would be true. Semantics with no treatment of truth conditions is not semantics. (Lewis 1976:1)

A specific example helps to highlight the importance of this point. Assume that I speak
neither Italian nor Irish. Suppose that I learn that the Irish noun madra translates as cane
in Italian. Have I learned the meaning of madra or cane? I have not. I've just learned a
bit about the translation function that takes Irish to Italian. To learn the meaning of madra,
I need to learn what conditions have to be like for a given object to count as having the
property named by madra.
The same is true internal to a language. I might learn that the English words woodchuck and groundhog are synonymous. But if I don't know what it takes for d is a woodchuck or d is a groundhog to be true, then I don't know the meaning of woodchuck or
groundhog. It’s for this reason that dictionaries are not semantic theories. They provide
(language-internal) translations, appealing always, at some point, to their readers’ knowl-
edge of semantics.


2.1.6 Beyond truth conditions


Truth-conditional semantics is a partial theory of meaning, in the sense that it has nothing
to say about many of the meanings that we perceive. This is widely acknowledged, and
linguists have worked hard to reach beyond its boundaries. Theories of pragmatics help
us to predict what meanings a given sentence can give rise to relative to a specific context
of utterance (handout 10). Theories of framing or connotations help us to understand
how specific words make salient a complex web of additional meanings. And so forth.
These are important developments. In these notes, I concentrate on truth-conditions for a
handful of reasons. One is historical: modern linguistic semantics is founded on truth-
conditionality. But I believe that this historical explanation has an intellectual justification
as well. Truth-conditional semantics is an excellent foundation on which to build more
complex, more all-encompassing theories of linguistic meaning.

2.2 Compositionality
Here’s a very broad, oft-repeated statement of the principle of compositionality in linguis-
tic semantics:

(2.4) The meaning of an expression is a function of the meanings of its parts and the
way they are syntactically combined. (Partee 1984:153)

This seems simple enough. The definition harbors some technical notions behind common
language (‘is a function of’, ‘syntactically combined’), but the intuition behind it is clear:
our meaning for Chris smiled should be unique and it should be fully determined by the
meaning of Chris, the meaning of smiled, and some general principle or principles for
putting these two meanings together.

2.2.1 Where do we bottom out?


Of course, for all we know, Chris and smiled might each be complex expressions them-
selves. This seems right, in fact, for smiled. But compositionality is in effect here as well:
the meaning of smiled should be determined by the meanings of its parts and their mode
of combination. Once we have that meaning, we can use it to derive the meaning of Chris
smiled.
Where does this process of decomposition end? That’s an empirical question. For ex-
ample, perhaps Chris is semantically atomic, and thus we might get away with stipulating
its meaning. But smiled is, as suggested above, a complex expression. We could claim
that it is made up of the root smile and the past-tense morpheme. But there are plenty of
cases in which it is not obvious where we would ‘bottom out’. Idioms raise this question:


when kick the bucket is used to mean ‘die’, should we, as semanticists, be looking at the
syntactic subphrases of this expression, or should we just assign a meaning to the whole?
Ex. A.3
2.2.2 The importance of syntax
The other empirical aspect of compositionality concerns the syntax. In a properly designed linguistic theory, the 'parts' mentioned in (2.4) should be given to us
by the syntactic theory. For these notes, we’ll say that the parts correspond perfectly to
the nodes in syntactic structures. This approach has much to recommend it, and it is also
a useful way to be precise about how compositional interpretation is controlled.

2.2.3 Unconstrained compositionality is not constraining


Over the years, linguists, logicians, and semanticists have collectively discovered that there
is little or no empirical bite to definition (2.4). Linguists have developed intuitions about
what sort of analysis respects compositionality, but these intuitions are hard to make pre-
cise. “I know it when I see it” is the phrase of the day.
But there are alternative statements of the general idea. We can’t delve too deeply
into this here (see Janssen 1997; Partee 1997; Barker and Jacobson 2007; Dowty 2007),
but I think it is worth highlighting one refinement: context-free compositionality, which
seems to come close to what linguists have in mind when they call for compositional
interpretation.

2.2.4 Context-free compositionality


Context-free grammars provide a formal characterization of locality. (When they establish
long-distance dependencies, they do so via local connections.) Locality can be defined in
terms of local trees: a mother node and its daughter(s).
If we abide by context-free compositionality, then all semantic operations involve
putting the daughters’ meanings together to get the value of the mother. We never reach
deeper into the tree.
(2.5)      f(α, β)
           /     \
          α       β
      (where f is the appropriate mode of composition)
From another perspective: context-free compositionality says that the daughters are the
input and the mother is the output. The machine is the mode of composition (i.e., f in
(2.5)).
This seems quite close to what linguists have in mind when they call for compositional
analysis. There are complications, to be sure (Dowty 2007; Barker 2007), but this is a good
intuitive start. All the analyses in these handouts abide by context-free compositionality.
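If it helps to see this idea run, here is a minimal sketch of context-free interpretation in Python. The tree encoding, the toy lexicon, and the single mode of composition are illustrative assumptions, not part of the theory:

```python
# A node is either a lexical item (a string) or a local tree (f, daughters),
# where f is the mode of composition for that local tree.

def interpret(node, lexicon):
    if isinstance(node, str):                  # terminal: look up its meaning
        return lexicon[node]
    f, daughters = node                        # nonterminal: mother = f(daughters)
    return f(*[interpret(d, lexicon) for d in daughters])

def apply_fn(fun, arg):                        # one mode of composition: application
    return fun(arg)

# Toy lexicon: 'smiled' denotes a function from entities to truth values.
lexicon = {'Chris': 'chris', 'smiled': lambda x: x == 'chris'}

tree = (apply_fn, ['smiled', 'Chris'])         # [S [V smiled] [NP Chris]]
print(interpret(tree, lexicon))                # True
```

The crucial property is that interpret only ever sees a mother and its daughters; it never reaches deeper into the tree.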


2.3 Models and talking about models


One of the most fundamental distinctions in logic (and linguistics) is also one of the hardest
to keep track of at the beginning: the logical language is a way of talking about model-
theoretic objects, and thus it cannot be conflated with the objects themselves.
Why is this hard? This difference is as dramatic as the difference between the word
war and wars themselves, but it often grows fuzzy when one reads the technical literature.
Two reasons:

Constrained by the medium. If I want to tell you the meaning of Geoff Pullum, I cannot
very well drag him around with me. So I am forced to resort to symbols. Where possible,
I use pictures. But pictures are cumbersome. Thus, many authors rely on very subtle
typographic differences to distinguish the languages from the things the language talks
about. For instance, this is common:
(2.6) jog is interpreted as jog
On the left, we have a piece of language (some symbols). On the right, we are supposed
to imagine that we have the property of jogging. Semanticists must be typographically
aware!

Some sloppiness. A semanticist might say "Sam combines with jog" when she means to say "the individual named by Sam combines with the interpretation of jog".

Anyway, onward into the models; everything else is in the service of studying them.

2.4 Direct and indirect interpretation


Very broadly speaking, there are two ways that one can move from a natural-language
expression to its interpretation: one can go directly, or one can stop off at an intermediate
logical language. There are conceptual advantages and disadvantages to both, and it might
even be possible to distinguish the two approaches empirically, though that remains one of
the great open conceptual questions of linguistic semantics.

2.4.1 Direct interpretation


In a system of direct interpretation, we map syntactic nodes directly to model-theoretic
objects:
(2.7) natural language =⇒ model          (interpretation)


Two examples:

(2.8) Bart =⇒ [picture of Bart]          (interpretation)

(2.9) Simpson =⇒ {[pictures of the five Simpsons]}          (interpretation)

2.4.2 Indirect interpretation


In a system of indirect interpretation, syntactic nodes map to logical formulae (expressions
in our meaning language). These logical formulae are then interpreted (mapped to) model-
theoretic objects:
(2.10) natural language =⇒ logic =⇒ model          (translation, then interpretation)

(2.11) Bart =⇒ bart          (translation)
       bart =⇒ [picture of Bart]          (interpretation)

(2.12) Simpson =⇒ simpson          (translation)
       simpson =⇒ {[pictures of the five Simpsons]}          (interpretation)

2.4.3 Discussion
Many practitioners of indirect translation countenance a stopping-off point — the logical
formulae — merely for convenience. The logical formulae might have a more obviously
systematic structure than the natural-language expressions. Or the researcher might want
to stay clear of debates about syntactic phrasing, category-labels, and so forth. In these
systems, the assumption is usually that we have two regular, structure-preserving map-
pings: translation and interpretation. The composition (handout 3, section 3.4.4) of these
two operations is also a regular, structure-preserving mapping. That is, the intermediate
step is dispensable.
But it’s possible to imagine systems in which the intermediate logical language is not
dispensable. It might provide information that is not present in the natural-language syntax
but that is nonetheless crucial for interpretation. The most famous arguments for a theo-
retically robust meaning language come from Discourse Representation Theory (DRT;
Kamp and Reyle 1993).
Ex. A.4

Handout 3: Technical preliminaries


This handout reviews some basics of sets, relations, and functions. Its aims
are very specific — I merely want to establish a connection between natu-
ral language semantics and these tools and concepts. For a more technical,
systematic introduction, I recommend Partee et al. 1993 as well as the more
advanced Kracht 2003.

3.1 Sets
A set is an abstract collection of objects. These can be real-world objects, concepts, other
sets, etc.

3.1.1 Notation
3.1.1.1 Curly braces

By convention, sets are specified using curly-braces. Commas usually separate the mem-
bers. For example, here is a depiction of the set containing Bart Simpson, the letter b, and
the number 47:

(3.1) {[picture of Bart], b, 47}

And here is a picture of the set whose members are Lisa Simpson and the set above:

(3.2) {[picture of Lisa], {[picture of Bart], b, 47}}

3.1.1.2 The empty set

There are, by convention, two equivalent ways of specifying the empty (null) set: with ∅
(the empty-set symbol) and with { } (empty curly braces).
The empty set is simply the set with no members. There is only one empty set. It is a
subset (section 3.1.4.4) of every set.


3.1.1.3 Venn diagrams


The curly-brace notation is abstract, and it has some misleading properties (e.g., it makes
it look as though the objects are ordered, when they are not — see below). And, especially
for visual thinkers, it can be hard to work with them. Venn diagrams provide a more
intuitive depiction of sets. They are simply circles. We'll work with them a lot. For now, here's a Venn diagram of the set specified in (3.2):

(3.3) [Venn diagram: a circle containing Lisa and, inside it, a circle containing Bart, b, and 47]

3.1.1.4 Predicate notation


Very often, we don’t know, or can’t specify, the complete membership of a set. Predicate
notation is useful in these cases. For instance, here is a specification of the set of all natural
numbers using predicate notation:
(3.4) {x | x is a natural number}

This is glossed as ‘the set of all x such that x is a natural number’. So the curly braces tell
us that we are talking about a set, and the vertical line (sometimes a colon) is read as ‘such
that’. It’s important to keep sight of the fact that this specification does not tell us about
any specific x. The choice of x as the symbol in this specification is arbitrary. All of the
following are equivalent to (3.4):
(3.5) a. {y | y is a natural number}
      b. {n | n is a natural number}
      c. {† | † is a natural number}

A note of caution, though: use the variable symbol systematically. The following is differ-
ent from (3.4) and (3.5):
(3.6) {x | y is a natural number}

To understand this specification, we do need to know what object y picks out. If y is a natural number, then everything is a member of this set. If y is not a natural number, then nothing is a member of this set.
Ex. A.5

3.1.1.5 Recursive definitions


Recursive definitions are another way of specifying infinite sets in a compact yet complete
way. Consider for instance the following definition of a very boring fragment of English,
here called E:
(3.7) a. pigs fly is a member of the set E.
b. If S is a member of the set E, then Chris knows S is a member of E.
c. Nothing else is a member of E.
Here’s another way of specifying this set:
+ ,
(3.8) pigs fly, Chris knows pigs fly, Chris knows Chris knows pigs fly, . . .
The ellipsis dots indicate that the pattern continues in this way.
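The recursive definition in (3.7) can also be turned into a procedure that enumerates E. A sketch in Python (not part of the handout's machinery; the generator encoding is illustrative):

```python
from itertools import islice

def E():
    """Enumerate the set defined in (3.7): the base case 'pigs fly',
    closed under prefixing 'Chris knows'."""
    s = 'pigs fly'
    while True:
        yield s
        s = 'Chris knows ' + s

print(list(islice(E(), 3)))
# ['pigs fly', 'Chris knows pigs fly', 'Chris knows Chris knows pigs fly']
```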

3.1.2 Set membership


To indicate that an object is a member of a set, we use a rounded lowercase Greek epsilon: ∈. For instance, the following asserts that Bart Simpson is a member of the set of
Simpsons:

(3.9) [picture of Bart] ∈ {a | a is a Simpson}
‘Bart is a member of the set of all a such that a is a Simpson.’
A slash through the set-membership symbol (or just about any other logical connective)
is its negation. Thus, (3.10) asserts that Burns is not a member of the set of Simpsons.

(3.10) [picture of Burns] ∉ {d | d is a Simpson}
‘Burns is not a member of the set of all d such that d is a Simpson.’

3.1.3 Central properties of sets


3.1.3.1 Unordered
The members of a set are not ordered in any way. So the following are depictions of the same set, namely, the set containing Bart and Lisa:

      {[picture of Bart], [picture of Lisa]}   is the same as   {[picture of Lisa], [picture of Bart]}

When we induce an ordering on a set, the result is an ordered tuple. These are discussed
in section 3.2 below.

3.1.3.2 No repetitions
When specifying a set, repetitions of the same object are meaningless. For example, each of the following depicts the set containing only Bart Simpson:

      {[picture of Bart]}   is the same as   {[picture of Bart], [picture of Bart]}

If we need repetitions to be meaningful, we must again look to ordered tuples (section 3.2).

3.1.4 Set-theoretic relations


3.1.4.1 Intersection
(3.11) The intersection of a set A with a set B is the set of all things that are in both A
and B. In symbols, A ∩ B.
      A ∩ B =def {x | x ∈ A and x ∈ B}

Here are two equivalent ways of specifying the intersection of the set containing Bart and Lisa with the set containing Lisa, Maggie, and the number 17.

(3.12) a. {[picture of Bart], [picture of Lisa]} ∩ {[picture of Lisa], [picture of Maggie], 17}
       b. [Venn diagram of the two overlapping circles]
The intersection is the smallest circle, the one containing just Lisa.

3.1.4.2 Union
(3.13) The union of a set A with a set B is the set of all things that are in A or B. (Things
in both sets are included in the union.) In symbols, A ∪ B.
      A ∪ B =def {x | x ∈ A or x ∈ B}


Here are two equivalent ways of specifying the union of the set containing Bart and Lisa
with the set containing Lisa, Maggie, and the number 17.

(3.14) a. {[picture of Bart], [picture of Lisa]} ∪ {[picture of Lisa], [picture of Maggie], 17}
       b. [Venn diagram of the union]
Ex. A.6

3.1.4.3 Set-theoretic difference


The difference between two sets A and B is defined as follows:
(3.15) A − B =def {x | x ∈ A and x ∉ B}

(3.16) {[picture], 17} − {[picture]} = {17}

When the first argument to − is the universe U, this is called complementation.

3.1.4.4 Subset
The subset relation doesn’t return a new set (in contrast to ∪, ∩, and −). Rather, it returns
truth or falsity:

(3.17) A ⊆ B iff for all x, if x ∈ A, then x ∈ B

A special case to keep in mind Every set is a subset of itself; A ⊆ A, for any set A. If
you want to ensure that there are things in B but not in A (and that everything in A is in B),
write A ⊂ B.

3.1.4.5 Equality
Equality is defined in terms of the subset relation:

(3.18) A = B iff A ⊆ B and B ⊆ A

This states very clearly that there is nothing more to a set than its members.


3.1.5 Powerset
The powerset of a set A, written ℘(A), is the set of all subsets of A.

      ℘({a, b, c}) =def { {a, b, c}, {a, b}, {a, c}, {b, c}, {a}, {b}, {c}, { } }
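All of the relations in this section can be checked mechanically. A quick sketch using Python's built-in sets (the powerset helper is a standard idiom, not something from the handout):

```python
from itertools import chain, combinations

A, B = {'bart', 'lisa'}, {'lisa', 'maggie', 17}

print(A & B)                  # intersection: {'lisa'}
print(A | B)                  # union: {17, 'bart', 'lisa', 'maggie'}
print(A - B)                  # difference: {'bart'}
print(A <= A, A < A)          # subset vs. proper subset: True False
print(A == {'lisa', 'bart'})  # equality: True (order is irrelevant)

def powerset(s):
    """All subsets of s, from {} up to s itself."""
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, n) for n in range(len(s) + 1))]

print(len(powerset({'a', 'b', 'c'})))  # 8
```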

3.2 Ordered tuples


An ordered n-tuple is a finite sequence of n objects of any kind. We use angled brackets
to specify tuples. Thus, the ordered 2-tuple (ordered pair) whose first member is Bart and
whose second member is the number 17 is written like this:

(3.19) ⟨[picture of Bart], 17⟩

3.2.1 Ordered tuples are ordered


The following are different tuples:

      ⟨[picture A], [picture B]⟩      ⟨[picture B], [picture A]⟩

3.2.2 Repetitions are meaningful


Repetitions are meaningful for tuples. Example (3.20a) represents the 1-tuple whose first
and only member is Monty Burns, whereas (3.20b) represents the ordered pair both of
whose members are Monty Burns:

(3.20) a. ⟨[picture of Burns]⟩
       b. ⟨[picture of Burns], [picture of Burns]⟩

Ordered tuples are the members of relations, which are another fundamental building block
of meaning. Relations are our next topic.


3.3 Relations
3.3.1 Basics
A relation is a set of n-tuples. For example:

(3.21) {⟨[picture], [picture]⟩, ⟨[picture], [picture]⟩, ⟨[picture], [picture]⟩}
Here is an example of the usual predicate-notation for relations:
(3.22) {⟨x, y⟩ | x teases y}

3.3.2 Cartesian products


The Cartesian product of two sets A and B is written A × B. It is defined as follows:
(3.23) A × B =def {⟨x, y⟩ | x ∈ A and y ∈ B}

(3.24) {a, b} × {1, 2} = {⟨a, 1⟩, ⟨a, 2⟩, ⟨b, 1⟩, ⟨b, 2⟩}
With A a set, the set of all n-tuples formed from objects in A is often given as Aⁿ.
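The contrasts between sets and tuples, and the definition in (3.23), are easy to verify directly; an illustrative sketch:

```python
from itertools import product

print({1, 2} == {2, 1})    # True: sets are unordered
print((1, 2) == (2, 1))    # False: tuples are ordered
print({1, 1} == {1})       # True: repetition is meaningless in a set
print((1, 1) == (1,))      # False: repetition is meaningful in a tuple

# The Cartesian product, as in (3.23)/(3.24):
print(set(product({'a', 'b'}, {1, 2})))
# {('a', 1), ('a', 2), ('b', 1), ('b', 2)}
```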

3.4 Functions
3.4.1 Technical specifications
Here’s a useful depiction of the function that maps Bart, Lisa, and Maggie to T and Burns
to F:
 
 
 
 
 
 
 
T 
 
(3.25) 
 
 
F 
 
 
 
 

The domain is the set of objects that can be inputs (on the left). The range (sometimes
called the co-domain) is the set of objects that can, but need not be, outputs for some input.
We gloss f : A → B as ‘the function f with domain A and range B’.


3.4.2 Kinds of function


• A relation R is a function iff each x in the domain of R is mapped by R to at most
one element in the range of R.

• A function f is total iff every element in the domain of f has a value in the range of
f . If f fails to meet this condition, it is called a partial function.

• A function f is onto iff every element in the range of f is the value of some element
in the domain of f .

• A function f is one-to-one iff no member of the range is assigned to more than one
member of the domain.

• A function f is bijective or a one-to-one correspondence iff it is total, onto, and one-to-one.

Ex. A.7

3.4.3 Functions and sets


There is a close correspondence between functions into the domain of truth values {T, F}
and sets: for each function into {T, F}, we can form its characteristic set. For each set, we
can form its characteristic function. These views are equivalent, so semanticists tend to
switch back and forth between them freely based on what seems most likely to convey
their ideas efficiently and clearly.

3.4.3.1 Characteristic sets

If f is a function into the domain {T, F}, then the characteristic set of f is the set of all
objects d such that f (d) = T.

3.4.3.2 Characteristic functions

If A is a set and U is the universe of objects in the same domain as A, then the characteristic function of A is the f such that f(d) = T if d ∈ A, and f(d) = F if d ∉ A, for all d ∈ U.
It's crucial that we know what the universe of discourse is, so that we know which objects to map to F. (The objects that map to T are just those that are in A.)
Ex. A.8, A.9
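Here is a sketch of the round trip between the two views, with functions encoded as Python dicts and a small invented universe (both encodings are assumptions of the sketch):

```python
U = {'bart', 'lisa', 'maggie', 'burns'}     # universe of discourse (invented)
A = {'bart', 'lisa', 'maggie'}

def char_function(A, U):
    """The characteristic function of A: True on A, False on the rest of U."""
    return {d: d in A for d in U}

def char_set(f):
    """The characteristic set of f: everything f maps to True."""
    return {d for d, v in f.items() if v}

f = char_function(A, U)
print(f['burns'])          # False
print(char_set(f) == A)    # True: the two views are interchangeable
```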

3.4.4 Function composition


The composition of a function f : A → B with another function g : B → C is g ∘ f : A → C, defined as

      (g ∘ f)(d) = g(f(d))

Importantly, g ∘ f is defined only if the range of f is the same as the domain of g.
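In code, composition is one line (an illustrative sketch; the example functions are invented):

```python
def compose(g, f):
    """Return g ∘ f, the function mapping each d to g(f(d))."""
    return lambda d: g(f(d))

double = lambda n: 2 * n          # f : numbers -> numbers
is_even = lambda n: n % 2 == 0    # g : numbers -> truth values

print(compose(is_even, double)(3))  # True: 3 -> 6, and 6 is even
```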

3.4.5 Schönfinkelization (Currying)


The functions discussed in linguistics are almost always unary functions, i.e., functions
that take single objects as their arguments. For instance, transitive verbs are usually con-
sidered to be functions that map entities (the direct object’s meaning) to functions that map
entities (the subject’s meaning) to truth values (handout 5). But it is equally (or perhaps
more) intuitive to think of transitive verbs as mapping ordered pairs of entities to truth
values.
In a compositional theory, these views might be different. But, mathematically, they are indistinguishable. The insight of Schönfinkel (1924/1967) and Curry and Feys (1958) goes like this: start with a function f : A → (B → C) and then define the equivalent function f′ : (A × B) → C as follows:

      f′(⟨a, b⟩) = (f(a))(b)   for all a, b

Since this is an equivalence, we can also start with f′ : (A × B) → C. We can also do some reversing of arguments. For example:

      (f″(b))(a) = (f(a))(b)   for all a, b
Here’s an illustration (the arrows go in all directions):
6 7  6 7 
x -→ 1  a -→ 1 
 
 a -→  x -→
 
 6 y →
- 2 
7   6 b →- 3 7 
x -→ 3  a → - 2 
 
 
 b -→  y -→
y -→ 4 b → - 4
 

 *x, a+ -→ 1  *a, x+ -→ 1


   
 
 *y, a+ -→ 2   *a, y+ -→ 2 
   
 *x, b+ → - 3   *b, x+ → - 3 
   
*y, b+ -→ 4 *b, y+ -→ 4
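The equivalences can be computed mechanically. In this sketch, the curried function from the illustration is encoded with nested dicts (the encoding is an assumption of the sketch, not part of the handout):

```python
# f : A -> (B -> C), with (f('a'))('y') = 2, etc.
f = {'a': {'x': 1, 'y': 2}, 'b': {'x': 3, 'y': 4}}

# The uncurried variant f', with f'(<a, b>) = (f(a))(b):
f_uncurried = {(a, b): f[a][b] for a in f for b in f[a]}

# The argument-reversed variant f'', with (f''(b))(a) = (f(a))(b):
f_reversed = {b: {a: f[a][b] for a in f} for b in ['x', 'y']}

print(f['a']['y'], f_uncurried[('a', 'y')], f_reversed['y']['a'])  # 2 2 2
```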
Ex. A.10


Handout 4: Propositional logic


We explore a somewhat nonstandard presentation of propositional logic: the
models are given explicitly in terms of a (finite) hierarchy of functions, and
the logical expressions have types and general rules for forming complex ex-
pressions. As a linguistic theory, the result seems superior to the usual presen-
tation, and it builds a foundation for the more complex systems to come. The
handout closes with a brief linguistic assessment.

4.1 The usual presentation of PL


It’s useful to have this system, call it PL, here for reference. The functional perspective we
develop below duplicates its results, but in a way that makes more sense linguistically.
Syntax

 i. p, q, p′, p″, . . . are well-formed (propositional letters).

 ii. If ϕ and ψ are well-formed, then all of the following are well-formed:

      ¬ϕ       ['bar']
      (ϕ ∧ ψ)  ['wedge']
      (ϕ ∨ ψ)  ['vee']
      (ϕ → ψ)  ['arrow']
      (ϕ ↔ ψ)  ['iff']

 iii. Nothing else is well-formed.

Semantics

 V(·) maps formulae into {T, F}:

      V(ϕ) ∈ {T, F} if ϕ is a propositional letter
      V(¬ϕ) = T iff V(ϕ) = F
      V((ϕ ∧ ψ)) = T iff V(ϕ) = V(ψ) = T
      V((ϕ ∨ ψ)) = T iff V(ϕ) = T or V(ψ) = T
      V((ϕ → ψ)) = T iff V(ϕ) = F or V(ψ) = T
      V((ϕ ↔ ψ)) = T iff V(ϕ) = V(ψ)

Truth-table representation of the semantics

            p  q  |  ¬p  p∧q  p∨q  p→q  p↔q
 V(·)₁:     T  T  |  F    T    T    T    T
 V(·)₂:     T  F  |  F    F    T    F    F
 V(·)₃:     F  T  |  T    F    T    T    F
 V(·)₄:     F  F  |  T    F    F    T    T
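The table can also be generated by brute force over valuations; a sketch (Python booleans stand in for T and F):

```python
from itertools import product

print('p  q  ¬p  p∧q  p∨q  p→q  p↔q')
for p, q in product([True, False], repeat=2):
    row = [p, q, not p, p and q, p or q, (not p) or q, p == q]
    print('  '.join(str(v)[0] for v in row))   # prints 'T' or 'F' per column
```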

Ex. A.11

4.2 Functions
PL’s models can be presented as a class of functions. This doesn’t change the underlying
logic, but it has conceptual advantages:
i. Its compositionality is obvious.
ii. It permits us to come closer to natural language syntax.
iii. It reveals that this is a subsystem of the more complex logics we’ll use later.
The following presentation is adapted and expanded from the one in Gamut 1991a:§2.7.

4.3 PLf : A functional perspective on PL


4.3.1 The syntax of PLf
The syntax for PLf is, I admit, needlessly complex. But we’ll soon be glad that we have
all these pieces in place.
4.3.1.1 Types

 i. t is a type for PLf.
 ii. ⟨t, t⟩ is a type for PLf.
 iii. ⟨t, ⟨t, t⟩⟩ is a type for PLf.
 iv. Nothing else is a type for PLf.

A type is a kind of implication: ⟨t, t⟩ says "If you give me a t, I will return a t". And ⟨t, ⟨t, t⟩⟩ says, "If you give me a t, I will return a ⟨t, t⟩".
Ex. A.12

4.3.1.2 Expressions
i. p, q, p′, q′, . . . are expressions of PLf, type t.
ii. ¬, I, ⊤, and ⊥ are expressions of PLf (one-place connectives), type ⟨t, t⟩.
iii. ∧, ∨, →, and ↔ are expressions of PLf (two-place connectives), type ⟨t, ⟨t, t⟩⟩.
iv. If ϕ is a PLf expression of type ⟨σ, τ⟩ and ψ is a PLf expression of type σ, then (ϕ(ψ)) is a PLf expression of type τ.
v. Nothing else is an expression of PLf.
The first three clauses are lexical. They divide up by type. The fourth clause handles all of
the combinatorics. With it, we can build expressions of arbitrary complexity by building
from simpler ones.
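Clause (iv) is doing all the work: an expression of type ⟨σ, τ⟩ applied to one of type σ yields an expression of type τ. Here is a minimal sketch of that bookkeeping, with basic types as strings and complex types as pairs (an encoding assumed for illustration):

```python
def combine(fun_type, arg_type):
    """Given <σ, τ> and σ, return τ; anything else is ill-typed."""
    if isinstance(fun_type, tuple) and fun_type[0] == arg_type:
        return fun_type[1]
    raise TypeError('ill-typed combination')

T = 't'
NEG = (T, T)          # <t,t>, the type of ¬
AND = (T, (T, T))     # <t,<t,t>>, the type of ∧

print(combine(NEG, T))               # 't'  — so (¬(p)) : t
print(combine(combine(AND, T), T))   # 't'  — so ((∧(p))(q)) : t
```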


      (¬(p)) : t                     ((∧(p))(q)) : t
       /      \                       /         \
  ¬ : ⟨t,t⟩   p : t                q : t     (∧(p)) : ⟨t,t⟩
                                              /         \
                                     ∧ : ⟨t,⟨t,t⟩⟩      p : t

      ((∨((∧(p))(q)))(p′)) : t
       /                  \
     p′ : t        (∨((∧(p))(q))) : ⟨t,t⟩
                    /              \
           ∨ : ⟨t,⟨t,t⟩⟩      ((∧(p))(q)) : t
We see here that all is not really well: these expressions are close to unreadable! This is
the price we pay for having a single general rule of composition. But unreadability is a
huge price to pay; we aim to illuminate, not obfuscate. Try your hand at exercise A.13 if
(∨((∧(p))(q))) makes you grumpy.
Ex. A.13
4.3.2 Semantics for PLf
Without a semantics, PLf would just be a set of symbols, arranged in an orderly way but
without any meaning. Let’s now imbue them with meaning.

4.3.2.1 Domains
i. The interpretation of the type t is Dt = {T, F}.
ii. The interpretation of a type ⟨σ, τ⟩ is the set of all functions from Dσ to Dτ.
Since we have a finite basic domain Dt and a finite hierarchy of types, we can be concrete
about the functional domains specified in clause (ii):
 i. D⟨t,t⟩ (the four unary truth functions):

      [T ↦ F, F ↦ T]   [T ↦ T, F ↦ F]   [T ↦ T, F ↦ T]   [T ↦ F, F ↦ F]

 ii. D⟨t,⟨t,t⟩⟩ (functions from truth values to members of D⟨t,t⟩), for example:

      [T ↦ [T ↦ T, F ↦ F], F ↦ [T ↦ F, F ↦ F]]
      [T ↦ [T ↦ T, F ↦ T], F ↦ [T ↦ T, F ↦ F]]
      . . .

I’ve depicted all four unary functions. There are a total of 42 = 16 binary functions (which
are functions from truth values into the domain of unary functions). Some of them are
common in discussions of PL, whereas others are mostly neglected.
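Because Dt is finite, these domains can be enumerated exhaustively; a sketch, with functions again encoded as Python dicts (an illustrative choice):

```python
from itertools import product

D_t = [True, False]

def functions(domain, codomain):
    """All functions from domain to codomain, encoded as dicts."""
    return [dict(zip(domain, values))
            for values in product(codomain, repeat=len(domain))]

D_tt = functions(D_t, D_t)     # the four unary truth functions
D_ttt = functions(D_t, D_tt)   # the sixteen binary truth functions

print(len(D_tt), len(D_ttt))   # 4 16
```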


4.3.2.2 Models for PLf


A model for PLf is a pair M = ⟨D, ‖·‖M⟩, where D is the hierarchy of functions specified in section 4.3.2.1 and ‖·‖M is an interpretation function for the constants (the propositional letters and connectives).

Interpreting constants I emphasize that ‖·‖M interprets only the constants. We'll handle the complex expressions momentarily.
We place some general conditions on ‖·‖M, so that it behaves like a propositional logic. The restrictions give us wiggle room only with the propositional letters.
(4.1) M = ⟨D, ‖·‖M⟩ is a model for PLf only if the following conditions hold of ‖·‖M:

      ‖p‖M ∈ Dt if p is a propositional letter

      ‖¬‖M = [T ↦ F, F ↦ T]          ‖I‖M = [T ↦ T, F ↦ F]
      ‖⊤‖M = [T ↦ T, F ↦ T]          ‖⊥‖M = [T ↦ F, F ↦ F]

      ‖∧‖M = [T ↦ [T ↦ T, F ↦ F], F ↦ [T ↦ F, F ↦ F]]
      ‖∨‖M = [T ↦ [T ↦ T, F ↦ T], F ↦ [T ↦ T, F ↦ F]]
      ‖→‖M = [T ↦ [T ↦ T, F ↦ F], F ↦ [T ↦ T, F ↦ T]]
      ‖↔‖M = [T ↦ [T ↦ T, F ↦ F], F ↦ [T ↦ F, F ↦ T]]

The interpretation function respects typing in the following sense:


(4.2) A PLf expression ϕ is of type σ iff ‖ϕ‖M ∈ Dσ.

Interpreting complex formulae The interpretation function [[·]]M interprets complex expressions in the model M, via this recursion:

(4.3) a. [[ϕ]]M = ‖ϕ‖M if ϕ is a constant.
      b. [[(ϕ(ψ))]]M = [[ϕ]]M([[ψ]]M)

The domain of [[·]]M is the set of all expressions of PLf. It maps those expressions to objects in the domains. In clause (b), we might immediately end up using ‖·‖M for both the subexpressions. But, if they are complex, then we will again break them down via clause (b) and interpret those parts as instructed.
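The recursion in (4.3) is short enough to run. In this sketch (mine, not the handout's), a table called constants plays the role of ‖·‖M with invented values for p and q, functions are dicts, and a pair (ϕ, ψ) stands for the complex expression (ϕ(ψ)):

```python
constants = {
    'p': True, 'q': False,
    '¬': {True: False, False: True},
    '∧': {True: {True: True, False: False},
          False: {True: False, False: False}},
    '∨': {True: {True: True, False: True},
          False: {True: True, False: False}},
}

def interpret(expr):
    """[[·]]M: clause (4.3a) for constants, clause (4.3b) for (ϕ(ψ))."""
    if expr in constants:
        return constants[expr]
    phi, psi = expr
    return interpret(phi)[interpret(psi)]      # function application

print(interpret(('¬', 'p')))                   # False, as in (4.4)
print(interpret((('∧', 'p'), 'q')))            # False, as in (4.5)
print(interpret(('¬', (('∨', 'q'), 'p'))))     # False, as in (4.6)
```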


Examples Suppose ‖p‖M = T and ‖q‖M = F.

(4.4) [[(¬(p))]]M = [T ↦ F, F ↦ T](T) = F

      since [[¬]]M = ‖¬‖M = [T ↦ F, F ↦ T] and [[p]]M = ‖p‖M = T

(4.5) [[((∧(p))(q))]]M = [T ↦ T, F ↦ F](F) = F

      since [[(∧(p))]]M = [[∧]]M([[p]]M) = ‖∧‖M(T) = [T ↦ T, F ↦ F] and [[q]]M = ‖q‖M = F

(4.6) [[(¬((∨(q))(p)))]]M = F, computed bottom-up through the tree:

      [[q]]M = F, so [[(∨(q))]]M = ‖∨‖M(F) = [T ↦ T, F ↦ F]
      [[p]]M = T, so [[((∨(q))(p))]]M = [T ↦ T, F ↦ F](T) = T
      finally, [[(¬((∨(q))(p)))]]M = [T ↦ F, F ↦ T](T) = F

4.4 Comparing PLf with PL


4.4.1 Functions and truth tables
There is a perfect correspondence between PLf and PL, in the following sense: they assign
identical truth conditions to all type t formulae, and the connectives themselves have the
same logical properties. In a deep sense, the two logics are semantically identical.

4.4.2 Categorematic and syncategorematic introduction


Syncategorematic A symbol is syncategorematic if it is introduced solely by a rule.

Categorematic A symbol is categorematic if it can exist on its own, i.e., it need not be
introduced by a rule.

In the representations we’ve been using so far, a symbol is categorematic iff it can be
a terminal symbol. The major difference between the usual presentation of PL and the functional one is that only the functional one introduces the connectives categorematically.

4.4.3 Constituency
PLf and PL assign essentially the same structure to negations and the other unary con-
nectives (though PLf has many more brackets), but they differ in how they analyze binary
connectives:
      PL                         PLf

    (p ∧ q)                 ((∧(p))(q)) : t
     /    \                   /         \
    p      q               q : t     (∧(p)) : ⟨t,t⟩
                                      /         \
                             ∧ : ⟨t,⟨t,t⟩⟩      p : t

4.4.4 Interdefinability
I defined a slightly different set of connectives for PLf than I did for PL. But this is merely a
presentational difference. Any missing truth-functional connectives are definable in terms
of the stock of connectives already given. Indeed, in general, one needs only a negation
and one other connective to do the job, and certain binary connectives can do the job all
on their own (see exercise A.11).
Ex. A.14, A.15

4.5 A linguistic theory


As I stressed in handout 1, setting up the logic is not the same as setting up the linguistic
theory. We still need to venture explicit connections between the language and the logic.
To keep the discussion focused, I consider only the following hypotheses here. Exer-
cises A.16, A.17, and A.18 ask you to consider additional hypotheses like (4.7b).
(4.7) Hypotheses
      a. S ⇝ p, where S is a monoclausal declarative sentence and p is a propositional letter.
      b. and ⇝ ∧

Throughout these notes, ⇝ is the translation relation from natural language expressions to logical expressions.
Ex. A.16, A.17, A.18

4.6 Assessment of the linguistic theory

To assess the linguistic theory, we check the entailments of the logic and see if they corre-
spond to accurate or inaccurate predictions about the language. This means that we are si-
multaneously investigating both the logic and the language, checking for correspondences
and divergences as we go.

4.6.1 Basic entailments


We do well on the basic entailment facts. It is easy to overlook this, but we should linger
over it — it is a real achievement to find an analysis that matches at this level. Here are
two central results:
(4.8) a. S and S′ entails both S and S′ individually.
      b. S does not generally entail S and S′.

Generalization (4.8a) is supported by intuitions, and it could be further bolstered with
other kinds of experiment. If we adopt (4.7b), then we see immediately why it holds: for
any [[·]]M_i, if [[(ϕ ∧ ψ)]]M_i = T, then [[ϕ]]M_i = [[ψ]]M_i = T, just as a matter of definition (see
section 4.1 or 4.3.2.2).
We also see why (4.8b) is true. Just pick an interpretation function [[·]]M_j such that
[[p]]M_j = T and [[q]]M_j = F. Then [[(p ∧ q)]]M_j = F. This corresponds to a situation in which it is
true that it is summer time but false that it is Saturday and some fool (or semanticist) says,
“It is summer and it is Saturday”. (We might let the speaker get away with the corresponding
or statement, particularly if she is a semanticist.)


One can be a great deal more rigorous than this, but I think these descriptions suffice.
The point is that our hypotheses (4.7) derive for us the properties of natural language that
we aimed to characterize.

4.6.2 Compositionality
PLf is of course a compositional theory of the truth functions — a paradigm case, in fact.
But we can still level charges of noncompositionality against it. The major problem comes
with hypothesis (4.7a), which simply stipulates that declarative sentences are letters. We
are therefore unable to state any intuitive interconnections between related sentences. For
instance, there can be no principled semantic connection between David laughed and Chris
laughed, nor between Robert taught and Robert laughed. And so forth.
Exercise A.19 asks you to push this argument still further.
Ex. A.19

4.6.3 Categorematicity
If we can look past the problem of declarative sentences, we find that we are not doing too
badly. Just as and is a constituent of natural language, so too does ∧ have its own place in
the logical lexicon and its own meaning.

4.6.4 Constituency
Hypothesis (4.7b) predicts the following:

• and forms a constituent with the right conjunct.

• and is a binary operator.

• The arguments to and are truth-valued.

The first two hypotheses are controversial in syntax. The third is of significant logical
interest. We will return to it in our discussion of extensional lambda calculi (handout 5).
For now, suffice it to say that there is a way to reconcile our current hypotheses with the
facts of nonsentential conjunction, but we need a richer logic for that.
Ex. A.20

4.6.5 Commutativity
In PLf , ∧ is commutative, which just means that we can reverse the order of its arguments
without any change to the truth value of the whole. (It is worth staring at the depiction
in section 4.3.2.2 until this makes sense to you.) It is easy to find data that call this into
doubt:
Ex. A.21


(4.9) a. Sam woke up and Sam fell out of bed.


b. Sam fell out of bed and Sam woke up.
c. Berlin is the capital of Germany, and Moscow is the capital of Russia.
d. If you take Logic for Linguists and The Lexical Semantics of Verbs, then
you’ll be prepared to do sensitive lexical semantic investigation.

4.6.6 Associativity
This property of ∧ allows for rebracketing. In PL, it is the equivalence between, e.g.,
((p ∧ q) ∧ q′) and (p ∧ (q ∧ q′)). Our PLf connective ∧ also has this property, and thus
it too might be called into doubt by examples like the following (which, tellingly, involve
nonsentential coordination):

(4.10) a. I like peanut butter and jelly, and pickles.


b. We invited Sue and Ted, and Jane and Mark.

4.7 The intensionality of propositional logic


PLf is impoverished as a theory of meaning. If we use an interpretation function [[·]]M_i such
that [[p]]M_i = T, then [[p]]M_i, [[(p ∨ q)]]M_i, and [[(q → p)]]M_i are all the same object (namely, T).
We can see that these mean different things by moving to the truth tables, which present
every logically distinct interpretation function for a given set of propositional letters.
Suppose we shift the emphasis to the truth tables in the following way: we consider
the primary denotation of a formula ϕ to be the set of all i such that [[ϕ]]M_i = T:

Intensions via truth tables

             p      q      ¬p     p∧q    p∨q     p→q     p↔q
[[·]]M_1     T      T      F      T      T       T       T
[[·]]M_2     T      F      F      F      T       F       F
[[·]]M_3     F      T      T      F      T       T       F
[[·]]M_4     F      F      T      F      F       T       T
           {1,2}  {1,3}  {3,4}   {1}  {1,2,3} {1,3,4}   {1,4}
Now we have all distinct meanings, despite the fact that, for any given i, the meanings
collapse because [[·]]M_i doesn't make enough distinctions.
This is essentially possible-worlds semantics. The indices i are the worlds, and the
interpretation functions characterize those worlds. We'll return to this in handout 7.
Ex. A.22
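To see the table computationally, here is a small Python sketch (my illustration; the encoding of formulae as Python functions is an assumption) that recovers each intension as the set of indices at which the formula is true:

    # Compute intensions: the set of indices i with [[ϕ]]M_i = T
    from itertools import product

    # The four logically distinct interpretation functions for {p, q}.
    interps = {i + 1: dict(zip("pq", vals))
               for i, vals in enumerate(product([True, False], repeat=2))}

    formulas = {
        "p":   lambda v: v["p"],
        "q":   lambda v: v["q"],
        "¬p":  lambda v: not v["p"],
        "p∧q": lambda v: v["p"] and v["q"],
        "p∨q": lambda v: v["p"] or v["q"],
        "p→q": lambda v: (not v["p"]) or v["q"],
        "p↔q": lambda v: v["p"] == v["q"],
    }

    for name, f in formulas.items():
        print(name, {i for i, v in interps.items() if f(v)})
    # p∧q -> {1}, p∨q -> {1, 2, 3}, etc., matching the table above.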


Handout 5: Extensional lambda calculus


This handout opens by providing a basic extensional lambda calculus. Exten-
sive commentary follows the definitions. I close by summarizing the linguistic
gains of moving to this system — a sort of update to section 4.6 of handout 4.

5.1 Background
The lambda calculus, in general (semi-historical) terms, is a theory of computation. There
are many lambda calculi. In linguistics, people generally work with typed versions.
The lambda calculus is enormously powerful. The chances are small that you will
run up against something you want to do technically but cannot do within its bounds.
Therefore, if we are going to use it to build linguistic theories, we will have to impose
extra conditions, and we will have to be careful to isolate just the functions that we want to
allow into our theory. This is welcome news, of course — it means we get to craft theories,
rather than just inheriting them from logicians.

5.2 Lambda calculi: General definition


I begin by providing the very general definitions for extensional lambda calculi. (They
mostly work for intensional ones as well.) Section 5.3 addresses the issue of how to adapt
the general specification to your specific needs.

5.2.1 Types
The only change from PLf is the new basic type e for entities.

i. e is a (basic) type.

ii. t is a (basic) type.

iii. If σ and τ are types, then ⟨σ, τ⟩ is a type.

iv. Nothing else is a type.


Ex. A.23


5.2.2 Expressions
The first two clauses handle the primitives. The second two clauses build complex expres-
sions.
i. There are constant symbols of many different types. Constant symbols are expres-
sions.
ii. For every type τ, we have an infinite stock of variables of type τ. Variables are
expressions.
iii. If α is an expression of type ⟨σ, τ⟩ and β is an expression of type σ, then (α(β)) is an
expression of type τ.
iv. If α is an expression of type τ and χ is a variable of type σ, then (λχ. α) is an
expression of type ⟨σ, τ⟩.

Ex. A.24, A.25

5.2.3 Domains

The definition connects the types with the semantic domains.


i. The domain of type e is De , the set of entities.
ii. The domain of type t is Dt = {T, F}.
iii. The domain of a functional type ⟨σ, τ⟩ is the set of all functions from Dσ into Dτ.

Ex. A.26
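For a finite De, these domains can be enumerated outright. The following Python sketch (mine; the two-entity domain is an assumption for illustration) builds D⟨σ,τ⟩ as the set of all functions from Dσ into Dτ:

    from itertools import product

    D = {"e": ("Bart", "Lisa"), "t": (True, False)}      # a tiny assumed De

    def domain(ty):
        # ty is "e", "t", or a pair (sigma, tau) standing for ⟨σ, τ⟩
        if ty in D:
            return list(D[ty])
        sigma, tau = ty
        dom, cod = domain(sigma), domain(tau)
        # A function is a tuple of (argument, value) pairs; tuples are
        # hashable, so functions can serve as arguments at higher types.
        return [tuple(zip(dom, vals)) for vals in product(cod, repeat=len(dom))]

    print(len(domain(("e", "t"))))         # 2**2 = 4
    print(len(domain((("e", "t"), "t"))))  # 2**4 = 16 (generalized quantifiers!)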
5.2.4 Interpretation
5.2.4.1 Models
A model for the lambda calculus is a pair M = ⟨D, ‖·‖M⟩, where D is the infinite hierarchy
of domains defined in section 5.2.3 above, and ‖·‖M is a valuation function interpreting the
constants of the language.

5.2.4.2 Interpreting constants


The valuation function ‖·‖M maps constants to objects in our domains. Like its PLf coun-
terpart, it respects typing (handout 4, section 4.3.2.2):
(5.1) α is a constant of type σ iff ‖α‖M ∈ Dσ.
We demand also that ‖·‖M respect the meanings for the PLf connectives as given in section
4.3.2.2 of handout 4.


5.2.4.3 Interpreting variables


The assignment function maps variables into objects in our hierarchy of domains, just
as the interpretation function does. We use g, g′, etc., to name assignments. Like the
interpretation function, the assignment obeys the following constraint:

(5.2) χ is a variable of type σ iff g(χ) ∈ Dσ.

The expression g[x ↦ d] names the assignment function that is just like the assignment g
except that the variable x maps to the entity d.

5.2.4.4 Interpretation
We’ve built up what we need to specify the interpretation function, the heart of our theory:

[[·]]M,g

This function provides the interpretation of all expressions (constants, variables, and com-
plex expressions formed from them) in the model M, relative to assignment g:

i. If α is a constant of type σ, then [[α]]M,g = ‖α‖M.

ii. If χ is a variable of type σ, then [[χ]]M,g = g(χ).

iii. [[(α(β))]]M,g = [[α]]M,g([[β]]M,g)

iv. If (λχ. α) is of type ⟨σ, τ⟩, then [[(λχ. α)]]M,g = the Φ ∈ D⟨σ,τ⟩ such that Φ(d) =
[[α]]M,g[χ↦d], for all d ∈ Dσ.

5.3 Defining specific lambda calculi in your work


Let’s suppose you need to be precise about how interpretation works in your proposal. You
can efficiently specify an extensional lambda calculus for this, using the above definitions
(perhaps adapted to your preferred notation). But you might want to be more specific about
your constant expressions, as these are likely to constitute your core hypotheses (once you
indicate how you will get to the logic from the language you are studying).
The following style of definition of constant symbols for some lambda calculus Lλ
gives the reader a good sense for how we will map from language into the logic, and its
ellipsis dots provide the freedom to add new expressions as needed to deal with specific
examples:


i. bart, lisa, . . . are Lλ constants of type e.

ii. smile, frown, . . . are Lλ constants of type ⟨e, t⟩.

iii. tease, see, notice, . . . are Lλ constants of type ⟨e, ⟨e, t⟩⟩.
To interpret these constants, you need to name particular objects or functions. Unfortu-
nately, this will commit you to all sorts of arbitrary and irrelevant things. You’ll have to
say who teased whom for every pair of elements in your domain, for instance, which will
in turn force you to define a specific domain of entities. Outside the context of working
through specific scenarios, these details are likely to be simply distracting.
The following therefore strikes a good balance between precision and information con-
tent:

i. ‖bart‖M = Bart.
ii. ‖smile‖M = the function Φ such that Φ(d) = T iff d smiles.
iii. ‖tease‖M = the function Ψ such that Ψ(d) = the function Φ such that Φ(d′) = T iff
d′ teases d.
Other than this, you are likely to be able to use the general definitions above. But you
should also feel free to tailor them to your needs or the needs of your audience.
Ex. A.27

5.4 Commentary
5.4.1 Types: Your semantic workspace
The types help to establish a lawful connection between the way we organize our expres-
sions and the way the models are organized.

5.4.1.1 Syntax
Semantic types play much the same role as syntactic categories do in the realm of syntax:
they organize the expressions of the theory, thereby allowing us to control their interactions
and state broad generalizations.
(5.3) dog : N        ‘the lexical item dog is of category N’
(5.4) dog : ⟨e, t⟩   ‘the expression dog is of semantic type ⟨e, t⟩’
We can identify the category N with the set of all lexical items with that category specifi-
cation, and we can identify the type ⟨e, t⟩ with the set of all logical expressions with that
type specification.


5.4.1.2 Semantics
Types do more than just categorize expressions. They are also important in categorizing
denotations (meanings). In a typed, interpreted system, each type has a corresponding
denotation domain, as in section 5.2.3 above. This in turn leads to a natural constraint on
the sort of interpretation functions we are willing to consider: (5.1) and (5.2).
Ex. A.28

5.4.2 The recursive interpretation process


The interpretation process is a recursion. Suppose we are looking at a complex expression
(α(β)). We break it apart into its two parts α and β, and we interpret those. This might itself
involve breaking one or both of α and β into their constituent parts. And so forth, all the
way down to the point at which we hit either a constant symbol (and thus get to use ‖·‖M)
or a variable (and thus use the assignment).

Pseudocode depicting (part of) the recursive interpretation

I(ϕ, M, g)
1 if ϕ is a constant
2 then return 5ϕ5M
3 elseif ϕ is a variable
4 then return g(ϕ)
5 elseif ϕ is of the form (α(β)) : ;
6 then return I(α, M, g) I(β, M, g)
7 elseif . . .
Ex. A.29
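Here is a runnable Python version of the pseudocode (my sketch; the tuple encoding of expressions is an assumption, not the handout's), including the lambda clause from section 5.2.4.4:

    def interpret(phi, model, g):
        kind, *rest = phi
        if kind == "const":
            return model[rest[0]]                          # clause (i): ‖·‖M
        if kind == "var":
            return g[rest[0]]                              # clause (ii): the assignment
        if kind == "app":                                  # clause (iii): (α(β))
            alpha, beta = rest
            return interpret(alpha, model, g)(interpret(beta, model, g))
        if kind == "lam":                                  # clause (iv): (λχ. α)
            chi, alpha = rest
            return lambda d: interpret(alpha, model, {**g, chi: d})
        raise ValueError(phi)

    M = {"bart": "Bart", "smile": lambda d: d == "Bart"}   # assumed toy model
    phi = ("app", ("const", "smile"), ("const", "bart"))
    print(interpret(phi, M, {}))                           # True
    lam = ("lam", "x", ("app", ("const", "smile"), ("var", "x")))
    print(interpret(lam, M, {})("Lisa"))                   # False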

5.4.3 Functional application


Functional application is arguably the most important mode of semantic composition.
Let’s look more closely at its syntax and it semantics.

5.4.3.1 The syntax of FA


I repeat our definition, along with a tree-structural view of it.
(5.5) a. If α is of type ⟨σ, τ⟩ and β is of type σ, then (α(β)) is an expression of type τ.
      b.       (α(β)) : τ
                /      \
         α : ⟨σ, τ⟩    β : σ


In the definition of compositionality, this is usually the “function” that is used to combine
expressions to form new expressions.
Common abbreviations:

(5.6) ((α(β))(γ))

a. α(β)(γ) (VOS)
b. α(γ, β) (VSO)

5.4.3.2 The semantics of FA


Here’s a picture of the semantics of functional application:

 a -→ f   a -→ f   a -→ f 


     
  : ;   : ;   : ;
 b → - g  a = f  b → - g  b = g  b → - g  c = h
     
c -→ h c -→ h c -→ h

In terms of context-free compositionality (handout 2, section 2.2), the daughters are the
function and the argument, and the mother is the value after the equal sign.
Ex. A.30
5.4.4 Abstraction
Abstraction is technically more challenging than application. If you deem it suspect, then
you might pursue a semantic theory without it (Jacobson 1999).

5.4.5 The syntax of abstraction


Three equivalent views:

(5.7) λχ. α : ⟨σ, τ⟩, where χ : σ       λχ. α : ⟨σ, τ⟩, where χ : σ       λχ. α : ⟨σ, τ⟩
                                               |                            /       \
                                             α : τ                      λχ : σ     α : τ

5.4.6 The semantics of abstraction


The intuitive idea is that we are building up a function, by checking the value of the body
of the lambda abstract against every element in the appropriate domain and using those
values to (re)construct the key–value pairs of the function.


(5.8) a. cyclist : ⟨e, t⟩
      b. [[λx. cyclist(x)]]M,g = the Φ such that Φ(d) = [[cyclist(x)]]M,g[x↦d], for all d ∈ De.

Graphically, assuming De = {a, b, c}: the values [[f(x)]]M,g[x↦a] = T, [[f(x)]]M,g[x↦b] = T,
and [[f(x)]]M,g[x↦c] = F are assembled into

[[λx. f(x)]]M,g = [ a ↦ T, b ↦ T, c ↦ F ]
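The tabulation in clause (iv) can be sketched directly in Python (the toy domain and the extension chosen for f are my assumptions):

    De = ["a", "b", "c"]

    def body(g):                      # stands in for [[f(x)]]M,g
        return g["x"] in ("a", "b")   # assumed: f true of a and b only

    g = {}
    lam = {d: body({**g, "x": d}) for d in De}   # tabulate under each g[x ↦ d]
    print(lam)    # {'a': True, 'b': True, 'c': False} — the function pictured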

5.4.7 The role of variable assignments


Variable assignments are often a hurdle to understanding how the interpretive process
works. So let’s pause here to look at some specific examples and explore why we have
assignments at all.

5.4.7.1 An example
Here’s a look at the assignment functions for a logic with just two variables, both type e,
and a domain of entities De consisting of Bart, Lisa and Burns.
         
         
         
 x -→   x -→   x -→   x -→   x -→ 
 
  
  
  
  

         
         
y -→ y -→ y -→ y -→ y -→
       
       
       
 x -→   x -→   x -→   x -→ 
       
       
       
       
y -→ y -→ y -→ y -→

Then we have:
   
   
   
 x → -   x -→ 
  [y -→ ] =  
   
   
   
y → - y -→
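In Python (my sketch), the nine assignments and the update operation g[x ↦ d] look like this:

    from itertools import product

    De = ["Bart", "Lisa", "Burns"]
    assignments = [{"x": dx, "y": dy} for dx, dy in product(De, repeat=2)]
    print(len(assignments))               # 9, as pictured above

    def update(g, var, d):                # the assignment g[var ↦ d]
        return {**g, var: d}

    g = {"x": "Bart", "y": "Lisa"}
    print(update(g, "y", "Burns"))        # {'x': 'Bart', 'y': 'Burns'}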


5.4.8 A properly-defined recursion


Let’s think again about the recursion required by the semantics for functional application:
(5.9) [[(α(β))]]M,g = [[α]]M,g([[β]]M,g)
If β is of the form (γ(δ)), then we'll apply this rule again to get its interpretation. Ditto for
α. But, if the system is to be well-defined, we have to bottom out eventually. This happens
when we reach the atomic symbols. At that level, it is common to divide the interpretive
labor:

    ‖·‖M (domain: the constants)        g (domain: the variables)
                     \                     /
                          [[·]]M,g
          (domain: the full set of well-formed expressions)

This is the central technical reason for the variable assignments: without them, we are left
without a well-defined recursion through to the full set of atomic symbols.
Ex. A.31
Why the split? We could interpret both constants and variables via the same function.
But it is conceptually useful to keep the interpretation function and the assignment sepa-
rate. We can play with these ideas:
• The interpretation function captures something about the language. The assignment
captures something about the context. (See handout 10.)

• The interpretation function is fixed. The assignment function is malleable.

• The interpretation function is a substantive feature of the linguistic analysis. The


assignment is a mere convenience.

5.4.9 Free and bound variables


i. If α is a variable, then Free(α) = {α}

ii. If α is a constant, then Free(α) = { }

iii. Free((α(β))) = Free(α) ∪ Free(β)

iv. Free((λχ. α)) = Free(α) − Free(χ)
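The four clauses transcribe directly into Python (my sketch, using the tuple encoding of expressions from the interpreter sketch in section 5.4.2):

    def free(phi):
        kind, *rest = phi
        if kind == "var":
            return {rest[0]}                      # clause (i)
        if kind == "const":
            return set()                          # clause (ii)
        if kind == "app":                         # clause (iii)
            return free(rest[0]) | free(rest[1])
        if kind == "lam":                         # clause (iv): the λ discharges χ
            chi, alpha = rest
            return free(alpha) - {chi}

    print(free(("lam", "x", ("app", ("const", "see"), ("var", "y")))))   # {'y'}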

• We can think of every expression as paired with a variable store holding all the
variables that are free in that expression (Cooper 1983).


• It’s worth paying particular attention to clauses (i) and (iv) of the definition. The
first introduces variables. The second discharges them.

• An expression has a nonempty set of free variables iff it is assignment dependent —


its meaning can change as we change the assignment.

• An expression has an empty variable store iff it is assignment independent — its


meaning is constant under different choices of variable assignment.
Ex. A.32

5.5 Linguistic theory


(5.10) Hypotheses

a. Proper names translate as type e constants.


b. Intransitive verbs translate as expressions of type ⟨e, t⟩.
c. Transitive verbs translate as expressions of type ⟨e, ⟨e, t⟩⟩.
d. and ⇝ ∧, as before, with the same definition for ∧ as a purely truth-
functional operator denoting in D⟨t,⟨t,t⟩⟩.

5.6 Assessment
As a logic, the lambda calculus is quite useful, of course. We will not assess its properties.
Rather, we will assess the application of it in section 5.5.

5.6.1 Compositionality
A major shortcoming of the PLf -based linguistic theory of handout 4 was that it did not
reach into sentences. Instead, it treated them all as semantically atomic. Of course, this is
wildly inaccurate.
We’re doing better now. The gains are the result of adding the domain of entities
and allowing ourselves to build functions involving them. We can now do a substantial
amount of decomposition of sentences into their constituent parts. In fact, the hierarchy
of functions specified in section 5.2.3 is so big and rich that the one you are looking for is
almost certainly in there somewhere.


5.6.2 Constituency
Let’s first address a major benefit of working with this particular class of functions: we
achieve a really nice subtree-to-meaning correspondence with the syntax.
Consider, for instance, VPs containing transitive verbs. If we define transitive verb
meanings as in section 5.3, then VPs denote in ⟨e, t⟩, and they have intuitively correct
meanings. For instance, [[see(bart)]]M,g denotes the function Φ such that Φ(d) = T iff d
saw Bart.
In fact, quite generally, our theory of unary functions squares extremely well with the
idea that all syntactic structures are binary branching.
Ex. A.33,
A.34
5.6.3 Still not enough meanings
We’re still suffering from way too much meaning collapse. If Bart skateboards and Lisa
studies, then [[skateboard(bart)]]M,g = [[study(lisa)]]M,g = T, and similarly for things that
happen both to be false. The problem traces ultimately to the fact that our logic is geared
towards getting us into the domain Dt , but that domain contains just two values, {T, F}.
Thus, in the end, we end up making binary distinctions.
We will return to this point in more detail when we add intensionality (handout 7). For
now, let’s just examine the conceptual reason for this limitation: the heart of the exten-
sional viewpoint is that we specify everything about a single reality and evaluate things
relative to that single reality. This means that, at some point, we have to specify lexical
entries: happy picks out this function, see picks out that function, and so forth.
But we can and should consider alternative realities. Technically speaking, we can
identify these alternative realities with different interpretation functions: [[·]]M,g_i might say
that Lisa is happy, whereas [[·]]M,g_j might say that she is not. This is the guiding idea behind
intensionality. Let's try to keep sight of it from here on out.

5.7 Type mismatches


The type mismatch is one of the most important explanatory tools in the semanticist's
toolbox, so I close this handout with a look at the thinking behind it.
Broadly speaking, the reasoning works like this:

i. The linguist observes that something is ill-formed in the natural language.

ii. She follows through on her empirical claims about the mapping from that ill-formed
structure into the logical language.


iii. She observes that, in the logical language, one has an ill-formed expression —
or, that the objects involved do not fit together by any admissible rule of semantic
composition.

(5.11) Smile run.

a. smile ⇝ smile : ⟨e, t⟩
b. run ⇝ run : ⟨e, t⟩
c. (smile(run)) is an ill-formed expression.
d. (run(smile)) is an ill-formed expression.

It’s perverse, in a sense: one works very hard to reach the point where one has created an
ill-formed expression of the logic. But the force is clear: those things don’t go together
linguistically because of something fundamental about their meanings.
Ex. A.35, A.36


Handout 6: The axioms of the lambda calculus


There isn’t time to cover the proof theory of the lambda calculus in detail, but
its axioms are highly informative about the ways the system works, so it is
worth reviewing them.

6.1 A general note


The axioms for the lambda calculus tell us about the meaning-preserving ways in which
we can manipulate expressions, to simplify them or to make them easier to understand.
Thus, if you employ an axiom to alter a term, make sure that your output names the same
function as your input. If you end up with an altered meaning, then something went awry.

6.2 Substitution
One operation on expressions is central to the statement of these axioms. It is the substi-
tution of one expression for another:

(6.1) ϕ[x ↦ a] ‘ϕ with all occurrences of x turned into a.’

We have to be careful about how we perform substitution. The following clauses both
define and restrict the operation. We use x for any variable, c for any constant, and δ for
any expression. The overall effect is worth keeping in mind: we want to prevent accidental
binding, i.e., we want to prevent our substitution from resulting in a given λ binding more
variables than it did prior to substitution.

i. x[x ↦ δ] is permitted, and x[x ↦ δ] = δ.

ii. y[x ↦ δ] is permitted, and y[x ↦ δ] = y if y ≠ x.

iii. c[x ↦ δ] is permitted, and c[x ↦ δ] = c.

iv. (ϕ(ψ))[x ↦ δ] is permitted iff ϕ[x ↦ δ] and ψ[x ↦ δ] are permitted.
Where permitted, (ϕ(ψ))[x ↦ δ] = (ϕ[x ↦ δ](ψ[x ↦ δ])).

v. (λx. ϕ)[x ↦ δ] is permitted, but it changes nothing: (λx. ϕ)[x ↦ δ] = (λx. ϕ).

vi. Where x ≠ y, (λy. ϕ)[x ↦ δ] is permitted iff x is not free in ϕ or y is not free in δ.
Where permitted, (λy. ϕ)[x ↦ δ] = (λy. ϕ[x ↦ δ])

Ex. A.37
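Here is a Python transcription of clauses (i)–(vi) (my sketch; it reuses the free function from section 5.4.9 and treats a forbidden substitution as an error):

    def subst(phi, x, delta):
        kind, *rest = phi
        if kind == "var":                              # clauses (i) and (ii)
            return delta if rest[0] == x else phi
        if kind == "const":                            # clause (iii)
            return phi
        if kind == "app":                              # clause (iv)
            a, b = rest
            return ("app", subst(a, x, delta), subst(b, x, delta))
        if kind == "lam":
            y, body = rest
            if y == x:                                 # clause (v): no change
                return phi
            if x in free(body) and y in free(delta):   # clause (vi): would capture
                raise ValueError("substitution not permitted")
            return ("lam", y, subst(body, x, delta))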

6.3 Alpha conversion


The intuition here is that the names of bound variables never matter, so you can always
rename them, provided you do the renaming uniformly and no new variables get bound in
the process.
(α) (λx. ϕ) =⇒α (λy. ϕ[x ↦ y])
    Permitted iff ϕ[x ↦ y] is permitted.

Two terms that are relatable by this axiom are alphabetic variants of one another. Im-
portantly, in pairs of alphabetic variants, the lambdas always bind the same number of
variables.

Intuition The names of bound variables don’t matter.

Sound examples
(6.2) λx. happy(x) =⇒α λy. happy(y)
(6.3) λxλy. see(x)(y) =⇒α λzλy. see(z)(y) =⇒α λzλx. see(z)(x) =⇒α λyλx. see(y)(x)

Illicit conversions
(6.4) λx. see(x)(y) ⇏α λy. see(y)(y)
(6.5) λx. a(man)(λy. see(x)(y)) ⇏α λy. a(man)(λy. see(y)(y))


6.4 Beta reduction


This might be the most important of the three axioms when it comes to linguistics pa-
pers. It allows us to simplify lambda terms, so that we can more easily evaluate them and
understand what they are meant to convey.
(β) ((λχ. ϕ)(ψ)) =⇒β ϕ[χ ↦ ψ]
    Permitted iff ϕ[χ ↦ ψ] is permitted.

In extensional lambda calculi of the sort we are dealing with, the order in which one
does the reductions does not matter. This property is often called confluence, because
two different β-reduction paths can diverge, but they will always merge again. Another
name is the diamond property: if one diagrams all possible paths for reduction, one gets
diamond-shaped graphs wherever there is a choice about which thing to reduce.
(6.6) a. (λx. happy(x))(sam) =⇒β happy(sam)
      b. (λxλy. (see(x))(y))(friend-of(y)) ⇏β (λy. ((see(friend-of(y)))(y)))

Alpha conversion provides an important tool for getting out of jams like the one in (6.6b).
Ex. A.38

6.5 Eta reduction


From a practical point of view, η-reduction tells us a lot about the role that lambdas actually
play in linguistic investigation, because it tells us something about where it is redundant
to use lambdas at all.
(η) (λχ. ϕ(χ)) =⇒η ϕ
    Permitted iff χ is not free in ϕ.

In the light of this axiom, we can see that the following is not a meaning analysis (though
things like this are commonly found):

(6.7) dog = (λx. dog(x))

If we have a convention for the type of x, and we know what kind of things dog is supposed
to be, then the lambda on the right tells us something about the type of this function. But it
doesn’t get us any closer to the function itself than we were when we started. η-reduction
tells us this right away.
Ex. A.39


Handout 7: Intensions
7.1 The limits of extensional models
In handout 5, section 5.6.3, we began building the case for the claim that the extensional
lambda calculus doesn’t make available to us enough meanings. The root of the problem is
that, as with PLf , everything is geared towards the domain Dt of truth values. This section
discusses this problem, and those that relate to it, in more depth than we did before.

7.1.1 Accidental identity of meaning


This problem has come up a few times. Since everything is about truth values, we make
only a binary distinction in the end.
The problem carries up to our functional expressions as well. If we hypothesize that
creature with a heart and creature with a kidney translate as expressions of type ⟨e, t⟩, then
we are forced to make an unhappy choice: if we make good on the fact that these two
nouns pick out the same creatures in our world, then we collapse their meanings. If we
avoid collapsing their meanings, we have to make incorrect claims about the functions that
model their interpretations. These facts point to the conclusion that there is more to the
meanings of terms like this than just the mapping from entities to truth values (equivalently,
sets of entities).

7.1.2 Attitude predicates


Attitude predicates (think, suspect, be amazed at, etc.) are numerous and common in
discourse. Unfortunately, the extensional lambda calculus is of little use in getting at
their meanings. The best way to see this is to try. Consider the sentence Lisa believes
Bart is smart. The complement to believe is something we can analyze extensionally.
That analysis is likely to assign it a value in Dt . Suppose that Bart is not smart. Then
[[smart(bart)]]M,g = F. If Lisa believes this, then we can only represent this situation as
follows:

([[believe]]M,g(F))([[lisa]]M,g) = T

But now we have attributed to Lisa a belief in every falsity! This might be wildly out of
sync with her behavior. We are predicting that she will endorse the claim that the earth is
flat, for example.
Exercise A.40 asks you to make matters much worse for an extensional view of believe.
Ex. A.40, A.41

7.2 An intensional logic


It is a matter of just two simple additions to move from the extensional lambda calculus
of handout 5 to an intensional version. It’s what we do with the logic that turns out to be
complex and interesting. Here are the changes that need to be made:
i. The addition of a new basic type s.
ii. The addition of the corresponding domain Ds, the set of possible worlds.
All the combinatoric rules can be the same, and we needn’t meddle with the basic design
of the interpretation function either. The changes all come in how we use this to develop a
series of hypotheses about language.

7.2.1 Some constants and their interpretation


The following are suitable meanings for a sample of phrases in this new intensional setting:
i. bart, lisa, . . . are constants of type e.
ii. smile, frown, . . . are constants of type ⟨e, ⟨s, t⟩⟩.
iii. tease, see, notice, . . . are constants of type ⟨e, ⟨e, ⟨s, t⟩⟩⟩.
iv. believe is a constant of type ⟨⟨s, t⟩, ⟨e, ⟨s, t⟩⟩⟩.
Compare these with the corresponding expressions defined in section 5.3. The only
difference is, wherever the output type was t, it is now ⟨s, t⟩, the type of propositions.
And, of course, we are now able to work with believe.

i. ‖bart‖M = Bart.
ii. ‖smile‖M = the function Φ such that, for any entity d ∈ De, Φ(d) = the function π
such that, for any world w ∈ Ds, π(w) = T iff d smiles at w.
iii. ‖tease‖M = the function Ψ such that, for any entity d ∈ De, Ψ(d) = the function
Φ such that, for any entity d′ ∈ De, Φ(d′) = the function π such that, for any world
w ∈ Ds, π(w) = T iff d′ teases d at w.
iv. ‖believe‖M = the function Φ such that, for any proposition π ∈ D⟨s,t⟩, Φ(π) is the
function Ψ such that, for any entity d ∈ De, Ψ(d) = the proposition π′ ∈ D⟨s,t⟩ such
that π′(w) = T iff the set of belief worlds for d in w is a subset of the set of worlds
in which π is true.

Ex. A.42
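Treating propositions as sets of worlds, clause (iv)'s subset condition can be sketched in Python (the worlds and Lisa's belief worlds are assumptions for illustration):

    W = {"w1", "w2", "w3"}
    dox = {("Lisa", w): {"w1", "w2"} for w in W}    # assumed belief worlds

    def believe(prop):                  # prop: the set of worlds where it is true
        def of(d):
            # the proposition true at w iff d's belief worlds at w all verify prop
            return {w for w in W if dox[(d, w)] <= prop}
        return of

    smart_bart = {"w1", "w2"}           # assumed: 'Bart is smart' true at w1, w2
    print(believe(smart_bart)("Lisa"))  # {'w1', 'w2', 'w3'} (in some order)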

7.3 Linguistic theory


Let’s state an overarching hypothesis: the constants defined in section 7.2.1 above are the
translations of the corresponding English words. We can extract from this some more
general claims for evaluation:

(7.1) Hypotheses

a. Proper names are expressions of type e.


b. Predicates take extensional arguments.
c. Belief is a relation between an entity and a proposition, and moreover, an
entity’s beliefs can vary from world to world.

7.4 Commentary
Intensional logics are amazingly rich. I can’t touch upon all aspects of the moves we have
made. But this section suggests a few things, and the exercises can help guide you deeper
into these structures.

7.4.1 Two different formulations


The translations of intransitive verbs and adjectives are of type ⟨e, ⟨s, t⟩⟩ under the current
set of hypotheses. This means that they map entities to propositions.
However, it is common to find these meanings given the type ⟨s, ⟨e, t⟩⟩ instead. These
are functions from worlds into extensional properties (equivalently, sets of entities).
The bottom line is that these views are equivalent, because the domains D⟨s,⟨e,t⟩⟩ and
D⟨e,⟨s,t⟩⟩ are isomorphic (handout 3, section 3.4.5).
Our formulation seems superior from the point of view of the connection with syntax.
For example, we can feed the subject to the intransitive verb to get a proposition back —
a suitable meaning for a declarative sentence. If these meanings are instead in the space
⟨s, ⟨e, t⟩⟩, then we have to somehow get the world argument in (and perhaps out again)
before the subject. (We will in effect curry the meaning.)

7.4.2 Families of extensional models


Just above, I argued that it is useful to map into propositions, since it avoids having to feed
in world arguments before entities. But if we had purely logical interests in mind, then we
might prefer a formulation in which world arguments always came in first, since it makes
it easy to see that the models are like families of first order models.


To see this, consider a model in which De contains just Bart and Lisa and suppose we
have just one property, say, the property of being skeptical. Now let's study this function
a bit:

‖skeptical‖M = [ w1 ↦ [ Bart ↦ T, Lisa ↦ T ]
                 w2 ↦ [ Bart ↦ T, Lisa ↦ F ]
                 w3 ↦ [ Bart ↦ F, Lisa ↦ F ]
                 w4 ↦ [ Bart ↦ F, Lisa ↦ T ] ]

We have distributed over the worlds {w1, . . . , w4} all the ways in which the world could be
(for this single predicate). So we have four distinct possible worlds (given just this two-
entity domain and the one property to talk about). In world w1 they are both skeptical, but
in world w3, neither of them is. And so forth. (This strongly recalls the logic of truth tables
explored throughout handout 4, especially section 4.7.)
Here’s the point of interest at present: if we fix a world, then we are looking at a
function from entities to truth values — we have our extensional property back!
Ex. A.44
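In Python terms (my sketch of the function just displayed), fixing a world returns a plain ⟨e, t⟩ function:

    skeptical = {
        "w1": {"Bart": True,  "Lisa": True},
        "w2": {"Bart": True,  "Lisa": False},
        "w3": {"Bart": False, "Lisa": False},
        "w4": {"Bart": False, "Lisa": True},
    }
    at_w2 = skeptical["w2"]     # fix a world: an extensional property again
    print(at_w2["Lisa"])        # False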
7.4.3 World variables in or out
The system defined above treats its worlds just like regular entities. We can have variables
and constants that pick them out. We can abstract over them. And so forth.
Not all the intensional logics encountered in semantics are so kind to their world vari-
ables. It is common to find them pushed out into the interpretation scheme itself. The
resulting interpretation function is likely to look like this:
[[·]]M,g,w


where w is the world of interpretation. Thus, if we use the constants defined above, we can
form expressions like bald(bart). For us, this picks out a function from worlds to truth
values. In a system of interpretation like the one suggested by [[·]]M,g,w, we would instead
have a value of T or F for this expression, depending on the value of [[bald(bart)]]M,g(w).

7.4.4 Sentence denotations and the actual world


Our logic derives propositional meanings for sentences. But we typically want the value
of these propositions in our world. To satisfy this need, we add a distinguished constant:
(7.2) @ is a constant of type s [[@]]M,g = ♁ (the actual world)
Ex. A.45

7.4.5 Extensionalization: a useful fiction


It can be useful to ignore intensionality. This is usually done by simply defining an exten-
sional system. But, as we’ve seen, these extensional systems are inadequate. Is there an
alternative to this inadequacy? Yes!
One can say that one has a fully intensional system, but that one is going to silently
saturate all of one’s predicates with the actual world. In our system, this means that we
have to do abstraction:

(7.3) λx. bald(x)(@) : ⟨e, t⟩        (7.4) λf. the(f)(@) : ⟨⟨e, ⟨s, t⟩⟩, e⟩
One could even specify this as part of the denotations:
(7.5) ‖bald‖M = the function Φ such that Φ(d) = T iff d is bald at [[@]]M,g

7.5 Assessment
7.5.1 Entities or individual concepts?
We left our proper names out of the intensional sphere, so to speak, by giving them de-
notations that are independent of the world we are in. It is wise to question this move.
Exercise A.43 pushes you in that direction.
Ex. A.43

7.5.2 Tautologies and contradictions


Despite major enrichments and near model-theoretic overload, it is arguably true that we
still don’t have enough meanings. Intensions distinguish most of the truths from one an-
other, and they distinguish most of the falsities from one another. But they still do not
make enough distinctions:


(7.6) a. Ed believes Sue squared the circle.


b. Ed believes two and two make seven.
(7.7) a. Ed believes Sue is self-identical.
b. Ed believes two and two make four.
In (7.6), the complement sentences denote necessary falsities; they both denote the empty
set. In (7.7), the complement sentences denote necessary truths; they both denote the full
set of worlds.

7.5.3 The definite determiner


Hypothesis (7.1b) says that the arguments to predicates are of type e. This interacts well
with hypothesis (7.1a), which says that proper names are of type e. It now makes good
semantic sense that proper names are free to be the arguments to almost any predicate.
Quantifiers and definite descriptions call this neat picture into question, though. There
is not space here to discuss the issue of quantified arguments (see Heim and Kratzer 1998).
But we can look briefly at why definites are a challenge for the current theory.
The bottom line is that we can’t get away with saying that definite descriptions are
of type e in the way proper names are. The best way to see this is to place them into
intensional contexts, where there is a chance of seeing them evaluated with respect to
worlds different from our own:
(7.8) a. Al Gore might have been the president.
b. Sue believes that Bart is the culprit, but in fact the culprit is Maggie.
Evidently, Sue’s belief worlds 9 are all such that Bart is the culprit in 9 — Sue is somehow
out of touch with reality. We can capture this easily by allowing that the culprit varies from
world to world:  
 
 
 ♁
 →
- 

 
 
 ♀ -→
 

 
 

-
 
 ♂ 
 .. 
.
But, by our function–type correspondence, this means that definites must be of type *s, e+,
the type of individual concepts. This leads directly to the prediction — quite false! — that
definites cannot be the arguments of predicates. Something has got to give, and exercise
A.46 asks you what that something is.
Ex. A.46

7.5.4 Logical omniscience


Our treatment of the attitude predicate believe predicts logical omniscience in the fol-
lowing sense: if you believe p, and p ⊆ q, then you believe q as well. This is by no
means an innocent assumption. If we check basic cases, we get some confirmation that
there is something correct about it. But figuring out whether one thing entails another is
a supremely challenging task in every relevant sense, and it is routine to find that peo-
ple have failed to grasp that their belief in p entails that they must also, if they are to be
regarded as rational in the strictest sense, believe q.
Ex. A.47

7.5.5 On the size of worlds


The name ‘possible world’ is misleading. It would be more accurate to say ‘possible
reality’. When we talk about ‘possible worlds’, we’re not comparing different planets, or
different regions of the solar system, but rather entire realities.
This is rather counterintuitive. If I say to you, Chris is happy, I pick out all the realities
in which Chris is happy (and perhaps indicate that I think ours is one of them). What
seems like a small claim turns out to be a rather large one model-theoretically.
There are semantic theories that do away with possible worlds (realities), in whole
(Barwise and Perry 1983) or in part (Kratzer 2007).


Handout 8: Building a suitable machine


Kaplan (1989:541) is in search of a suitable “machine” — a system that will
capture the generalizations he is after in a way that seems illuminating. This
handout provides some guidance for people who are on such a search.

Very often, one constructs a model only to find that it is not expressive enough to make
certain distinctions. Here are some examples, along with responses.

Individuals
• Problem: Basic propositional logic has only expressions of type t. There is no way
to talk about Bob or Carol or Ted or Alice.

• Response: Add a domain of entities and define functions that take members of that
domain (and functions built from them) as arguments. (Handout 5.)

Worlds
• Problem: Even D⟨e,t⟩ isn't sufficient. We cannot, for instance, give a semantics for
belief statements in these terms.

• Response: Add a new set of entities, Ds = W, the set of possible worlds. Sentence
meanings are now functions from worlds into truth values; VP meanings are now
functions from entities to sentence meanings; and so forth. (Handout 7.)

Times
• Problem: A model with no elements for representing time cannot give a semantics
for any explicitly temporal expressions.

• Response: Add a new set of entities, Dj = R, the class of times. Sentence meanings
are now functions from times into something else.

Ex. A.48,
A.49


Handout 9: Quantifiers
This handout is a brief introduction to quantifiers. I can’t hope to be compre-
hensive, but I can try to impart a sense for the deep results of this subfield of
semantics. (For a comprehensive, technical, but nonetheless accessible review
of the field today, I recommend Peters and Westerståhl (2006).) My treatment
is purely extensional; see the exercises for tips on intensionalization.

9.1 The view from first-order logic


First-order logic is generally the touchstone for theories of quantification, so it is worth
getting acquainted with the thinking behind the notation and denotations that people usu-
ally use in those contexts. Here’s a standard look at the universal:

(9.1) [[∀x cyclist(x)]]M,g = T iff every d ∈ De is such that [[cyclist(x)]]M,g[x↦d] = T

The instructions for the universal tell us to interpret an open sentence under every possible
interpretation of its free variable. The universal force comes from our exhaustive search through
the domain. If we syntacticized this, we would see a close connection with conjunction
(though not an exact one; Boolos et al. 2002:§10):

cyclist(bart) ∧ cyclist(lisa) ∧ cyclist(marge) · · · for every individual constant.

Here’s the standard existential:

(9.2) [[∃x cyclist(x)]]M,g = T iff there is a d ∈ De such that [[cyclist(x)]]M,g[x↦d] = T

Here, the existential force comes from the fact that we are looking for some entity d
such that if we interpret the scope formula with d as the value of x, then we get T. If we
syntacticize, we end up with a disjunctive statement:

cyclist(bart) ∨ cyclist(lisa) ∨ cyclist(marge) · · · for every individual constant.


Ex. A.50

9.2 The view from generalized quantifier theory


Generalized quantifier (GQ) theory gains much of its motivation from the fact that many
quantifiers of natural language are not definable in first-order terms at all.
The central idea behind GQ theory is that quantifiers denote in the space ⟨⟨e, t⟩, t⟩. This
is the generalized quantifier domain — sets of sets, if we translate out of the functional


notation. This is the sense in which the theory is ‘generalized’: anything in this functional
domain is quantificational (and there are a lot of functions in this domain!).
Barwise and Cooper (1981) is a classic of GQ theory (of semantics!). Additional piv-
otal work in the primary literature: Keenan and Stavi (1986), Keenan and Faltz (1985),
Keenan (2002), and Peters and Westerståhl (2006).

9.2.1 Terminology (by example)


(9.3) a. every is a determiner.
b. [[every]]M,g is a determiner meaning (a function from properties into gen-
eralized quantifiers; a relation on properties)
c. every linguist is a quantificational DP.
d. [[every(linguist)]]M,g is a generalized quantifier.

9.2.2 The type of quantificational determiners


The lambda calculus and generalized quantifier theory combine to give us a theory of
quantification that is in accord with the constituency of natural language.
⟨  ⟨e, t⟩  ,  ⟨  ⟨e, t⟩  ,       t       ⟩ ⟩
  restriction   nuclear scope   proposition

          S
        /   \
      DP     VP
     /  \
   Det   NP

9.2.3 Some representative determiner meanings


On the left, I provide the functional perspective given to us by the lambda calculi of hand-
out 5. On the right, I give equivalent set-theoretic formulations of the sort that are very
common in the literature on generalized quantifiers.
(9.4) every =def λfλg. ∀x f(x) → g(x)        {⟨X, Y⟩ | X ⊆ Y}
(9.5) bartGQ =def λf. f(bart)                {X | [[bart]]M,g ∈ X}
(9.6) some =def λfλg. ∃x f(x) ∧ g(x)         {⟨X, Y⟩ | X ∩ Y ≠ ∅}


(9.7) no =def λfλg. ∀x f(x) → ¬g(x)          {⟨X, Y⟩ | X ∩ Y = ∅}
(9.8) most =def λfλg. |{x | f(x) ∧ g(x)}| > |{x | f(x) ∧ ¬g(x)}|
                                             {⟨X, Y⟩ | |X ∩ Y| > |X ∩ (U − Y)|}
Ex. A.51,
A.52,
A.53
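The set-theoretic formulations on the right are easy to state in Python (my sketch; the universe and extensions are assumptions for illustration):

    U = {"Bart", "Lisa", "Marge", "Burns"}
    cyclist = {"Bart", "Lisa", "Marge"}
    bald = {"Burns"}

    every = lambda X, Y: X <= Y                    # (9.4)
    some  = lambda X, Y: bool(X & Y)               # (9.6)
    no    = lambda X, Y: not (X & Y)               # (9.7)
    most  = lambda X, Y: len(X & Y) > len(X - Y)   # (9.8)

    print(every(cyclist, bald), no(cyclist, bald))   # False True
    print(most(cyclist, {"Bart", "Lisa"}))           # True: 2 of 3 cyclists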
9.3 Conservativity
GQ theory would seem to be open to the following challenge: there are excessively many
objects in ⟨⟨e, t⟩, t⟩, to say nothing of the number of quantificational determiner meanings.
The richness of the space seems out of step with the phrases we actually encounter in
language.

(9.9) Assume the domain of entities contains 5 objects.

a. The number of ⟨e, t⟩ functions is 2^5 = 32.

b. The number of ⟨⟨e, t⟩, t⟩ functions is 2^32 = 4,294,967,296.

c. The number of ⟨⟨e, t⟩, ⟨⟨e, t⟩, t⟩⟩ functions is (2^32)^32 = 2^1024.

But theorists have a remarkable, nontrivial response to this objection: they propose the
following universal claim:

(9.10) All natural language determiners are conservative.

One way to define conservativity is as follows:

(9.11) A determiner meaning D is conservative iff D(A)(B) = D(A)(A ∩ B).

In an important sense, a conservative determiner is grounded in its restriction: we needn't
look beyond that space when evaluating the truth of the entire statement. Keenan and Stavi
(1986) and Keenan (2002) report a massive reduction in the number of determiner mean-
ings if we consider only the conservative ones. Moreover, the restriction makes cognitive
sense: it means that we needn’t search the entire model when evaluating the truth of a
quantified statement. We really need only look at the restriction and its intersection with
other properties.
Testing for conservativity is straightforward: one uses the following template:

(9.12) A determiner Det is conservative iff the following equivalence holds (feel free
to change singular to plural):
Det linguists smoke ⇔ Det linguists are linguists that smoke.
Ex. A.54
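Conservativity can also be brute-force checked over a small universe. In the following Python sketch (mine), every and most pass, while a made-up onlyD (a reverse every, conservative on its right argument, as discussed below) fails:

    from itertools import chain, combinations

    U = {1, 2, 3}
    subsets = [set(s) for s in chain.from_iterable(
        combinations(sorted(U), n) for n in range(len(U) + 1))]

    def conservative(D):
        return all(D(A, B) == D(A, A & B) for A in subsets for B in subsets)

    every = lambda A, B: A <= B
    most  = lambda A, B: len(A & B) > len(A - B)
    onlyD = lambda A, B: B <= A      # 'reverse every': conservative on the right

    print(conservative(every), conservative(most), conservative(onlyD))
    # True True False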


9.4 Two other properties of determiners


GQ theorists have discovered an amazing array of ways in which to classify quantifica-
tional determiners, and many of those classifications have been put to use in capturing
restrictions on the distribution of quantifiers involving them. I provide here just a small
sample as well as a few references, as a way of getting started on the literature.

9.4.1 Intersectivity
The property of intersectivity gets at the notion of ‘indefinite’, but it is a broader class
than the (historical) morphological classification suggests.
(9.13) A determiner meaning D is intersective iff

                   D(A ∩ B)(U)   where U is the universe of discourse
    D(A)(B) ⇔  {   D(U)(A ∩ B)
                   D(B)(A)

Some examples:
(9.14) a. Some cyclists are bald. ⇔ Some bald cyclists exist.
b. Exactly three cyclists are bald. ⇔ Exactly three bald cyclists exist.
c. No cyclists are bald. ⇔ No bald cyclists exist.
d. Fewer than ten cyclists are bald. ⇔ Fewer than ten bald cyclists exist.
e. Every cyclist is bald. ⇎ Every bald cyclist exists.
(In a situation containing three cyclists, all of whom are hairy, the right
side is trivially true but the left side is false.)
f. Most cyclists are bald. ⇎ Most bald cyclists exist.
Keenan (1996) calls upon intersectivity to formulate a generalization about which deter-
miners can appear as pivots in the English existential construction:
(9.15) There is/are
            some bald cyclists in the race.
            exactly seven bald cyclists in the race.
            fewer than ten bald cyclists in the race.
          * every bald cyclist in the race.
          * most bald cyclists in the race.

The hypothesis is that only intersective determiners can be pivots. However, only is not
intersective, and thus it can be regarded as a challenge to this generalization (but see ex-
ercise A.54). Keenan (2003) takes up the challenge and proposes that all and only the


co-conservative determiners can appear as pivot. A determiner is co-conservative iff it is
conservative on its right argument. only, as a kind of reverse every, has this property.
Keenan (1993) proves that the set of intersective functions is isomorphic to the set of
quantifiers derived by applying a determiner meaning to the domain of individuals. Like
the conservativity result, this makes the space of generalized quantifiers seem much less
wild.

9.4.2 Monotonicity
Monotonicity properties tell us important things about inference patterns for quantifiers.
Here are the basic definitions:

• A determiner D is left upward monotone iff D(A)(B) ⇒ D(C)(B), where A ⊆ C

• A determiner D is right upward monotone iff D(A)(B) ⇒ D(A)(C), where B ⊆ C.

• A determiner D is left downward monotone iff D(A)(B) ⇒ D(C)(B), where C ⊆ A.

• A determiner D is right downward monotone iff D(A)(B) ⇒ D(A)(C), where C ⊆ B.

• A determiner D is right (left) non-monotone iff it is neither right (left) upward mono-
tone nor right (left) downward monotone.

It is important to distinguish right and left monotonicity. For instance, every is left down-
ward but right upward. Here are some simple tests:

(9.16) a. Det is left downward monotone iff Det linguists smoke ⇒ Det phonolo-
gists smoke.
b. Det is left upward monotone iff Det phonologists smoke ⇒ Det linguists
smoke.
c. Det is right downward monotone iff Det linguists smoke ⇒ Det linguists
smoke cigars.
d. Det is right upward monotone iff Det linguists smoke cigars ⇒ Det linguists
smoke.
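The same brute-force style verifies the profile of every (my sketch, reusing subsets and every from the conservativity sketch in section 9.3; 'not p or q' encodes the implication p ⇒ q):

    left_down = all(not every(A, B) or every(C, B)
                    for A in subsets for B in subsets for C in subsets if C <= A)
    right_up  = all(not every(A, B) or every(A, C)
                    for A in subsets for B in subsets for C in subsets if B <= C)
    print(left_down, right_up)   # True True: every is left downward, right upward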


9.4.2.1 DP coordination
Barwise and Cooper (1981) propose that monotonicity is at the heart of our intuitions
about whether to use and or but to coordinate DPs.
(9.17) a. no linguists {but/?? and} many topologists
b. no linguist {?? but/and} no topologists
c. no linguist {but/?? and} every topologist
Exercise A.55 asks you to explore the full range of data (testing left and right) to try to
determine the precise generalization.
Ex. A.55
9.4.2.2 Polarity sensitive items
The most famous application of monotonicity properties is in the area of polarity sensi-
tivity. Negative polarity items prefer ‘negative’ environments, and positive polarity items
prefer ‘positive’ ones.
Ladusaw (1980) proposed that negative polarity items were licensed in downward en-
tailing environments, and his hypothesis has since been refined, expanded, rejected and
resurrected numerous times. Why do people find it so compelling? Because it works in
such a surprising array of cases. For instance, as noted above, every is left downward
and right upward. True to Ladusaw’s generalization, a polarity item like ever is happy in
every ’s restriction but not in its nuclear scope.
(9.18) a. Every linguist who has ever taken a model theory class knows about com-
pactness.

b. *Every linguist has ever taken a model theory class.
I recommend van der Wouden 1997 for very fine-grained analysis of various polarity items,
using a variety of different strengths of negation. I also urge readers who find these gener-
alizations compelling to read Giannakidou 1999, which reviews the major problems with
these hypotheses and offers a compelling alternative.

9.5 The terrain not covered


It would be easy to do an entire presession course called ‘Quantifiers for Linguists’. I’ve
only just scratched the surface in this handout. We’ve not looked at all at the nature of
quantifier scope, the role of partial functions, the problems that arise when fitting quanti-
fiers into a compositional linguistic theory, and much else besides.
Carpenter (1997) offers a first-rate overview of more of the theory, Heim and Kratzer
(1998) situate quantifiers in the broader linguistic scene, and Keenan and Faltz (1985) and
Peters and Westerståhl (2006) provide a wealth of details about the GQ domain itself.


Handout 10: Pragmatic connections


This handouts covers some of the most famous ways in which semanticists
have put their hooks into the context.

10.1 Indexicality
Kaplan (1989) is a pioneering work in formal pragmatics. It is the first detailed set of
arguments and theoretical proposals for a robust theory of indexicals like here, now, us,
and me. The paper is big and wide-ranging. It presents a few different formalizations of
the central insight, which is that indexicals are inherently tied to the utterance context and
thus immune to manipulation by any and all operators. (This position has turned out to
be too rigid; see section 10.1.3 below.) The present handout seeks to convey the central
insight with a simple formal system.

10.1.1 Formal system


Kaplanian contexts A context is a tuple c = ⟨cS, cH, cT⟩ such that

i. cS is the speaker of c.
ii. cH is the hearer of c.
iii. cT is the time of c.

I’m leaving out locations for now, to keep things streamlined.

Just to be clear A context is not a tuple of symbols, but rather a tuple of entities. An
example context: ⟨a particular speaker, a particular hearer, a particular time⟩.

Types We have the usual types for entities and truth values (handout 5) as well as a new
type j for times. (There is no type for contexts!)


Constants The only twist here is that our intensional parameter is a time rather than a
world. The items to watch are the indexicals:

i. bart, lisa, . . . are constants of type e.


ii. burp, giggle, muse are constants of type ⟨e, ⟨j, t⟩⟩.
iii. me is a constant of type e.
iv. you is a constant of type e.
v. now is a constant of type j.

(There are no constants for contexts!)

Variables We have variables over expressions of any type. (Since there is no type for
contexts, there are no variables for contexts.)

Combinatorics It’s just function application as usual: If α is of type *σ, τ+ and β is of


type σ, then (α(β)) is of type τ.

The interpretation function The interpretation function is now dependent upon a model,
an assignment, and a context:
[[·]]M,g,c

A sampler of interpretations

i. ‖burp‖M = the function Φ ∈ D⟨e,⟨j,t⟩⟩ such that Φ(d) = the function π ∈ D⟨j,t⟩ such
that π(i) = T iff d burps at time i.

ii. [[me]]M,g,c = cS

iii. [[you]]M,g,c = cH

iv. [[now]]M,g,c = cT

v. [[α(β)]]M,g,c = [[α]]M,g,c([[β]]M,g,c)

(The context never changes during the recursive interpretation!)
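The setup can be sketched in Python (mine; the particular contexts are assumptions for illustration): a context is a tuple, and the character of an indexical is a function from contexts:

    from collections import namedtuple

    Context = namedtuple("Context", ["speaker", "hearer", "time"])

    me  = lambda c: c.speaker        # characters: functions from contexts
    you = lambda c: c.hearer
    now = lambda c: c.time

    c1 = Context("Bart", "Lisa", "noon")
    c2 = Context("Lisa", "Burns", "3pm")
    print(me(c1), me(c2))            # Bart Lisa: one character, two contents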


10.1.2 Character and content


Character The mapping from contexts into our domains.

Content The functions in those domains.

λc. [[burp(you)(now)]]M,g,c

[ c  ↦ [[burp(you)(now)]]M,g,c
  c′ ↦ [[burp(you)(now)]]M,g,c′
  ⋮                            ]
Without a context, we can’t get at the semantic meaning. Thus, sentences containing
indexicals are inherently tied to the context of utterance.

10.1.3 Indexicals are scopeless


Kaplan’s system predicts that we will never see indexicals shifted during the course of
semantic composition. The basic facts in English seem to confirm this:

(10.1) In some contexts, I am short.

= Interpreted in context c, this says that there are (presumably different) con-
texts in which cS is short.
≠ In some contexts c′, c′S is short.

(10.2) In some contexts, the speaker is short.

The second meaning is the one we would get if we could shift indexicals around. It would
predict that we can interpret (10.1) as the trivially true assertion that we can find contexts
in which the speaker in that context is short; this is the meaning we perceive for (10.2).
But this reading is absent from (10.1), and Kaplan’s system tells us why: there cannot be
an operator of the sort In some contexts c, because we would require at least a variable
over contexts for it to work.
Ex. A.56, A.57


10.2 Deictic pronouns


10.2.1 An initial hypothesis
We’ve not yet addressed the question of how to translate sentences containing deictic pronouns (e.g., He left, She likes him). One obvious hypothesis is to treat them as free variables. For example (and setting aside what to do with the gender information on the pronoun):

(10.3) It is boring ⇝ boring(x)

Our interpretation function [[·]]M,g is ready to interpret such expressions. Extensionally, [[boring(x)]]M,g = T iff g(x) is boring. Intensionally, [[boring(x)]]M,g denotes the set of worlds w such that [[boring(x)]]M,g(w) = T.
But this treatment presumes that I know exactly which thing you are talking about.
This might be true in many situations. But suppose you tell me that you saw a movie last
night. I can understand your statement even if I cannot name the movie you saw. You
might continue by saying that it was boring. I might still not know which movie you
saw, but I’ve learned something about it nonetheless. You might continue this way: it
starred Marlon Brando, it was in black and white, and so forth. At some point, I might
figure out exactly what movie you are talking about, but that step need not happen for me
to understand your utterances. Thus, we would like something a little less specific than
(10.3) suggests.

10.2.2 Denotations as sets of assignment functions


Let’s return to our extensional lambda calculus of handout 5. I leveled the charge that such
systems don’t provide us with enough meanings, because they collapse everything into
Dt = {T, F} in the end. But we can get at more meanings if we make one subtle change.
Let’s think first about (10.4).

(10.4) [[happy(x)]]M

I’ve deliberately left the assignment function off. One could respond to this by saying,
“Well, I can’t evaluate this. Without a tool for interpreting the variable x, I am stuck at
that point”. But we could also view this as asking us to interpret the formula under every
possible variable assignment, keeping only those that make the formula true. Suppose the
denotation of happy is as in (10.5), and assume the set of assignments is the one from
section 5.4.7.1.


 
 
 

(10.5) [[happy]]M = [a picture, lost in this extraction, of a function mapping three entities to F, T, and F, respectively]

Then

(10.6) [[happy(x)]]M = [a picture, lost in this extraction, of a set of three assignment functions: each maps x to the one entity that [[happy]]M sends to T in (10.5), and they differ only in what they assign to y]

Thus, we have created more meaning distinctions: two sentences with free variables can have radically different meanings even if they agree on some gs. Moreover, we have an interestingly different view of deictic pronouns than the one we started with. Now, I need not know exactly which entity you are referring to when you use a pronoun. I might just be learning things about a given variable — a discourse referent (Karttunen 1976).
This treatment also launches us on our way to exploring dynamic systems like those of Heim (1983), Kamp and Reyle (1993), and Groenendijk and Stokhof (1991), since the fundamental shift in those theories is the meanings-as-assignment-sets perspective. [Ex. A.58, A.59]

10.3 Propositions and probabilities


So far, everything we have done has been rooted, directly or indirectly, in sets. One place
they feature very prominently is in the realm of propositions. We conceive of them as
functions from worlds into {T, F}, which is equivalent to thinking of them as sets of worlds
(namely, those worlds w that map to T; see handout 3, section 3.4.3).
This is the norm in semantics and pragmatics, but, in many ways, it is a funny choice.
Sets are blunt instruments for making meaning distinctions. They treat all their members alike, for instance (you’re in or you’re out), and they have incredibly strong logical properties (handout 7, section 7.5.4).
This section briefly explores a revision to the basic conception. Here, it is a minor revision, but one that leads us to see some new relationships. I leave it to you to try more
radical departures from the orthodoxy.


10.3.1 Propositions as probability distributions


(10.7) Where W is a countable set, a function P : ℘(W) → [0, 1] is a probability distribution iff
a. P(W) = 1
b. Probabilities are additive: if p and q are disjoint propositions, then P(p ∪ q) = P(p) + P(q)

(10.8) The probability distribution P mimics the proposition q (a subset of W) iff
i. P({w}) = 0 iff w ∉ q
ii. P({w}) = P({w′}) for all w, w′ ∈ q

Clause (i) mimics nonmembership, and clause (ii) ensures that the probabilities are evenly distributed across the worlds in the proposition (maximum entropy) — these distributions, like the sets they come from, treat all their “members” (things with positive probability) alike. An example:

(10.9) W = {w1, w2, w3}
p = {w1, w2}
P: w1 ↦ .5, w2 ↦ .5, w3 ↦ 0

Of course, probability distributions need not be so uniform, but we’ll concentrate on the ones that are, so that we keep a tight fit with the semantics. [Ex. A.60]
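As a concrete illustration, here is a minimal Python sketch of (10.7)–(10.9). Representing a distribution by per-world weights is my own choice, not the handout’s; the world names mirror (10.9).

    # A distribution is represented by per-world weights; P(A) sums the
    # weights of the worlds in A, which builds in additivity (10.7b).
    def make_distribution(weights):
        assert abs(sum(weights.values()) - 1.0) < 1e-9   # P(W) = 1, (10.7a)
        return lambda proposition: sum(weights[w] for w in proposition)

    def mimic(q, W):
        # The maximum-entropy distribution mimicking q, a subset of W (10.8).
        return make_distribution({w: 1 / len(q) if w in q else 0.0 for w in W})

    W = {"w1", "w2", "w3"}
    P = mimic({"w1", "w2"}, W)           # the distribution in (10.9)
    print(P({"w1"}), P({"w3"}), P(W))    # 0.5 0.0 1.0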
10.3.2 Degrees of belief
The theory of belief statements of handout 7 is very good at getting at perfect belief and perfect disbelief. We can of course mimic these extremes with probability distributions:
(10.10) Where Ba is the belief state for the individual a and Pa is the probability distribution that mimics Ba (given some set of worlds W):
a. a believes p: Ba ⊆ p, i.e., Pa(p) = 1
b. a disbelieves p: Ba ∩ p = ∅, i.e., Pa(p) = 0
But there is a great deal of middle ground between these two. In set terms, we can capture this by saying that Ba is consistent with the content of p (Ba ∩ p ≠ ∅). But what about more
subtle degrees of belief like suspicion, strong suspicion, doubt, and so forth? It’s hard to
see how to define these in terms of sets alone, but probability distributions provide all the
intermediate ground we could want (assuming a large and rich enough W):


(10.11) a. a strongly suspects p: Pa(p) = .98
b. a doubts p: Pa(p) = .2
c. a is unbiased about p: Pa(p) = .5
The numerical values here are likely to be context dependent. One can also imagine purely
pragmatic applications. For instance, these values could represent an individual’s degree
of confidence or commitment — which could be vital for notions like assertability.
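Continuing the sketch above (same invented worlds and helper functions): the extremes of (10.10) fall out of mimicking distributions, and non-uniform weights supply the middle ground of (10.11).

    Ba = {"w1", "w2"}                    # a's belief state, a subset of W
    Pa = mimic(Ba, W)
    print(Pa({"w1", "w2"}) == 1.0)       # a believes {w1, w2}, (10.10a)
    print(Pa({"w3"}) == 0.0)             # a disbelieves {w3}, (10.10b)

    # Non-uniform weights supply the middle ground of (10.11):
    Pa2 = make_distribution({"w1": 0.49, "w2": 0.49, "w3": 0.02})
    print(Pa2({"w1", "w2"}))             # 0.98: strong suspicion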

10.3.3 Degrees of relevance


In the realm of pure pragmatics, relevance is of vital importance. We have strong intuitions
not only about what is relevant to what, but also about degrees of relevance — we can make
comparative judgments about the replies in (10.12):
(10.12) Where does Barbara live?
a. In Russia.
b. Her email address ends in “.ru”.
c. I think she visits Russia a lot.
d. It’s cloudy today.
We have a clear decline in relevance from the first to the last reply to the question. A proba-
bilistic perspective lets us make use of the standard statistical measure of relevance to take
measurements that reflect this. We require just one new concept: conditional probability:
(10.13) P(A|B), read ‘the probability of A given B’, is defined as P(A ∩ B)/P(B)

The conditional probability of p given q can be significantly higher than the probability of p alone. If we are presented with four possibilities and some information eliminates two of them, then we have a great gain. And this seems to be what is happening in (10.12). The questioner has little idea of where Barbara lives. The first utterance eliminates many, many possibilities. The other answers eliminate some possibilities, at least conditionally. This is likely true of “It is cloudy”, but its contribution will pale in comparison to the others.
Here is a proposed measure of relevance:
(10.14) The relevance of q to p is given by P(p|q) − P(p).
We can use this mostly for comparative judgments: for saying that q is more relevant to p than q′ is, and so forth.
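Here is a sketch of (10.13) and (10.14), reusing make_distribution from the earlier sketch. The eight-world space and the particular propositions are invented to echo (10.12): an informative answer raises P(p|q) above P(p), while an irrelevant (independent) one leaves it unchanged.

    def conditional(P, a, b):
        return P(a & b) / P(b)              # P(A|B) = P(A ∩ B)/P(B), (10.13)

    def relevance(P, p, q):
        return conditional(P, p, q) - P(p)  # the measure in (10.14)

    W8 = frozenset(range(8))                # eight equally likely worlds
    P8 = make_distribution({w: 1 / 8 for w in W8})
    lives_in_russia = frozenset({0, 1})     # p: the worlds where Barbara lives in Russia
    email_ends_ru = frozenset({0, 1, 2})    # q: mostly overlaps with p
    cloudy = frozenset({0, 2, 4, 6})        # q': probabilistically independent of p
    print(relevance(P8, lives_in_russia, email_ends_ru))   # about 0.417
    print(relevance(P8, lives_in_russia, cloudy))          # 0.0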

Bibliography
Barker, Chris. 2007. Direct compositionality on demand. In Barker and Jacobson (2007),
102–131.

Barker, Chris and Pauline Jacobson, eds. 2007. Direct Compositionality. Oxford: Oxford
University Press.

Barwise, Jon and Robin Cooper. 1981. Generalized quantifiers and natural language. Lin-
guistics and Philosophy 4(4):159–219.

Barwise, Jon and John Perry. 1983. Situations and Attitudes. Cambridge, MA: MIT Press.

Beaver, David Ian. 1997. Presupposition. In van Benthem and ter Meulen (1997), 939–
1008.

van Benthem, Johan. 1991. Language in Action: Categories, Lambdas, and Dynamic
Logic. Amsterdam: North-Holland.

Boolos, George S., John P. Burgess, and Richard C. Jeffrey. 2002. Computability and
Logic. Cambridge: Cambridge University Press, 4 ed.

Carpenter, Bob. 1997. Type-Logical Semantics. Cambridge, MA: MIT Press.

Cooper, Robin. 1983. Quantification and Syntactic Theory. Dordrecht: D. Reidel.

Curry, Haskell B. and Robert Feys. 1958. Combinatory Logic, Volume 1. Amsterdam:
North-Holland.

Dowty, David. 2007. Compositionality as an empirical problem. In Barker and Jacobson (2007), 23–101.

Gamut, L. T. F. 1991a. Logic, Language, and Meaning, Volume 1. Chicago: University of Chicago Press.

Gamut, L. T. F. 1991b. Logic, Language, and Meaning, Volume 2. Chicago: University of Chicago Press.

Giannakidou, Anastasia. 1999. Affective dependencies. Linguistics and Philosophy 22(4):367–421.

Ginzburg, Jonathan and Ivan A. Sag. 2001. Interrogative Investigations: The Form, Mean-
ing, and Use of English Interrogatives. Stanford, CA: CSLI.


Groenendijk, Jeroen and Martin Stokhof. 1991. Dynamic predicate logic. Linguistics and
Philosophy 14(1):39–100.
Halvorsen, Per-Kristian and William A. Ladusaw. 1979. Montague’s ‘Universal grammar’:
An introduction for the linguist. Linguistics and Philosophy 3(2):185–223.

Heim, Irene. 1983. On the projection problem for presuppositions. In Michael Barlow, Daniel P. Flickinger, and Michael T. Wescoat, eds., Proceedings of the 2nd West Coast Conference on Formal Linguistics, 114–125. Stanford, CA: Stanford Linguistics Association.

Heim, Irene and Angelika Kratzer. 1998. Semantics in Generative Grammar. Oxford:
Blackwell Publishers.

Hintikka, Jaakko. 1969. Reference and modality. In Leonard Linsky, ed., Philosophical
Logic, 145–167. Oxford: Oxford University Press.
Jackendoff, Ray. 1996. Semantics and cognition. In Shalom Lappin, ed., The Handbook
of Contemporary Semantic Theory, 539–559. Oxford: Blackwell Publishers.
Jacobson, Pauline. 1999. Towards a variable-free semantics. Linguistics and Philosophy
22(2):117–184.

Janssen, Theo M. V. 1997. Compositionality. In Johan van Benthem and Alice ter Meulen,
eds., Handbook of Logic and Language, 417–473. Amsterdam: Elsevier.

Kamp, Hans and Uwe Reyle. 1993. From Discourse to Logic. Introduction to Modeltheo-
retic Semantics of Natural Language, Formal Logic and Discourse Representation The-
ory. Dordrecht: Kluwer.
Kaplan, David. 1989. Demonstratives: An essay on the semantics, logic, metaphysics, and
epistemology of demonstratives and other indexicals. In Joseph Almog, John Perry, and
Howard Wettstein, eds., Themes from Kaplan, 481–614. New York: Oxford University
Press. [Versions of this paper began circulating in 1971].

Karttunen, Lauri. 1976. Discourse referents. In James D. McCawley, ed., Syntax and
Semantics, Volume 7: Notes from the Linguistic Underground, 363–385. New York:
Academic Press.

Keenan, Edward L. 1993. Natural language, sortal reduction, and generalized quantifiers.
Journal of Symbolic Logic 58(1):314–325.
Keenan, Edward L. 1996. The semantics of determiners. In Shalom Lappin, ed., The
Handbook of Contemporary Semantic Theory, 41–63. Oxford: Blackwell.


Keenan, Edward L. 2002. Some properties of natural language quantifiers: Generalized quantifier theory. Linguistics and Philosophy 25(5–6):627–654.
Keenan, Edward L. 2003. The definiteness effect: Semantics or pragmatics? Natural
Language Semantics 11(2):187–216.
Keenan, Edward L. and Leonard M. Faltz. 1985. Boolean Semantics for Natural Language. Dordrecht: D. Reidel.
Keenan, Edward L. and Jonathan Stavi. 1986. A semantic characterization of natural
language determiners. Linguistics and Philosophy 9(3):253–326.
Kracht, Marcus. 2003. The Mathematics of Language. Berlin: Mouton de Gruyter.
Kracht, Marcus. 2007. The emergence of syntactic structure. Linguistics and Philosophy
30(1):47–95.
Kratzer, Angelika. 2007. Situations in natural language semantics. In Edward N.
Zalta, ed., Stanford Encyclopedia of Philosophy. CSLI, winter 2007 ed. URL
http://plato.stanford.edu/entries/situations-semantics/.
Ladusaw, William A. 1980. Polarity Sensitivity as Inherent Scope Relations. New York:
Garland. [Published version of the 1979 UT Austin PhD thesis].
Lewis, David. 1976. General semantics. In Barbara H. Partee, ed., Montague Grammar,
1–50. New York: Academic Press.
Montague, Richard. 1974. Formal Philosophy: Selected Papers of Richard Montague.
New Haven, CT: Yale University Press. Edited and with an introduction by Richmond
H. Thomason.
Muskens, Reinhard. 1989. A relational formulation of the theory of types. Linguistics and
Philosophy 12(3):325–346.
Muskens, Reinhard. 1995. Meaning and Partiality. Stanford, CA: CSLI/FoLLI.
Partee, Barbara H. 1984. Compositionality. In Fred Landman and Frank Veltman, eds.,
Varieties of Formal Semantics, 281–311. Dordrecht: Foris. Reprinted in Partee (2004),
153–181. Page references to the reprinting.
Partee, Barbara H. 1997. Montague semantics. In van Benthem and ter Meulen (1997),
5–91.
Partee, Barbara H. 2004. Compositionality in Formal Semantics: Selected Papers of Bar-
bara H. Partee, Volume 1 of Explorations in Semantics. Oxford: Blackwell Publishing.


Partee, Barbara H., Alice ter Meulen, and Robert E. Wall. 1993. Mathematical Methods
in Linguistics. Corrected 1st edition. Dordrecht: Kluwer.

Peters, Stanley and Dag Westerståhl. 2006. Quantifiers in Language and Logic. Oxford:
Blackwell.

Portner, Paul and Barbara H. Partee, eds. 2002. Formal Semantics: The Essential Read-
ings. Oxford: Blackwell Publishing.

Schönfinkel, Moses. 1924/1967. On the building blocks of mathematical logic. In Jean van Heijenoort, ed., From Frege to Gödel, 355–366. Cambridge, MA: Harvard University Press. Originally published 1924, Über die Bausteine der mathematischen Logik, Mathematische Annalen 99:342–372.

van der Wouden, Ton. 1997. Negative Contexts: Collocation, Polarity and Multiple Nega-
tion. London and New York: Routledge.

van Benthem, Johan and Alice ter Meulen, eds. 1997. Handbook of Logic and Language.
Cambridge, MA and Amsterdam: MIT Press and North-Holland.

Logic for Linguists, LSA Institute 2007, Stanford (Christopher Potts)

Handout A: Problems
Problems marked PRACTICE will push you deeper into the relevant concepts
from the handouts. Problems marked HARD are designed to be challenging.
Problems marked OPEN might not have neat resolutions. (And, in turn, prob-
lems not marked OPEN should be solvable in a reasonable amount of time.)

A.1 Relative truth


PRACTICE, OPEN
Background Section 2.1.1 of handout 2 introduced the truth values and a highly qualified perspective on truth. In introductory texts, one often gets the sense that truth is presumed to be Truth in some absolute sense — truth about our reality. But advanced texts and the primary literature tend to evaluate for truth in less absolute terms.

Your task Find two situations in which truth is evaluated, not with respect to our reality,
but rather with respect to a (potentially) different one. Might the people in these situations
show some awareness that their reality isn’t the only (or true) one? Might your situations
have an impact on how we design our semantic theory? If so, how? If not, why not?

A.2 Tarski’s hierarchy


PRACTICE, OPEN
Background In section 2.1.3 of handout 2, I distinguished object languages from metalanguages, and I recruited some notation from set theory (handout 3, section 3.1) to play the role of rigorous metalanguage.

Your task How would you respond to the skeptic who claimed that he needed a seman-
tics for set theory, to feel confident that its interpretation was well defined? Would giving
a semantic theory of ∈, ⊆ and the like satisfy the skeptic? (Why not?)

A.3 Idioms
PRACTICE, OPEN
Background Complex idioms are obvious challenges for compositionality in any of its forms. It seems that, on their idiomatic uses, none of the expressions in (A.1) has a meaning that is predictable from the meanings of its parts:


(A.1) a. Ed kicked the bucket. (Ed died.)


b. Ed bought the farm. (Ed died.)
c. Ed kept tabs on Joe. (Ed tracked Joe’s actions.)
One strategy would be to say that phrases like kick the bucket are lexical items. Thus, they
are where compositionality bottoms out: atomic items with primitive (non-derived) mean-
ings. But the following examples seem to reveal that at least some idioms are syntactically
complex:
(A.2) a. ??The bucket was kicked by Ed. (slightly odd on the idiomatic reading)
b. Tabs were kept on Joe (by Ed).

Your task Articulate why these facts are challenging for compositionality, and outline a
possible resolution (or a few of them).

A.4 Nondeterministic translation


OPEN, HARD
Background Throughout section 2.4 of handout 2, I tacitly assumed that translation into the logical language was functional: though two expressions might map to the same piece of logic, no single expression mapped to more than one piece of logic. That is, I assumed translation was functional (handout 3, section 3.4).

Your task Suppose that the translation process can be one-to-many. That is, suppose a single expression E translates to distinct logical symbols L and L′, and suppose that L and L′ denote different model-theoretic objects. What would this mean for the status of translation? How could a motivated argument for this nondeterministic translation inform the debate about whether interpretation is direct or indirect?

A.5 A subtlety of predicate notation


PRACTICE
Background The predicate notation for sets (section 3.1, handout 3) is mostly straightforward. But it hides some subtleties. Let’s tease out one of them.

Your task Describe the following sets in a way that is less obscure:
i. {n | n is a natural number and 3 < 4}
ii. {n | n is a natural number and 3 > 4}
What role does the second conjunct play in each case?


A.6 Exclusive union


PRACTICE
Background The union A ∪ B of two sets A and B contains the members of A ∩ B
(handout 3, section 3.1.4.2). This can be slightly counterintuitive.

Your task
i. Use ⊆ to specify the relation that lawfully holds between A ∩ B and A ∪ B.
ii. Define an exclusive union operator — symbol of your choosing — that excludes
A ∩ B.

A.7 Is it a function?
PRACTICE
Background Functions are defined in section 3.4 of handout 3. They are everywhere in
linguistics, so it is essential that you be able to spot them in the wild.
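Before the tasks, here is one way to make the definition concrete: a small Python check, with invented example relations, of whether a set of pairs is a function.

    def is_function(relation):
        """A relation (set of pairs) is a function iff no left coordinate
        is paired with two distinct right coordinates."""
        images = {}
        for a, b in relation:
            if a in images and images[a] != b:
                return False
            images[a] = b
        return True

    print(is_function({(1, "a"), (2, "a")}))   # True: many-to-one is fine
    print(is_function({(1, "a"), (1, "b")}))   # False: 1 has two images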

Your task For each of (A.3)–(A.9), say whether or not it is a function. If it is a function, say also whether it is an onto function and whether it is a total function.

(A.3)–(A.7) [five diagrams, lost in this extraction, of relations drawn with arrows between small sets]

(A.8) the relation R from nodes to nodes in tree structures that maps each node to its
daughter(s)

(A.9) the relation R−1 from nodes to nodes in tree structures that maps each node to its
mother(s)

A.8 Characteristic sets and functions


PRACTICE
Background Semanticists are apt to switch freely back and forth between talking in
terms of sets and talking in terms of functions. So it’s smart to get used to making such
mental remappings.

Your task Specify the characteristic set for the function depicted here: [a diagram, lost in this extraction, of a function from pictured entities into {1, 0}]

Your task Assume that the domain of inquiry is [a set of seven pictured entities, lost in this extraction]. Specify the characteristic function of this set: [a subset of three of those entities]


A.9 Some counting


PRACTICE, HARD
Background Handout 3 defines lots of different relations on sets. It can be illuminating to count the number of objects in a given domain (of functions, of sets, of tuples, etc.). It is revealing of the relationships between domains, and it also gives us a sense of just how large our models are.
The notation Dτ^Dσ is often used to describe total functions from the domain Dσ into the domain Dτ. So the range is inline, and the domain is a superscript. This is presumably because the number of such functions is

|Dτ|^|Dσ|

where |A| is the cardinality of the set A and the superscript is an exponent.
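As a quick sanity check on this counting formula, here is a small Python sketch (using sets different from the ones in the tasks below) that enumerates total functions directly:

    from itertools import product

    def all_total_functions(domain, codomain):
        """Enumerate every total function from domain into codomain as a dict."""
        domain, codomain = list(domain), list(codomain)
        return [dict(zip(domain, images))
                for images in product(codomain, repeat=len(domain))]

    # |codomain| ** |domain| functions; here 2 ** 2 = 4.
    print(len(all_total_functions({"p", "q"}, {True, False})))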

Your task Let A = {a, b, c} and let B = {T, F}.


i. How many total functions are there from A into B?

ii. How many objects are in the powerset of A? How does your result help us under-
stand why the powerset of A is often given as 2A ?

iii. How many objects are in A × B? And what is the general method for calculating the number of n-tuples in X1 × · · · × Xn?

iv. How many objects are in the set of all functions from A × B into A? In general, how many objects are in the set of all functions from X into the set of all functions from Y into Z (i.e., X → (Y → Z))?

A.10 Schönfinkelization and implications

PRACTICE, HARD, OPEN
Background Section 3.4.5 of handout 3 discusses schönfinkelization with only functions in mind. But the idea is in evidence in natural language as well:
(S) a. If I take out the trash, then if I mow the lawn, I will get my allowance.
b. If I take out the trash and mow the lawn, I will get my allowance.
c. If I mow the lawn and take out the trash, I will get my allowance.
d. If I mow the lawn, then if I take out the trash, I will get my allowance.
These statements have identical truth conditions. If you write (a) or (d), you do well to
rewrite it as (b) or (c), which are easier to understand. But your choice doesn’t affect the
claim you are making about the world.


Your task Articulate the intuitive connection between the data in (S) and Schönfinkel’s
trick.

Your task Create PL or PLf translations of the sentences in (S) and give their truth table.
What do you see?

A.11 nor
PRACTICE
Background There are many more definable connectives than appear on handout 4. A
couple of them have the magical property of being truth functionally complete, and one
might even be a reasonable translation of an English word. Let’s look.

Your task Define a PL connective that seems suitable for the English expression neither. . . nor. (You can imagine that it’s just nor you’re defining, so that you have a binary operator akin to ∨.)
i. State the translation hypothesis in a form comparable to that of hypothesis (4.7b).
(You can make up your own symbol.)
ii. Give the type for your connective, using the system of section 4.3.1.1.
iii. Provide the interpretation for your connective, in the manner of section 4.3.2.2.

A.12 The type definition for PLf


PRACTICE
Background The type definition given in section 4.3.1.1 of handout 4 is restrictive in two ways. First, all types have t inputs (where the input is the left member). The input is never anything more complex like ⟨t, t⟩. Second, it is finite: only t, ⟨t, t⟩, and ⟨t, ⟨t, t⟩⟩ are types.

Your task Generalize the type definition so that it specifies infinitely many types, but
maintain the restriction that inputs are always t.

A.13 A more readable PLf


PRACTICE, HARD
Background Section 4.3.1.2 of handout 4 defines the syntax of PLf. It is an efficient definition, and I proposed it because it means that we can fit this system into the ones that come later with no fuss. But the results quickly become hard to parse. Some examples, with their more intuitive versions at right:


i. ((∧(p))(q)) (q ∧ p)

ii. ((∨((∧(p))(q)))(p)) (p ∨ (q ∧ p))

iii. ((∧(¬(p)))(p)) (p ∧ ¬p)

The expressions are just tools for helping people understand what is happening with the
denotations (functions). At present, they are not particularly illuminating.

Your task Devise some new syntactic rules, replacements for those in (4.3.1.2), that
determine expressions of the sort at right but maintain the virtue of the current system that
∧, ∧p, and the like are well formed.

A.14 Interdefinability
PRACTICE
Background It is possible to make do entirely with just one binary connective and a negation. All others are definable in terms of combinations of them. For instance, it is common to treat (ϕ → ψ) as an abbreviation for (¬ϕ ∨ ψ).

Your task Using truth tables, show that treating (ϕ → ψ) as an abbreviation for (¬ϕ ∨ ψ)
gives us the arrow defined on handout 4. Then show how this definition works for the PLf
functions as well. If you want to push still further by combining this answer with exercise
A.11, then see how much mileage you can get out of a nor -like connective.

A.15 Relating PL and PLf


HARD
Background It would be useful to establish that PL and PLf are logically equivalent, so
that we can be confident that our use of PLf doesn’t affect our predicted truth conditions.

Your task

i. Establish a translation function from the expressions of PL to those of PLf .

ii. Show that this translation function preserves truth.

(The reverse direction is harder because not all PLf expressions correspond to well-formed
formulae of PL. One must concentrate on the truth-valued expressions.)


A.16 PLf and negation


PRACTICE, OPEN
Background Suppose we hypothesize that not/n’t ⇝ ¬, where ¬ is the PLf operator whose semantics is given in section 4.3.2.2 of handout 4.
Your task How does this hypothesis fare in light of the natural language data you know
about? (The best way to answer this is to create a list of properties of ¬ and check them
against the linguistic facts.)

A.17 PLf and implication


PRACTICE, OPEN
Background Suppose we hypothesize that if. . . then ⇝ →, where → is the PLf operator whose semantics is given in section 4.3.2.2 of handout 4.

Your task How does this hypothesis fare in light of the natural language data you know
about? (The best way to answer this is to create a list of properties of → and check them
against the linguistic facts.)

A.18 Exclusive disjunction


PRACTICE
Background Linguists and laymen alike are constantly tempted to assume that sentences involving or entail that exactly one of the disjuncts is true. It is therefore tempting to define an exclusive disjunction with those truth conditions and hypothesize that it is the basis for the semantics of or. This exercise is likely to lead you to the conclusion that this approach won’t work.

Your task

i. The PLf operator ∨ is defined so that [[p ∨ q]]M = T if [[p]]M = [[q]]M = T. Define
a corresponding exclusive disjunction operator that excludes this case (symbol of
your choosing).

ii. Draw a truth table for a formula consisting of two exclusive disjunctions. What is
odd about the values this turns up?

iii. Suppose that your exclusive disjunction provides the meaning for English or. What
prediction would this make about the sentence Sam is at the store, or Sam is on his
cell phone, or Sam is inspecting broccoli ?


A.19 PLf and compositionality


PRACTICE, OPEN
Background Our hypothesis is that every declarative sentence S translates as some propositional letter p.

Your task Amass as many arguments as you can think of for why this is a hopelessly
bad hypothesis.

A.20 Conjunctions and constituency


PRACTICE, HARD
Background The conjunction operator of PLf takes first its right argument and then its left argument. This strongly suggests that the syntax is asymmetric in a parallel way. Suppose we deny this. That is, suppose that the syntactic structure of coordinations is more like

[S S and S]

Your task How might we devise a semantics that does justice to these structures and predicts the same truth conditions as our binary version? (It might be useful to think about currying in this case; see section 3.4.5 of handout 3.)
If you’re looking for an additional challenge: what would it take to generalize your definition of and to n-ary conjunction, for any finite n? You might try to write down a lambda term. Be sure to confront, in prose if not in symbols, the fact that we can’t fix n ahead of time.

A.21 Coordination and function composition


PRACTICE, HARD
Background Function composition is defined in handout 3, section 3.4.4. It’s also helpful to see what it looks like when expressed in lambda terms:

Where ϕ : ⟨σ, ρ⟩ and ψ : ⟨ρ, τ⟩, ψ ◦ ϕ =def λχ. ψ(ϕ(χ))

Your task Is function composition commutative? If it is, then prove that claim. If it is not, then find two functions f and g for which (f ◦ g) and (g ◦ f) are well defined, but (f ◦ g) ≠ (g ◦ f). What would happen if we modeled coordination in these terms?


A.22 PL intensions
PRACTICE, HARD
Background This course is building quickly to an intensional perspective on meanings. It is important to see that the roots of this idea are present in PL and PLf as well. This exercise asks you to draw that perspective out, by redefining the logical constants so that they are less about truth than about possibilities.

Your task Suppose we wanted to take more seriously the metalogical observations about
intensionality in PL, as summarized in section 4.7. Suppose we wanted to do interpretation
in terms of the sets of indices at the bottom of that truth table.

i. What would be an appropriate domain for the type t in that case?

ii. What would be appropriate denotations for the following connectives in light of your
reformulation of Dt ?

a. ¬
b. ∧
c. ∨
d. →
e. ↔

iii. What has happened to truth in this reformulation?

A.23 Alternative type definition


PRACTICE
Background The type definition of handout 5 is so common that one starts passing over
the types without a glance. But they can have an interesting logic in their own right, often
delimiting the space of meanings in important ways. So it is worth getting accustomed to
thinking about their freedoms and limitations.

Your task Consider the following type definition:

i. ◦ and • are types

ii. If σ is a type, then ⟨†, σ⟩ is a type.

iii. Nothing else is a type.

For each of the following, say whether it is in the above type space:


i. ⟨†, ◦⟩

ii. ⟨◦, †⟩

iii. ⟨◦, •⟩

iv. ⟨†, ⟨†, ⟨†, ◦⟩⟩⟩

A.24 Possible types given assumptions


PRACTICE
Background Semantic types have a very strict logic, often forcing decisions upon us, for
good or for ill. It is important to see this — important for problem solving, and important
for thinking about the models (which are arguably too big and complicated to reason about
directly).

Your task What are the two possible types for α if the mode of composition is functional application? What is the model-theoretic reason for this limitation?

[a local tree, partly lost in this extraction: mother γ : ⟨e, t⟩, daughters α : ? and β : ⟨e, ⟨e, t⟩⟩]

A.25 Vacuous abstraction


OPEN
Background The lambda calculus allows vacuous abstraction. Should we allow our linguistic theory to inherit this freedom?

Your task Try to articulate what aspects of the system allow vacuous abstraction, and
try also to find evidence for or against allowing it in our linguistic theory as well. The
following examples might be useful in this regard.

(V1) People would call all the time and ask for Ali and we would say, “it’s not our company,” she said. Usually, the calls were complaints, Ms. Tams said, adding: “It’s been one of those things where we were going to go to him and talk to him about having him change his fictitious name, but it’s something we never got around to doing. And I wish we did.”1

1. Officials Puzzled About Motive of Airport Gunman Who Killed 2, by Rick Lyman and Nick Madigan, New York Times, July 6, 2002, National section.


(V2) “Johnny, believe me. We may be dealing with something neither one of us should get near, something way up in the clouds that we — I — don’t have the knowledge to make a proper decision.”2

2. Robert Ludlum. 1986. The Bourne Supremacy. New York: Bantam paperback, p. 266.

A.26 Partiality
OPEN, HARD
Background It is very common to find that an author is implicitly or explicitly depending on some functions in the functional domains being partial, rather than total (handout 3, section 3.4). Such functions can provide an elegant formal basis for a theory of presuppositions (Beaver 1997). But it has logical consequences that we should be aware of (Muskens 1989, 1995).

Your task Suppose that the denotes a partial function from properties into entities (type ⟨⟨e, t⟩, e⟩), one that is defined for f iff f is true of just one entity in De (uniqueness). What are the consequences of this for expressions like the(dog), where we assume that dog is of type ⟨e, t⟩? What consequences does this have for our link between the types of expressions and their domains (see exercise A.28)?

A.27 Novel types and meanings


PRACTICE
Background This exercise involves both ingredients required to build a new logical con-
stant: a type and a meaning. (It is important to keep sight of the fact that a type is not a
meaning. A type only delimits the space of potential meanings for the expression.)

Your task Fill out the following semantic analysis with types and logical expressions,
and then provide a denotation for bet that makes it a reasonable hypothesis for the trans-
lation of English bet.

Chris bet Ali $500 that the earth is flat.


[a binary-branching tree, mostly lost in this extraction: chris combines with the constituent formed by combining bet with ali, then $500, then flat(the(earth))]


A.28 Types, expressions, and domains


PRACTICE
Background In the logic we work with, the types appear in the definition of well-formed expressions and in the definitions for the hierarchy of semantic domains. There is, therefore, a sense in which they organize both the expressions and the models. Let’s consider again definition (4.2) from section 4.3.2.2 of handout 4:

(4.2) A PLf expression ϕ is of type σ iff [[ϕ]]M ∈ Dσ.

Your task Try to articulate the nature of the connection established by (4.2). If we write down (ϕ(ψ)) where ϕ is of type ⟨σ, τ⟩ and ψ is of type ρ, where ρ ≠ σ, what happens when we try to interpret that formula? In what sense is our type-theoretic problem also a semantic (model-theoretic) problem?

A.29 Recursive interpretation


PRACTICE, OPEN
Background In section 5.4.2, we looked briefly at some pseudo computer code for recursively interpreting lambda terms. The code is meant to bring out the idea that interpretation is a recursion, with very complex things flowing from a few simple operations.

Your task Provide the missing clause for interpreting lambda abstracts, i.e., the case in
which ϕ is of the form (λχ. ψ). Please feel free also to write an actual program for parsing
and interpreting lambda terms!

A.30 An alternative mode of composition


PRACTICE, OPEN
Background One sometimes encounters a rule of predicate modification (Heim and Kratzer 1998). Here is a general formulation of the rule and its semantics:

(A.10) a. If α : ⟨σ, t⟩ and β : ⟨σ, t⟩, then α > β : ⟨σ, t⟩
b. [[α > β]]M,g = the function that maps any d ∈ Dσ to T iff [[α]]M,g(d) = [[β]]M,g(d) = T

Your task

i. In what sense, if any, does predicate modification expand what we can do with the
logic? (Could we define this rule using just functional application?)


ii. What predictions does this rule make about the dependencies between α and β? Does it allow that the interpretation of one might be conditioned by the interpretation of the other? (This aspect of the problem is ‘HARD’.)
iii. The rule is stated so as to allow any type ending in t. How might we characterize this class of domains?
iv. Could we generalize this rule to any type ⟨σ, τ⟩?

A.31 Assignments
PRACTICE
Background The act of changing an assignment according to an instruction can seem
complicated, but in fact it is quite minimal and easy to visualize with a little practice.
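Before the tasks, here is a small Python sketch of the modification operation itself, g[x ↦ d], with invented entity names; the point is that it returns a new assignment and leaves g untouched.

    def modify(g, var, d):
        """g[var -> d]: just like g except that var is mapped to d."""
        h = dict(g)      # copy, so the original assignment is unchanged
        h[var] = d
        return h

    g = {"x": "a", "y": "b"}
    print(modify(g, "x", "c"))   # {'x': 'c', 'y': 'b'}
    print(g)                     # {'x': 'a', 'y': 'b'}: g itself is untouched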

Your task Fill out these equality expressions (I’ve not relativized to a model to keep things simple). The specific assignments were given pictorially and are lost in this extraction; in each case g maps x and y to particular entities, and cases (iii)–(v) interpret relative to a modified assignment g[x ↦ d]:

i. [[x]]g =
ii. [[y]]g =
iii. [[x]]g[x↦d] =
iv. [[happy(y)]]g[x↦d] =
v. [[λy. happy(y)]]g[x↦d] =

A.32 Variable names


PRACTICE
Background The axioms of the lambda calculus (handout 6) have some important con-
sequences for what we can and cannot do with variables by way of making meaning dis-
tinctions. This exercise is a glimpse at that realm (see also handout 10, section 10.2).

Your task For each pair, say whether its members can differ model-theoretically. If they
can, exemplify the difference. If they can’t, try to articulate why they can’t.
X a. happy(x)
b. happy(y)
Y a. λx. happy(x)
b. λy. happy(y)

A.33 Cross-categorial and


PRACTICE, HARD
Background With the lambda calculus, we can improve on a major shortcoming of the hypothesis that and translates as ∧. Recall that, in the context of PLf, this meant that we could have only sentence-level coordination. In a sense, it is easy to show that this prediction is false. Any major category can be conjoined. It is much closer to the truth to say that and can conjoin two things as long as they have the same type. This generalization is now within reach in an extensional lambda calculus.

Your task Define a cross-categorial and that is built up from our ∧ from PLf but can take any pair of arguments in ⟨σ, t⟩.

A.34 A relational reinterpretation


HARD
Background Muskens (1989, 1995) argues at length for a relational perspective on de-
notations, rather than a functional one. One of his arguments appeals to the desire to
construct theories that are easy to grasp. He writes:


This is all very well, until we realise that we have coded binary relations be-
tween ternary relations as functions from functions from individuals to func-
tions from individuals to functions from individuals to truth values to func-
tions from functions from individuals to functions from individuals to truth
values. In other words, we have replaced objects that we have some intu-
itive grasp on by monsters that we can reason about only in an abstract way.
(Muskens 1995:12)
It’s a point well taken. One might object that these monsters are necessary if we want a the-
ory that assigns a meaning to each syntactic phrase. But Muskens answers that objection.
See if you can do so as well.

Your task Formulate an operation on relational meanings that essentially abstracts over
one of the coordinates in the tuples it contains, so that we can maintain our usual theory of
composition with these (arguably) simpler relational objects.

A.35 What’s the source of the ill-formedness?


PRACTICE, OPEN
Background A linguistic theory might offer many different potential ways of explaining
the deviance of some example. It can be very difficult to settle on one of the options, and it
can be even more difficult to figure out how, or whether, to remove redundancies in one’s
account. This exercise gives you a glimpse of such challenges.

Your task The task is to provide explanations for the deviance of each of the examples
in (a)–(e).

(D) a. Ed devoured. (but what about Ed ate?)
b. I saw Sue and that it was raining.
c. Ed glimpsed the dog the printer.
d. #It’s not raining, but Sue realizes it’s raining.
e. #The A-train suffered an existential crisis.
(cf. I dreamed that the A-train suffered an existential crisis.)
Two things to keep in mind:
• We are not (necessarily) after a unified theory of the deviance seen in (D).
• If you’re unsure of how to analyze a constituent semantically and it isn’t important to your argument how it is analyzed, then translate it into a single predicate. For example, The A-train ⇝ the-train.


A.36 Building a fragment


PRACTICE, OPEN
Background In linguistic semantics, a fragment is a complete theory of a (very) small chunk of a natural language.

Your task Your goal for this part is to construct a fragment that handles all the intransi-
tive verb constructions in (F). (Ignore all issues relating to tense.)

(F) a. Bart burps.


b. Maggie giggles.
c. Lisa muses.

Your fragment should have the following parts:

i. A specification of the class of well-formed expressions of your logical language.

ii. A type theory for organizing the logical expressions.

iii. A specification of the domains for each of the types.

iv. An interpretation function that takes the logical expressions to objects in the domains
for the types (in a way that respects typing).

v. A translation procedure for mapping English phrases to expressions of the logical


language.

To show readers how your fragment works, you should provide a derivation of some kind
for one of the sentences in (F).
Strive for generality. If your fragment works for the examples in (F), it will also work
for lots of other intransitive sentences. Either sketch how your fragment could be gen-
eralized to new intransitive sentences with proper-name subjects or (better) define your
fragment so that it has this level of generality built into it.

A.37 Substitution
PRACTICE
Background Substitution is an apparently simple operation on formulae that is nonetheless complicated by the conditions geared towards ensuring that no accidental binding takes place. It’s worth practicing a bit.


Your task For each of the following, perform the substitution operation if it is permitted; otherwise indicate what blocks the substitution.

i. y[y := (cyclist(x))]

ii. (cyclist(x))[x := y]

iii. (λx. (λy. ((f(y))(y))))[y := x]

A.38 Beta reductions


PRACTICE
Background Beta reduction (lambda conversion) is the workhorse of the proof system when it comes to linguistic analysis. We often build up big lambda terms and need to reduce them to fully understand what meanings they pick out. This exercise gives you some practice with such reductions.

Your task Reduce each of the following expressions as far as possible, indicating each step.

i. (λx. like(x))(y)

ii. (λx. like(y))(x)

iii. (λx. run)(x)

iv. (λf. f(ali))(λy. fast(y))

v. ((λP. λx. P(λy. admire(y)(x)))(λf. f(ali)))(chris)

A.39 Eta conversion and distinguishable meanings


PRACTICE
Background Lambdas are so closely associated with meaning analysis that it can be
hard for people to see that they are not constitutive of a proposal about a particular deno-
tation.

Your task Try to articulate the model-theoretic grounding for η-conversion. Why is it guaranteed to work, given the formulation of functional application and functional abstraction? (It might help to think about what happens when you abstract over x in an expression like happy(x).)


A.40 Extensional beliefs?


PRACTICE, OPEN
Background Section 7.1 of handout 7 begins building the case that we cannot have an extensional theory of predicates like believe.

Your task See if you can make matters worse for “extensional believe ”, by perhaps
deriving meanings that run directly counter to our intuitions. Can you make Lisa both
believe and disbelieve every truth?

A.41 Modals
PRACTICE
Background Modal verbs seem to say something about propositional content. But,
within an extensional model, our ‘propositions’ are just truth values.

Your task Why can’t we just use one of the functions from Dt into Dt to analyze modals?

A.42 Hintikka’s believe


PRACTICE, HARD
Background The proposed meaning of believe in handout 7 is essentially Hintikka’s (1969). Hintikka proposed to associate with every individual a a subset of the set of possible worlds representing a’s belief state. This function has since come to be called Dox, for ‘doxastic’. Extensionally, Dox maps entities to sets of possible worlds.

Your task Formulate Dox precisely, but make sure that you take into account, some-
how, that an individual’s beliefs can vary from world to world. Rework the definition
of [[believe]]M,g using this new Dox, and explain your decisions about how to handle the
world arguments throughout.

A.43 Individual concepts


OPEN, HARD
Background The linguistic hypotheses of handout 7 (section 7.3) leave proper names out of the intensional sphere, by assigning them to a single domain De that is independent of specific possible worlds. But many have argued that the domain of entities is relativized to specific possible worlds.


Your task Where do you come down on this issue? Why? What impact, if any, does your position have on the treatment of proper names? (You might widen your scope enough to include fictional names and the like — it depends on how daring you are.)

A.44 How many worlds are there?


PRACTICE, HARD
Background It is worth trying to figure out how big our intensional models are. The answer could impact our view of how semantics and cognitive science link up, and it might push us towards a proof-theoretic semantics, rather than a model-theoretic one.

Your task Suppose there are n individuals. How many different properties can there be?
How many worlds should we have to ensure that we can make all the distinctions among
meaning that we want to be able to make?

A.45 Finding common ground


PRACTICE, HARD
Background Handout 7, section 7.4.4, suggests that we can get back our old notion of truth for sentences by having a designated constant for the real world, @. The idea is that when speakers assert something, they assert that it holds in @.
But the move is somewhat puzzling. We don’t know which is the actual world. Knowing that would involve knowing absolutely everything about our reality. However, as soon as we admit that we don’t know, for example, who is standing at the northernmost corner of Prospect Park right now, we admit that we don’t quite know which world we occupy.
It is more accurate to say that we have a set of worlds that we consider contenders for
the actual world. Let’s call that set the common ground.

Your task Define a notion of truth relative to a common ground, so that we can still have
truth/assertion after giving up on @.

A.46 Definites and semantic composition


OPEN, HARD
Background Section 7.5.3 of handout 7 briefly motivates the idea that definite descriptions are of type ⟨s, e⟩. But this leaves us with a problem: it is not clear how they combine with our predicates’ meanings.


Your task Propose a solution to this problem. You might change our assumptions about
predicates. You might sneak in a free variable over worlds. You might define a new rule
of semantic composition. Feel free to think freely, but do try to motivate the choice you
make.

A.47 Contradictory beliefs


PRACTICE
Background This question continues section 7.5.4’s critical examination of our theory
of belief predications.

Your task Suppose I believe something impossible. What does my belief state look like then? Is this realistic? If not, how might we do better?

A.48 Degree constructions


OPEN, HARD
Background Handout 8 emphasizes that the systems defined here are by no means the end of the story. It is routine to find constructions that demand extensions of these basic systems. Degree constructions are an excellent case in point. Here are some basic cases.
(A.11) a. That mouse is tall.
b. That elephant is tall.
c. That elephant is ten-feet tall.
d. Sam is taller than Sue.
It is extremely hard to see how we would fit these into our current type system and asso-
ciated domains. We seem to need new objects, ones that can help us get at notions like
Sam’s degree of height.

Your task Extend an extensional or intensional lambda calculus with a type for degrees
and an associated domain, then use this enriched set of tools to give a meaning for tall.
Strive for a meaning that will work in a broad range of cases. (It might seem easiest to
start with examples like That mouse is tall, but in fact it is easier to start with comparative
data.)

A.49 Singular and plural


OPEN, HARD
Background It is easy to find predicates that care about whether their arguments are intuitively singular or plural (independently of issues concerning morphological agreement).


i. The crowd gathered in the town square.

ii. *Sam gathered in the town square.

Your task How should we account for this pattern? What changes to the logical system
does your answer require?

A.50 Exactly 1 in first-order logic?


PRACTICE
Background The usual first-order quantifiers are ∃ and ∀. While we can’t define all the
natural language determiners in their terms, the pair of them can be remarkably expressive.

Your task Give a semantics for expressions like ∃!x ϕ that makes them true iff there is
exactly one entity with the property ϕ. And then describe how you would generalize this
to exactly n, for any natural number n.

A.51 A closer look at the universal


PRACTICE, OPEN
Background The universal quantifier has been around, in one form or another, for cen-
turies, and it has always been tied to natural language words like every. But it has proper-
ties that its presumed natural language exponents do not obviously have, if they have them
at all. We should be somewhat skeptical of the connection.

Your task

• What if the restriction to every is empty (maps all elements to F)?

• What if the restriction and nuclear scope have the same extension?

A.52 All and only Lisa’s properties


PRACTICE
Background It can be a bit surprising at first that proper names can denote in the gen-
eralized quantifier domain. So let’s look a bit more closely at what we do when we view
Lisa from the perspective of the set of all her properties.


Your task Suppose that Lisa is young, intelligent, and literate. She is not angry, and she
is not tall. Assume that there are no other properties besides these. Using these facts, draw
a picture of the function specified in (A.12).
(A.12) λ f. f (lisa)

A.53 Intensional quantifiers


PRACTICE, HARD
Background The quantifiers given throughout handout 9 are purely extensional. Thus, they won’t fit easily into the intensional setting of handout 7. We should fix this.

Your task Provide a type for intensionalized quantificational determiners, and provide
the meanings for every and most in these new terms. What did you decide to do with the
world arguments? Why?

A.54 Nonconservative determiners?


OPEN, HARD
Background Are there nonconservative determiners in natural language? The proposed universal that all natural language determiners are conservative is surprising, and surprisingly robust. Does it hold up?

“With at most a few exceptions[a] English Dets denote conservative functions.” (Keenan 1996:55)

[a] “All putative counterexamples to Conservativity in the literature are ones in which a sentence of the form Det A’s are B’s is interpreted as D(B)(A), where D is conservative. So the problem is not that Det fails to be conservative, rather it lies with matching the Noun and Predicate properties with the arguments of the Det denotation.”

Keenan has in mind examples like the following:


(C) a. Only dogs bark
b. Many Scandinavians have won the Nobel Prize.

Your task
i. Run the conservativity test on each sentence in (C), and indicate which if any of the
entailments go through and which don’t.
ii. Articulate why your results seem problematic for the conservativity generalization.
iii. Propose a resolution (reject the generalization, follow Keenan’s advice in the quota-
tion, something else entirely).


A.55 Coordination and monotonicity


OPEN
Background Barwise and Cooper (1981) propose to link preferences concerning the choice of and and but in DP coordinations with the monotonicity properties of the DP arguments. I started to provide relevant data in section 9.4.2.1 of handout 9, but I didn’t go far enough. We must check systematically both right and left monotonicity properties before we can be sure of the generalization’s proper formulation.

Your task Provide the data needed to assess the choice of coordinating element, and then try to formulate a suitable generalization.

A.56 Indexicals as proper names?


OPEN, HARD
Background The Kaplanian context (the c parameter) is an enrichment of our theory of interpretation. We should try to be sure that the move is justified.

Your task Suppose, then, that we tried to analyze indexicals as proper names. What (if
anything) would this analysis get right, and what (if anything) would it get wrong?

A.57 Indexicals and constants: A crucial difference


PRACTICE
Background Kaplan (1989:506) writes, “Indexicals have a context-sensitive character. [. . . ] Nonindexicals have a fixed character.”

Your task Explain what this means in the context of the theory described in section 10.1
of handout 10.

A.58 Denotations as sets of assignments


PRACTICE
Background In the text, I suggested how we can assign rich denotations to expressions
that contain free variables. This is an important part of this new perspective on interpreta-
tion, but it doesn’t cover all the cases.

Your task Provide denotations of the following in terms of sets of assignments, keeping
in mind that an assignment g is in the denotation of an expression ϕ iff interpreting ϕ
relative to g produces T.


i. [[happy(sam) ∨ ¬happy(sam)]]M

ii. [[happy(sam) ∧ ¬happy(sam)]]M

A.59 Dynamic indefinites


OPEN, HARD
Background The discussion in section 10.2 of handout 10 provides a theory of free pronouns that depends on conceiving of denotations as sets of assignment functions. This
increases the meaning distinctions we can make, and it is also the first step towards a
dynamic theory. But my discussion leaves one wondering how new discourse referents are
introduced into the discourse, so that pronouns can pick up on them.
The linguistic answer is that indefinites are the prototypical vehicles for new discourse
referents. You say, “I saw a movie last night”, and we now have a new discourse referent
to talk about.

Your task Devise an assignment-based theory of indefinites that captures the fact that
they introduce new information. A first step might be to assume that assignments are
partial functions on infinite lists of variables. When one uses an indefinite, one adds new
variable–entity mappings in some uniform fashion.

A.60 Probabilities and sets


Background The second clause of the definition of probability distributions in (10.7) of
handout 10 connects set union with additivity. How deep do the connections between sets
and probabilities go?

Your task What are the analogues of set intersection, set difference, and subset in the realm of probabilities? Can you find important ways in which the pairs you propose differ?

