
CS 611 Advanced Programming Languages Class Notes 29

Fall 1991 Robert Constable Nov. 20, 1991


Lorenzo Alvisi, Scribe

1 Scott Model Concluded


We will end our discussion of the Scott model by proving the following
Theorem 1 S[[·]], the Scott model, is a model for PCF.
Last time we started to develop a proof by induction on terms, looking at some of the constant terms: we were up to Y, trying to show that S[[Y]] is continuous. This will turn out to be the most delicate part of the proof, since the compound terms for abstraction and application, which cause the real work in most inductive arguments, are easy to handle in this case. On the other hand, it comes as no surprise that the Y combinator requires special attention: it caused us plenty of trouble when we were trying to build the model, and we could not find a classical function space that modeled it.
The lemma we have to prove is:

Lemma 1 S[[Y]], the semantic meaning of the Y operator, is continuous, i.e. S[[Y]] ∈ (S[[A]] →_C S[[A]]) →_C S[[A]], where →_C denotes the continuous function space.
In order to prove this lemma, let us first recall that, by definition,

    S[[Y]](F) = lim_i F^i(⊥).

We have continuity if we can show that

    S[[Y]](lim_j F_j) = lim_j S[[Y]](F_j).
If we just unfold the definitions on both sides, the equation comes out looking like this:

    lim_i (lim_j F_j)^i(⊥) = lim_j (lim_i F_j^i(⊥)).
The whole proof is just showing that we can switch the order of the limits in i and j. We can do that by looking at both directions of this equality; since it is a rather tedious proof, we will just sketch its structure by looking at one direction. We have to show then that

    lim_i (lim_j F_j)^i(⊥) ⊑ lim_j (lim_i F_j^i(⊥)).
We will proceed by induction on i. Let us notice first of all that we know we have monotonicity in both i and j, that is:

    F_j^i(⊥) ⊑ F_j^{i+1}(⊥)   and   F_j^i(⊥) ⊑ F_{j+1}^i(⊥).

Now we can consider the base case,

    (lim_j F_j)^0(⊥) ⊑ lim_j (lim_i F_j^i(⊥)).

If i is zero in the right-hand side, the ⊑ relation certainly holds by equality, and since the chain 'goes up' as we increase i, the base case holds in general because of monotonicity. Now, for the induction step,
assume
    (lim_j F_j)^i(⊥) ⊑ lim_j (lim_i F_j^i(⊥)),
then, again because of monotonicity, it follows that
    (lim_j F_j)^{i+1}(⊥) ⊑ lim_j (lim_i F_j^i(⊥)).

Given this structure, the rest of the proof is straightforward. □
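
To make the chain F^i(⊥) concrete, here is a small Haskell sketch (my own illustration, not part of the notes) of the Kleene iterates for one particular functional: the factorial functional over a crude stand-in for the domain N → N_⊥, with ⊥ on results represented by Nothing. The names factF, approx and fixD, and the choice of functional, are assumptions of the sketch.

-- A crude stand-in for N -> N_bot: Nothing plays the role of bottom on results.
type D = Integer -> Maybe Integer

-- Bottom of D: the everywhere-undefined function.
bot :: D
bot _ = Nothing

-- A sample continuous functional F : D -> D (the factorial functional).
factF :: D -> D
factF f n
  | n == 0    = Just 1
  | otherwise = (* n) <$> f (n - 1)

-- The i-th element of the Kleene chain, F^i(bot).
approx :: Int -> (D -> D) -> D
approx i f = iterate f bot !! i

-- The limit of the chain, i.e. the least fixed point Y F.
fixD :: (D -> D) -> D
fixD f = f (fixD f)

main :: IO ()
main = do
  -- F^3(bot) is defined exactly on the arguments 0, 1, 2:
  print [approx 3 factF n | n <- [0 .. 4]]  -- [Just 1,Just 1,Just 2,Nothing,Nothing]
  print (fixD factF 5)                      -- Just 120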


For compound terms we have to show that the meaning of an abstraction, S[[λx. b]], is continuous. This follows directly from the induction hypothesis, since we know that the body is continuous with respect to the environment. An analogous argument can be used in the case of applications, where

    S[[ap(f; a)]] = S[[f]](S[[a]]).

Here S[[f]] is a continuous function on the appropriate Scott domains, and continuity for applications then follows from the induction hypothesis on S[[f]]. □

2 Generality of Fixed Point Theory

The continuous-function model applies not only to the Scott model for PCF but to a wide range of semantic situations. We can, for instance, apply it back to the theory of Herbrand-Gödel equations, and indeed, as people get comfortable with the theory, it turns out to be useful in a variety of non-standard applications. As an example, Gunter and Scott discuss its application to the theory of context-free grammars and to cardinality. (A complete account of this discussion can be found in the handout distributed in class.)
Consider the grammar G = (E, {a, b}, P, E), where P consists of the single production

    E ::= a | b E b
Let L(G) be the language generated by G, where L(G) ⊆ Σ* = {a, b}*. In classical textbooks like Hopcroft and Ullman the definition of this grammar and of its language is given inductively. Gunter and Scott point out that we can also think of G as providing us with a function G : P(Σ*) → P(Σ*). It operates by accepting as input a set of strings corresponding to the non-terminal E, and then using the right-hand side of the production to obtain a new set for E: every application of G adds something to the set submitted as input.
Example 1 Let E initially be the empty set. Then

    φ0 = ∅,   φ1 = G(φ0) = {a},   φ2 = G(φ1) = {a, bab},   φ3 = G(φ2) = ...

and so G is monotone in the inclusion ordering of sets (i.e. ⊆).

Since G is a monotone map, we can now apply the theory to obtain the least fixed point and say that

    L(G) = Y(G).

Notice that there might be other fixed points, which could be obtained, for instance, by initially feeding in a string that is not in the language: the only one we care about in language theory, though, is the least fixed point, and this is the language L(G).
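
Here is a minimal Haskell sketch (my own, not from the Gunter and Scott handout) of the iteration above: stepG implements G(S) = {a} ∪ { b s b : s ∈ S }, and an artificial length cut-off keeps every iterate finite, so the least fixed point restricted to short strings can actually be computed. The names stepG and lfp and the cut-off are choices of the sketch.

import qualified Data.Set as Set
import Data.Set (Set)

-- The production E ::= a | b E b, read as a monotone map on P(Sigma*).
-- maxLen is an artificial cut-off so that each iterate stays finite.
stepG :: Int -> Set String -> Set String
stepG maxLen s =
  Set.filter ((<= maxLen) . length) $
    Set.insert "a" (Set.map (\w -> "b" ++ w ++ "b") s)

-- Kleene iteration from the empty set until nothing new is added.
lfp :: (Set String -> Set String) -> Set String
lfp f = go Set.empty
  where
    go s = let s' = f s in if s' == s then s else go s'

main :: IO ()
main = do
  let phis = iterate (stepG 7) Set.empty
  mapM_ (print . Set.toList) (take 4 phis)
  -- phi0 = [], phi1 = ["a"], phi2 = ["a","bab"], phi3 = ["a","bab","bbabb"]
  print (Set.toList (lfp (stepG 7)))
  -- ["a","bab","bbabb","bbbabbb"], i.e. L(G) restricted to length <= 7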

3 Discussion of Models
We have spent a considerable amount of time discussing the Scott model for PCF. It is quite natural then to raise the question: how good is the Scott model for PCF? Are there any other good models? And what does it mean for a model to be `good'? The Scott model certainly has some advantages: it has elegant mathematical content and abstracts from many grubby details, like stacks, pointers and activation records, in its account of recursion. Let us introduce some definitions that will help our discussion.

3.1 Concepts for analyzing models

Soundness Consider two terms M and N that are equal under the relation R generated by all the reductions in the language (i.e. M =_R N). This syntactic notion of equality is, with respect to reductions, the symbolic computational equality of the system: you cannot tell terms apart if they are equal in this way. We say that a model M is sound if it captures this notion of basic computational equality; in other words, M is sound if

    M =_R N  ⟹  M[[M]] = M[[N]].

Soundness is the most basic property that we would like to know about a model. It is in fact so basic that we can have sound models that are, in many respects, not satisfactory. As an example, consider a model T such that, for any term t,

    T[[t]] = t.

These models are called term models and, even if they do not enjoy a good reputation among semanticists, who do not consider them `real semantics', they contain a notion of computation. They can be made slightly more respectable by having each term mapped not exactly to itself, but to the set of all terms in the language modulo some equivalence relation (say, α-equality). Despite their limits, term models are sound as long as the equivalence relation that groups terms together is at least as coarse as =_R, for then the requirement for soundness is satisfied.
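
As a toy illustration of a sound term model (my own example, under the assumption that the grouping relation is =_R itself), consider a little Haskell language of numerals and addition: =_R is decided by reduction to normal form, each term denotes its =_R-class represented by its normal form, and soundness M =_R N ⟹ T[[M]] = T[[N]] then holds by construction.

-- A toy term language with the reduction rule Add (Num m) (Num n) -> Num (m + n).
data Term = Num Integer | Add Term Term
  deriving (Eq, Show)

-- Every closed term reduces to a unique numeral; normalize computes it.
normalize :: Term -> Term
normalize t = Num (eval t)
  where
    eval (Num n)   = n
    eval (Add a b) = eval a + eval b

-- Symbolic computational equality =_R, decided here by comparing normal forms.
eqR :: Term -> Term -> Bool
eqR m n = normalize m == normalize n

-- The term model: each term denotes its =_R-class, represented by its normal form.
denote :: Term -> Term
denote = normalize

main :: IO ()
main = do
  let m = Add (Num 1) (Add (Num 1) (Num 0))
      n = Add (Num 2) (Num 0)
  print (eqR m n)               -- True: 1 + (1 + 0) =_R 2 + 0
  print (denote m == denote n)  -- True: the model identifies them, an instance of soundness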

Observational equality We can enrich our set of tools for judging the goodness of a model if we abstract from the symbolic computational equality =_R and introduce the notion of observational equality. We write

    M ≈ N (L_R, O)

and read it as "M is equal to N with respect to a language L, with the relation R defined as in the previous paragraph, and to a set of observables O" if there is no context in which M and N can be distinguished by observations of elements of O. Then, for example, in the case of PCF we can consider observing numerals, or natural-number output, and say that two program segments are equal if their behaviour over the natural numbers is the same. It is very easy to see that, with respect to PCF
anyway, this notion of observational equality is much more abstract and identifies many more terms than =_R. Our old idea of equality was able to account for terms that had the same binding structure, via the notion of α-equality, or that could in general be symbolically R-reduced to the same term. The new notion we have introduced, on the other hand, counts terms as equal if we cannot tell them apart by looking at the numbers they produce: it is equality in truly computational terms.
Example 2 Consider the terms

    λx. x + 2   and   λy. y + 1 + 1

Our symbolic notion of equality recognizes these terms as equal, since

    λy. y + 1 + 1 = λx. x + 1 + 1 =_R λx. x + 2,

but it will not consider the terms λx. b1 and λy. b2 equal if b1 ≠_R b2. Observational equality, on the other hand, will recognize them as equal as long as

    (λx. b1)(n) = (λy. b2)(n)   for all n ∈ N. □

Observational equality, sometimes called extensional equality, is thus more inclusive than syntactic equality: in fact, as we have seen, two functions are now equal as long as they compute the same values. (Notice that this notion is undecidable, since it is equivalent to asking whether two Turing machines recognize the same language.)
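
As a small check (my own, not from the notes), the two terms of Example 2 can be compared on a finite sample of numeral inputs in Haskell; consistent with the remark above, no finite test of this kind decides observational equality in general.

-- The two terms of Example 2, written as Haskell functions on naturals.
f1, f2 :: Integer -> Integer
f1 x = x + 2        -- \x. x + 2
f2 y = y + 1 + 1    -- \y. y + 1 + 1

-- Do the two terms produce the same natural-number output on 0..n?
agreeUpTo :: Integer -> Bool
agreeUpTo n = all (\k -> f1 k == f2 k) [0 .. n]

main :: IO ()
main = print (agreeUpTo 1000)  -- True on this sample; the general question is undecidable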

Abstractness We can finally use the notion of observational equality to introduce a concept that we will discuss more deeply in the next lecture: abstractness. We will say that a model is abstract if observational equality implies semantic equality, i.e.

    M ≈ N (L_R, O)  ⟹  M[[M]] = M[[N]].

Note that term models are not abstract, and that this is actually a way of distinguishing between syntactic models and mathematical models: the mathematical ones are abstract. As a matter of fact there is a very trivial abstract model, the one that maps everything to the same object. Abstraction consists of throwing away details: in this case we throw away so many details that there is nothing left! We will see in the next lecture how to avoid such excesses by introducing the concept of adequately abstract (fully abstract) models.
