A Model for Syntactic Control of Interference

Two imperative programming language phrases interfere when one writes to a storage variable that the other reads from or writes to. Reynolds has described an elegant linguistic approach to controlling interference in which a refinement of typed λ-calculus is used to limit sharing of storage variables; in particular, different identifiers are required never to interfere. This paper examines semantic foundations of the approach. We describe a category that has (an abstraction of) interference information built into all objects and maps. This information is used to define a "tensor" product whose components are required never to interfere. Environments are defined using the tensor, and procedure types are obtained via a suitable adjunction. The category is a model of intuitionistic linear logic. Reynolds' concept of passive type (a type whose phrases do not write to any storage variables) receives a semantic treatment corresponding to the "!" modality of linear logic.


Introduction
The ability to update the state is the source of much of the flexibility and efficiency of imperative programming, but also many of its difficulties. Undisciplined sharing of storage variables can lead to subtle program errors that are difficult to detect and trace, and just the possibility of this kind of interference, even when absent, can have a significant detrimental impact on ease of reading and reasoning about programs (e.g. Hoare, 1974b; Reynolds, 1978). The well-known dishevelment caused by variable aliasing (when different identifiers name the same storage variable) in Hoare-style proof systems for reasoning about assignment and procedures (Hoare, 1971) is a kind of theoretical symptom of the problems brought on by interference.
There are related, and perhaps even more vivid, problems in the presence of concurrency, where uncontrolled interference can be a serious obstacle to program predictability. As a result, a number of authors (e.g. Hoare (1974a) and Brinch Hansen (1973)) have argued that all interference between concurrent processes should be mediated by, say, monitors or communication primitives.
Reynolds' (1978, 1989) syntactic control of interference approaches these issues from a linguistic viewpoint, by using syntactic constraints that limit interaction between different program parts. The aim is not to eliminate interference entirely, but to make it more manageable by arranging matters so that (a conservative approximation to) non-interference is easy for the programmer or compiler to recognize. Programs can still use state, but there is tighter control over sharing of storage variables.
The purpose of this paper is to show that the interference constraints that form the basis of the approach have good semantic properties. This is done in two steps. First, we describe a category in which interference properties of semantic entities, as well as types, are made explicit. Then, using this category we examine interference control principles from a semantic perspective, and familiar category-theoretic structure falls out quite directly. The structure we obtain amounts to a model of (intuitionistic) linear logic (Girard, 1987), and a bit more.
A model for syntactic control of interference was previously proposed in (Tennent, 1983). This was basically an untyped model, in the "Curry" style. While this kind of interpretation can be useful for proving properties of interference constraints, we believe that the "Church-style" model presented here gives a more satisfactory semantic account of the type-theoretic basis of the approach (especially in the close relationship between categorical structure and syntactic constraints).
Controlling interference is an old problem in programming languages. It dates as far back as Fortran and Concurrent Pascal, with their anti-aliasing restrictions (Brinch Hansen, 1973; ANSI, 1978; see also Hoare, 1971), and plays an important role in such languages as Euclid, Turing and occam (Cordy, 1984; Popek et al., 1977; Holt et al., 1987; INMOS, 1988). We will not attempt to survey here the growing body of recent work on interference control and related topics. The reader is referred to the papers by Lucassen and Gifford (1988), Wadler (1990, 1992), Guzmán and Hudak (1990), Swarup et al. (1991), and their references for further discussion of this. Syntactic control of interference represents an important step toward our understanding of how a programming language could provide the benefits of state, while avoiding many of the difficulties it causes in present-day languages. It should be mentioned, however, that the approach has not yet been perfected. In particular, there are presently difficulties with recursion and jumps (Reynolds, 1989); this will be discussed briefly in Section 9. Nevertheless, we feel it remarkable that the syntactic restrictions at the core of the approach are semantically so well-behaved, especially since interference is often regarded as a low-level operational concept. This encourages our belief in the possibility of clean, and yet practical, methods for harnessing the power of assignment.
We will outline the main features of our model later in this introductory section, after discussing background on interference control.
1.1. Background on Interference Control

Syntactic control of interference is based on a refinement of typed λ-calculus, where typing constraints are used to limit the manner in which interference can arise. The constraints are motivated by a number of "principles of interference control" described by Reynolds, which are chosen so as to ensure that interference is easily detectable.
The first principle is:

I. If no identifier free in phrase P interferes with any identifier free in phrase Q, then P and Q don't interfere.

This is an assumption about the nature of the language which says, in effect, that all "channels" of interference must be named by identifiers. In particular, closed terms don't interfere with any other terms. (We use "phrase" and "term" interchangeably.) The second principle is what necessitates syntactic constraints.

II. Distinct identifiers never interfere.

Combined with I, this provides the programmer with a particularly simple method of predicting non-interference, and meets head on problems caused by such phenomena as aliasing of storage variables. For example, Principle II implies that running the assignment statements x := 1 and y := 2 in any order, or in parallel, is determinate at an appropriate level of abstraction. This would not be the case if aliasing between x and y were allowed, because the same storage variable would be destructively altered by each statement.
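To make the determinacy claim concrete, here is a small Python sketch (the encoding is ours, not the paper's) modeling states as dictionaries from locations to values, and checking that assignments to distinct locations commute, while aliased assignments do not:

```python
# States are dicts from location names to integers; an assignment
# "loc := val" is modeled as a state transformer.
def assign(loc, val):
    def run(state):
        s = dict(state)   # update the state functionally
        s[loc] = val
        return s
    return run

def seq(c1, c2):
    # sequential composition: run c1, then c2
    return lambda s: c2(c1(s))

s0 = {"x": 0, "y": 0}
cx = assign("x", 1)
cy = assign("y", 2)

# Principle II: x and y are distinct locations, so both execution
# orders agree.
assert seq(cx, cy)(s0) == seq(cy, cx)(s0) == {"x": 1, "y": 2}

# With aliasing (both identifiers naming location "x"), order matters:
ax = assign("x", 1)
ay = assign("x", 2)
assert seq(ax, ay)(s0) != seq(ay, ax)(s0)
```

The second pair of assertions shows exactly the determinacy failure that Principle II rules out.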
Principle II may seem overly restrictive at first, but interference is not forbidden altogether. For example, an abstract data structure can be represented by a collection of interfering procedures that are different qualifications of the same identifier, such as different components of a "record" or "object" (Reynolds, 1978; Dahl, 1972). In effect, interference is treated as an exceptional case, which requires effort from the programmer to indicate explicitly. In contrast, most imperative languages have (the possibility of) interference as the default case, with determination of non-interference requiring effort.
The final principle allows a limited amount of sharing.

III. Passive phrases, which don't write to any (global) variables, don't interfere with one another.
For example, if y is a "read-only" expression then, according to all of Principles I-III, the assignment statements x := y + 1 and z := y + 2 won't interfere. Note that sharing of read access is consistent with Principle II: two identifiers can have read access to the same storage variable, as long as neither has write access to it.
Notice that these principles do not attempt to predict all interference relationships between program phrases, such as whether different uses of the same non-passive identifier interfere. More "fine-grained" interference detection is often used in parallel program optimization, e.g. to determine whether different uses of an array identifier interfere (Padua and Wolfe, 1989). Of course, linguistic interference control and algorithmic interference detection have different aims (and should be considered complementary), with simplicity being of the utmost importance in the former, enabling a programmer to recognize interference easily in many cases.

Semantic Aspects of State Dependence
The backbone of our analysis of interference control is a notion of the support of a semantic entity a as a pair (R, W) of finite sets of locations (storage variables). Intuitively, R consists of the locations that a reads from, and W consists of the locations that a writes to. Once support is defined it is then straightforward to formulate a semantic counterpart of non-interference, which we will call independence, by comparing the supports of semantic entities to determine whether one writes to a location that another reads from or writes to. This notion of support is inspired in part by earlier work of Halpern et al. (1983) and Meyer and Sieber (1988). There the support of a semantic entity is identified as, intuitively, the set of locations upon which it depends. Our formulation is different in two ways. The first is that we separate the read and write capabilities of locations; this will turn out to be crucial for the treatment of passivity. The second is that our formulation uses the functor-category approach to program semantics initiated by Reynolds (1981) and Oles (1982, 1985). In fact, the treatment of support given by Meyer and Sieber can be considered to have functors at its core. Making this explicit leads to a cleaner and simpler treatment. We refer to the expository article (O'Hearn and Tennent, 1992) for further discussion of this. See also (Tennent, 1990, 1991; O'Hearn and Tennent, 1992, 1993) for other related work on functors and non-interference.
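The comparison of supports just described can be sketched in a few lines of Python (the encoding is ours): two supports are independent when neither write set meets the other's read or write set.

```python
# A support is a pair (R, W) of finite sets of locations: R is read
# from, W is written to.  Two entities are independent when neither
# writes to a location the other reads from or writes to.
def independent(sup1, sup2):
    (r1, w1), (r2, w2) = sup1, sup2
    return w1.isdisjoint(r2 | w2) and w2.isdisjoint(r1 | w1)

# x := y + 1 reads y and writes x;  z := y + 2 reads y and writes z.
a = (frozenset({"y"}), frozenset({"x"}))
b = (frozenset({"y"}), frozenset({"z"}))
assert independent(a, b)          # shared read access is fine

# x := 1 and y := x interfere: the first writes what the second reads.
c = (frozenset(), frozenset({"x"}))
d = (frozenset({"x"}), frozenset({"y"}))
assert not independent(c, d)
```

The first pair illustrates why separating read and write capabilities matters: with a single "depends on" set, the shared read of y would already count as interference.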
The reader may wonder why we are invoking this category-theoretic machinery. The reason is that the notion of support is surprisingly difficult to make precise in standard cpo semantics. There is no trouble defining it for meanings of basic program types, such as commands (i.e. state transformations) or expressions, but getting a definition that works well at higher types is problematic (try it!). As in the just-mentioned work on the semantics of state, the key role of functors here will be to explicitly include support information in the definition of what counts as a meaning at higher types.
The basic idea of our semantics is to have a category of "possible worlds" in which the objects are supports. The worlds are used to partition semantic entities according to the variables that they read or write, and semantic maps are required to respect this structure. This is done by interpreting types as functors from this category to a category of domains, and terms as natural transformations between these functors. So types come with interference information "built in." This opens up the possibility of defining type constructors, as operations on a functor category, in a way that takes interference information into account.

Overview of the Model
A main aim of our model will be to make Principle II evident. This is an assumption about the nature of the environment, one that we shall take as our starting point. We do this by defining the notion of environment so that the only environments are ones in which distinct identifiers denote non-interfering (independent) meanings. The principle is thus regarded as a prior assumption about the semantic character of the language, present in the structure of all semantic maps (definable or not), rather than as a property to be proved about valuations.
Environments will be built up using a product-like operation as usual. But Principle II tells us right away that a cartesian (categorical) product would not be appropriate in this context. The reason is that we do not want a "diagonal" map that takes an environment and forms a bigger environment in which there are two copies (b, b), denoted by different identifiers, of a single component b from the smaller environment: if b interferes with itself then this would violate II.
We will instead use a restricted form of product which, while not cartesian, satisfies the requirements for symmetric monoidal categories. The intuition about this "tensor" product A ⊗ B is that its elements are pairs whose components don't interfere with one another. A term t with, say, two free identifiers of types A and B is interpreted as a map of the form

  ⟦t⟧ : A ⊗ B → C

thus achieving our basic aim of making Principle II evident. Procedure types are then obtained via a suitable exponential adjunction:

  Hom(A ⊗ B, C) ≅ Hom(A, B ⊸ C).

This gives a very satisfactory explanation of the interaction of procedures and environments in the presence of interference constraints.
For Principle III, we describe a notion of passive object in our category, where a type is passive iff none of its "elements" writes to any storage variables. If A and B are passive objects then the key property is that A ⊗ B is isomorphic to the (categorical) product A × B. In particular, for such a passive A there is a diagonal map from A to A ⊗ A, corresponding to the intuition that sharing of read access is permissible. In categorical terminology, for each passive type (and only for passive types) there is a canonical commutative comonoid structure. This semantic treatment of passivity amounts to a categorical interpretation of the modality "!" from linear logic (Lafont, 1988; Seely, 1989).
This categorical structure suggests a semantic view of the typing rules of interference control that is independent of the interference-specific aspects of the underlying category. This abstraction is especially useful because the details of the semantics of non-interference that we present are, in truth, quite complicated in certain respects. However, we believe that it is important to see how this categorical structure arises from an account of non-interference in a specific case. It is the interplay between the concrete model and categorical principles that is important here, with the former giving assurance that the abstraction provided by the latter is indeed faithful to more concrete programming intuitions. Our semantic analysis would be incomplete if it did not include both of these perspectives.

Outline
The remainder of the paper is organized as follows.
We begin in Section 2 by describing a simple language that incorporates Principles I and II. It should be mentioned that the type system used in this study is actually a stripped-down version of those described by Reynolds. In particular, we do not consider record types or intersection types, which were used by Reynolds (1989) to cope with subtleties in the syntactic treatment of passivity. These omissions are made mainly to simplify the exposition.
The use of functors in explicating certain aspects of state dependence is the topic of Sections 3-5. Section 3 begins with the definition of the category of worlds, and then provides an illustrative example of the category at work. Sections 4 and 5 are devoted to defining and exposing basic properties of support and independence.
In Section 6 we use the independence predicate to describe the tensor and its corresponding exponential adjunction. An interpretation of the typing rules from Section 2 is given in Section 7. This follows fairly directly from categorical considerations. We do not carry out a detailed study of properties of the interpretation itself, but we do look at some semantic equivalences that illustrate reasoning principles that are sound in the presence of interference constraints, principles that would be unsound otherwise.
Sections 8 and 9 are concerned with passivity. Section 8 is devoted to categorical matters, including the connection to intuitionistic linear logic. Section 9 gives interpretations for typing rules that incorporate Principle III into the language.

Interference Constraints
We will use A, B to range over types in our language:

  A, B ::= exp | comm | var | A → B

var, comm and exp are the types of (storage) variables, commands and expressions, respectively. Commands denote state transformations, while expressions denote functions from states to values. The language is Algol-like in the sense of (Reynolds, 1981): all side-effects are encompassed in the type comm, expressions are side-effect-free (but state-dependent), and procedures are called by name. A procedure can cause side-effects only indirectly, when used within a phrase of type comm.
A typing context Γ is a (possibly empty) list x₁ : A₁, ..., xₙ : Aₙ, where the xᵢ are distinct identifiers and the Aᵢ are types. (We assume an infinite, but otherwise unspecified, collection of identifiers.) We write Γ, Δ for the concatenation of Γ and Δ. This assumes that the identifiers in Γ and Δ are disjoint, which is the implicit assumption whenever we write Γ, Δ. Typing judgements are of the form Γ ⊢ t : A, where t is a term, A is a type, and Γ is a typing context. Some typing rules are in Table 1. Other rules are in Sections 7 and 9.

  ----------------- Id
   x : A ⊢ x : A

   Γ, y : B, x : A, Δ ⊢ t : C
  ----------------------------- Exchange
   Γ, x : A, y : B, Δ ⊢ t : C

   Γ ⊢ t : A
  -------------------- Weakening
   Γ, x : B ⊢ t : A

   Γ, x : A ⊢ t : B
  --------------------- →I
   Γ ⊢ λx. t : A → B

   Γ ⊢ p : A → B    Δ ⊢ q : A
  ------------------------------ →E
   Γ, Δ ⊢ p(q) : B

   Γ ⊢ p : comm    Γ ⊢ q : comm
  -------------------------------- Sequencing
   Γ ⊢ p ; q : comm

   Γ ⊢ p : comm    Δ ⊢ q : comm
  -------------------------------- Par
   Γ, Δ ⊢ p ∥ q : comm

   Γ ⊢ V : var    Γ ⊢ E : exp
  ------------------------------ Assignment
   Γ ⊢ V := E : comm

  Table 1: Typing Rules

The use of different contexts for the procedure and the argument in the elimination rule →E plays a central role in the approach. An application p(q) can be typed using this rule only if the free identifiers in p and q are disjoint. This ensures that binding using λ preserves the property that distinct identifiers don't interfere. For example, in (λx. ⋯ x ⋯ y ⋯)q, if we assume that no identifier free in q interferes with y then, using II, we can conclude by Principle I that q doesn't interfere with y. Thus, the bound identifier x won't interfere with y either, preserving Principle II. (So A → B is the type of procedures that never interfere with their arguments.) Put another way, this constraint on application prevents the typing of (λx. ⋯ x ⋯ y := 2 ⋯)y, because y is free in both the argument and the procedure. This is how aliasing between x and y in the body of the procedure is avoided.
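The disjointness side-condition on →E is easy to mechanize. The following Python sketch (a toy encoding of terms, ours rather than the paper's) computes free identifiers and checks the constraint:

```python
# A toy checker for the application rule →E: p(q) is well-typed only
# when the free identifiers of p and q are disjoint (the contexts
# Gamma and Delta are concatenated, so they may not overlap).
def free_ids(term):
    """Terms: ('id', x) | ('app', p, q) | ('lam', x, body)."""
    tag = term[0]
    if tag == 'id':
        return {term[1]}
    if tag == 'app':
        return free_ids(term[1]) | free_ids(term[2])
    if tag == 'lam':
        return free_ids(term[2]) - {term[1]}
    raise ValueError(tag)

def app_ok(p, q):
    # the interference constraint of rule →E
    return free_ids(p).isdisjoint(free_ids(q))

# (lambda x. x y) q  is fine when q's free identifiers avoid y ...
p = ('lam', 'x', ('app', ('id', 'x'), ('id', 'y')))
assert app_ok(p, ('id', 'q'))
# ... but applying it to y itself is rejected, since y is free in
# both the procedure and the argument.
assert not app_ok(p, ('id', 'y'))
```

The second assertion is precisely the aliasing scenario ruled out in the text: the bound x would otherwise come to name the same variable as the free y.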
As in (Reynolds, 1978, 1989), for illustrative purposes we consider a simple form of parallel composition, whereby p ∥ q is well-formed only when p and q don't interfere.
The intention is that this will ensure that their parallel execution is determinate. This is achieved with the different contexts for p and q in the rule Par. For example, we cannot type something like x := 1 ∥ x := 2.
Principle III has not yet been taken into account. It will allow a limited form of sharing between concurrent commands, and between procedures and their arguments. This is deferred to Section 9.

Functors and State Dependence
In the possible-world approach to program semantics, types are interpreted as functors from a small category W of "possible worlds" to a category D of domains, and terms are given by natural transformations. Instead of a single set or domain, types are families of suitably related domains.
This approach was used initially by Reynolds and Oles in the semantics of local-variable declarations. Their idea was that allocation of a local variable in a block such as new x : C induces a change in the "shape" of the store, because it results in there being an extra storage variable (bound to x) that was not available previously. Thus, the collection of possible states varies as storage variables get allocated and de-allocated. This variation was modeled using a category of worlds in which the objects were abstract store shapes and the morphisms were "expansions" that allocated additional variables. We will use possible-world structure to partition semantic entities according to the storage variables (or "locations") that they read from or write to. This will be accomplished with a suitable choice of W. The worlds will be pairs (R, W) of finite sets of locations, with R representing "readable" locations and W "writable" ones. The underlying intuition is that, for a functor ⟦A⟧ : W → D corresponding to a type A, ⟦A⟧(R, W) is a domain of meanings appropriate to A which may read from (only) locations in R, and write to (only) locations in W.
Notation: We write A ∈ C to indicate that A is an object of a category C. D is the category of directed-complete posets (predomains) and continuous functions. Semicolon denotes composition in diagrammatic order in various categories. Our model will be a full subcategory of the functor category D^W.

The Category of Worlds
We assume a fixed infinite set Loc (of locations). The category W has

Objects: The W-objects are pairs (R, W) of finite subsets of Loc.
Morphisms: A W-morphism f : (R, W) → (R′, W′) is an injective function from R ∪ W to R′ ∪ W′ such that f(R) ⊆ R′ and f(W) ⊆ W′. (Though we won't be explicit on this point, to be completely precise we should label each morphism with its domain and codomain in W, to disambiguate cases when a single function can serve as a number of W-morphisms.) Here, f(R) is the image of f on locations in R, and similarly for f(W). We will also write f(X) for the world (f(R), f(W)), when X = (R, W). Composition is just composition of set-theoretic functions. We denote the identity W-morphism on world X by id_X.
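The conditions on W-morphisms can be checked mechanically; the following Python sketch (encoding ours) tests injectivity and preservation of read and write capabilities:

```python
# Check that f (a dict from locations to locations) is a W-morphism
# from world (R, W) to world (R', W'): f must be injective on R ∪ W,
# send readable locations to readable ones, and writable to writable.
def is_w_morphism(f, src, tgt):
    (r, w), (r2, w2) = src, tgt
    dom = r | w
    image = [f[l] for l in dom]
    injective = len(set(image)) == len(image)
    return (injective
            and all(f[l] in r2 for l in r)
            and all(f[l] in w2 for l in w))

# Expanding world ({x}, {x}) into ({x, y}, {x, y}) by inclusion:
f = {"x": "x"}
assert is_w_morphism(f, ({"x"}, {"x"}), ({"x", "y"}, {"x", "y"}))

# Dropping write permission is not allowed: no morphism sends a
# writable location to a merely readable one.
assert not is_w_morphism(f, ({"x"}, {"x"}), ({"x"}, set()))
```

The second assertion illustrates the remark below: conversions can only move to worlds where at least as many locations are readable and writable.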
We extend the set-theoretic notions of subset inclusion (⊆), union (∪), intersection (∩), and the inclusion function L ↪ L′ between sets L and L′ (when L ⊆ L′) to worlds componentwise; for instance, (R, W) ⊆ (R′, W′) iff R ⊆ R′ and W ⊆ W′, and the inclusion morphism (R, W) ↪ (R′, W′) is the inclusion function R ∪ W ↪ R′ ∪ W′. W-morphisms will be used to convert semantic entities at one world to another world where possibly additional locations are readable or writable. The requirements f(R) ⊆ R′ and f(W) ⊆ W′ mean that, in general, we cannot do such a conversion to a world where fewer locations are readable or writable. The injectivity requirement ensures that distinct locations do not get identified when passing from one world to another, a restriction that is crucial for the definition of the morphism parts of various functors for program types.
W is similar in some respects to the category of finite sets and injective functions, a category that can be used to treat local-variable declarations (Moggi, 1989; O'Hearn and Tennent, 1992). The part of our development having mainly to do with Principles I and II, i.e. up until the end of Section 7, could in fact be carried out using this simpler category. However, the separation of read and write aspects of locations will prove to be important for the treatment of passivity.

Command Meanings
We now illustrate the use of state-dependence information in W by considering a functor ⟦comm⟧ of command meanings. For any world (R, W), ⟦comm⟧(R, W) will consist of state transformations for commands that, intuitively, only read from locations in R and write to locations in W. Accordingly, two conditions on state transformations will be imposed, one concerned with locations that are not writable and the other with locations that are not readable. Suppose c is a state transformation. If c doesn't write to a location, then the location should have the same value in the output state as in the input state. For read access, if applying c to state s doesn't read location ℓ then the result should be independent of the value of ℓ. This "independence" splits into two cases: (i) the value of ℓ is left unchanged, and this does not depend on the particular value that ℓ has, or (ii) ℓ is set to a specific value that does not depend on the initial value of ℓ. Case (i) corresponds to the case when c neither reads nor writes ℓ, and (ii) is the case when c writes but does not read ℓ. These points are incorporated into the definition of ⟦comm⟧ below.
If L ⊆ Loc then we define the set of states for L as a set of functions

  S(L) = L → Values

where Values is some set of storable data values (e.g. integers). For simplicity, we have assumed that there is only one kind of value that can be stored as the contents of a location. To cope with more than one kind of storable value, we could tag each location with a type, indicating the kind of value it can hold, and then require that states and W-morphisms respect these tags.
We now define ⟦comm⟧ on W-objects:

  ⟦comm⟧(R, W) ⊆ S(R ∪ W) ⇀ S(R ∪ W), ordered by graph inclusion,

consisting of those partial state transformations that satisfy the two conditions discussed above on non-writable and non-readable locations. Here ⇀ is the partial-function space and (s | ℓ ↦ v) is the state like s except that ℓ maps to v.
In the case that W = R = L this definition reduces to S(L) ⇀ S(L), which is similar to the notion of command meaning that we would use if finite sets and injective functions were to serve as the category of worlds (O'Hearn and Tennent, 1992; Oles, 1985). The extra conditions in the definition correspond to the points (i) and (ii) discussed above, and are due to our separation of read and write aspects of locations.
For c ∈ ⟦comm⟧(R, W) and a W-morphism f : (R, W) → (R′, W′), the morphism part ⟦comm⟧(f) is defined so that, up to location-renaming by f, ⟦comm⟧(f)c extends c by the identity on the extra locations in world (R′, W′) that are not in the image of f.
To illustrate this definition, consider the meaning c₀ ∈ ⟦comm⟧({x, y}, {x, y}) corresponding to the assignment statement x := 1. (For notational simplicity, in examples we will use x, y, ... for identifiers as well as for the locations that they denote.) c₀ takes an input state s ∈ S({x, y}) and produces as output s′ ∈ S({x, y}), where s′(x) = 1 and s′(y) = s(y).
Clearly, x := 1 doesn't read from x or y, so we should be able to remove x and y from the read component of the world. Recalling our two cases about read access, we note that (i) c₀ acts like the identity on y, and the final value of x doesn't depend on y, and (ii) it sets x to 1, irrespective of what values are in the initial state. Therefore, from the definition of ⟦comm⟧ it follows that c₀ is also in ⟦comm⟧(∅, {x, y}) and

  c₀ = ⟦comm⟧((∅, {x, y}) ↪ ({x, y}, {x, y})) c₀.

On the other hand, since x := 1 doesn't write to y we should be able to remove y from the write component. The result is a meaning c₁ ∈ ⟦comm⟧(∅, {x}) that maps s ∈ S({x}) to s′, where s′(x) = 1. ⟦comm⟧ is defined on morphisms so that, for a morphism f : X → Y and c′ ∈ ⟦comm⟧X, ⟦comm⟧(f)c′ is the identity on any locations not in the image of f.
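The two read-access cases for x := 1 can be confirmed by exhaustive testing over a small value set (a Python sketch; the representation of c₀ is ours):

```python
# A sketch of the meaning c0 of "x := 1" at world ({x, y}, {x, y}),
# as a transformer of states for {x, y}.
def c0(s):
    return {"x": 1, "y": s["y"]}

for vx in range(3):
    for vy in range(3):
        out = c0({"x": vx, "y": vy})
        # Case (ii) for x: x is set to a value that does not depend
        # on the initial state, so x need not be readable.
        assert out["x"] == 1
        # Case (i) for y: y is left unchanged, independently of the
        # initial values, so y need not be readable either.
        assert out["y"] == vy

# Hence c0 also lives at the world (∅, {x, y}): it reads nothing.
```

The checks mirror exactly the reasoning used above to conclude that c₀ belongs to ⟦comm⟧(∅, {x, y}).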

Support
In this section we define a notion of support that, intuitively, identifies the locations that a semantic entity reads from or writes to. The strategy will be to generalize the treatment of support for command meanings sketched at the end of the previous section. This notion of support will be applicable to all functors in a full subcategory of the functor category D^W.

Pullbacks and Support
Since types in our model will be functors from W to D, the support of a semantic entity will be a concept defined relative to a possible world, i.e. a W-object. For such a functor A and element a ∈ A(X), where X is a world, we want to identify the "smallest" world that a "comes from." As a prelude to the definition of support we make this "comes from" intuition precise.
Suppose A : W → D, X ∈ W, and a ∈ A(X). When Y ⊆ X, we define

  Y ⊨ a  ⟺  ∃a′ ∈ A(Y). A(Y ↪ X) a′ = a.

Y ⊨ a is read "a comes from Y." (The notation Y ⊨ a does not indicate the relevant functor (A) or world (X), but no ambiguity is likely to arise as these will always be clear from context. Similar remarks apply to the notations support(a), a # b and passive(a) to be defined later.) We are going to define the support of a ∈ A(X) as the smallest Y ⊆ X such that Y ⊨ a. Such a smallest world is not guaranteed to exist for arbitrary functors A. However, we can get a satisfactory definition when the following property holds:

  (∗) for all worlds X, all Y, Z ⊆ X, and all a ∈ A(X):  Y ⊨ a ∧ Z ⊨ a ⟹ Y ∩ Z ⊨ a.

When (∗) holds, the intersection of the (finite) collection of Y's such that Y ⊨ a will be the support of a.
We can ensure property (∗) by making use of a standard connection between intersections and pullbacks. Recall that, in the usual category of sets, if Y, Z ⊆ X, then

        X
       ↗   ↖
      Y     Z
       ↖   ↗
       Y ∩ Z

is a pullback square, where the arrows are inclusion functions. This is also a pullback square in W, when the arrows are what we called the inclusion morphisms and ∩ is the componentwise intersection of W-objects.
Note also that the category W has all pullbacks. Given W-morphisms j : Y → X and k : Z → X, a pullback square

        X
      j↗   ↖k
      Y     Z
      h↖   ↗i
        X′

is obtained by setting X′ = j(Y) ∩ k(Z) and defining h and i in the evident fashion, so that h(ℓ) = j⁻¹(ℓ) and i(ℓ) = k⁻¹(ℓ). Now, property (∗) can be guaranteed to hold whenever the functor A : W → D preserves pullbacks.
Proof of (∗). If Y ⊨ a and Z ⊨ a then there are a₁ ∈ A(Y) and a₂ ∈ A(Z) such that A(Y ↪ X)a₁ = a and A(Z ↪ X)a₂ = a. Let {∗} be a one-point domain, and define f : {∗} → A(Y) and g : {∗} → A(Z) by f(∗) = a₁ and g(∗) = a₂. Consider the diagram obtained by applying A to the intersection pullback square for Y, Z ⊆ X, with apex A(X), together with the maps f and g into A(Y) and A(Z); the remaining arrows are A(i) for the appropriate inclusions i. The outer diagram commutes by the definitions of f and g so, since A preserves pullbacks, there is a unique k : {∗} → A(Y ∩ Z) making the whole diagram commute. In particular, k(∗) ∈ A(Y ∩ Z) must be such that A(Y ∩ Z ↪ Z) k(∗) = a₂, and this implies Y ∩ Z ⊨ a. □

Thus, if A : W → D preserves pullbacks, X is a world, and a ∈ A(X), we define support(a) to be the unique world such that support(a) ⊨ a, and ∀Y ⊆ X : Y ⊨ a ⟹ support(a) ⊆ Y. That is, support(a) is the "smallest" world that a comes from.
For example, an element of ⟦comm⟧({x}, {x}) may have support ({x}, {x}), reading and writing x, or support ({x}, ∅), reading x but writing nothing. These could be denotations of the commands x := x + 1 and if x = 1 then diverge, respectively.
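For concrete command meanings over a finite set of locations and values, supports can even be computed by brute force. The following Python sketch (entirely ours) recovers the read and write sets of a total state transformer by testing the conditions (i) and (ii) from Section 3; divergence is omitted, so the second example above is approximated by its reading behaviour only.

```python
from itertools import product

LOCS = ("x", "y")
VALS = (0, 1, 2)

def states():
    return [dict(zip(LOCS, vs)) for vs in product(VALS, repeat=len(LOCS))]

def upd(s, loc, v):
    s2 = dict(s); s2[loc] = v
    return s2

def writes(c, loc):
    # c writes loc if some run changes loc's value.
    return any(c(s)[loc] != s[loc] for s in states())

def reads(c, loc):
    # c does NOT read loc if the other locations' outputs never depend
    # on loc's initial value, and loc's own output is either always the
    # identity (case (i)) or never depends on loc's initial value
    # (case (ii)).
    pairs = [(s, upd(s, loc, v)) for s in states() for v in VALS]
    others_indep = all(c(s1)[l] == c(s2)[l]
                       for (s1, s2) in pairs for l in LOCS if l != loc)
    identity = all(c(s)[loc] == s[loc] for s in states())
    indep = all(c(s1)[loc] == c(s2)[loc] for (s1, s2) in pairs)
    return not (others_indep and (identity or indep))

def support(c):
    return ({l for l in LOCS if reads(c, l)},
            {l for l in LOCS if writes(c, l)})

# x := x + 1 reads and writes x; x := 1 writes x but reads nothing.
assert support(lambda s: {"x": s["x"] + 1, "y": s["y"]}) == ({"x"}, {"x"})
assert support(lambda s: {"x": 1, "y": s["y"]}) == (set(), {"x"})
```

This is only a finite-model illustration; the paper's definition works uniformly for all pullback-preserving functors, not just for command meanings.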

The Semantic Category
We define K to be the category whose objects are the pullback-preserving functors from W to D and whose morphisms are all natural transformations between such functors, with the usual (vertical) componentwise composition. One can easily verify that ⟦comm⟧ preserves pullbacks.
K has finite products, which are calculated pointwise, as is typical in functor categories. A terminal object 1 ∈ K can be defined as 1(X) = {∗} and 1(f) = the identity function on {∗}, for some singleton domain {∗} and any world X and f : X → Y. If A, B ∈ K, X ∈ W and f is a W-morphism then

  (A × B)(X) = A(X) × B(X)  and  (A × B)(f) = A(f) × B(f),

where × on the right-hand side is the (categorical) product in D. A consequence of the pullback-preservation requirement is that the morphism parts of functors in K are automatically order-reflecting, where a map m : D → E of predomains is order-reflecting iff ∀d, d′ ∈ D : m(d) ⊑ m(d′) in E implies d ⊑ d′ in D. This result (which was pointed out by A. Pitts) will play an important role in our development, ensuring that domain-theoretic structure is respected when we define the tensor product, exponential, and passive types in K.

Lemma 1. Suppose A ∈ K and f : X → Y is a W-morphism. Then the map A(f) is order-reflecting. As a result, it is injective and its image is directed-complete.
Proof. First, note that for any map f : X → Y in W there are a world Z and maps g, h : Y → Z such that

        Z
      g↗   ↖h
      Y     Y
      f↖   ↗f
        X

is a pullback square. Second, note that if

        F
      n↗   ↖o
      E     E
      m↖   ↗m
        D

is a pullback square in D then m is necessarily order-reflecting; this can be shown by a straightforward calculation using the isomorphism between D and the standard construction of the pullback object as a suitable sub-poset of E × E. That A(f) is order-reflecting then follows because A preserves pullbacks. Since A(f) is order-reflecting, injectivity follows from monotonicity, and directed-completeness of the image follows from continuity. □

The next result shows that supports of semantic entities are respected by all natural transformations and by the morphism parts of functors in K. These properties form the technical underpinning for almost all of what follows.
Lemma 2. Suppose A, B ∈ K, X is a world, a ∈ A(X), f : X → Z is a W-morphism, and η : A → B is a natural transformation. Then
(i) support(η_X a) ⊆ support(a); and
(ii) support(A(f) a) = f(support(a)).
Proof. (i). Suppose Y ⊨ a. Then A(Y ↪ X)a′ = a for some a′ ∈ A(Y). Naturality of η then guarantees that B(Y ↪ X) η_Y a′ = η_X a, and the result follows.
(ii). We show ∀Y ⊆ X : Y ⊨ a ⟺ f(Y) ⊨ A(f)a; the desired result follows from this. Suppose Y ⊆ X, and write f↾ : Y → f(Y) for the restriction of f, which is an isomorphism in W. One can easily verify that the square formed by the inclusions Y ↪ X and f(Y) ↪ Z together with f↾ and f commutes, i.e. (Y ↪ X); f = f↾; (f(Y) ↪ Z).

⟹: Assume Y ⊨ a. This means that A(Y ↪ X)a′ = a for some a′. The commuting square then gives A(f(Y) ↪ Z) A(f↾)a′ = A(f)a, and so f(Y) ⊨ A(f)a.
(=: Assume f(Y ) j = A(f)a.Then the right-hand diagram implies A(Y , !X; f)a 0 = A(f)a for some a 0 , because A(f ) is an isomorphism in D (since f is iso in W).Y j = a follows from the injectivity of A(Y , !X) (Lemma 1). 2 4.3.Discussion: On the Level of Granularity With the de nition of command meanings as state transformations, the words \a command writes (reads) a location `" must be understood at a level of abstraction where one ignores intermediate states.For example, the \do-nothing" command skip and the composite x := x+1; x := x?1 would be semantically equal.However, our mathematical notion of support does not depend on the particulars of this treatment of commands because it applies to any functor in K, and so this frees us to study properties of support in relative isolation from details of how commands or other types are interpreted.Indeed, it would possible to interpret commands on a level of abstraction where intermediate states are made visible, and the relevant parts of our semantic theory would still apply.
But one important consequence of interference control is that, in the absence of controlled interaction (e.g. through monitors), it is consistent to view concurrent commands at the level of abstraction of state transformations. We believe that it is simpler and more informative to formulate the model in a way that makes this clear.

Independence
We can now use the support predicate to define independence, a semantic counterpart of non-interference. (We have chosen to use a somewhat neutral term, "independence," to avoid confusion that may arise from possible operational, or implementation-oriented, predispositions with regard to the concept of non-interference; cf. the comments at the end of the last section.) We begin by defining a relation # of independence between possible worlds. If (R1, W1) and (R2, W2) are W-objects then

(R1, W1) # (R2, W2) ⟺ W1 ∩ (R2 ∪ W2) = ∅ and W2 ∩ (R1 ∪ W1) = ∅.

If two worlds are independent then any writable location in one is not in the other. Independence between semantic entities is defined in terms of their supports. This is again a notion that is relative to a possible world. If A, B are K-objects, a ∈ A(X), and b ∈ B(X), then

a # b ⟺ support(a) # support(b).

Example. Suppose c1, c2 ∈ ⟦comm⟧(R, W) and c1 # c2. We define a state transformation c1 ∥ c2 ∈ ⟦comm⟧(R, W) that represents the joint, parallel, capabilities of c1 and c2.
Suppose support(c1) = (R1, W1), support(c2) = (R2, W2), and s ∈ S(R ∪ W). If c1(s)↑ or c2(s)↑ then (c1 ∥ c2)s↑. Otherwise, (c1 ∥ c2)s is the state that agrees with c1(s) on W1, with c2(s) on W2, and with s elsewhere. Note that W1 and W2 are disjoint since c1 # c2, and so c1 ∥ c2 is well-defined. From the definitions of ⟦comm⟧ and # one can easily show that c1 ∥ c2 = c1; c2 = c2; c1 when c1 # c2, where the semicolon here is composition of partial functions. □

The next lemma states basic properties of independence as it relates to the functor-category structure in K. Part (i) says that independence is preserved and reflected by the morphism parts of functors in K; as a result, changing worlds does not alter independence relationships between semantic entities. (The ⟹ direction is essentially the usual "Kripke monotonicity" property that intuitionistic predicates in presheaf toposes Sets^X must satisfy.) Part (ii) states that independence is preserved, though not necessarily reflected, by all maps in our category. From the programming perspective, this says that if you apply a closed term of procedural type to two non-interfering entities, then the two resulting terms must still be non-interfering. (To see why the converse should fail, consider a constant procedure that takes an argument and simply returns the numeral 1: 1 doesn't interfere with itself, but this does not imply that arguments to different calls of the constant procedure do not interfere.)

Lemma 3 Suppose A, B ∈ K, a ∈ A(X), b ∈ B(X), f : X → Y, and φ : A → B. Then independence is preserved and reflected by morphism parts ((i): a # b ⟺ A(f)a # B(f)b), preserved by maps in K ((ii): a # b ⟹ φX a # b), and closed under directed suprema ((iii): a non-empty directed set of independent pairs in (A × B)X has a supremum that is itself an independent pair). Furthermore, if (a′, b′) = ⊔D′ (taking limits in (A × B)X) then support(a′) ⊆ Y and support(b′) ⊆ Z, because of the componentwise calculation of limits in the product and because the morphism parts of A and B preserve and reflect order (Lemma 1). So this limit is in (A ⊗ B)X, and the result follows since ⊔D = ⊔D′ (as D′ is cofinal). □

Finally, we consider how # interacts with finite (categorical) products. This will prove to be important when we define the tensor product in the next section.
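The world-level relation # and the parallel composite c1 ∥ c2 above can be sketched executably. The following is an illustrative Python model, not part of the paper's construction: worlds are pairs of sets of locations, states are dicts, commands are partial functions (None models divergence) tagged with their write sets.

```python
def independent(w1, w2):
    """(R1,W1) # (R2,W2): a writable location of either world
    occurs nowhere (readable or writable) in the other."""
    (r1, wr1), (r2, wr2) = w1, w2
    return wr1.isdisjoint(r2 | wr2) and wr2.isdisjoint(r1 | wr1)

def par(c1, w1, c2, w2):
    """c1 || c2 for commands with disjoint write sets w1, w2."""
    assert w1.isdisjoint(w2)
    def c(s):
        s1, s2 = c1(s), c2(s)
        if s1 is None or s2 is None:        # either component diverges
            return None
        out = dict(s)
        out.update({l: s1[l] for l in w1})  # c1 decides its writable locations
        out.update({l: s2[l] for l in w2})  # c2 decides its writable locations
        return out
    return c

# supports ({x,z},{x}) and ({y,z},{y}) share only the read-only z
print(independent(({"x", "z"}, {"x"}), ({"y", "z"}, {"y"})))  # True

inc_x = lambda s: {**s, "x": s["x"] + 1}    # writes only x
set_y = lambda s: {**s, "y": s["z"]}        # reads z, writes only y
both = par(inc_x, {"x"}, set_y, {"y"})
s = {"x": 0, "y": 0, "z": 5}
print(both(s) == set_y(inc_x(s)))           # True: agrees with c1; c2
```

The shared location z is read-only in both supports; this is exactly the limited form of sharing that passivity licenses later in the paper.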
Lemma 4 Suppose A, B, C ∈ K, a ∈ A(X), b ∈ B(X), c ∈ C(X), and ⋆ is the unique element of 1(X). Then: (i) ⋆ # a; (ii) b # a ⟺ a # b; (iii) a # (b, c) ⟺ a # b ∧ a # c; (iv) (a, b) # c ∧ a # b ⟺ a # (b, c) ∧ b # c. (In (iii), (b, c) is considered as an element of (B × C)X, and similarly for (iv).)

The Symmetric Monoidal Closed Structure
We now use the independence predicate to define a tensor product ⊗ on K. This will be used to interpret contexts on the left-hand side of ⊢ in typing judgements. Then we construct the corresponding exponential adjunction for ⊸ that will model procedure types.

The Tensor Product
The bifunctor ⊗ on K is a subfunctor of the categorical product ×, restricted so that different components are independent of one another.
If A, B are K-objects then (A ⊗ B)X = {(a, b) ∈ A(X) × B(X) | a # b}, ordered componentwise, and (A ⊗ B)f = A(f) × B(f), for worlds X and W-morphisms f : X → Y. (A ⊗ B)X is directed-complete by Lemma 3(iii), and (A ⊗ B)f is well-defined, i.e. (A(f)a, B(f)b) ∈ (A ⊗ B)Y, by the Kripke monotonicity property for # (Lemma 3(i)). Pullback-preservation is immediate from the definition of A ⊗ B on W-morphisms.
If φ : A → A′ and ψ : B → B′ are maps in K then the natural transformation φ ⊗ ψ : A ⊗ B → A′ ⊗ B′ is given by (φ ⊗ ψ)X(a, b) = (φX a, ψX b) when (a, b) ∈ (A ⊗ B)X. This is well-defined by Lemmas 3(ii) and 4(ii). Preservation of identities and composites for A ⊗ B and − ⊗ − is straightforward.
We can obtain "projections" pri : A1 ⊗ A2 → Ai, i = 1, 2, by using the evident inclusion map from ⊗ to ×. The direct definition is pr1 X(a, b) = a and pr2 X(a, b) = b. ⊗ is not a categorical product because there is no pairing (or diagonal). However, it does have symmetric monoidal structure. That is, there are symmetry, associativity, and unity isomorphisms that commute in an appropriately coherent fashion. (The "projections" for ⊗ can also be explained by the fact that the terminal object 1 is the unit of this monoidal structure.)

Proposition 5 There are isomorphisms A ⊗ B ≅ B ⊗ A, A ⊗ (B ⊗ C) ≅ (A ⊗ B) ⊗ C, and A ⊗ 1 ≅ A satisfying the Mac Lane-Kelly equations for symmetric monoidal categories.

The Exponential
The description of ⊸ will follow the standard definition of exponentiation in functor categories, with some alterations reflecting the requirement that B ⊸ − be adjoint to − ⊗ B, instead of − × B.
First, we recall how (the object part of) the exponentiation A ⇒ B in a presheaf category Sets^C is typically defined (e.g. Lambek and Scott, 1986). For each X ∈ C, there is a representable functor hX = Hom_C(X, −) from C to Sets, and the Yoneda lemma tells us that, no matter how exponentiation ⇒ is defined, we must have

(A ⇒ B)X ≅ Hom_{Sets^C}(hX, A ⇒ B).

Thus, if A ⇒ − is to be right adjoint to − × A then (using currying) we must have that (A ⇒ B)X is isomorphic to Hom_{Sets^C}(hX × A, B), and we can simply take this last hom set to be the definition of (A ⇒ B)X. Our case will be treated similarly, using ⊗ in place of ×.
If X is a W-object then the functor hX : W → D is defined by hX = Hom_W(X, −); F, where F is the embedding functor from the category of sets and functions to D that equips a set with the discrete order. An element of hX(Y) is a W-morphism f : X → Y. The morphism part of hX is such that if f : X → Y and g : Y → Z then (hX g)f ∈ hX(Z) is just the composite f; g. Pullback preservation is a consequence of the standard fact that representable functors preserve limits (Mac Lane, 1971). So hX is in fact a K-object.
If A, B are K-objects and X is a world then we define (A ⊸ B)X = Hom_K(hX ⊗ A, B), ordered pointwise.
Here, by the pointwise order we mean that, for p1, p2 ∈ (A ⊸ B)X,

p1 ⊑ p2 ⟺ ∀Y ∈ W. ∀(g, a) ∈ (hX ⊗ A)Y. p1 Y(g, a) ⊑ p2 Y(g, a).

A result of this use of ⊗ in place of × is that procedure meanings can only be applied to arguments that they are independent of, as will become evident below when we consider the application map.
Now we define the morphism parts of A ⊸ B and − ⊸ −. If f : X → Y, (g, a) ∈ (hY ⊗ A)Z and m ∈ (A ⊸ B)X then

((A ⊸ B)f m)Z(g, a) = m Z(f; g, a).

Notice that (f; g, a) ∈ (hX ⊗ A)Z because g # a. One can show by straightforward calculations that A ⊸ B preserves pullbacks. If φ : A′ → A, ψ : B → B′ and p ∈ (A ⊸ B)X then (φ ⊸ ψ)X p is the bottom of the following diagram. The application map app is more subtle, however, because of the use of ⊗ in the definition of ⊸. Application for presheaf exponentiation is given using identity morphisms: appX(p, a) = p X(idX, a). We cannot use this equation here, because the definition of ⊸ would require that idX # a, and this is not always the case.
However, if p ∈ (A ⊸ B)X then, by injectivity of the morphism part of A ⊸ B (Lemma 1), there is a unique element ⌈p⌉ ∈ (A ⊸ B)support(p) such that

(A ⊸ B)(support(p) ↪ X)⌈p⌉ = p.

Furthermore, we clearly have that (support(p) ↪ X) # a whenever p # a, and so the pair ((support(p) ↪ X), a) is in (h_{support(p)} ⊗ A)X. These observations lead to the following definition of application:

appX(p, a) = ⌈p⌉X(support(p) ↪ X, a).

Proposition 6 For all B ∈ K, − ⊗ B is left adjoint to B ⊸ −.

Interpretation of Typing Rules
In this section we define a semantics for the language from Section 2. The meaning of a term will be given by a natural transformation between functors in K. More specifically, each derivation of a typing judgement Γ ⊢ t : A will determine a natural transformation ⟦t⟧ from a functor ⟦Γ⟧ ∈ K of environments appropriate to Γ to a functor ⟦A⟧ ∈ K of meanings appropriate to A.
(To be completely precise we would decorate these meanings ⟦t⟧ with data indicating a derivation, and then prove a coherence result stating that different derivations of a judgement always lead to the same meaning. See Breazu-Tannen et al. (1989) for discussion of coherence in this type-theoretic sense.)

Types and Environments
Now we define suitable functors ⟦A⟧ and ⟦Γ⟧ for types A and typing contexts Γ. The functor ⟦comm⟧ of command meanings has already been specified in Section 3.
Variables are locations that are both readable and writable: ⟦var⟧(R, W) = R ∩ W, ordered discretely. We have opted for a "simple" semantics here that cannot handle, e.g., state-dependent variables such as conditional variables. On W-morphisms, ⟦var⟧ is defined by ⟦var⟧ f ℓ = f(ℓ).

The functor ⟦exp⟧ of expression meanings is

⟦exp⟧(R, W) = S(R) ⇀ Values, ordered by graph inclusion, with ⟦exp⟧ f e s = e(fR; s),

where fR : R → R′ is the evident function obtained by restricting the W-morphism f : (R, W) → (R′, W′). Procedure types are interpreted as ⟦A → B⟧ = ⟦A⟧ ⊸ ⟦B⟧.
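Concretely, the morphism part of ⟦exp⟧ just reads the larger state through f. A small illustrative sketch (not from the paper; world morphisms are modeled as renamings of read-locations):

```python
def exp_map(f_R, e):
    """Morphism part of the expression functor: precompose the
    expression with the state restriction induced by f_R."""
    return lambda s: e({r: s[f_R[r]] for r in f_R})

e = lambda s: s["n"] + 1       # an expression reading only location n
f_R = {"n": "n"}               # expansion of worlds: R = {n} into R' = {n, tmp}
e2 = exp_map(f_R, e)
print(e2({"n": 4, "tmp": 9}))  # 5: the new location tmp is invisible to e
```

This makes visible why ⟦exp⟧ is insensitive to the write-component of worlds, which is the observation behind its passivity later on.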
For simplicity, we will regard products of the form A ⊗ (B ⊗ C) and (A ⊗ B) ⊗ C as being identical (in light of Proposition 5), and write A ⊗ B ⊗ C.
The environment functors are

⟦x1 : A1, ..., xn : An⟧ = ⟦A1⟧ ⊗ ··· ⊗ ⟦An⟧ and ⟦⟧ = 1,

where ⟦⟧ is the empty typing context. Intuitively, an environment u ∈ ⟦Γ⟧X at world X is a tuple (u1, ..., un) of meanings, the components of which don't interfere with one another.
Example: Suppose that ℓ1, ℓ2 ∈ ⟦var⟧(R, W) and ℓ1 # ℓ2. Since both of these locations are in R ∩ W, the definition of independence between worlds means that ℓ1 ≠ ℓ2. Thus, the definition of environments using ⊗ ensures that there is no aliasing.

λ-Calculus Rules
The pure λ-calculus rules from Table 1 are interpreted as follows, where id, exch and proj are appropriate identity, exchange and projection maps (recall that ⊗ has "projections").

⟦Id⟧ is the identity id : ⟦A⟧ → ⟦A⟧; the structural rules are interpreted using exch and proj, and →I and →E using currying and application for ⊸. The reader will see that we have suppressed some trivial applications of unity isomorphisms in the interpretations of these rules.
The placement of ⊗ in the interpretation of →E is the semantic counterpart of the syntactic requirement that a procedure and its argument don't interfere.
The usual β and η laws of λ-calculus are valid according to this interpretation, because of the adjunction between − ⊗ B and B ⊸ −. The validity of β reflects the call-by-name nature of the language.
Principle II is evident from the definition of environments. As for Principle I, the fact that closed terms don't interfere with any other terms can be explained semantically as follows.
A closed term should correspond to a map of the form m : 1 → A, for some A. Given any world X, B ∈ K, and b ∈ B(X), Lemma 4(i) guarantees that ⋆ # b, and since maps in K preserve independence (Lemma 3(ii)) it follows that m(X)⋆ # b. Thus the meanings of closed terms are independent (in the # sense) of the meanings of other terms. (The principle can be explained similarly for open terms, using Lemma 4(iii) and 4(i) to show that an environment doesn't interfere with a semantic entity if its components don't, and then using the fact that the meaning of a term, as a map in K, must preserve independence.)

Selected Algol-like Rules
The other rules in Table 1 are interpreted as follows.
Here, ⟨−, −⟩ is pairing for the product × in K, and par : ⟦comm⟧ ⊗ ⟦comm⟧ → ⟦comm⟧, seq : ⟦comm⟧ × ⟦comm⟧ → ⟦comm⟧, and ass : ⟦var⟧ × ⟦exp⟧ → ⟦comm⟧ are defined as follows, where c1 ∥ c2 is as in Section 5, c1; c2 is composition of partial functions, and s|R is the restriction of state s ∈ S(R ∪ W) to R:

par X(c1, c2) = c1 ∥ c2,  seq X(c1, c2) = c1; c2,  ass X(ℓ, e)s = [s | ℓ ↦ e(s|R)], undefined if e(s|R)↑.

Notice the roles of ⊗ in Par and × in Seq: concurrent commands may not interfere with one another, while sequentially-composed commands may.
A dereferencing coercion that converts a variable to an expression can be given by the map j : ⟦var⟧ → ⟦exp⟧ such that j(X) ℓ s = s(ℓ).
Two "global" commands are skip : comm and diverge : comm. They are interpreted by maps skip, diverge : 1 → ⟦comm⟧ such that

skip(R, W)⋆ = the identity function on S(R ∪ W)
diverge(R, W)⋆ = the everywhere-undefined partial function.

These are the only maps from 1 to ⟦comm⟧. Now we consider variable declarations. (This is a good test case for our independence relation #.) To be consistent with Principle II, we will need to ensure that, in a block of the form new x. C, the meaning of the locally-declared identifier x is independent of the meanings of other identifiers. We would certainly expect this to be the case, since the intention is that x denotes a newly allocated variable that is inaccessible by non-local entities.
Matters are simplified if we regard new x. C as sugar for new(λx. C), where new is a combinator of type (var → comm) → comm. For the semantics of new we define a map new : (⟦var⟧ ⊸ ⟦comm⟧) → ⟦comm⟧; by adjointness (Proposition 6) this determines a map from 1 to ⟦(var → comm) → comm⟧. If p ∈ (⟦var⟧ ⊸ ⟦comm⟧)(R, W) and s ∈ S(R ∪ W), then

new(R, W) p s = f; s2,

where ℓ ∉ R ∪ W is a fresh location, f : (R, W) → Y is the evident morphism into the world Y expanded with ℓ, s1 extends s with an initial value for ℓ, and s2 = p Y(f, ℓ) s1 (with new(R, W) p s undefined if s2 is). The idea here is that the morphism f connects the non-local world to the expanded world with the additional variable. The procedure p is executed in this expanded world with the fresh location ℓ passed as an argument, and this location is de-allocated on termination. The de-allocation is performed using f, obtaining the state f; s2 ∈ S(R ∪ W) from the state s2 ∈ S(Y) at the expanded world. Notice that f # ℓ, which is necessary for the argument (f, ℓ) to be of the right semantic type; one might say that the principle that non-local entities don't interfere with local variables is forced on us by the use of ⊸ in the semantic type of new. We refer to (Oles, 1982, 1985; O'Hearn and Tennent, 1992; Tennent, 1991) for further discussion of this form of local-variable semantics. (We mention only that a specific choice of fresh location ℓ need not be given, because of the naturality of p: any ℓ ∉ R ∪ W will do.) Other valuations, e.g. for conditionals and while loops, are as usual.
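The allocate/run/de-allocate pattern behind new can be sketched concretely. This is an illustrative Python model, not the paper's categorical definition: the role of the morphism f in de-allocation becomes restriction of the final state to the original locations, and the fresh-name choice is a stand-in for naturality in ℓ.

```python
def new(proc, init=0):
    """Allocate a fresh location, run proc on it, then de-allocate."""
    def c(s):
        loc = next(n for n in ("t0", "t1", "t2") if n not in s)  # fresh name
        s2 = proc(loc)({**s, loc: init})     # execute in the expanded world
        if s2 is None:
            return None                      # divergence propagates
        return {l: s2[l] for l in s}         # de-allocate: restrict to old world
    return c

# new t. (t := x; x := x + 1): the local t vanishes on exit
prog = new(lambda t: (lambda s: {**s, t: s["x"], "x": s["x"] + 1}))
print(prog({"x": 3}))  # {'x': 4}
```

The final restriction is the "f; s2" step: only locations of the non-local world survive, so the block's meaning lives at the original world.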

Discussion
There are simple equivalences, valid in the model, that illustrate reasoning principles that are sound in the presence of interference constraints. For example,

x := 1; y := 2 ≡ y := 2; x := 1

when x, y : var are different identifiers. Because of the use of ⊗ in environments, x and y must denote independent locations, so assigning to one won't affect the other. This equivalence would not hold in a language that allowed aliasing.
Principle II applies to types other than var, so it is more than just a statement about aliasing. For example (assuming the obvious interpretation of if), the following equivalence is valid:

if e = 0 then (c; if e = 0 then diverge) else diverge ≡ diverge

for identifiers c : comm and e : exp. The intuition that is captured here is that execution of c won't change the value of e, because c and e are different identifiers.
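The intuition can be checked on the concrete state-transformer reading. This is an illustrative sketch (command, expression, and locations are invented): because c writes only locations that e does not read, the inner test of e agrees with the outer one, so the whole phrase is everywhere-undefined.

```python
diverge = lambda s: None                  # everywhere-undefined transformer
skip = lambda s: s
seq = lambda c1, c2: (lambda s: None if c1(s) is None else c2(c1(s)))
cond = lambda b, t, f: (lambda s: t(s) if b(s) else f(s))

e_zero = lambda s: s["e"] == 0            # e reads only location e
c = lambda s: {**s, "b": s["b"] + 1}      # c writes only location b, so c # e

lhs = cond(e_zero, seq(c, cond(e_zero, diverge, skip)), diverge)
for s in ({"e": 0, "b": 0}, {"e": 1, "b": 0}):
    print(lhs(s) == diverge(s))  # True in both cases
```

If c could instead write the location e reads, the inner test could succeed after c runs and the equivalence would fail, which is exactly what the interference constraints rule out.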
It is straightforward to prove an adequacy correspondence with a suitable operational semantics (Lent, 1992). However, the model is not fully abstract. Some of the difficult test equivalences for local variables described by Meyer and Sieber (1988) are not valid here (specifically, their Examples 5 and 7).

Semantical Passivity
The presentation thus far has not dealt with typing rules that permit any sharing between identifiers. In this section and the next we extend our analysis to account for Principle III from the Introduction. This principle allows for a limited amount of sharing, where read, but not write, access is involved. The main semantic concept that must be explained is that of passivity, a property of types and phrases that amounts to the absence of write-access capabilities.
This section is concerned with an analysis of basic semantic properties of passivity. Typing rules are considered in Section 9.

Passive Elements
A program phrase is passive if it doesn't write to any (global) locations. We wish to explain this semantically by saying when an "element" of a semantic domain is passive. As with the concept of independence, this will be relative to a possible world.
If A is a K-object and a ∈ A(R, W) then we define passive(a) ⟺ (R, ∅) ⊨ a.
That is, a is passive if it comes from a world in which there are no writable locations.
Example. Returning to the command meanings from Sections 3 and 4, examining their supports shows passive(c3), passive(c4), and passive(c5), while ¬passive(c2) and ¬passive(c0). The commands diverge, skip, and if x = 1 then diverge are passive, while x := x + 1 and x := 1 are not. □

The following result describes basic properties of passivity. Part (i) says essentially that closed terms, given by maps out of 1, are passive. (ii) relates passivity to products, and (iii) is Principle III. (iv) connects passivity and independence, and in particular implies that passivity is preserved and reflected by morphism parts of K-objects, and preserved by K-maps (as in Lemma 3).

Lemma 7 Suppose A, B are K-objects, a ∈ A(X), b ∈ B(X), φ : A → B, f : X → Y, and ⋆ is the unique element of 1(X). Then: (i) passive(⋆); (ii) passive(a) ∧ passive(b) ⟺ passive(a, b); (iii) passive(a) ∧ passive(b) ⟹ a # b; (iv) passive(a) ⟺ a # a.

The passivity predicate says when an "element" is passive. We call a K-object A passive if all of its elements are: A ∈ K is passive ⟺ ∀X ∈ W. ∀a ∈ A(X). passive(a). The functor ⟦exp⟧ is easily seen to be passive, because its definition does not mention the write components of worlds at all. ⟦comm⟧ and ⟦var⟧ are not passive.
Passive objects are manufactured by an endofunctor ! on K: !A X = {a ∈ A(X) | passive(a)}, with ordering inherited from A(X), and with !A f and (!φ)X the evident restrictions of A f and φX, where f is a W-morphism and φ : A → A′ is a map in K. !A X is directed-complete by Lemmas 7(iv) and 3(iii). !A f and (!φ)X are well-defined by Lemmas 7(iv) and 3(i) and (ii). The functoriality of ! and !A is straightforward, and pullback preservation follows directly from the definition of !A f and pullback preservation for A.
We now consider the relationship with the ! modality from linear logic. We do this by interpreting the usual logical rules for !.
The rules are interpreted by maps of the right functionality for Dereliction, Contraction, and R!, where id is the identity (since !! = !). In the next section we will use the Dereliction map to interpret application for passive procedures, the diagonal map to interpret Contraction for passive types, and R! to interpret λ-abstraction for passive procedures. These maps also satisfy the usual categorical axioms for !, amounting on the logical level to equivalences between proofs (e.g. Seely, 1989).
Proposition 9 (i) ! is a comonad on K. (ii) ! carries the canonical commutative comonoid structure for × to a commutative comonoid structure for ⊗.
Proof. (i). The comonad structure is given by defining εA X : !A X → A X as the evident inclusion map and taking δ as the identity (since !! = !).
(ii). The canonical comonoid structure (with respect to ×) on an object A is given by the diagonal diagA : A → A × A and the unique map mA : A → 1. ! takes the diagonal to !diagA : !A → !(A × A), and this is just the diagonal map diag!A : !A → !A × !A (note the equality !(A × A) = !A × !A). Also, since !1 = 1, !mA : !A → 1 is the unique map, and so (!A, !diagA, !mA) is the canonical commutative comonoid structure (with respect to ×) for !A. Finally, observing that !A × !A = !A ⊗ !A and recalling that 1 is the unit of ⊗, we get that it is a commutative comonoid with respect to (⊗, 1) as well. □

To sum up, the structure on the category K that has been found is that of a symmetric monoidal closed category (1, ⊗, ⊸) with finite products (1, ×) and a functor ! satisfying the conditions of Proposition 9.
Theorem 10 Our category is a model of intuitionistic linear logic.
(Since Weakening is valid, we actually have a model of affine logic with "of course" types. There are also additional properties satisfied by our ! that are not valid in all intuitionistic linear models, such as the isomorphism !A ⊗ !B ≅ !A × !B and the stronger condition of Lafont (1988) that !A is the cofree commutative comonoid over A (the ⟸ direction of 7(iv) is important for this).) This relation to linear logic is interesting. There is in fact a striking similarity in the goals of syntactic control of interference and linear functional programming, as set out in (Lafont, 1988; Holmström, 1988; Wadler, 1990; Abramsky, 1993). These might be considered as two heads of the same coin. One aims to make imperative programming more elegant, by limiting difficulties caused by aliasing and interference, while the other aims to make functional programming more efficient, by permitting destructive updating in a purely functional context and by limiting the need for garbage collection. That they have similar formal structure is perhaps more than coincidence. (A preliminary, not entirely satisfactory, syntactic study of this relationship has been attempted in (O'Hearn, 1991).)

Passive Types
This section considers syntax rules that take Principle III into account. The most important addition will be a restricted form of the structural rule of Contraction, which was conspicuously absent in Section 2. Contraction is the source of sharing in λ-calculus, so to maintain Principle II we will allow it only for passive types.
It should be mentioned that the presentation in this section departs somewhat from (Reynolds, 1989). One difference is that we have chosen to use explicit structural rules in our formulation, while Reynolds' systems are in a more familiar format where these rules are left implicit. This is for the most part a minor point, though focusing on structural rules perhaps more clearly illustrates the logical flavour of the approach (e.g. the restricted Contraction). A more significant departure is that we do not consider the use of intersection types. We will comment briefly on this at the end of the section.

Γ, x : A, y : A ⊢ t : B
Γ, z : A ⊢ t[z/x, z/y] : B          Contraction (A is passive)

Γ ⊢ p : A →P B    Δ ⊢ q : A
Γ, Δ ⊢ p(q) : B                     →P E

Γ, x : A ⊢ t : B
Γ ⊢ λx.t : A →P B                   →P I (Γ is passive)

The grammar of types is extended to include types for passive procedures: A, B ::= ... | A →P B.
The intention is that a procedure of type A →P B must not write to any (global) variables. For example, λx. x := y is of type var →P comm, when y : exp, because the only free identifier y is in a read-only position. On the other hand, λy. x := y is not of type exp →P comm when x : var, because the procedure has write access to the global variable denoted by x.
⟦A →P B⟧ is defined as !(⟦A⟧ ⊸ ⟦B⟧). We call types of the form exp and A →P B passive. (Incidentally, if B is a passive type then ⟦A → B⟧ and ⟦A →P B⟧ are isomorphic, so there is a certain amount of redundancy in the types; Reynolds (1989) in fact disallows types of the form A → B when B is passive.) A context x1 : A1, ..., xn : An is termed passive if each Ai is a passive type. The empty context is considered passive. Some typing rules are in Table 2. In Contraction, t[z/x, z/y] is t with z substituted for x and y.
Lemma 11 If A is a passive type then ⟦A⟧ is a passive K-object. If Γ is a passive typing context then ⟦Γ⟧ is a passive K-object.

Proof. ⟦exp⟧ is passive, and ⟦A →P B⟧ is passive by Proposition 8(i) and (ii). The result for ⟦Γ⟧ then follows from Proposition 8(iii) and (iv). □

Using Contraction, passive identifiers can be shared between a procedure and its argument, or between concurrent commands. For example, assuming typical rules for + and 1, we can type x := z ∥ y := z + 1:

x:var, z1:exp ⊢ x := z1 : comm        y:var, z2:exp ⊢ y := z2 + 1 : comm
x:var, y:var, z1:exp, z2:exp ⊢ x := z1 ∥ y := z2 + 1 : comm          Par
x:var, y:var, z:exp ⊢ x := z ∥ y := z + 1 : comm                     Contraction

The restriction of Contraction to passive types is essential. If it were allowed for var then we could type x := 1 ∥ x := 2, or a procedure call in which the argument is a variable that also occurs free in the procedure body, which would lead to variable aliasing.
Contraction is interpreted by the diagonal map diag : ⟦A⟧ → ⟦A⟧ ⊗ ⟦A⟧, which exists by Proposition 9 and Lemma 11. For →P I, we obtain curry ⟦t⟧ : ⟦Γ⟧ → (⟦A⟧ ⊸ ⟦B⟧) as usual, and then apply ! to get !curry ⟦t⟧ : !⟦Γ⟧ → !(⟦A⟧ ⊸ ⟦B⟧). Since Γ is a passive context, ⟦Γ⟧ is a passive K-object. Thus !⟦Γ⟧ = ⟦Γ⟧, and the map !curry ⟦t⟧ is of the right functionality for the →P I rule.
Finally, we remark that the interpretation of passive procedure types using !(A ⊸ B) can be characterized via an adjunction. Let − ⊗p B be the restriction of − ⊗ B to the subcategory Pass of passive objects in K (B need not be passive here). Then !(B ⊸ −), as a functor from K to Pass, is right adjoint to − ⊗p B. (This follows straightforwardly from Proposition 6 and the fact that maps in K preserve passivity.) Thus, we have an isomorphism of hom sets

Hom_K(A ⊗ B, C) ≅ Hom_K(A, !(B ⊸ C))

which holds in general only when A is passive. This means also that Hom_K(A, B) ≅ Hom_K(1, !(A ⊸ B)), since 1 is passive. (We will use this last isomorphism implicitly when interpreting block expressions below.)

Block Expressions
We illustrate passive procedure types with a form of block expression:

blkexp : (var →P comm) →P exp.

Intuitively, execution of blkexp(t) proceeds by first allocating a new (local) location ℓ, then executing t(ℓ) in an extended state in which ℓ is initialized to some value, and on termination returning the final value of ℓ as the value of the expression block. The intention is that passivity of t should ensure that there are no changes to non-local variables, and so the use of side-effects in the body of a block expression should be invisible outside its scope. The treatment of block expressions here is inspired by (Tennent, 1991).
As an example block expression, if n : exp then

blkexp(λfact. new(λk. fact := 1; k := n; while k ≠ 0 do (fact := fact × k; k := k − 1)))

calculates the factorial of a non-negative integer n in a side-effect-free fashion.
We give the semantics by defining a map blkexp : !(⟦var⟧ ⊸ ⟦comm⟧) → ⟦exp⟧. If t ∈ !(⟦var⟧ ⊸ ⟦comm⟧)(R, W) and s ∈ S(R), then

blkexp(R, W) t s = s2(ℓ), if ⌈t⌉Y(f, ℓ) s1 = s2; undefined, if ⌈t⌉Y(f, ℓ) s1 ↑,

where ℓ ∉ R ∪ W is any fresh location, f = (R, ∅) ↪ Y with Y = (R ∪ {ℓ}, {ℓ}), ⌈t⌉ is determined by ⟦var →P comm⟧((R, ∅) ↪ (R, W))⌈t⌉ = t, and s1 = (s | ℓ ↦ 0). ⌈t⌉ exists because t is passive, and is unique by Lemma 1. Since the command meaning ⌈t⌉Y(f, ℓ) lives at the world (R ∪ {ℓ}, {ℓ}), by the definition of ⟦comm⟧ this means that the values of the global variables in R ∪ W are not altered. That is, the fresh location ℓ is the only location that can have a different value in s2 than in s1. Thus, the passivity of t ensures that the expression block is side-effect-free when viewed from outside the scope of the declaration, where changes to local variables aren't visible.
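On the concrete state model, the definition of blkexp can be sketched as follows (illustrative Python, with invented location names; as above, the local location is initialized to 0):

```python
def blkexp(t, init=0):
    """Run the passive procedure t on a fresh location and return
    that location's final value as the value of the expression."""
    def e(s):
        loc = "l0" if "l0" not in s else "l1"   # a fresh location
        s2 = t(loc)({**s, loc: init})           # s1 = (s | loc -> init)
        return None if s2 is None else s2[loc]  # final value of the local
    return e

# factorial of n as a side-effect-free expression
def body(fact):
    def cmd(s):
        acc, k = 1, s["n"]         # local computation; writes only fact
        while k != 0:
            acc, k = acc * k, k - 1
        return {**s, fact: acc}
    return cmd

print(blkexp(body)({"n": 5}))  # 120
```

Passivity of the body (it writes only the freshly allocated location) is what makes the result a legitimate expression: the global part of the state comes back unchanged.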

Recursion
As stated in (Reynolds, 1978, 1989), it is not possible to include a general fixed-point combinator in syntactic control of interference as it presently stands. If F = λf. t assigns to a global variable denoted by a free identifier, then f and this identifier will interfere in a fixed-point definition YF, violating Principle II. Another way to see the problem is to notice that the right-hand side of the fixed-point equation YF = F(YF) violates the restriction that a procedure never interfere with its argument. This difficulty is mitigated somewhat by the fact that we can define fixed-points of passive procedures. If λf. t is passive then there will be no assignments (to global variables) in the body that could cause interference with f. Similarly, there is no problem with the fixed-point equation.
Jumps cause related problems. If we take the position that a "label" denotes a continuation, then it interferes with any variables that are assigned to "later." This seems difficult to reconcile with the principle that distinct identifiers don't interfere, without relaxing the principle or introducing a naming convention that groups interfering continuations and variables together into a common collection.
These problems are the subject of current research. Here we simply indicate that the relevant fixed-points for passive procedures do exist in our model.
Fixed-points are calculated in the full subcategory K0 of K whose objects A are such that A(X) has a least element, for each W-object X, and A(f) is strict, for each W-morphism f. The strictness requirement applies only to the objects of K0; a component φ(X) of a natural transformation in K0 need not be strict. K is an analogue of the category of "predomains," while K0 is a category of "domains." Notice that the "simple" ⟦var⟧ that we have opted for does not lie in K0, though ⟦comm⟧ and ⟦exp⟧ do. If A is any K-object and B is a K0-object then A ⊸ B is in K0.
In fact, all of the structure (⊗, ⊸, !) cuts down to this smaller category, including the exponential adjunction and the comonoid structure for !.
The strictness requirement has two (related) purposes. First, "global" least elements are needed in B for (A ⊸ B)X to have a least element (Oles, 1982). Second, for the fixed-point combinator to be natural, the calculation of fixed-points must be preserved by the morphism parts of functors, and strictness is essential for this. Now we can define the fixed-point map YA : !(A ⊸ A) → A for objects A in K0. If m ∈ !(A ⊸ A)(R, ∅) then

fix(m) = ⊔ {Fⁱ(⊥) | i is a natural number}

where F⁰(d) = d and Fⁱ⁺¹(d) = m(R, ∅)(id(R,∅), Fⁱ(d)), for d ∈ A(R, ∅).
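The construction is ordinary Kleene iteration from ⊥. A finite-depth sketch on state transformers (illustrative, not the paper's categorical construction; ⊥ of a ⟦comm⟧-like domain is the everywhere-undefined function, modeled as returning None):

```python
def fix(F, depth=64):
    """Kleene iteration: approximate the least fixed point from bottom."""
    f = lambda s: None           # bottom: the everywhere-undefined transformer
    for _ in range(depth):       # f = F^depth(bottom)
        f = F(f)
    return f

# while k != 0 do (fact := fact * k; k := k - 1), as a fixed point
F = lambda loop: (lambda s:
    s if s["k"] == 0
    else loop({**s, "fact": s["fact"] * s["k"], "k": s["k"] - 1}))
print(fix(F)({"fact": 1, "k": 5}))  # {'fact': 120, 'k': 0}
```

Passivity of the functional corresponds here to F reading and writing only the locations explicitly mentioned, so each approximant stays at the same world.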
Notice that id(R,∅) # d for such a d, since both are passive, and Fⁱ(d) ∈ A(R, ∅) because m is passive. Notice also that id(R,∅) # ⊥ because support(⊥) = (∅, ∅), so ⊥ ∈ A(R, ∅) and {Fⁱ(⊥)} is in fact a chain in A(R, ∅). We then define

YA(X) m = A(support(m) ↪ X)(fix ⌈m⌉).

9.4. Discussion

The syntactic treatment of passivity in this section is not entirely satisfactory. As in (Reynolds, 1978), β-reduction does not preserve typings. For example, it is easy to derive p : comm →P exp, c : comm ⊢ p(c) : exp, and ⊢ λx. λy. x : exp →P (comm →P exp), and therefore p : comm →P exp, c : comm ⊢ (λx. λy. x)(p(c)) : comm →P exp.
But we cannot derive the judgement that results from β-reduction,

p : comm →P exp, c : comm ⊢ λy. p(c) : comm →P exp,

because the identifier c is non-passive, and so we cannot use the rule →P I to infer that the λ-abstraction has passive type. Reynolds (1989) has shown how this difficulty can be overcome very neatly using a variant of the intersection type discipline of Coppo and Dezani (1978). An elegant category-theoretic interpretation of intersection types has been discussed in (Reynolds, 1987, 1991).
Proof. (i) and (ii) follow from Lemma 2. For (iii), suppose D ⊆ (A ⊗ B)X is non-empty and directed. Since the maps (a, b) ↦ support(a) and (a, b) ↦ support(b) from D to the set of worlds have finite image, there is a cofinal subset D′ ⊆ D on which they are constant, with values, say, Y and Z (i.e. ∀d ∈ D ∃d′ ∈ D′. d ⊑ d′, and ∀(a, b) ∈ D′. support(a) = Y ∧ support(b) = Z). Clearly Y # Z, since a # b whenever (a, b) ∈ D′.
(i) ⋆ # a (ii) b # a ⟺ a # b (iii) a # (b, c) ⟺ a # b ∧ a # c (iv) (a, b) # c ∧ a # b ⟺ a # (b, c) ∧ b # c

Proof. (i): Since support(⋆) = (∅, ∅), it follows that support(⋆) # support(a) no matter what support(a) is. (ii): Immediate. (iii), ⟹: If Z ⊨ (b, c) then the definition of the morphism part of B × C implies Z ⊨ b and Z ⊨ c, and so support(b) ⊆ support(b, c) and support(c) ⊆ support(b, c). Thus, if support(a) # support(b, c) then we may conclude that support(a) # support(b) and support(a) # support(c), since for arbitrary worlds W1, W2, W3 it is clear that W1 ⊆ W2 ∧ W2 # W3 ⟹ W1 # W3. (iii), ⟸: Clearly support(b, c) = support(b) ∪ support(c), by the definition of B × C on morphisms. If a # b and a # c then support(a) # support(b) ∪ support(c), since for arbitrary worlds W1, W2, W3 we have W1 # W2 ∧ W1 # W3 ⟹ W1 # (W2 ∪ W3).

Currying for ⊸ is given by the equation (curry m)X a Y(f, b) = m Y(A(f)a, b). Note that A(f)a # b by the assumption that (f, b) ∈ (hX ⊗ B)Y, so the argument (A(f)a, b) is of the right type. These definitions are very similar to the usual ones associated with exponentiation (as adjoint to ×) in functor categories. The application map app : (A ⊸ B) ⊗ A → B is appX(p, a) = ⌈p⌉X(support(p) ↪ X, a). With these definitions it is then routine to show that, for m : C ⊗ A → B, the equations of the exponential adjunction hold.

(i) passive(⋆) (ii) passive(a) ∧ passive(b) ⟺ passive(a, b) (iii) passive(a) ∧ passive(b) ⟹ a # b (iv) passive(a) ⟺ a # a

Proof. (i) and (ii) are immediate from Lemma 4. (iii) and (iv) follow from the definitions of support(−) and #, and the functoriality of A and B. □

8.2. Passive Objects

(We don't need to consider the Weakening rule for !, because it is already covered by the general Weakening for ⊗.) Dereliction is given semantically by the map iA : !A → A that simply includes the "passive subset" of A into A. Contraction is given by the diagonal map diag!A : !A → !A ⊗ !A; this exists since !A ⊗ !B = !A × !B, and so we can in fact just use the diagonal from !A to !A × !A. For R!, given a map m : !A → B, we apply ! and use !!A = !A to obtain !m : !A → !B.

Table 2
Rules for Passive Types

9.1. Typing Rules and their Interpretations