Frames and Theories III: Motivating the Semantic Clauses

Ok, so I had to take a break to grade finals and then to do the whole Christmas thing. Now I'm back on the wagon.

Let's dive right in with some intuition pumping for clauses 4 and 5 from the definition of truth we covered last time.

To begin, I need to point out that only prime theories *have* complements. I don't think I said that before. I'm saying it now. The reason is simple: the complement of a theory needs to be a theory. And, intuitively, the complement of the theory t is the theory containing every sentence A whose negation is not in t. In a motto, the complement of t agrees to all the things t doesn't reject. The problem this gives rise to is that if 'A or B' is in t, but neither A nor B is, then 'not A' and 'not B' must both be in the complement of t (given double negation, 'not not A' is not in t exactly when A is not in t). But then, on minimal assumptions, 'not A and not B' will be in the complement of t. So 'not (not A and not B)' must not be in t. But (again on minimal assumptions) this just is 'A or B', which we already said *was* in t.
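If you like your contradictions compressed, here is the same argument in symbols, writing t* for the complement of t and helping ourselves to double negation and De Morgan (the 'minimal assumptions' just mentioned):

```latex
\begin{align*}
&1.\ A \lor B \in t, \quad A \notin t, \quad B \notin t && \text{(hypothesis)}\\
&2.\ \neg A \in t^{*},\ \neg B \in t^{*} && \text{(1, definition of } t^{*}\text{, double negation)}\\
&3.\ \neg A \land \neg B \in t^{*} && \text{(2, } t^{*}\text{ closed under conjunction)}\\
&4.\ \neg(\neg A \land \neg B) \notin t && \text{(3, definition of } t^{*}\text{)}\\
&5.\ A \lor B \notin t && \text{(4, De Morgan, double negation), contradicting 1}
\end{align*}
```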

That's a contradiction. To avoid it, we restrict complementation to being a function from primes to primes. Consider this officially added to the description of frames and models from here on.

Now take a prime extension u of a theory t. Since u is prime, u decides every disjunction in t. The complement of u accepts everything u doesn't reject. So if the complement of u doesn't accept A, then u rejects A.

So the basic idea behind the negation clause is that a theory t makes the sentence A false just when each way, u, of settling all the matters that t raises (that is, each prime extension u of t) rejects A. Such a t has effectively made A impossible. So it makes sense to say that this theory makes A false.

So much for clause 4. Clause 5 is a bit easier to motivate. It says that the conditional `if A, then B' is true in a theory t just when, for any theory u where A is true, the closure of u under t is a theory where B is true. This is a nonstandard, non-truth-functional, but nonetheless plausible and fairly simple story about what it takes for a certain sort of conditional to be true.
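In symbols (with ⊩ for truth at a theory, * for complementation, and ∘ for the closure operation; this shorthand is mine, not anything official):

```latex
\begin{align*}
t \Vdash \neg A &\iff u^{*} \nVdash A \text{ for every prime theory } u \text{ extending } t\\
t \Vdash A \to B &\iff u \circ t \Vdash B \text{ for every theory } u \text{ with } u \Vdash A
\end{align*}
```

Here u ∘ t is the closure of u under t.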

Frames and Theories II

Tarski's 1936 account of logical consequence is at the heart of the discussion I want to try to have. In particular, I want to highlight a discussion that happened (for the most part) in the 1980s and 1990s about what exactly Tarski's account *is*. A good gateway into this debate is Greg Ray's 1996 paper.

As with any moderately interesting philosophical debate, there are a lot of subtleties and technicalities that play a central role. As is standard in any moderately interesting summary of a philosophical debate, I will ignore most of these. I will also focus on the easier case of theorems rather than on logical consequence. This is for simplicity as well, but I admit that in this case the sacrifice is a bit harder to swallow.

Given these simplifications, the (yet more oversimplified) core issue in the Ray-Etchemendy debate was which of the following was the actual Tarskian account of logical truth (both options are restated in symbols after the list):

1. Let U be the universal domain -- the class of all objects. Then s is a theorem just if no matter how we assign members of U to our variables, sets of members of U to predicates, etc., s always turns out true.

2. As above, but rather than restricting U to be the unique universal domain, we allow it to be any class of objects at all.
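In rough symbols, writing Sat(s, D, σ) for 's comes out true on domain D under the assignment σ' (my shorthand, not Tarski's):

```latex
\begin{align*}
\text{(1)}\quad \vDash s &\iff \mathrm{Sat}(s, U, \sigma) \text{ for every assignment } \sigma \text{ over the universal domain } U\\
\text{(2)}\quad \vDash s &\iff \mathrm{Sat}(s, D, \sigma) \text{ for every class } D \text{ and every assignment } \sigma \text{ over } D
\end{align*}
```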

The question, that is, is whether Tarski allowed domain-relative assignments. Etchemendy's Tarski endorses the first account, rejecting domain-relative assignments. Ray's Tarski endorses the second, accepting domain-relative assignments. I find Ray's arguments convincing. But I'm not a historian, so take that with a grain of salt.

The framework we're using here is obviously quite different from the Tarskian framework being debated in the Etchemendy-Ray episode. But we are going to have to ask a similar question. This is because there are two natural ways we could define logical consequence. The first involves varying frames; the second involves a universal frame. To be more explicit, let's first define truth for the zero-order language.

Let [F,D,v] be a model, and let t be a theory in F. The recursive definition of truth-in-F-at-t is as follows (a code sketch of the clauses comes after the list):

  1. Pa1...an is true in F at t just if (recall our assumptions from before!) [a1,...,an] is in the valuation of P at t.

  2. 'A and B' is true in F at t just if A is true in F at t and B is true in F at t.

  3. 'A or B' is true in F at t just if A is true in F at t or B is true in F at t.

  4. 'not A' is true in F at t just if A fails to be true in F at the complement of every prime theory extending t.

  5. 'if A then B' is true in F at t just if, for every theory u, A is true in F at u only if B is true in F at the closure of u under t.
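Here is that definition as running code, a minimal sketch over a finite frame. The representation is entirely mine (a frame as a dict of its pieces, formulas as nested tuples, val(atom, t) settling atomic truth with clause 1 simplified to propositional atoms); it is meant only to make the recursion concrete:

```python
# Toy evaluator for the five clauses. frame['closure'](t, u) is the closure
# of t under u; frame['extends'](u, t) says u extends t; frame['complement']
# is defined on the prime theories only.
def true_at(frame, val, formula, t):
    tag = formula[0]
    if tag == 'atom':                             # clause 1 (simplified)
        return val(formula[1], t)
    if tag == 'and':                              # clause 2
        return (true_at(frame, val, formula[1], t)
                and true_at(frame, val, formula[2], t))
    if tag == 'or':                               # clause 3
        return (true_at(frame, val, formula[1], t)
                or true_at(frame, val, formula[2], t))
    if tag == 'not':                              # clause 4: A fails at the
        return all(not true_at(frame, val, formula[1],   # complement of every
                               frame['complement'](u))   # prime extension of t
                   for u in frame['primes']
                   if frame['extends'](u, t))
    if tag == 'if':                               # clause 5: whenever A holds
        return all(true_at(frame, val, formula[2],       # at u, B holds at the
                           frame['closure'](u, t))       # closure of u under t
                   for u in frame['theories']
                   if true_at(frame, val, formula[1], u))
    raise ValueError(f"unknown connective: {tag}")
```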

Dang. We're already over 500 words. I thought we'd be able to get further. To keep things manageable I'll stop there. Next time, I'll give some motivation for these clauses and return to the thread.

Frames and Theories I

Philosophy-wise, there are a few things that need to be motivated. The first is frames; the second is frame morphisms. Once the motivations for these are in place, much of the rest of the philosophical picture will follow along.

Regarding the first bit, recall first that frames have theories in them and that theories are related to each other in a few ways. The natural question to ask is then "what are these theories theories of?"

Whenever I get to this point in doing the philosophy bit, I start to regret my vocabulary choices. Calling the elements of a frame 'theories' is fantastic for motivating the technical shenaniganry we've been engaged in -- it makes sense to close one theory under another, to talk about one theory containing another, etc. The problem, though, is that theories are paradigmatically syntactic entities. They're sets of sentences. And what we're trying to do is build a semantics. If we were building a proof-theoretic semantics, it would be totally fine to ground out our semantics in something syntactic. But we're giving a model-theoretic semantics. So we need to tell some story about what theories are that doesn't amount to their being, well, theories.

To tell that story, it helps to first remember that a frame is only a part of a model. Models also have, in addition to their frame-bits, a domain-bit and a valuation-bit. What these additional pieces do is tell us what objects there are and what features those objects have at each theory in the frame. So theories can be thought of as states of affairs -- they're places where things can be and can have properties. Corresponding to any such state of affairs (semantic entity) is a natural syntactic entity: the set containing p whenever p is true in that state of affairs and 'not p' whenever p is false in that state of affairs.
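As a sketch in code, with entirely made-up names (true_at and false_at stand in for whatever the model's valuation delivers at a state), the correspondence is just:

```python
# Hypothetical sketch: the syntactic theory that corresponds to a state of
# affairs is the set of atomic verdicts the state underwrites.
def theory_of(state, atoms, true_at, false_at):
    positive = {p for p in atoms if true_at(state, p)}
    negative = {('not', p) for p in atoms if false_at(state, p)}
    return positive | negative
```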

So that's what we have to say about theories. The natural thing to say about frames, given this, is that if we’re investigating subject matter S, the correct frame to use is the one that contains all the theories compatible with our information about S. But what we want to think about, really, is logic. And I think it’s not at all obvious what frame/class of frames this leaves us with. Next time around, I want to talk about two plausible answers to this question. I also think there is an interesting philosophical debate to be had about which of these is right, and that this debate will have overtones that should be recognizable to anyone familiar with the debates about whether Tarski’s account of logical consequence was domain relative. (This Greg Ray paper, together with the citations it contains, makes a great intro to the area if you’re unfamiliar.)

That's enough for one day. Next up I'll actually give the two answers. Then I'll turn to giving philosophical motivation for frame morphisms. After that we'll return to our categorification quest.

Arrows for Real

I want to preface this post by saying that there is almost certainly a more categorial way to say everything I'm going to say today. It would probably also be a worthwhile exercise to say things in the more categorial way. But I'll leave that to either a future post or to someone else. If you have thoughts on how to categorify the construction here, though, do let me know either via email or (better!) in the comments.

Recall from last time that a frame is a model without the domain or the valuation. So a frame has six pieces: a set of theories, a subset of which are the prime theories, a designated base theory, a binary closure operation acting on the theories, a binary containment relation holding among the theories, and a complementation function. A frame morphism is a function from the theories in one frame to the theories in another frame that (a) is monotonic, (b) is unital, and (c) commutes with complementation. More explicitly, we require first that if u extends t, then the image of u extends the image of t; second that the image of the base theory in the first frame be the base theory in the second frame; and third that the complement of the image of a theory t is the image of the complement of t. We say that a frame morphism is normal when the image of the set of prime theories in the first frame is exactly the set of prime theories in the second frame. We say a frame morphism is exact when the image of the closure of t under u is the closure of the image of t under the image of u.
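To pin the prose down, here is a brute-force check of these conditions over finite frames. A frame is represented as a dict of its pieces and a morphism f as a dict sending theories of F1 to theories of F2; all of this scaffolding is mine, not notation from Fine's paper:

```python
def is_frame_morphism(f, F1, F2):
    # (a) monotonic: if u extends t, then f(u) extends f(t)
    monotonic = all(F2['extends'](f[u], f[t])
                    for t in F1['theories'] for u in F1['theories']
                    if F1['extends'](u, t))
    # (b) unital: the base theory maps to the base theory
    unital = f[F1['base']] == F2['base']
    # (c) commutes with complementation (checked on the primes, where
    # complements are defined)
    commutes = all(f[F1['complement'](p)] == F2['complement'](f[p])
                   for p in F1['primes'])
    return monotonic and unital and commutes

def is_normal(f, F1, F2):
    # the image of F1's prime theories is exactly F2's prime theories
    return {f[p] for p in F1['primes']} == set(F2['primes'])

def is_exact(f, F1, F2):
    # f(closure of t under u) = closure of f(t) under f(u)
    return all(f[F1['closure'](t, u)] == F2['closure'](f[t], f[u])
               for t in F1['theories'] for u in F1['theories'])
```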

One of the more intimidating aspects of Fine's paper is all the uparrows and downarrows running around in it. Here's an excerpt to illustrate the point:

[Image: an excerpt from Fine's paper, bristling with uparrows and downarrows]

What these are, it turns out, is two whole families of functions. And, as the arrows suggest, they go in opposite directions: where the downarrow functions might go from this to that, the uparrow functions will go from that to this. The fact that we need both families is what makes most straightforward categorifications either impossible or unwieldy.

The key to getting things to work nicely is to work in a category where arrows just are pairs of functions going in opposite directions. To say more about this, we first need to bring valuations and domains back on the scene. We'll do so by thinking about models as being triples composed of a frame, a domain, and a valuation. First, define what it means for the pair consisting of the domain D1 and the valuation V1 to be a restriction of the pair [D2, V2] in the 'obvious' way: D1 is contained in D2 and V1 is what we get when we restrict V2 to D1. We only want arrows between models [F1,D1,V1] and [F2,D2,V2] when [D2,V2] is a restriction of [D1,V1]. When that does happen, an arrow is a pair of frame morphisms: a normal frame morphism from F1 to F2 and an exact frame morphism from F2 to F1.
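Continuing the sketch from above (and reusing the is_frame_morphism, is_normal, and is_exact checkers), an arrow between models looks like this; treating a valuation as a dict keyed by domain elements flattens the predicate structure, so take this as illustration only:

```python
def is_restriction(D2, V2, D1, V1):
    # [D2,V2] restricts [D1,V1]: smaller domain, same verdicts on it
    return set(D2) <= set(D1) and all(V2[x] == V1[x] for x in D2)

def is_model_arrow(forward, backward, M1, M2):
    F1, D1, V1 = M1
    F2, D2, V2 = M2
    return (is_restriction(D2, V2, D1, V1)
            and is_frame_morphism(forward, F1, F2)   # normal morphism out...
            and is_normal(forward, F1, F2)
            and is_frame_morphism(backward, F2, F1)  # ...exact morphism back
            and is_exact(backward, F2, F1))
```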

Not just any old pair of this form will do, however. We require that they satisfy six conditions. But I'm not going to tell you what they are. Instead, I'm going to stop for today, because that was all technical stuff and no philosophical stuff. So in the next couple posts, I'm going to do philosophy instead. Eventually, we'll come back to the thread and actually give the conditions. After that, we'll check to make sure we've actually got a category on our hands. Of course, even then we're only halfway done -- the game is to build stratified models as some sort of presheaves, and all we'll have done is specified the categories we're interested in. So we'll still need to say something about which presheaves we want and why, and how to use them to do the whole semantics shtick. Then, if I'm feeling ambitious and things are going well, maybe I'll work through soundness and completeness proofs for the logic BQ with respect to this semantics. Or maybe I won't. Who knows? The world is a wild place.

Arrows! (jk, Arrows are tomorrow)

The first thing to note is that Fine actually gives three different versions of his theory. In the work I’ve done so far on stratified semantics (work that I’m extending with Andrew Tedder) I’ve used Fine’s third option: ternary-relation frames a la Routley and Meyer. But I’ve recently become convinced that Fine was right to focus (both in this paper and in his earlier work) on operational models. So that’s where I’m going to hang out in this series. Also, I lied yesterday. I won’t get you all the way to arrows today; all I’ll actually do is some setup.

As I said in the previous post, the category we’re looking for will have models of the quantifier-free (aka zero-order) fragment of a first-order language as its objects. From the operational perspective, such models have eight pieces: a set-sized domain, a set-sized collection of theories, a designated subset of the theories called the prime theories, a binary closure operation that acts on the theories, a binary containment relation among the theories, a base or ground theory, a complementation function, and a valuation. From top to bottom, here’s what we require of these:

  1. The domain must be nonempty.
  2. The set of theories must be nonempty.
  3. The prime theories must be dense among the theories. This means that if the closure of the theory t under the theory u is contained in the prime theory p, then there must be prime theories pt (extending t) and pu (extending u) so that the closure of pt under u is still contained in p and the closure of t under pu is also still contained in p. The intuition to have here is that prime theories are theories that take a stand on every disjunction they accept. (So if they accept 'A or B', then they also either accept A or accept B.) The density requirement then captures the following intuition: if we close t under u and then take a stand on each of the disjunctions we end up with, the resulting theory is compatible both with some way of having taken a stand on all the disjunctions in t and with some way of having taken a stand on all the disjunctions in u.
  4. The closure operation must be monotonic: if t is contained in u (so that u is an extension of t) then the closure of t under v should be contained in the closure of u under v.
  5. The containment relation must be a partial ordering.
  6. Closing the base theory under any theory t results in a theory that extends t.
  7. The complementation function must be order reversing (if t extends u, then the complement of u extends the complement of t), decreasing above the base theory (if t extends the base theory, then t extends its own complement as well), and have order two (so the complement of the complement of t is t).
  8. The valuation function maps each predicate to its extension at each theory and maps variables to objects in the usual way.

I think these are fairly natural conditions given the intended interpretation of, e.g. ‘theory’, ‘prime’, etc. Nonetheless, they allow some surprising things to be models. First, say that a frame is what we get when we strip off the domain and the valuation from a model. Then we might build a kind of funny frame by taking the set of ‘theories’ to be the positive rational numbers, the set of ‘prime’ theories to be the set of fractions (that is, the non-whole (unnatural?) numbers), the ‘closure’ operation to be ordinary multiplication, the ‘base’ theory to be the number 1, and complementation to be reciprocation.
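Here is a quick sanity check of that funny frame in code. Two caveats: I am taking the containment relation to be the usual less-than-or-equal ordering on the positive rationals (the description above leaves it implicit), and I am only spot-checking the order-theoretic conditions 4 through 7 on a random sample; condition 3, density, takes more care:

```python
from fractions import Fraction
from itertools import product
from random import randint

# theories: positive rationals; closure: multiplication; base: 1;
# complementation: reciprocation; containment: the usual <= (assumed)
sample = [Fraction(randint(1, 9), randint(1, 9)) for _ in range(25)]
base = Fraction(1)

for t, u, v in product(sample, repeat=3):
    if t <= u:                    # u extends t
        assert t * v <= u * v     # 4: closure is monotonic
        assert 1 / u <= 1 / t     # 7: complementation is order reversing
    assert base * t >= t          # 6: closing base under t extends t
    if t >= base:
        assert t >= 1 / t         # 7: decreasing above the base theory
    assert 1 / (1 / t) == t       # 7: complementation has order two
# (5 is immediate: <= is a partial ordering of the rationals.)
print("all sampled conditions hold")
```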

Despite the fact that we have unusual examples of frames (and thus of models), for the most part this sort of operational model is pretty well-behaved. The category we need — the one whose arrows I said I’d describe today, but that I will actually describe tomorrow — has these models as its objects. Actually, to keep things manageable, I’m going to restrict my attention to models that are a bit simpler. Specifically, I will always take the domain to be the set of variables and suppose that every valuation maps every variable to itself. These assumptions can be done away with by the usual tricks, so are, despite being weird, totally harmless.

Stratified Models as Presheaves

I gave a few talks this past summer about Kit Fine’s ‘Semantics for Quantified Relevance Logic’. I mentioned a few times that the models in Fine’s semantics are pretty “sheafy”. For a variety of reasons, I’ve finally decided to work out the details of how this goes. Since a sheaf is a special type of presheaf and a presheaf is a type of contravariant functor and contravariant functors are ways of hooking together two categories, a natural starting point is figuring out which categories we need to look at.

If you comb through the details of the completeness proof Fine gives while keeping the general idea of what a sheaf is at the front of your mind, one of the categories yells its identity at you and the other whispers a few hints. The one that yells at you is the category whose objects are finite sets of variables and whose arrows are inclusions among these. (Aside: Fine’s definition allows us to use something more general that, interestingly, has (to me) a Grothendieck-y smell; I’m ignoring this for simplicity.) The presheaves we’re interested in will be contravariant functors from this category (let’s call it Fin) to the aforementioned category-that-whispers.
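To fix the shape of the thing, here is a toy presheaf on Fin. The 'model' over a set of variables V is just the set of 0/1 assignments to V, a stand-in for the real zero-order models, so this shows only the contravariance pattern; everything here is my own illustration:

```python
from itertools import product

def sections(V):
    """All 0/1 assignments over the variable set V (our stand-in model)."""
    V = sorted(V)
    return [dict(zip(V, bits)) for bits in product([0, 1], repeat=len(V))]

def restriction(V1, V2):
    """The presheaf's action on the inclusion V1 <= V2 in Fin: a map from
    sections over V2 back to sections over V1 (note the reversal)."""
    assert set(V1) <= set(V2), "arrows in Fin are inclusions"
    return lambda s: {x: s[x] for x in V1}

# Contravariance in action: an assignment over {x, y} restricts to one over {x}.
r = restriction({'x'}, {'x', 'y'})
print(r({'x': 1, 'y': 0}))   # {'x': 1}
```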

What’s clear about the whisperer is that its objects should be models for a polyadic unquantified language. What’s unclear is what the arrows in this category should be. But this morning I think I saw what the answer is, so today I’ll try to work out the details, and tomorrow I’ll say something about what I learn.