( works fine for me, does it not show up for you or something?)
Can you give some example judgments, so I can get a feel of what you’re going for?
Also, what is the intended semantic meaning of your judgment ?
is working. I thought I tested it, but I guess I messed up.
should basically mean , where the usage sets get considered as multisets of types and tensored up.
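To spell out “tensored up”: the multiset of types just gets folded into a single tensor product. A toy version (invented Haskell names, only for illustration):

```haskell
-- Toy types only; "tensoring up" a (multi)set of types means folding it into
-- one tensor product, to be read as a single object.
data Ty = Unit | Base String | Tensor Ty Ty deriving (Eq, Show)

tensorAll :: [Ty] -> Ty
tensorAll = foldr Tensor Unit
```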
You have:
or
from intro and two variable rules. Is that the kind of example you want?
fails, since is not available.
Since my judgment form is intended to be a fancier version of the one in Allais’ paper, it might help to read explanations from there. Allais writes my as , and my and as and . He leaves implicit, since it can be inferred from the type of his . The new features in my judgment form are and scope delimiters.
Does that mean the types in are meaningless? What is the difference between
and
? Also it seems wrong to have on the right-hand side of the , since according to it is actually saying something about the domain of the morphism. How is this judgment different from just giving together with a list of extra types?
Well the resources in cannot be used in the term you’re checking. Normally you’d fire up these rules with having all of , so as you go along, is the resources that have already been used.
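Roughly, picture a variable rule with the shape of this sketch (invented Haskell names, not my actual rules): the used set comes in recording what has already been consumed, and goes out enlarged.

```haskell
import qualified Data.Set as Set

type Resource = String

-- checkVar ctx used x: a variable may be used if it is in the context and has
-- not been consumed yet; the output is the enlarged usage set.
checkVar :: Set.Set Resource -> Set.Set Resource -> Resource -> Maybe (Set.Set Resource)
checkVar ctx used x
  | x `Set.member` ctx && not (x `Set.member` used) = Just (Set.insert x used)
  | otherwise                                       = Nothing
```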
What is the difference between…
Those are basically the same, semantically. The first can be obtained from the second by weakening. Allais says “weakening” for adding to the contexts, and “framing” for adding to the usage sets without affecting the contexts.
Also it seems wrong to have on the right-hand side of the ,
For understanding the design of the rules, you should think of a together as a state. on the left is the initial state; on the right is the final state. It doesn’t need to match up perfectly with the semantics. Formally, is a six-place relation.
How is this judgment different from just giving together with a list of extra types?
I don’t understand the question.
I suppose you could think of as the same six-place relation, written differently. I prefer my notation.
There’s another paper where they did actually put the output context in between a and the . (They actually used a different symbol than turnstile.) That paper doesn’t keep a separate usage set. I tried that first, but those last two hairy rules seem to need it.
I guess it would be pretty insane to actually interpret a term as a morphism of type like I said. It’s probably important that this morphism equals where . So you really want to just deal with that .
I was not thinking of as syntax but as an operation performed on these contexts. What do you gain by decomposing the context of the term into “”? Why do you bother keeping track of “resources that have already been used” if you can’t use them any more?
From my perspective it looks like you’re describing the domain of a morphism in a hugely overcomplicated way: start with a list of objects (), write down another list of objects that you aren’t going to use at all but you decided you wanted to write down anyway for some reason (), add to the first list another list of objects that for some reason you neglected to include the first time (), and then remove some objects from the second list that you changed your mind about and aren’t going to use after all (). Why all the detours?
What do you mean by referring to my system as “not-especially-algorithmic”? What sort of algorithm are you looking for? I did describe a type-checking algorithm, didn’t I?
Re #14:
Well there are two questions, I guess. One is “Why not just ?” and the other is “Why not just ?”. The second is easier to answer: is an input to the algorithm while is an output. How would you find otherwise?
As for the first question, the problem is that projections get added to the context. That is, can have projections not in . Suppose one of them is used. For example:
I want to know that the resources used by the term are . This is . If I try to avoid usage sets, I suppose I might have:
But now how do I tell what I’ve used? is just . Well maybe I don’t really need to know. But there’s an example where it helps to know: should be OK, I think, with appropriately-typed generators. You don’t need another label for the application of . But the only thing uses is itself. If that doesn’t count, the rules would require another label for .
Re #16: You sketched a proof, using a combination of rules and English, that an algorithm exists that is essentially implementing your rules. I would say one can not just read off a type checker from your rules, which is what I expect for algorithmic rules. BTW, I’m aware that my rules are a different algorithm from yours. edit: Even once you remove .
I assume that in your example
the generator has type . But according to the principle this judgment should be describing a morphism . How do you get such a morphism from ?
Re #18, it’s true that the algorithm has to make two passes, first to calculate which types are active. But I think it’s misleading to describe the entire system as “not especially algorithmic” just because the algorithm is a bit more complicated.
Re #20: If the rules were algorithmic, and the algorithm has two passes, then the rules would have two passes. To me it seems clear cut that your rules are not themselves an algorithm. But you can change the page.
Re #19: Good point. So I guess my #13 is all wet. As you can see, I haven’t given the semantics as much thought. My reason for confidence in these rules is that I think all the terms that are well typed by them can be translated to well typed ordinary MILL.
Re #19: So my first guess, that the interpretation has type , says , which is also wrong. I should try and actually figure this out. But guess three is . :)
I could understand as saying that the algorithm typechecks a term in context and deduces along the way the other terms that “should” occur in it but don’t. But I don’t follow the argument in #17 for why you want the extra unused stuff in and as well.
Re #25: I don’t know a justification other than what I said in #17 about figuring out when a label is needed. But Allais also has a difference between the context and usage. It’s similar to how, in your mode theory stuff with Licata and Riley, there’s a difference between the context and the “context descriptor”, which is a term in the mode theory.
In Allais, the handling of the plain context is entirely standard for fully structural intuitionistic logic. Usage is propagated left to right, rather than (I assume) leaves to root for a context descriptor.
In my setup, the context is also updated, going from left to right, to add projections. This is meant to correspond to realizing we’re inside the implicit tensor match expressions.
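Schematically, the pass is a left-to-right fold threading that (context, usage) state; everything below is a placeholder rather than my real definitions.

```haskell
import Control.Monad (foldM)

type Ctx   = [String]   -- hypothetical stand-ins; the real structures are richer
type Usage = [String]
data Term  = Term       -- placeholder raw term

-- One step of the left-to-right pass: it may add projections to the context
-- and record newly used resources (stubbed out here).
checkOne :: (Ctx, Usage) -> Term -> Maybe (Ctx, Usage)
checkOne (ctx, used) _ = Just (ctx, used)

-- Thread the state through a list of subterms, left to right.
checkAll :: (Ctx, Usage) -> [Term] -> Maybe (Ctx, Usage)
checkAll = foldM checkOne
```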
It’s starting to seem very plausible to me that guess three () is actually right. You want to compose these in the same roughly left-to-right order the rules use, and then it should fit. This was not intuitive to me because it’s not using most of the extra information the algorithm maintains, and it’s intuitively a very asymmetrical approach to constructing the morphism, where we’ve arbitrarily attributed a generator instance to its textually leftmost appearance.
Oh wait, no, it probably isn’t. Consider the example from the page:
The subexpression gets checked in the usage and context . The result is .
Guess three would be to interpret this as a morphism . Such a morphism would need to involve the application . But this is wrong. That was the whole point of the example: the application has to happen outside the .
It seems you cannot compositionally interpret a term as just a morphism in my system. I should enhance the rules into elaboration rules that put in the tensor matches. Assuming the interpretation of ordinary MILL is compositional, this elaboration should also indicate how to interpret.
Huh. I have to say that sounds to me like a death knell for usability as an internal language of symmetric monoidal categories.
If I can’t figure out what morphism a term represents without translating it into match syntax, then what good is it to have a syntax without matches?
But you can figure out what morphism a term represents without translating it into match syntax. It’s just that this process is perfectly analogous to translating it to match syntax. In other words, the translation is a representative special case of interpretation.
The match syntax is the “true” syntax, and we’re both cheating to use projections. You cheated one way; I’m proposing another way of cheating. Don’t be too fussy.
I don’t see any kind of cheat about my approach, or any reason to believe that the match syntax is “truer”.
When you have a complete syntax with a rigorous and clearly explained interpretation into symmetric monoidal categories, then I’ll be happy to look at it again and see whether I think it is useful.
Well, I don’t have a lot of time to try to puzzle it out, and I’m not optimistic that it’ll work, but if you do I’ll have a look at it.
Re #34: Let me try to summarize what I think are the similarities and differences of our approaches:
We both start with approximately the same system of raw terms.
My system is syntax-directed, yours is not.
In your system, derivations represent PROP morphisms; in my system, derivations don’t themselves seem to have categorical significance.
So it seems like your approach is to have the derivation rules stay close to the semantics, whereas my approach is to have the derivation rules stay close to the syntax.
It seems that neither of us manages a compositional interpretation of terms into morphisms. You have a compositional interpretation of derivations. It sounds like you know a way to make your rules syntax-directed, but this change might mess up the compositionality of the interpretation.
This is why I say we’re both cheating. With match expressions in the raw syntax, the whole process from syntax to semantics could be compositional. Maybe “true” was a bad choice of word for matching syntax. I meant that it corresponds most closely to the categorical operations you have by assumption in the semantics.
Apparently by “syntax-directed” you mean “requires only one pass to type-check”? That’s a strange usage. But anyway, the whole point of how I set things up is so that terms in my system have unique derivations, so they do compositionally interpret into morphisms as well. No cheating.
Match expressions arguably correspond most closely to the categorical operations in a symmetric multicategory. But the operations in my syntax correspond most closely to the categorical operations in a prop.
FWIW, I think there is an alternative version of my system that only requires one pass and dispenses with the activeness annotations, by requiring generators to be applied as late as possible instead of as early as possible. I’ve put up a brief sketch here (link not long-term stable). In hindsight, probably this is simpler overall and I should just do it this way, unless there is some obstacle that I’m not seeing right now.
But anyway, the whole point of how I set things up is so that terms in my system have unique derivations, so they do compositionally interpret into morphisms as well.
That is a very useful comment; thanks.
Apparently by “syntax-directed” you mean “requires only one pass to type-check”? That’s a strange usage.
That’s definitely not what I meant by “syntax-directed”. I don’t know what I meant in hindsight; I misused it. What I think it usually means is that the correct rule to apply to a goal can be chosen by a shallow examination of the inputs. What I was getting at though is that the tree structure of your derivations is very different from that of the terms. (And this is at least unusual for syntax-directed rules.)
…terms in my system have unique derivations, so they do compositionally interpret into morphisms as well. No cheating.
I did say you have a compositional interpretation of derivations. I attempted to explain why I consider it cheating anyway, and it seems I was entirely unsuccessful. One more time: the interpretation of terms is not compositional. (At least not in a strong sense that I probably shouldn’t’ve brought up.)
I have been trying to say that we have two different ways of dealing with this.
Match expressions arguably correspond most closely to the categorical operations in a symmetric multicategory. But the operations in my syntax correspond most closely to the categorical operations in a prop.
Match expressions are raw term formers. For an apples-to-apples comparison, you’d be saying that your raw term formers, like projecting an individual component of a generator instance, correspond to categorical operations in a prop. But unless I’m completely confused, that is totally false.
edit: I realize that your operations on derivations correspond well to prop operations. This is a goal for my elaboration rules.
Re #42, I’ll take a look.
Re #42:
I like this better. I think you should not require to preserve the relative order of . And you have too many derivations of , since the main rule can also derive that.
It still seems virtually impossible to add type constructors.
Oh darn, were you testing me? Now I don’t think it works at all. Consider something like The rule doesn’t allow passing through .
Match expressions are raw term formers. For an apples-to-apples comparison, you’d be saying that your raw term formers, like projecting an individual component of a generator instance, correspond to categorical operations in a prop.
That’s not the right comparison. It may seem like the right comparison, but only because we’re more used to thinking about type theories in which a morphism is characterized by a single term. The role played by terms in ordinary MILL (for instance) is played in my system by lists of terms: in each case it is a raw syntactic notion that, when well-typed, suffices to characterize a derivation and hence a morphism in a free category. It’s an accident that my “lists of terms” are lists of things each of which individually has some formal similarity to MILL terms; the proper thing to look at when comparing them is the role that they play in the type theory. In particular, the proper question to ask is whether the operations on lists of terms correspond compositionally to categorical operations in a prop, and they do: as I showed, composition in a prop corresponds to substitution of lists of terms, and so on. Similarly, the tree structure of derivations maps straightforwardly onto the tree (“forest”?) structure of lists of terms.
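In caricature (a Haskell toy that ignores the linearity constraints which make the real lists-of-terms subtler), the correspondence between composition and substitution looks like this:

```haskell
-- A morphism m -> n is caricatured as a list of n terms over variables 0..m-1;
-- composing g after f substitutes f's terms for g's variables.
data Term = Var Int | Gen String [Term] deriving Show

subst :: [Term] -> Term -> Term
subst f (Var i)    = f !! i
subst f (Gen g ts) = Gen g (map (subst f) ts)

compose :: [Term] -> [Term] -> [Term]   -- compose g f  ~  g . f
compose g f = map (subst f) g
```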
Consider something like The rule doesn’t allow passing through .
Yes, you’re right, that’s a problem. I’ll have to think about whether there’s a way to solve it.
Having too many derivations of should be easy to solve: just require in the main rule that there is at least one generator or variable. But why do you say shouldn’t preserve the order of ? Note that need not be order-preserving.
I think there is a one-pass algorithm that does what I want, but I’m not sure right away how to write it as inference rules. Just look at the list of terms and peel off those that include all the components of a given generator application, leaving alone those that are missing some of the components.
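Schematically, the loop body I have in mind is something like the following (toy syntax, and only a guess at the precise peeling criterion):

```haskell
import Data.List (partition)
import qualified Data.Set as Set

-- Toy raw syntax, just enough for the shape of the loop: variables, and the
-- i-th component (projection) of generator application number l.
data Term = Var String | Comp Int Int Term   -- Comp label index argument
  deriving (Eq, Show)

-- The (label, index) components mentioned in a term.
components :: Term -> Set.Set (Int, Int)
components (Var _)      = Set.empty
components (Comp l i t) = Set.insert (l, i) (components t)

-- Peel off the terms that mention every component of generator application l
-- (assumed to have arity n); the rest are left for a later pass.
peel :: Int -> Int -> [Term] -> ([Term], [Term])
peel l n = partition (\t -> Set.fromList [(l, i) | i <- [1 .. n]]
                              `Set.isSubsetOf` components t)
```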
that the correct rule to apply to a goal can be chosen by a shallow examination of the inputs.
Ah, yes, I think I see what you mean now. That is, you match the inputs against some pattern and that determines the rule to be fired; if that rule then fails the entire typechecking fails, without any need or possibility to “backtrack”. (Perhaps this means the algorithm is, in some sense, a simple application of the induction principle for the raw syntax?) So, for instance, ordinary dependent type theory doesn’t have this property, because you can’t guess when to apply the conversion rule; but a bidirectional form of it might be.
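For instance, a bidirectional checker for the simply-typed fragment has exactly this shape: the outermost term former, together with the mode, picks the single rule to try, and if that rule fails, checking fails. (A standard sketch, not either of the systems we are discussing.)

```haskell
data Ty   = Base | Arr Ty Ty deriving (Eq, Show)
data Term = Var String | Lam String Term | App Term Term deriving Show

type Ctx = [(String, Ty)]

-- check: one clause per term former; if that clause fails, checking fails outright.
check :: Ctx -> Term -> Ty -> Bool
check ctx (Lam x b) (Arr a c) = check ((x, a) : ctx) b c
check _   (Lam _ _) _         = False
check ctx tm        ty        = infer ctx tm == Just ty

infer :: Ctx -> Term -> Maybe Ty
infer ctx (Var x)   = lookup x ctx
infer ctx (App f a) = case infer ctx f of
                        Just (Arr dom cod) | check ctx a dom -> Just cod
                        _                                    -> Nothing
infer _   (Lam _ _) = Nothing   -- un-annotated lambdas can only be checked
```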
At the moment, it seems unlikely to me that any set of rules for directly typechecking the terms I have in mind could have this property. Consider for instance
where and . The only way to typecheck this is to peel off first and then afterwards both ’s. But I don’t see any way you could deduce from a shallow pattern-match that you have to do that versus, for instance, considering the initial and the final as two components of a single application of , which seems initially plausible from the fact that their head-symbols and match the co-binary nature of the generator .
The system I wrote in my paper solves this problem by first traversing the terms to calculate depths, and then requiring the terms of greatest depth (i.e. the “active” ones) to be dealt with first. Another alternative might be to assign labels to all generator applications, not just the nullary ones, producing for instance
in which you can tell from shallow analysis that and don’t match because they have different labels. The labels on positive-ary generator applications could then be left off in practice and calculated by a preliminary traversal, like the depths/activeness. At the moment I don’t see much reason to prefer one approach over the other.
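For reference, the preliminary traversal I mean (whether computing depths or reconstructing the omitted labels) is just a pass of this shape, over a toy term grammar rather than my actual one:

```haskell
-- Compute depths and mark the deepest terms of a (non-empty) list as "active".
data Term = Var String | App String [Term] deriving Show

depth :: Term -> Int
depth (Var _)    = 0
depth (App _ ts) = 1 + maximum (0 : map depth ts)

active :: [Term] -> [Term]
active ts = [t | t <- ts, depth t == maximum (map depth ts)]
```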
Having too many derivations of should be easy to solve,
Agreed.
But why do you say shouldn’t preserve the order of ?
Oh I see. You’re right. The order of is obtained from the list of terms, using the constraint.
you match the inputs against some pattern and that determines the rule to be fired … without any need or possibility to “backtrack”
Yeah, I think so. I’m fuzzy on what exactly should count as “shallow examination”. For example, nonlinear pattern matching performs deep equality tests. I implicitly use deep equality to test whether a projection is in the context.
(Perhaps this means the algorithm is, in some sense, a simple application of the induction principle for the raw syntax?)
Technically no, I don’t think so. That would imply the algorithm always terminates, which isn’t necessary for syntax-directedness.
At the moment, it seems unlikely to me that any set of rules for directly typechecking the terms I have in mind could have this property…
I think you’re over-thinking it. Once you know you can peel off , you know you have to use your non-base-case rule. Whether you need deep equality tests at all is another issue. You do. So even if it turns out you need equality just to pick a rule, I don’t care.
That’s not the right comparison.
Maybe it’s not the right comparison to make your point, but it is the right one to make mine, which is simply that MILL provides a compositional interpretation of its terms, whereas your system doesn’t provide a compositional interpretation of its terms, as I understand it. But I’m hoping for a reply to #45 to clarify.
I’m not talking about compositionality of derivations or the gadgets they’re about. I already knew about that.
It’s an accident that my “lists of terms” are lists of things each of which individually has some formal similarity to MILL terms;
But as I understand it, one of the main selling points of your system is that it supports tensor projections in the raw syntax. You can’t say the raw syntax is both a selling point and an accident. If it’s an accident, I can replace it with something easier and you’re not allowed to complain. But I already asked about that in the last thread and you seemed insistent on keeping your raw syntax. It seems like a worthwhile question whether this syntax is interpreted compositionally.
My system keeps roughly the same raw terms as yours and completely changes the derivations. This subthread was started by a certain observation about something our systems had in common (they “cheat”), setting them apart from MILL, so no wonder it was about raw terms.
Re #49: OK I think I get it. Yeah, I don’t know a cute way to express that. For the actual implementation, you’d want a clever-ish loop over the list of terms that figures out the set of indexes to peel and the permutation. I suppose you can put all the variables back in the base case.
That would imply the algorithm always terminates
That’s one of the things I meant to encompass by “in some sense”. E.g. you can define it in Agda by a simple pattern-match if you turn off the termination-checker. (-:
Maybe it’s not the right comparison to make your point, but it is the right one to make mine, which is simply that MILL provides a compositional interpretation of its terms, whereas your system doesn’t provide a compositional interpretation of its terms, as I understand it.
My point is that your point is the wrong one to make. (-: This is my reply to #45: my interpretation of lists of terms is compositional, and that’s what you should want. Lists of terms are the “raw syntax” in my theory. That they are what they are is not an accident. What I was calling an accident is the fact that because a list-of-terms is defined as a list of some things that are called ’terms’, you can get confused and think of these individual “terms” as if they were the “raw syntax” that ought to be interpreted compositionally. But that’s wrong: it’s the entire list that should be, and is, interpreted compositionally.
E.g. you can define it in Agda by a simple pattern-match if you turn off the termination-checker. (-:
I still don’t think so. What about bidirectional type checking? I think that’s syntax-directed, even if you don’t split the syntax into two classes to match the two judgment forms. And I think the part about weak head normalization is also syntax-directed, although it’s nothing like a simple induction on terms.
This “informula” summarizes how I think of it:
structural recursion pure functional syntax-directed logic programming proof search
(And meanwhile: pure functional object-oriented imperative)
OK, well I think there is a reason why it’s harder to design a type system based on tensor projections, and I think it’s because they’re noncompositional. But I give up trying to convince you.
Thank you for answering #45. It sounds like the compositionality you’re claiming is just a way of thinking about the operations and properties in your paper. In #30 you said:
Huh. I have to say that sounds to me like a death knell for usability as an internal language of symmetric monoidal categories.
I assume this was in response to:
It seems you cannot compositionally interpret a term as just a morphism in my system.
from #29. But I think this might’ve been a miscommunication. I’ve tried to explain why your system also doesn’t compositionally interpret terms in that sense. So the question is: Did you think my system would not have some analogue of the operations and properties that make your system compositional?
Re #54: It looks like you fooled me again. How does #49 reject instead of infinite looping?
edit: Oh, maybe it works to just check that you’ve made progress. If not, it had better be the base case.
Sorry, I think I assumed that in #29 you were saying you can’t interpret all the terms appearing in a judgment compositionally as a morphism, since it was obvious to me that no one would ever imagine you could interpret a single term as a morphism in a system like this. But maybe you were just expressing the latter fact.
Yeah, I guess I was roughly just expressing the latter fact, because it seems like I came close. If it weren’t for lambdas, the () guess might’ve worked. That seems hard to believe though, so maybe there’s something else wrong with it.