Yeah, I guess I was roughly just expressing the latter fact, because it seems like I came close. If it weren’t for lambdas, the () guess might’ve worked. That seems hard to believe though, so maybe there’s something else wrong with it.
Sorry, I think I assumed that in #29 you were saying you can’t interpret all the terms appearing in a judgment compositionally as a morphism, since it was obvious to me that no one would ever imagine you could interpret a single term as a morphism in a system like this. But maybe you were just expressing the latter fact.
Re #54: It looks like you fooled me again. How does #49 reject instead of looping forever?
edit: Oh, maybe it works to just check that you’ve made progress. If not, it had better be the base case.
OK, well I think there is a reason why it’s harder to design a type system based on tensor projections, and I think it’s because they’re noncompositional. But I give up trying to convince you.
Thank you for answering #45. It sounds like the compositionality you’re claiming is just a way of thinking about the operations and properties in your paper. In #30 you said:
Huh. I have to say that sounds to me like a death knell for usability as an internal language of symmetric monoidal categories.
I assume this was in response to:
It seems you cannot compositionally interpret a term as just a morphism in my system.
from #29. But I think this might’ve been a miscommunication. I’ve tried to explain why your system also doesn’t compositionally interpret terms in that sense. So the question is: Did you think my system would not have some analogue of the operations and properties that make your system compositional?
E.g. you can define it in Agda by a simple pattern-match if you turn off the termination-checker. (-:
I still don’t think so. What about bidirectional type checking? I think that’s syntax-directed, even if you don’t split the syntax into two classes to match the two judgment forms. I think the part about weak head normalization is also syntax-directed, although it’s nothing like a simple induction on terms.
This “informula” summarizes how I think of it:
structural recursion (pure functional) < syntax-directed (logic programming) < proof search
(And meanwhile: pure functional < object-oriented < imperative)
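To make the bidirectional claim above concrete, here is a minimal sketch of a bidirectional typechecker for simply-typed lambda calculus (the term and type encodings are invented for this illustration, not taken from either system under discussion). The point is that at every goal the head symbol of the term alone determines the rule, with no backtracking:

```python
# A minimal bidirectional typechecker for simply-typed lambda calculus.
# Hypothetical encodings, for illustration only:
#   terms: ("var", x) | ("lam", x, body) | ("app", f, a) | ("ann", term, ty)
#   types: "base" | ("arr", dom, cod)

def synth(ctx, term):
    """Synthesize a type; the rule is picked by the term's head symbol."""
    tag = term[0]
    if tag == "var":
        return ctx[term[1]]
    if tag == "app":
        fty = synth(ctx, term[1])
        if not (isinstance(fty, tuple) and fty[0] == "arr"):
            raise TypeError("applying a non-function")
        check(ctx, term[2], fty[1])
        return fty[2]
    if tag == "ann":
        check(ctx, term[1], term[2])
        return term[2]
    raise TypeError(f"cannot synthesize a type for a {tag!r}")

def check(ctx, term, ty):
    """Check a term against a type; again there is no choice point."""
    if term[0] == "lam":
        if not (isinstance(ty, tuple) and ty[0] == "arr"):
            raise TypeError("lambda checked against a non-arrow type")
        check({**ctx, term[1]: ty[1]}, term[2], ty[2])
    elif synth(ctx, term) != ty:
        raise TypeError("type mismatch")

# the identity function, annotated at base -> base
ident = ("ann", ("lam", "x", ("var", "x")), ("arr", "base", "base"))
assert synth({}, ident) == ("arr", "base", "base")
```

The conversion-rule problem mentioned later in the thread for dependent type theory doesn’t arise in this toy example, since definitional equality here is trivial.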
That would imply the algorithm always terminates
That’s one of the things I meant to encompass by “in some sense”. E.g. you can define it in Agda by a simple pattern-match if you turn off the termination-checker. (-:
Maybe it’s not the right comparison to make your point, but it is the right one to make mine, which is simply that MILL provides a compositional interpretation of its terms, whereas your system doesn’t provide a compositional interpretation of its terms, as I understand it.
My point is that your point is the wrong one to make. (-: This is my reply to #45: my interpretation of lists of terms is compositional, and that’s what you should want. Lists of terms are the “raw syntax” in my theory. That they are what they are is not an accident. What I was calling an accident is the fact that because a list-of-terms is defined as a list of some things that are called ’terms’, you can get confused and think of these individual “terms” as if they were the “raw syntax” that ought to be interpreted compositionally. But that’s wrong: it’s the entire list that should be, and is, interpreted compositionally.
Re #49: OK I think I get it. Yeah, I don’t know a cute way to express that. For the actual implementation, you’d want a clever-ish loop over the list of terms that figures out the set of indexes to peel and the permutation. I suppose you can put all the variables back in the base case.
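One rough way such a loop could go (purely illustrative: the actual term syntax isn’t reproduced in this thread, so the k-th component of a generator application is modeled here as a hypothetical `("proj", label, k, subterm)` node):

```python
# Illustrative sketch only; term shapes are invented for this example.
# A term is ("var", name) or ("proj", label, k, subterm), the k-th component
# of the generator application tagged `label`; arity[label] is the number of
# components that application has.

def peel(terms, arity):
    """On each pass, peel every generator application ALL of whose components
    occur as head symbols of the current terms, leaving alone terms whose
    application is missing a component. Records the peel order (which
    determines the permutation). If a pass makes no progress while some
    generator remains, reject instead of looping forever."""
    terms = list(terms)
    order = []
    while any(t[0] == "proj" for t in terms):
        heads = {}
        for t in terms:
            if t[0] == "proj":
                heads.setdefault(t[1], set()).add(t[2])
        complete = [lab for lab, ks in heads.items()
                    if ks == set(range(arity[lab]))]
        if not complete:  # no progress: reject rather than infinite-loop
            raise TypeError("incomplete generator application")
        order.extend(sorted(complete))
        terms = [t[3] if t[0] == "proj" and t[1] in complete else t
                 for t in terms]
    return order, terms  # base case: only variables remain

order, leaves = peel(
    [("proj", "f", 0, ("proj", "g", 0, ("var", "x"))),
     ("proj", "f", 1, ("var", "y"))],
    {"f": 2, "g": 1})
assert order == ["f", "g"]
assert leaves == [("var", "x"), ("var", "y")]
```

The progress check is what makes this reject rather than diverge: if no application is complete on some pass, checking fails; if only variables remain, that’s the base case where they can all be put back.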
That’s not the right comparison.
Maybe it’s not the right comparison to make your point, but it is the right one to make mine, which is simply that MILL provides a compositional interpretation of its terms, whereas your system doesn’t provide a compositional interpretation of its terms, as I understand it. But I’m hoping for a reply to #45 to clarify.
I’m not talking about compositionality of derivations or the gadgets they’re about. I already knew about that.
It’s an accident that my “lists of terms” are lists of things each of which individually has some formal similarity to MILL terms;
But as I understand, one of the main selling points of your system is that it supports tensor projections in the raw syntax. You can’t say the raw syntax is both a selling point and an accident. If it’s an accident, I can replace it with something easier and you’re not allowed to complain. But I already asked about that in the last thread and you seemed insistent on keeping your raw syntax. So whether this syntax is interpreted compositionally seems like a worthwhile point.
My system keeps roughly the same raw terms as yours and completely changes the derivations. This subthread was started by a certain observation about something our systems had in common (they “cheat”), setting them apart from MILL, so no wonder it was about raw terms.
you match the inputs against some pattern and that determines the rule to be fired … without any need or possibility to “backtrack”
Yeah, I think so. I’m fuzzy on what exactly should count as “shallow examination”. For example, nonlinear pattern matching performs deep equality tests. I implicitly use deep equality to test whether a projection is in the context.
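A toy matcher over nested tuples (invented encoding) shows exactly where nonlinear matching forces a deep equality test:

```python
# Sketch of why nonlinear pattern matching needs deep equality: when a
# pattern variable occurs twice, the matcher must structurally compare
# whole subterms. Encodings are invented for this example.

def match(pattern, term, bindings=None):
    """Match nested-tuple terms against patterns; ("?", x) is a pattern
    variable. A repeated variable triggers a deep (structural) equality
    test against its earlier binding."""
    if bindings is None:
        bindings = {}
    if isinstance(pattern, tuple) and pattern[:1] == ("?",):
        name = pattern[1]
        if name in bindings:
            return bindings if bindings[name] == term else None  # deep ==
        bindings[name] = term
        return bindings
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            if match(p, t, bindings) is None:
                return None
        return bindings
    return bindings if pattern == term else None

# Nonlinear pattern: both arguments of "plus" must be the SAME subterm.
pat = ("plus", ("?", "x"), ("?", "x"))
assert match(pat, ("plus", ("s", "z"), ("s", "z"))) == {"x": ("s", "z")}
assert match(pat, ("plus", ("s", "z"), "z")) is None
```

Whether that `==` on arbitrarily deep subterms still counts as “shallow examination” is precisely the fuzziness at issue.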
(Perhaps this means the algorithm is, in some sense, a simple application of the induction principle for the raw syntax?)
Technically no, I don’t think so. That would imply the algorithm always terminates, which isn’t necessary for syntax-directedness.
At the moment, it seems unlikely to me that any set of rules for directly typechecking the terms I have in mind could have this property…
I think you’re over-thinking it. Once you know you can peel off , you know you have to use your non-base-case rule. Whether you need deep equality tests at all is another issue. You do. So even if it turns out you need equality just to pick a rule, I don’t care.
Having too many derivations of should be easy to solve,
Agreed.
But why do you say shouldn’t preserve the order of ?
Oh I see. You’re right. The order of is obtained from the list of terms, using the constraint.
that the correct rule to apply to a goal can be chosen by a shallow examination of the inputs.
Ah, yes, I think I see what you mean now. That is, you match the inputs against some pattern and that determines the rule to be fired; if that rule then fails the entire typechecking fails, without any need or possibility to “backtrack”. (Perhaps this means the algorithm is, in some sense, a simple application of the induction principle for the raw syntax?) So, for instance, ordinary dependent type theory doesn’t have this property, because you can’t guess when to apply the conversion rule; but a bidirectional form of it might be.
At the moment, it seems unlikely to me that any set of rules for directly typechecking the terms I have in mind could have this property. Consider for instance
where and . The only way to typecheck this is to peel off first and then afterwards both ’s. But I don’t see any way you could deduce from a shallow pattern-match that you have to do that versus, for instance, considering the initial and the final as two components of a single application of , which seems initially plausible from the fact that their head-symbols and match the co-binary nature of the generator .
The system I wrote in my paper solves this problem by first traversing the terms to calculate depths, and then requiring the terms of greatest depth (i.e. the “active” ones) to be dealt with first. Another alternative might be to assign labels to all generator applications, not just the nullary ones, producing for instance
in which you can tell from shallow analysis that and don’t match because they have different labels. The labels on positive-ary generator applications could then be left off in practice and calculated by a preliminary traversal, like the depths/activeness. At the moment I don’t see much reason to prefer one approach over the other.
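The preliminary depth traversal could look roughly like this (the term shapes are hypothetical, since the paper’s actual syntax isn’t reproduced in this thread):

```python
# Sketch of the preliminary "depth" traversal: compute how many generator
# applications are stacked in each term, then mark the deepest terms as
# "active", i.e. the ones to be dealt with first. Encodings are invented:
#   ("var", name) or ("gen", label, *subterms).

def depth(term):
    """Number of generator applications above the deepest leaf."""
    if term[0] == "var":
        return 0
    return 1 + max((depth(s) for s in term[2:]), default=0)

def active(terms):
    """Indexes of the terms of greatest depth: the 'active' ones."""
    ds = [depth(t) for t in terms]
    top = max(ds)
    return [i for i, d in enumerate(ds) if d == top]

terms = [("gen", "f", ("gen", "g", ("var", "x"))),  # depth 2
         ("gen", "f", ("var", "y")),                # depth 1
         ("var", "z")]                              # depth 0
assert active(terms) == [0]
```

The labeling alternative would replace this traversal by a shallow comparison of labels at each step, which is why the two approaches seem interchangeable.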
I think there is a one-pass algorithm that does what I want, but I’m not sure right away how to write it as inference rules. Just look at the list of terms and peel off those that include all the components of a given generator application, leaving alone those that are missing some of the components.
Match expressions are raw term formers. For an apples-to-apples comparison, you’d be saying that your raw term formers, like projecting an individual component of a generator instance, correspond to categorical operations in a prop.
That’s not the right comparison. It may seem like the right comparison, but only because we’re more used to thinking about type theories in which a morphism is characterized by a single term. The role played by terms in ordinary MILL (for instance) is played in my system by lists of terms: in each case it is a raw syntactic notion that, when well-typed, suffices to characterize a derivation and hence a morphism in a free category. It’s an accident that my “lists of terms” are lists of things each of which individually has some formal similarity to MILL terms; the proper thing to look at when comparing them is the role that they play in the type theory. In particular, the proper question to ask is whether the operations on lists of terms correspond compositionally to categorical operations in a prop, and they do: as I showed, composition in a prop corresponds to substitution of lists of terms, and so on. Similarly, the tree structure of derivations maps straightforwardly onto the tree (“forest”?) structure of lists of terms.
Consider something like The rule doesn’t allow passing through .
Yes, you’re right, that’s a problem. I’ll have to think about whether there’s a way to solve it.
Having too many derivations of should be easy to solve, just require in the main rule that there is at least one generator or variable. But why do you say shouldn’t preserve the order of ? Note that need not be order-preserving.
Oh darn, were you testing me? Now I don’t think it works at all. Consider something like The rule doesn’t allow passing through .
Re #42:
I like this better. I think you should not require to preserve the relative order of . And you have too many derivations of , since the main rule can also derive that.
It still seems virtually impossible to add type constructors.
Apparently by “syntax-directed” you mean “requires only one pass to type-check”? That’s a strange usage.
That’s definitely not what I meant by “syntax-directed”. I don’t know what I meant in hindsight; I misused it. What I think it usually means is that the correct rule to apply to a goal can be chosen by a shallow examination of the inputs. What I was getting at though is that the tree structure of your derivations is very different from that of the terms. (And this is at least unusual for syntax-directed rules.)
…terms in my system have unique derivations, so they do compositionally interpret into morphisms as well. No cheating.
I did say you have a compositional interpretation of derivations. I attempted to explain why I consider it cheating anyway, and it seems I was entirely unsuccessful. One more time: the interpretation of terms is not compositional. (At least not in a strong sense that I probably shouldn’t’ve brought up.)
I have been trying to say that we have two different ways of dealing with this.
Match expressions arguably correspond most closely to the categorical operations in a symmetric multicategory. But the operations in my syntax correspond most closely to the categorical operations in a prop.
Match expressions are raw term formers. For an apples-to-apples comparison, you’d be saying that your raw term formers, like projecting an individual component of a generator instance, correspond to categorical operations in a prop. But unless I’m completely confused, that is totally false.
edit: I realize that your operations on derivations correspond well to prop operations. This is a goal for my elaboration rules.
Re #42, I’ll take a look.
But anyway, the whole point of how I set things up is so that terms in my system have unique derivations, so they do compositionally interpret into morphisms as well.
That is a very useful comment; thanks.
FWIW, I think there is an alternative version of my system that only requires one pass and dispenses with the activeness annotations, by requiring generators to be applied as late as possible instead of as early as possible. I’ve put up a brief sketch here (link not long-term stable). In hindsight, probably this is simpler overall and I should just do it this way, unless there is some obstacle that I’m not seeing right now.
Apparently by “syntax-directed” you mean “requires only one pass to type-check”? That’s a strange usage. But anyway, the whole point of how I set things up is so that terms in my system have unique derivations, so they do compositionally interpret into morphisms as well. No cheating.
Match expressions arguably correspond most closely to the categorical operations in a symmetric multicategory. But the operations in my syntax correspond most closely to the categorical operations in a prop.
Re #34: Let me try to summarize what I think are the similarities and differences of our approaches:
We both start with approximately the same system of raw terms.
My system is syntax-directed, yours is not.
In your system, derivations represent PROP morphisms; in my system, derivations don’t themselves seem to have categorical significance.
So it seems like your approach is to have the derivation rules stay close to the semantics, whereas my approach is to have the derivation rules stay close to the syntax.
It seems that neither of us manage a compositional interpretation of terms into morphisms. You have a compositional interpretation of derivations. It sounds like you know a way to make your rules syntax-directed, but this change might mess up the compositionality of the interpretation.
This is why I say we’re both cheating. With match expressions in the raw syntax, the whole process from syntax to semantics could be compositional. Maybe “true” was a bad choice of word for matching syntax. I meant that it corresponds most closely to the categorical operations you have by assumption in the semantics.
Well, I don’t have a lot of time to try to puzzle it out, and I’m not optimistic that it’ll work, but if you do I’ll have a look at it.
function is redundant.