I have added some discussion to variable.
Not sure if I ever announced this here; the original version is a few months old already. But right now I have also added some sentences on bound variables.
Some more professional logician might please look over this. There is lots of room to improve on those few sentences I jotted down; they are there mainly so that there is something there at all.
I think this is okay as far as it goes. Eventually the page ought to have some discussion of the actual syntactic meaning/usage of “variable” (prior to its interpretation in categorical semantics).
Okay, thanks for looking at it.
I made a more general Idea section and promoted the section In categorical semantics to top level.
I wrote a bit about free and bound variables from an elementary perspective. So that people know what they’re supposed to be before they read about them in categorial semantics.
Thanks, Toby!
I have a feeling we could be saying more from an nPOV which involves discussion of free structures. For example, we use a variable $x$ when we discuss polynomials $p(x) \in \mathbb{Z}[x]$, where $x$ is universal among elements of rings, i.e. among morphisms $1 \to U(R)$, where $U$ is the underlying set functor. This would be an nPOV way of demystifying variables.
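To spell that out (just fixing notation; this is the standard universal property of the polynomial ring): evaluation at $x$ gives a natural bijection

$$ \mathrm{Ring}(\mathbb{Z}[x], R) \;\cong\; U(R), \qquad \phi \mapsto \phi(x), $$

so a ring homomorphism out of $\mathbb{Z}[x]$ is exactly a choice of element of $R$ to substitute for the indeterminate $x$.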
Similarly, in classical first-order logic, variables are pedantic devices we use in typical syntactic descriptions of the free (say Boolean) hyperdoctrine generated by a signature. One could tie this in with the categorical semantics section, by considering possible hyperdoctrine maps to the hyperdoctrine associated with the codomain fibration of a topos (thinking of this target as playing a role analogous to $U$ in the paragraph above).
(I’m slightly shooting from the hip – it’s late here and I’m too tired/lazy to think about this very carefully at the moment. But maybe soon.)
Todd, those are interesting thoughts. In the direction of connecting the polynomial-ring use of variables with the logical one, one could say, I guess, that the elements of $R[x]$ can be identified with the terms in context $x \colon R$ in the complete type theory of the ring $R$. Is that the sort of thing you had in mind?
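To have one concrete entry of the dictionary presumably intended here (the particular polynomial is of course just an example): a term in context such as

$$ x \colon R \;\vdash\; x \cdot x + x + 1 \;:\; R $$

would go to the polynomial $x^2 + x + 1 \in R[x]$.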
Mike, I had in mind something simpler or more expositional than that, and it was channeling something I once read in Halmos’s autobiography. He was complaining about the fussiness and obfuscation he saw in the way logicians talked about logical syntax. When he learned that the “propositional calculus” was really all about free Boolean algebras, then he at last understood the point of it all, and what was really the idea behind variables and equivalence of term-expressions and all that (this was back in the 50’s I guess). He then wanted to go on and “algebraize” all of first-order logic, and so he began a phase of his research career which was all about algebraic logic (cylindric algebras and so on).
I see the idea of hyperdoctrine as a very nice way of algebraizing logic, better or more refined than the tools he was using (not that I am intimately familiar with them).
So my idea was simply to take a situation where the reader is likely to understand the actual point of a variable in a very simple situation (polynomials, where a variable is thought of just as an indeterminate), link up that point to free algebras (this is like Halmos’s move to free Boolean algebras), and then move to the more sophisticated scenario where we deal with full-fledged first-order logic of some sort, but still emphasize the point that it’s still all about free structures, in this case perhaps free hyperdoctrines, or something similar. I admit that I still haven’t thought it through to the end.
Does that make sense? :-)
The hard part in any of the annoyingly pedantic ways that logicians keep track of variables is how to manage bound variables. I think that these are also the hard part in any attempt to algebraise logic. In other words, propositional calculus is much easier than predicate calculus.
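For anyone reading along, the standard illustration of what goes wrong: substituting naively under a binder can capture a variable, e.g.

$$ (\exists y.\; y > x)[x := y] \;\neq\; \exists y.\; y > y; $$

one has to rename the bound $y$ first (to get, say, $\exists z.\; z > y$), and likewise $\exists y.\; y > x$ and $\exists z.\; z > x$ have to be treated as the same formula ($\alpha$-equivalence). It is exactly this bookkeeping that has no analogue in the propositional calculus.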
@Todd: Okay, I see. That seems like a fine idea to me!
Reading what you wrote just made something click in my head. The general idea of describing free structures “syntactically”, so that we can then use the syntax to prove things that are true everywhere (because they are preserved by the unique maps out of the free structure), is something that I’ve thought about a lot in two different contexts: internal logic and string diagrams. But somehow (I’m a little embarrassed to admit), the fact that those are both instances of the same underlying idea never really occurred to me before. (-:
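If it helps to have something to point at, here is a small sketch of the “prove it in the syntax, transport it along the unique map” idea for the simplest case, monoid expressions. (All the names, such as `Expr` and `eval`, are invented for the illustration; Haskell only because it is convenient for writing algebras down.)

```haskell
-- Raw monoid expressions over variables of type v: this is the syntax.
data Expr v = Var v | Unit | Expr v :*: Expr v
  deriving Show

-- A monoid structure on a carrier m, packaged explicitly.
data MonoidOn m = MonoidOn { unit :: m, mult :: m -> m -> m }

-- The structure-preserving map out of the syntax determined by an
-- interpretation of the variables (evaluation, i.e. the fold).
eval :: MonoidOn m -> (v -> m) -> Expr v -> m
eval _  env (Var x)   = env x
eval mo _   Unit      = unit mo
eval mo env (a :*: b) = mult mo (eval mo env a) (eval mo env b)

-- Two expressions that are equal in the free monoid (they have the same
-- list of variables once the unit and associativity laws are imposed) ...
lhs, rhs :: Expr Char
lhs = (Var 'a' :*: Unit) :*: Var 'b'
rhs = Var 'a' :*: (Var 'b' :*: Unit)

-- ... therefore evaluate to the same thing in any monoid whatsoever.
listMonoid :: MonoidOn [Int]
listMonoid = MonoidOn { unit = [], mult = (++) }

main :: IO ()
main = do
  let env 'a' = [1, 2]
      env _   = [3]
  print (eval listMonoid env lhs == eval listMonoid env rhs)  -- True
```

Nothing here is specific to monoids, of course; the same shape recurs for the richer doctrines mentioned above.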
So what’s the range of examples here? A quantum group theorist using braids to reason about representations; natural-number indices representing powers in a monoid; Peirce’s beta graphs for the predicate calculus; and then string diagrams for indexed monoidal categories. What’s to be said about internal logic?
Also, dually, what can one do with cofree structures capturing syntactic coterms? I wonder if there’s any life in my speculations here. I see there was an MO question asking for the coalgebraic version of ’no junk, no confusion’.
String diagrams also categorify to bicategories and to surface diagrams for higher categories. Internal logic applies to many different kinds of categories, and categorifies to 2-logic and homotopy type theory.
We use terms (initial algebras) to prove things that are preserved by functors, so that they will then be preserved by mapping into all the examples we care about. Could we use coterms to prove things that are reflected by functors?
If ’reflect’ has such a large cluster of terms associated to it - reflective, reflector, reflection, coreflective, bireflective, etc. - why does ’preserve’ not?
Presumably coterms are good for telling when things are different. If they differ as coterms, their preimages must have been different.
The terms you mention all relate to reflective subcategories. I was thinking of reflect in the sense of “if $F(x)$ has some property, then so does $x$”, which is basically an unrelated usage of “reflect”.
…an unrelated usage of “reflect”.
Ah, I’ve realised I needed to sort out some kind of confusion. Your ’reflect’ and ’preserve’ were meant in a similar sense to reflected and preserved limits?
Anyway, what’s the idea at hand? We have a term model as initial algebra, with its unique maps into any algebra. So an equation in the term model would have to be preserved in any algebra. Then a coterm model is a terminal coalgebra, with unique map from any coalgebra. If two coterms differ, then any corresponding coalgebra elements must differ. e.g., if two computers give different outputs, they can’t be running the same programme on the same data.
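The computer analogy can be made literal in the same style as the sketch above (again, all names are invented for the illustration): a coalgebra is a transition system, the “coterm” attached to a state is its stream of observations, i.e. its image under the unique map to the terminal coalgebra, and states with different observation streams cannot be identified by any coalgebra morphism.

```haskell
-- A coalgebra on states s with observations o: observe one step and move on.
type Coalg s o = s -> (o, s)

-- A finite prefix of the unique map into the terminal coalgebra of streams,
-- truncated so that it can actually be compared.
observe :: Int -> Coalg s o -> s -> [o]
observe 0 _    _ = []
observe n step s = let (o, s') = step s in o : observe (n - 1) step s'

-- A counter machine that outputs its current value mod 3 and increments.
counter :: Coalg Int Int
counter n = (n `mod` 3, n + 1)

main :: IO ()
main = do
  -- States 0 and 3 have the same observable behaviour (the same coterm),
  -- while 0 and 1 are distinguished already at the first observation,
  -- so no behaviour-preserving map can identify the latter pair.
  print (observe 5 counter 0)  -- [0,1,2,0,1]
  print (observe 5 counter 3)  -- [0,1,2,0,1]
  print (observe 5 counter 1)  -- [1,2,0,1,2]
```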
Your ’reflect’ and ’preserve’ were meant in a similar sense to reflected and preserved limits?
Yes, that’s it. For instance, functors preserve equality and reflect inequality. Your idea of using coterms to distinguish things is intriguing, but I’m not yet seeing exactly how it would work. Hmm…
I guess the parallel goes like this:
There’s a special initial term structure. Any object will have a special subobject which is the image of the initial structure. The subobject may have confused terms, and the object may have extra junk. A map between objects will map the first special subobject to the second.
There’s a special terminal coterm structure. Any object will have a special colouring by coterms which is the preimage of the terminal structure. The object may have more than one element with the same colour, and the object may not use all the colours. A map between objects will preserve the colouring.
No junk, no confusion; No ambiguity, no redundancy.
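Stated in the algebra/coalgebra language (for an endofunctor $F$ with initial algebra $\mu F$ and terminal coalgebra $\nu F$, when these exist): for each $F$-algebra $A$ there is a unique map

$$ \mu F \longrightarrow A, $$

whose image is the special subobject above, and for each $F$-coalgebra $B$ there is a unique map

$$ B \longrightarrow \nu F, $$

whose fibres are the colouring above; by uniqueness, algebra maps commute with the former and coalgebra maps with the latter.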
I think what computer science people mean by trace semantics is part of the story.