created a bare minimum at *light-cone gauge quantization*, just so as to be able to sensibly link to it from elsewhere

finally created *intensive and extensive* with the topos-theoretic formalization following the concise statement in the introduction of *Categories in Continuum Physics*

added to *Lie algebra* a brief paragraph *general abstract perspective* to go along with this MO reply

I'm a software architect experienced in optimization algorithms and distributed expert systems. I have recently developed a technique that breaks limitations on Neural Persistence, on which I want to release a research article. However, I think it is advisable to first introduce separately the philosophical approach, which uses fibred categories as a powerful research-level innovation. I want to review and discuss this publicly for better acceptance before the research article comes out.

Please find the link below and give me your feedback:

http://ixilka.net/publications/innovations_in_maths.pdf

Reasoning in mathematics is simple and subject to automation and discipline/system, because every concept (e.g. integer number, real number, derivative, integral, differential equation and its solution, etc.) can be expressed using some very small set of simple notions. If one considers the type-theoretic approach to the foundations of mathematics, then there are only two basic types (entity and Boolean truth value), and all the other types, all the other notions and concepts, are formed from those two simple types. Reasoning in mathematics is systematic because we completely know the content of every concept. Yes, sometimes we imagine some new concepts (the poetics of math), but even in such cases we manage to write down those concepts (or approximations of them) in terms of other concepts that can be traced to the first principles. Concepts in mathematics are formed (or at least can be expressed) in a bottom-up manner.

Reasoning about physics and about the real world (ontology, metaphysics, nature, the social world, the humanities, emotions, mind, etc.) is very hard, because we can only make guesses about the eventual concepts and about their connections with other concepts, and we don't know the full content of a concept: every new piece of research discovers new shades of some concept, concepts are created, merged, etc. And all this happens in a non-rigorous manner, because we don't know the complete content of the concepts expressed in first principles. We don't even know the first principles that can be used for the real world.

The semantics of natural language is a perfect example of the effort to discover such first principles. E.g., reading from https://edinburghuniversitypress.com/book-elements-of-formal-semantics.html one can see a table that expresses each grammatical category as a derived type made from just two basic types:

| Abstract type | Category | F-type | S-type |
| --- | --- | --- | --- |
| NP→S | intransitive verb | ff | et |
| NP→(NP→S) | transitive verb | f(ff) | e(et) |
| A→(NP→S) | be copula | f(ff) | (et)(et) |
| A→A | adjective modifier | ff | (et)(et) |
| S→(S→S) | sentence coordinator | f(ff) | t(tt) |
| A→(A→A) | adjective coordinator | f(ff) | (et)((et)(et)) |
| (NP→S)→S | quantified noun phrase | (ff)f | (et)t |
| N→((NP→S)→S) | determiner | f((ff)f) | (et)((et)t) |
| (NP→S)→(N→N) | relative pronoun | (ff)(ff) | (et)((et)(et)) |
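As a minimal sketch of the "two basic types" idea (my own illustration, not taken from the book: the names `E`, `IV`, `every`, etc. and the tiny domain are hypothetical), a few rows of the table can be encoded directly, with entities `E` and truth values `Bool` as the only basic types and every grammatical category a function type built from them:

```haskell
-- Two basic types: E (entities) and Bool (truth values, the "t" type).
data E = John | Mary | Rome deriving (Eq, Show)

type T   = Bool
type IV  = E -> T              -- intransitive verb,   S-type et
type TV  = E -> E -> T         -- transitive verb,     S-type e(et)
type QNP = IV -> T             -- quantified NP,       S-type (et)t
type Det = (E -> T) -> QNP     -- determiner,          S-type (et)((et)t)

-- a toy universe of discourse
domain :: [E]
domain = [John, Mary, Rome]

sleeps :: IV
sleeps x = x == John

-- the determiner "every": universally quantify the verb phrase
-- over the entities satisfying the noun
every :: Det
every noun vp = all (\x -> not (noun x) || vp x) domain
```

Then `every (== John) sleeps` evaluates the sentence "every [thing equal to] John sleeps" to a truth value, exactly in the bottom-up manner described above.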

One can guess: if mathematics is the model of the real world, then we already have all the first principles; we just need more effort to express such concepts as 'happiness according to Aristotle', 'ontology according to Hegel', 'ontology according to the British encyclopedia', 'ontology according to some famous philosopher N.' using the basic notions of math (we should always take into account that well-defined concepts are connected to some personality in whose inner semantics they can be found, and only from such personal concepts can conventional concepts emerge by convention in some scientific community, legal system, etc.).

OK, I know that my thoughts are very childish. That is why my real question is this: is there some discipline of philosophy that tries to express the content of each concept in some basic notions? Is there a discipline of philosophy that tries to uncover such basic notions and types (be they the already known mathematical notions and types or something else)? What are the names of such disciplines? What are the common terms and research themes in them? Just keywords and names; everything else I can find myself.

I know that there is metaphysical ontology (as opposed to applied ontology), but I don't know of efforts to find the content of concepts and the first principles. I know that there is mereology, but it is about parts, about structures and systems, and the essence of a concept is something more than just its structural build-up. So I am completely lost and don't know where to search further.

P.S. Why am I asking this? Well, I have zero internal/personal drive to understand the world in such basic terms. I am just trying to automate thinking/reasoning (artificial general intelligence) for applied purposes, and that is why I need a systematic, disciplined, extensible and automatable way of handling concepts; I am simply looking for theories that have already been created for such handling of concepts. Of course, they cannot give the final answers, but they can be a good starting point, and the system can discover further horizons itself.

created *dual graviton* with nothing but a one-sentence Idea and a reference. (I need that to point to it from *3d supergravity*, for completeness).

added a few more references with brief comments to *QFT with defects*

(this entry is still just a stub)

Yesterday I had added some rough bits and pieces and some references to *ADE singularity*, and cross-linked with relevant entries such as *ADE classification* and *M-theory on G2-manifolds*. But for the moment this remains a stub.

Wikipedia has an interesting section, https://en.wikipedia.org/wiki/Adjoint_functors#Solutions_to_optimization_problems, saying that adjoint functors can be used for optimization, I guess more in the sense of finding optimal objects and structures. Is this an original idea whose first exposition is in the Wikipedia article, or are there references and elaborations of it available? It would be good to know them. References will suffice, I can study them further.
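The "optimal solution" reading of adjoints can be made concrete with the most standard example (my own illustration, not from the Wikipedia article's text): the free monoid on a set is lists, and the free/forgetful adjunction says a map on generators extends uniquely to a monoid homomorphism, making lists the "most efficient" way to impose monoid structure:

```haskell
-- Universal property of the free monoid (left adjoint to the forgetful
-- functor Mon -> Set): any map of generators  a -> m  into a monoid
-- extends uniquely to a monoid homomorphism  [a] -> m.
extend :: Monoid m => (a -> m) -> ([a] -> m)
extend f = mconcat . map f
```

For example, `extend show` is the unique monoid homomorphism from lists of integers to strings sending each generator `n` to `show n`.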

Also, I guess, such optimization could be used for finding the “optimal, paradox-free deontic logic” sketched in my previous question https://nforum.ncatlab.org/discussion/9838/category-of-institutions

As far as I understand, each institution is designed for some logic. It has two categories, one for syntax (whose objects are signatures) and one for models; effectively, each pair of objects in those categories describes one theory for the logic of this institution.

My question is: are there efforts to construct a category of institutions, whose objects would each be some logic?

I am beginning to read https://academic.oup.com/logcom/article-abstract/27/6/1753/2687725, and the logical framework mentioned in this article seems to be a step in this direction, but the article is somewhat detached from the other papers in institutional model theory, so some comments would be welcome.

Applications sometimes require constructing a logic with some peculiar properties; e.g., the field of normative reasoning and deontic logic is in search of a paradox-free (a purely philosophical notion, at the axiomatic level) deontic logic. So maybe one can construct the category of institutions and then find some universal property, some distinguished object, which would be the sought-after deontic logic with excellent properties?
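To fix intuitions about what the data of an institution looks like, here is a toy sketch for propositional logic (all names are my own, not from any standard library): signatures are sets of atoms, sentences are formulas, models are valuations, and the key axiom is the satisfaction condition relating translation of sentences to reducts of models:

```haskell
-- Toy institution for propositional logic.
type Sig = [String]              -- a signature: a set of atomic propositions

data Sen = Atom String | Not Sen | And Sen Sen   -- sentences over a signature
  deriving Show

type Model = String -> Bool      -- a model: a valuation of the atoms

-- the satisfaction relation  m |= phi
sat :: Model -> Sen -> Bool
sat m (Atom a)  = m a
sat m (Not p)   = not (sat m p)
sat m (And p q) = sat m p && sat m q

-- a signature morphism: a renaming of atoms
type SigMor = String -> String

-- sentence translation along a signature morphism (covariant)
trans :: SigMor -> Sen -> Sen
trans h (Atom a)  = Atom (h a)
trans h (Not p)   = Not (trans h p)
trans h (And p q) = And (trans h p) (trans h q)

-- model reduct along a signature morphism (contravariant)
reduct :: SigMor -> Model -> Model
reduct h m = m . h

-- the satisfaction condition:  reduct h m |= phi  <=>  m |= trans h phi
satCondition :: SigMor -> Model -> Sen -> Bool
satCondition h m phi = sat (reduct h m) phi == sat m (trans h phi)
</imports>
```

In a proper institution the signatures form a category, `Sen` is a functor to Set, `Mod` a functor to Cat^op, and `satCondition` is required to hold for all morphisms, models, and sentences; this sketch only exhibits the moving parts for one fixed logic.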

This twenty-page note aims at a clear and quick exposition of some basic concepts and results in differential geometry, starting from the definition of vector fields and culminating in Hodge theory on Kähler manifolds. Any success comes at the expense of omitting all proofs as well as key tools like sheaf theory (except in passing remarks) and pullback functions and their functorial properties. I have tried, and believe I have managed, to keep the prerequisites few and the exposition simple. Researching for this note helped me consolidate foggy recollections of my decades-old studies, and I hope it will likewise prove useful to some readers learning introductory differential geometry.

I assume the reader knows how real and complex manifolds and, occasionally, vector bundles are defined, but beyond this the development is self-contained. It concentrates on the algebra $\A$ (or $\A_\C$) of smooth real- (or complex-) valued functions on the manifold, viewing tensors, forms, and indeed smooth sections of all vector bundles, as $\A$- (or $\A_\C$-) modules. Nothing in commutative algebra harder than the concept of a module homomorphism (which I call $\A$-linear) and its multilinear counterpart is used, yet this simple language goes a long way toward economizing the presentation.
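The textbook fact this language encodes most directly is the tensoriality criterion, stated here in the note's notation: a map on vector fields comes from a $1$-form precisely when it is $\A$-linear.

```latex
% Tensoriality criterion: a map on vector fields is induced by a 1-form
% if and only if it is A-linear.
T \colon \Gamma(T M) \longrightarrow \A
\ \text{ is of the form } \
T(X) = \omega(X) \ \text{ for some } \ \omega \in \Omega^1(M)
\quad\Longleftrightarrow\quad
T(f X) = f\, T(X) \ \text{ for all } f \in \A,\ X \in \Gamma(T M).
```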

The pace is leisurely in the beginning for the benefit of the novice, then picks up a bit in later sections.

The first 6 sections are about real smooth manifolds, sections 7 and 8 discuss real and complex vector bundles over real manifolds, and the final 3 sections are about complex manifolds. I start by defining vector fields, tensor fields and the Lie derivative, then move on to metrics and (Levi-Civita) connections on the tangent bundle and their Riemann, Ricci and scalar curvature. Sec 5 defines differential forms and lists their main properties. Sec 6 discusses Hodge theory and harmonic forms on real manifolds. Sec 7 is about connections and their curvature on real vector bundles and the Bianchi identities, and Sec 8 presents complex vector bundles on real manifolds and their Chern classes. Sec 9 discusses complex manifolds and the Dolbeault complex, Sec 10 Chern connections on holomorphic vector bundles, and Sec 11 the Hodge decomposition on compact Kähler manifolds.

Beyond whatever is left of my college-day studies, I have drawn freely from internet sources, including the nLab and particularly Wikipedia, as well as some downloadable books and notes. I give no references because, aside from my own expository peculiarities, choices, typos, or any errors, the material is textbook standard.

I was just wondering why there was so little on “Institution-independent Model Theory” or Abstract Model Theory in the wiki. I found this short entry for Abstract Model Theory, and a link to a yet non-existent page on institutions.

I am trying to use this to see if it can help me extend the semantic web semantics to modal logic. The reason is that institutions have been used to show the coherence between the different RDF logics (RDFS, OWL, …), so it seems they should be helpful for going beyond that.

Some papers on the semantic web and institutions are listed below. These are great because the semantic web is quite simple, useful, and something I understand well, and they show in a practical way how to think about institutions, which would otherwise be much more difficult to get into. Also, the basics of abstract model theory are quite intuitive.

- Lucanu, D., Li, Y. F., & Dong, J. S. (2006). Semantic web languages–towards an institutional perspective. In Algebra, Meaning, and Computation (pp. 99-123). Springer, Berlin, Heidelberg.
- Bao, J., Tao, J., McGuinness, D. L., & Smart, P. (2010). Context representation for the semantic web.

The last one ties RDF to contexts and to institutions.

The RDF model is actually really simple btw. See the question and answer “What kind of Categorical object is an RDF Model?”

It is nearly self-evident from using it that RDF already contains modal logic (see my short example on the semweb mailing list), especially as in the RDF 1.0 XML syntax one can have relations to RDF/XML literals, whose interpretations are of course sets of models; in RDF 1.1 this is made clearer with the notion of DataSets, which are sets of graphs. But they have not given a semantics for it… and self-evidence does not make for a proof. (And by the way, RDF/XML is really the ugliest syntax in existence. Much better to consider N3, which is Tim Berners-Lee's neat notation for doing logic on the web.)

- Berners-Lee, T., Connolly, D., Kagal, L., Scharf, Y., & Hendler, J. (2008). N3logic: A logical framework for the world wide web. Theory and Practice of Logic Programming, 8(3), 249-269.

Btw, as an extra part, the discussion of modal logic in RDF is tied up with the notion of context, which may just be another way of thinking of modal logic (I am working to see if there is a difference):

- Guha, R. V. (1991). Contexts: a formalization and some applications (Vol. 101). Stanford, CA: Stanford University.
- Hayes, P. (1997, November). Contexts in context. In Context in knowledge representation and natural language, AAAI Fall Symposium.
- Bizer, C., Carroll, J. J., Hayes, P., & Stickler, P. (2005). Named Graphs, Provenance and Trust. In Proceedings of the 14th international conference on World Wide Web.
- Hayes, P. (2007). Context Mereology. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning (pp. 59-64). I thought this was a really neat paper.
- Bao, J., Tao, J., McGuinness, D. L., & Smart, P. (2010). Context representation for the semantic web.
- Klarman, S. (2013). Reasoning with contexts in description logics.

So, because there was little on the wiki about abstract model theory, I was wondering whether it was not thought of as good category theory, or whether there just had not been time to complete that page. And for contexts, I was wondering if this is the right place to look. In the book “Institution-independent Model Theory” R. Diaconescu has a chapter on Kripke frames, but I think we actually need neighborhood semantics, that is, not relations between one world and another but between one world and a set of worlds, so that one can represent inconsistent sets of ideas (of which the web really is a big example).

Created General Theory of Natural Equivalences, partly on the model of the (to me) useful-seeming Elephant. One reason was that I think it can be instructive for people learning more category theory to see this classic transparently commented on in a modern reference work like the nLab.

Another reason was that this is a historically important paper, which I needed an nLab reference for when writing a historical, context-adding section of directed graph, embedding Lawvere's interesting Como comments into a relevant context and drawing historical parallels.

added to *Noether theorem* a brief paragraph on the *symplectic/Hamiltonian Noether theorem*

My question is: what can we deduce about the objects and morphisms of this category from the properties of the category itself? Even more, maybe we can recover from the properties of $C$ the full structure of the objects and morphisms.

E.g., each logic can be assigned a tuple called an institution (e.g. https://kwarc.info/people/frabe/Research/GMPRS_catlog_07.pdf and https://www.amazon.co.uk/Institution-independent-Model-Theory-Studies-Universal-ebook/dp/B00KTHC9AI), which is a tuple of categories and functors. Around 2,000 different logics have been catalogued; not all of them have had their institutions devised, but maybe all of them can have institutions. There is an urgent need to devise new logics and to quickly check the properties of newly minted logics, i.e., whether they are complete and sound. E.g., there are dynamic action logics on the one side and linear logics with temporal, spatial, epistemic and some other modalities on the other, but there is no good example of a logic that combines both aspects: dynamic actions and substructural modalities.

So maybe we can work at the institution level and, in the first step, devise the tuple of categories and functors with the properties that should go into the institution of the future logic, and then infer the exact objects and morphisms of the newly minted categories. E.g., institutions have a category of signatures, so maybe we can recover the signatures from the properties of that category.

That is one example, but maybe there are more examples where such an approach could be desirable.

I have asked the same question on math.stackexchange, https://math.stackexchange.com/questions/2821628/algorithms-and-procedures-to-recover-objects-and-morphisms-from-the-properties-o, but there is quite an aggressive stance against it and it is being voted to be closed.

I would like to hear some thoughts about this deduction process. Maybe there is some good literature on this?

created a table-for-inclusion *equivariant cohomology – table*

I apologize in advance if this is not the correct “category” for this discussion. Please feel free to fix this.

Under the *realization functors* subsection of the entry on motivic homotopy theory, the last line reads:

“For a non-separably closed field k, there is a Gal(k^sep/k)-equivariant realization analogous to the Real realization.”

However no reference is given, and I do not believe such a reference exists. This is an extremely subtle point. See for example Wickelgren’s paper:

http://people.math.gatech.edu/~kwickelgren3/papers/Etale_realization.pdf

This does work in the unstable setting, but that is currently being written up by myself and Elden Elmanto, and we are working on the stable result. I'm not sure of the best way to revise this entry. But for sure, if there is a reference, it should be added; otherwise, I would suggest removing or rephrasing this sentence.

I want to record on the nLab a certain lemma that is rarely made explicit (a slight variation appears here: https://arxiv.org/abs/1112.0094).

The lemma is that if you have a profunctor $R$ from $C$ to $D$ which is representable by a functor $F : C \to D$, then the action of $F$ on morphisms is uniquely determined by $R$ and the representation isomorphism.

This explains an initially strange feature of type-theoretic connectives: they have semantics as functors, yet we never actually define their action on morphisms, because we can derive that action from the introduction and elimination rules.
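A small concrete instance of this phenomenon (my own sketch, in the style of Haskell's representable functors rather than the profunctor statement itself): once the representation isomorphism `index`/`tabulate` is fixed, the action on morphisms is forced, so `fmap` never needs to be defined independently.

```haskell
-- Pair a is representable: Pair a ≅ (Bool -> a).
data Pair a = Pair a a deriving (Eq, Show)

-- one half of the representation isomorphism
index :: Pair a -> (Bool -> a)
index (Pair x _) False = x
index (Pair _ y) True  = y

-- the other half
tabulate :: (Bool -> a) -> Pair a
tabulate f = Pair (f False) (f True)

-- The functorial action is uniquely determined by the isomorphism:
-- transport along index, post-compose, transport back along tabulate.
fmapPair :: (a -> b) -> Pair a -> Pair b
fmapPair g = tabulate . (g .) . index
```

Naturality of the isomorphism is exactly what guarantees this derived `fmapPair` is the only functorial action compatible with the representation.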

My question is: should this get a page on the nLab, called, I guess, “Representability determines Functoriality” or something like that? Or should I try to find some existing page?

And a meta-question: should I err on the side of just making a page with whatever name I feel like, or come to discuss the name here on the nForum first? Is it easy to change the names of pages after the fact?

Created subformula property.

I am aware of the following: in the context of synthetic differential geometry (SDG) one obtains a Lie algebra by exponentiating a microlinear group by a standard infinitesimal object and taking the infinitesimal commutator, and the functor expressed by this operation factors through formal group laws (FGLs) in the usual way. This reveals that Lie groups give FGLs with respect to first-order infinitesimals.

Now I would like to consider a lined topos equipped with higher-order infinitesimals and develop in this context a modified notion of microlinearity. I have not yet worked out the details. But does modifying microlinearity in this way, so as to yield $R$-modules by exponentiating FGLs with higher-order infinitesimals, sound reasonable? It is worth saying that in general we want certain polynomial identities to hold in the resulting $R$-modules, e.g. the Jacobi identity.

While FGLs have been thought of in this way (e.g. by Didry in [1], an attempt to extend Lie theory to include Leibniz algebras), I have not found sources discussing modifications of microlinearity to subsume FGLs in the language of SDG. Some suggestive remarks can be found in Nishimura's work, such as in the introduction of the paper [2], where the author discusses prolongations of spaces with respect to polynomial algebras as generalizations of Weil algebras. What do you think, nForum?

[1] Didry, M. Construction of Groups Associated to Lie- and to Leibniz- Algebras

[2] Nishimura, H. Axiomatic Differential Geometry II-2, Chapter 2: Differential Forms

Made a stub for admissible rule with a few examples, after seeing the discussion about negation here

I’ve just seen an interesting abstract construction on page 19 of this paper by Melliès and Tabareau, and I’m curious whether people have other examples.

They call it the “fingerprint”. Given three categories $A, B, C$ and adjunctions from $A$ to $B$ and from $B$ to $C$, you get a monad $M$ and a comonad $W$ on $B$, and so for every object $b \in B$ a morphism $f_b : W b \to b \to M b$. If $B$ further has an orthogonal factorization system, then you can recover $b$ (up to iso) from this map, and then it is called the “fingerprint” of $b$, since it uniquely identifies $b$. By the adjunctions this map can also be represented as a morphism in $A$ or in $C$, so any object of $B$ can be represented by a morphism in $A$ or $C$.
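A tiny instance of the shape (my own toy choice of adjunctions, not the one in the paper): take both adjunctions to be the product/exponential adjunction $(- \times s) \dashv (s \to -)$ on Hask. The induced comonad on the middle category is then `Store s`, the induced monad is `State s`, and the canonical map $W b \to b \to M b$ is the counit of the first adjunction followed by the unit of the second:

```haskell
-- Comonad W b = Store s b arising from the composite (s -> -) then (- , s);
-- monad M b = State s b arising from the composite (- , s) then (s -> -).
data Store s b = Store (s -> b) s
type State s b = s -> (b, s)

-- counit of the adjunction at b:  W b -> b  (evaluate at the stored point)
counit :: Store s b -> b
counit (Store f s) = f s

-- unit of the adjunction at b:  b -> M b  (return without touching state)
unit :: b -> State s b
unit b = \s -> (b, s)

-- the canonical morphism  f_b : W b -> M b
fingerprint :: Store s b -> State s b
fingerprint = unit . counit
```

Of course, with this degenerate choice of adjunctions nothing interesting is recovered; the point of the paper's MultiCat example is that for less trivial adjunctions the factorization of $f_b$ really does pin down $b$.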

In that paper the example they give is that $A$ is Cat, $B$ is MultiCat and $C$ is MonoidalCat, with the (hopefully) obvious adjunctions. The fingerprint of a multicategory $M$ is then a morphism $i : Unary(M) \to Contexts(M)$ in Cat from the category of unary morphisms to the monoidal category of “contexts”, which has sequences of objects of $M$ as objects and “substitutions”/sequences of arrows of $M$ as arrows.

This seems very similar to the presentation of models of simple type theory that takes contexts as objects and then has a subset of display objects to represent the actual types. In fact, I think you can make this presentation an instance of the “fingerprint” if you take $A$ to be Set and compose with the discrete-category/set-of-objects adjunction to get an adjunction from Set to MultiCat.

The construction is obviously very general, though: does anyone see any other natural examples (or perhaps a different name for the fingerprint)? I would hope that a presentation of models of dependent type theories would have a similar formulation, but I'm not sure what the categories would be. $C$ is probably locally cartesian closed categories, and maybe $A$ is Cat, but then what is $B$?

I was recently reading Barr's translation of Grothendieck's Tohoku paper, and in his preface he said something which, though it now seems obvious, I had not thought of: how can one make sense of Galois theory, which depends on the notion of automorphism group, without being able to distinguish between isomorphism and identity?

How does HoTT and univalence deal with this?

I apologize if this question has already been discussed in detail.