added a few more references with brief comments to *QFT with defects*

(this entry is still just a stub)

created a bare minimum at *light-cone gauge quantization*, just so as to be able to sensibly link to it from elsewhere

added to *Lie algebra* a brief paragraph *general abstract perspective* to go along with this MO reply

finally cross-linked *Landau-Ginzburg model* with *TCFT* and added a corresponding reference,

prompted by this Physics.SE question

created a stub for *decidability*, mainly only so that the many pointers to it do point somewhere

Created General Theory of Natural Equivalences, partly on the model of the (to me) useful-seeming Elephant. One reason was that I think it can be instructive for people learning more category theory to see this classic transparently commented on in a modern reference work like the nLab.

Another reason was that this is a historically important paper, which I needed an nLab reference for when writing a historical, context-adding section of directed graph, embedding Lawvere’s interesting Como comments into a relevant context and drawing historical parallels.

created *dual graviton* with nothing but a one-sentence Idea and a reference. (I need that in order to point to it from *3d supergravity*, for completeness.)

Reasoning in mathematics is simple and amenable to automation and discipline/system, because every concept (e.g. integer number, real number, derivative, integral, differential equation and its solution, etc.) can be expressed using some very small set of simple notions. If one considers the type-theory approach to the foundations of mathematics, then there are only two basic types (entity and Boolean truth value), and all other types, all other notions and concepts, are formed from those two simple types. Reasoning in mathematics is systematic because we completely know the content of every concept. Yes, sometimes we imagine new concepts (the poetics of math), but even in such cases we manage to write those concepts (or approximations of them) down in terms of other concepts that can be traced back to first principles. Concepts in mathematics are formed (or at least can be expressed) in a bottom-up manner.

Reasoning about physics and about the real world (ontology, metaphysics, nature, the social world, the humanities, emotions, mind, etc.) is very hard, because we can only make guesses about the eventual concepts and about their connections with other concepts, and we do not know the full content of a concept: every piece of research discovers new shades of some concept; concepts are created, merged, etc. And all this happens in a non-rigorous manner, because we do not know the complete content of the concepts expressed in first principles. We do not even know the first principles that can be used for the real world.

The semantics of natural language is a perfect example of the effort to discover such first principles. E.g., reading https://edinburghuniversitypress.com/book-elements-of-formal-semantics.html one can see a table that expresses each grammatical category as a derived type built from just two basic types:

| Abstract type | Category | F-type | S-type |
| --- | --- | --- | --- |
| NP→S | intransitive verb | ff | et |
| NP→(NP→S) | transitive verb | f(ff) | e(et) |
| A→(NP→S) | be copula | f(ff) | (et)(et) |
| A→A | adjective modifier | ff | (et)(et) |
| S→(S→S) | sentence coordinator | f(ff) | t(tt) |
| A→(A→A) | adjective coordinator | f(ff) | (et)((et)(et)) |
| (NP→S)→S | quantified noun phrase | (ff)f | (et)t |
| N→((NP→S)→S) | determiner | f((ff)f) | (et)((et)t) |
| (NP→S)→(N→N) | relative pronoun | (ff)(ff) | (et)((et)(et)) |
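A tiny Python sketch of the discipline the table illustrates, assuming nothing beyond the two basic types (the variable names are my own):

```python
# The two basic semantic types: entities 'e' and truth values 't'.
# Every other type is built with the function-type constructor (ab).
E, T = "e", "t"

def fn(a, b):
    """The type of functions from a to b, written (ab) in the table."""
    return f"({a}{b})"

intransitive_verb = fn(E, T)                        # (et)
transitive_verb   = fn(E, fn(E, T))                 # (e(et))
quantified_np     = fn(fn(E, T), T)                 # ((et)t)
determiner        = fn(fn(E, T), fn(fn(E, T), T))   # ((et)((et)t))
```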

One can guess: if mathematics is a model of the real world, then we already have all the first principles; we just need more effort to express such concepts as ’happiness according to Aristotle’, ’ontology according to Hegel’, ’ontology according to the British encyclopedia’, ’ontology according to some famous philosopher N.’ using the basic notions of math (we should always take into account that well-defined concepts are connected to some personality in whose inner semantics they can be found, and only from such personal concepts can conventional concepts emerge, by convention, in some scientific community, legal system, etc.).

OK, I know that my thoughts are very childish. That is why my real question is this: is there some discipline in philosophy that tries to express the content of each concept in some basic notions? Is there a discipline of philosophy that tries to uncover such basic notions and types (be they already-known mathematical notions and types, or something else)? What are the names of such disciplines of philosophy? What are common terms and research themes in such disciplines? Just keywords and names; everything else I can find further myself.

I know that there is metaphysical ontology (as opposed to applied ontology), but I do not know of efforts to find the content of concepts and the first principles. I know that there is mereology, but it is about parts, about structures and systems, and the essence of a concept is something more than just its structural build-up. So, I am completely lost, and I do not know where to search further.

p.s. Why am I asking this? Well, I have zero internal/personal drive to understand the world in such basic terms. I am just trying to automate thinking/reasoning (artificial general intelligence) for applied purposes, and that is why I need a systematic, disciplined, extensible and automatable way of handling concepts; I am simply seeking theories that have already been created for such handling of concepts. Of course, they cannot give the final answers, but they can be a good starting point, and the system can discover further horizons itself.

Yesterday I added some rough bits and pieces and some references to *ADE singularity*, and cross-linked with relevant entries such as *ADE classification* and *M-theory on G2-manifolds*. But for the moment this remains a stub.

Wanting to know if any progress has been made in this area, I searched and found this old forum post by Urs comparing the IKKT matrix model of type IIB string theory to Loop Quantum Gravity. He says that both display ’radical background independence’:

https://www.physicsforums.com/threads/lqg-strings-and-the-ikkt-matrix-model.8391/

Have the prospects for this type of research changed since 2003? What are the prospects for the IKKT model and, more ambitiously, the unification of LQG and String Theory?

I’ve created the entry Ehrenfeucht–Fraïssé games and recorded the basic definitions and results (partly inspired by jesse’s efforts to add some model theory content to the nLab). I’m running out of steam now, but we should also mention the connections to quantifier elimination and Scott rank. (I’ve added a redirect from back-and-forth argument, which was requested at quantifier elimination.)
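As a quick illustration of the back-and-forth idea, here is a toy Python sketch (my own code, not from the entry) that decides the k-round EF game on two finite linear orders by brute force:

```python
def consistent(pairs):
    """A set of matched pairs is a partial isomorphism of linear orders
    iff it preserves equality and order in both directions."""
    return all((a1 < a2) == (b1 < b2) and (a1 == a2) == (b1 == b2)
               for a1, b1 in pairs for a2, b2 in pairs)

def duplicator_wins(m, n, k, pairs=()):
    """Does Duplicator win the k-round EF game on the linear orders
    {0,...,m-1} and {0,...,n-1}, given the pairs matched so far?"""
    if k == 0:
        return True
    # Spoiler may pick any element of either structure; Duplicator must
    # answer in the other one while keeping a partial isomorphism.
    for a in range(m):
        if not any(consistent(pairs + ((a, b),)) and
                   duplicator_wins(m, n, k - 1, pairs + ((a, b),))
                   for b in range(n)):
            return False
    for b in range(n):
        if not any(consistent(pairs + ((a, b),)) and
                   duplicator_wins(m, n, k - 1, pairs + ((a, b),))
                   for a in range(m)):
            return False
    return True
```

Ehrenfeucht’s classic calculation says two finite linear orders are k-round equivalent iff they have the same size or both have size at least 2^k − 1; e.g. `duplicator_wins(3, 7, 2)` holds while `duplicator_wins(3, 7, 3)` fails.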

New page small cardinality selection axiom.

We know FinSet is a topos, but it is not a model of ETCS, since it does not have a natural numbers object.

Is there any smart way of defining product/coproduct/exponential on the subcategory of **Sets** whose objects are the sets of natural numbers (this collection includes $\mathbb{N}$ itself, so it is like adjoining an NNO to the category of finite sets), so that it becomes a model of ETCS?

If it is not possible, I simply want a smaller model of ETCS which is not the category **Sets**.

If such a model does not exist, is there a proof of the relevant fact, namely that there is no subtopos of **Sets** which is a model of ETCS?

It is well known that we can do proofs by induction in a topos using the natural numbers object. I am interested in proving the strong induction principle, and I am attempting to translate the proof here https://math.ou.edu/~nbrady/teaching/f14-2513/LeastPrinciple.pdf ( (I) $\mathbb N\to$ (SI) ) into an abstract version, so that it works for the natural numbers object in a category satisfying ETCS (in particular, the topos is well-pointed, and every subobject has a complement, so it is a Boolean topos).

The $\le$ relation is given as the pullback of $o: 1\to N$ (the element $0$ of the natural numbers object) along the truncated-subtraction map $-: N\times N \to N$, as in *Sketches of an Elephant* by Johnstone (page 114, A2.5).
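Concretely, in the standard model this pullback description just says the following (a Python restatement of the numerical fact, not the categorical construction itself):

```python
def monus(a, b):
    """Truncated subtraction on the naturals: a - b if a >= b, else 0."""
    return max(a - b, 0)

def leq(a, b):
    # (a, b) lies in the <= relation exactly when it is sent to 0 by
    # truncated subtraction, i.e. when it factors through the pullback
    # of o: 1 -> N along the subtraction map.
    return monus(a, b) == 0
```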

I think the statement of strong induction to prove is as follows.

For a subobject $p: P \to N$: if for an arbitrary element $n: 1 \to N$ of the NNO, “every member $n_0: 1\to N$ such that $- \circ \langle n_0, n\rangle = o$ factors through $P$” implies “$s \circ n: 1 \to N \to N$ factors through $P$”, then $P\cong N$.

I am not sure what to do with the $Q$, though, and I am not sure whether constructing such a $Q$ needs the comprehension axiom of ETCS; hence I have trouble translating the proof in the link so that it also works for the NNO. I think if we translate the proof, then we should reduce the task of proving $P\cong N$ to the task of proving $Q \cong N$. So I am asking how to construct such a $Q$, corresponding to the predicate $Q$ in the link. Any help, please?
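For reference, in the linked note the auxiliary predicate is $Q(n) \equiv$ “$P(m)$ for all $m \le n$”. A finite Python sanity check of the shape of the reduction (my own sketch; the categorical version would have to build $Q$ as a subobject of $N$):

```python
def Q_of(P):
    """The auxiliary predicate of the linked proof: Q(n) iff P(m) for all m <= n."""
    return lambda n: all(P(m) for m in range(n + 1))

def strong_step(P, N):
    """'P(m) for all m < n' implies P(n), checked for all n < N."""
    return all(not all(P(m) for m in range(n)) or P(n) for n in range(N))

def ordinary_step(Q, N):
    """Q(0) holds and Q(n) implies Q(n + 1), checked for all n < N."""
    return Q(0) and all(not Q(n) or Q(n + 1) for n in range(N))

# If P satisfies the strong-induction step then Q_of(P) satisfies
# ordinary induction, and Q(n) implies P(n) by construction; so proving
# Q "is all of N" by ordinary induction yields the same for P.
```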

p.s. The problem that I am actually interested in is proving the well-foundedness of the natural numbers object (explicitly, that every non-empty subobject has a minimal element), following the “material set-theoretic proof” that uses strong induction. Therefore I think I should first get the strong induction principle for the NNO. It would also be great if we could directly prove the well-ordering principle for the NNO. (I have searched online for well-orderedness and strong induction for the NNO, but found nothing helpful. Is there any material to read about them?)

I am reading the following document about Lawvere’s ETCS:

http://www.tac.mta.ca/tac/reprints/articles/11/tr11.pdf

My question is about Theorem 6 in this document. The aim is to prove an equivalence relation is an equalizer. On page 29, the author says:

Finally we can assert our theorem: $a_0q = a_1q$ iff $a_0 \equiv a_1$. (I think the $a_0q = a_1g$ in the link is a typo.)

And with that the proof is finished.

That is, the author proves that for every element $\langle a_0,a_1\rangle$ defined on the terminal object $1$, we have the factorization through the map which we want to prove to be an equalizer. But to prove the equalizer result, we need the factorization to exist when $1$ is replaced by an arbitrary object $T$. I know that $1$ is a generator in ETCS, but I have trouble working out the proof that the existence of the factorization for $1$ implies the existence of the factorization for any $T$. I tried taking the elements of $T$, but was unable to assemble a factorization from $T$ out of the factorizations for its elements.

Any help, please? Thank you!

Note: if someone is reading the link above, please note how composition is written: for $f: A \to B$ and $g: B\to C$, the author writes $fg$ for the composite “first $f$, then $g$”, rather than $g \circ f$.

The classic way to encounter the theory of categories is via set theory, through the typical definition we see for categories. We see all kinds of categories that are equivalent to the category of small categories. I wonder about presentations of the theory of categories. To facilitate a discussion, we may need to define what a presentation of a theory is; it may consist of a logical language, or even a standard presentation of algebraic structures. For instance, a presentation of the theory of partial monoids would count as a presentation of categories. The presentation should come with enough structure to analyze all small categories.
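For example, a small category can indeed be packaged as a partial monoid of arrows, where composition is defined only when domains and codomains match. A minimal Python sketch with made-up example data:

```python
# Arrows of a toy category with objects A, B: each arrow has (dom, cod).
arrows = {"id_A": ("A", "A"), "id_B": ("B", "B"), "f": ("A", "B")}

def compose(g, f):
    """Partial composition 'g after f': defined only when cod(f) == dom(g)."""
    (dom_f, cod_f), (dom_g, cod_g) = arrows[f], arrows[g]
    if cod_f != dom_g:
        return None                  # the composite is undefined
    if f.startswith("id_"):          # identity laws suffice for this toy
        return g
    if g.startswith("id_"):
        return f
    raise NotImplementedError("no non-identity composites in this example")
```

Here `compose("id_B", "f")` gives `"f"`, while `compose("f", "id_B")` is undefined (`None`): exactly the partiality that distinguishes a partial monoid from a monoid.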

I saw Marsden put together a presentation of categories in terms of string diagrams.

I like to think that string diagrams can be seen as containers. This is a paper about containers. So the idea is that you have a (co)monad that encodes the container for the theory of categories. Could this work?

finally created *intensive and extensive* with the topos-theoretic formalization following the concise statement in the introduction of *Categories in Continuum Physics*

I’m a Software Architect experienced in optimization algorithms and distributed expert systems. I have recently developed a technique which breaks limitations on Neural Persistence, on which I want to release my research article. However, I think it is advisable to first introduce separately the philosophical procedure using fibred categories as a powerful research-level innovation. I want to review and discuss this publicly, for better acceptance, before the research article comes out.

Please find it below and give me your feedback:

http://ixilka.net/publications/innovations_in_maths.pdf

Wikipedia has an interesting section, https://en.wikipedia.org/wiki/Adjoint_functors#Solutions_to_optimization_problems, on how adjoint functors can be used for optimization, I guess more in the sense of finding optimal objects and structures. Is this an original idea whose first exposition is in the Wikipedia article, or are there references and elaborations of this idea available? It would be good to know them. References will suffice; I can study them further.
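One standard elementary instance of this (mentioned in the Wikipedia article) is a Galois connection between posets, where the adjoint is literally an optimal solution to a constraint; a Python sketch under that reading:

```python
import math

# The inclusion Z -> R has a left adjoint (ceiling) and a right adjoint
# (floor).  ceil(x) is the *least* integer n with x <= n: the optimal
# (initial) solution to the constraint, found here by brute-force search.
def ceil_by_universal_property(x, candidates=range(-1000, 1000)):
    return min(n for n in candidates if x <= n)

def floor_by_universal_property(x, candidates=range(-1000, 1000)):
    return max(n for n in candidates if n <= x)
```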

Also, I guess, such optimization could be used for solving for the “optimal, paradox-free deontic logic” sketched in my previous question https://nforum.ncatlab.org/discussion/9838/category-of-institutions

As far as I understand, each institution is designed for some logic. It has two categories, one for syntax (whose objects are signatures) and one for models; effectively, each pair of objects in those categories describes one theory for the logic of this institution.
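To make that data concrete, here is a toy Python sketch of the propositional-logic institution, with signatures as sets of proposition symbols, signature morphisms as renamings, and models as valuations (all names are my own):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Not:
    arg: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

def satisfies(model, phi):
    """Satisfaction: a model is a valuation {symbol: bool}."""
    if isinstance(phi, Var):
        return model[phi.name]
    if isinstance(phi, Not):
        return not satisfies(model, phi.arg)
    return satisfies(model, phi.left) and satisfies(model, phi.right)

def translate(sigma, phi):
    """Sentence translation along a signature morphism (a renaming)."""
    if isinstance(phi, Var):
        return Var(sigma[phi.name])
    if isinstance(phi, Not):
        return Not(translate(sigma, phi.arg))
    return And(translate(sigma, phi.left), translate(sigma, phi.right))

def reduct(sigma, model):
    """Model reduct along sigma: contravariant, as in any institution."""
    return {p: model[q] for p, q in sigma.items()}
```

The satisfaction condition `satisfies(M, translate(sigma, phi)) == satisfies(reduct(sigma, M), phi)` is the coherence that makes this package an institution.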

My question is: are there efforts to construct a category of institutions, whose objects would each be some logic?

I am beginning to read https://academic.oup.com/logcom/article-abstract/27/6/1753/2687725, and the logical framework mentioned in this article seems to be a step in this direction, but the article is somewhat detached from the other papers in institutional model theory, so some comments would be welcome.

Applications sometimes require constructing a logic with peculiar properties; e.g., the field of normative reasoning and deontic logics is in search of a paradox-free (a purely philosophical notion, at the axiomatic level) deontic logic. So maybe one can construct a category of institutions and then find some universal property, some distinguished object, which would be the sought-after deontic logic with excellent properties?

This twenty-page note aims at a clear and quick exposition of some basic concepts and results in differential geometry, starting from the definition of vector fields and culminating in Hodge theory on Kähler manifolds. Any success comes at the expense of omitting all proofs, as well as key tools like sheaf theory (except in passing remarks) and pullback functions and their functorial properties. I have tried, and believe I have managed, to make the prerequisites few and the exposition simple. Researching for this note helped me consolidate foggy recollections of my decades-old studies, and I hope it will likewise prove useful to some readers in learning introductory differential geometry.

I assume the reader knows how real and complex manifolds, and occasionally vector bundles, are defined, but beyond this the development is self-contained. It concentrates on the algebra $\mathcal{A}$ (or $\mathcal{A}_{\mathbb{C}}$) of smooth real- (or complex-) valued functions on the manifold, viewing tensors, forms, and indeed smooth sections of all vector bundles, as $\mathcal{A}$- (or $\mathcal{A}_{\mathbb{C}}$-) modules. Nothing in commutative algebra harder than the concept of a module homomorphism (which I call $\mathcal{A}$-linear) and its multilinear counterpart is used, yet this simple language goes a long way toward economizing the presentation.
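The $\mathcal{A}$-linearity convention described here amounts to the following (my paraphrase of the standard definition):

```latex
% A map T on vector fields is A-linear (a module homomorphism) when
%   T(f X + g Y) = f T(X) + g T(Y)  for all smooth functions f, g,
% which is what lets tensors be treated as A-multilinear maps on modules
% of sections.
\[
  T(f X + g Y) \;=\; f\,T(X) + g\,T(Y),
  \qquad f, g \in \mathcal{A},\; X, Y \in \Gamma(TM).
\]
```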

The pace is leisurely in the beginning for the benefit of the novice, then picks up a bit in later sections.

The first 6 sections are about real smooth manifolds, sections 7 and 8 discuss real and complex vector bundles over real manifolds, and the final 3 sections are about complex manifolds. I start by defining vector fields, tensor fields and the Lie derivative, and then move on to metrics and (Levi-Civita) connections on the tangent bundle and their Riemann, Ricci and scalar curvatures. Sec 5 defines differential forms and lists their main properties. Sec 6 discusses Hodge theory and harmonic forms on real manifolds. Sec 7 is about connections and their curvature on real vector bundles, and the Bianchi identities, and Sec 8 presents complex vector bundles on real manifolds and their Chern classes. Sec 9 discusses complex manifolds and the Dolbeault complex, and Sec 10 Chern connections on holomorphic vector bundles. Sec 11 discusses the Hodge decomposition on compact Kähler manifolds.

Beyond whatever is left of my college-day studies, I have drawn freely from internet sources, including the nLab and particularly Wikipedia, as well as some downloadable books and notes. I give no references because, aside from my own expository peculiarities, choices, typos, or any errors, the material is textbook standard.

I was just wondering why there was so little on “institution-independent model theory” or abstract model theory in the wiki. I found this short entry for abstract model theory, and a link to a not-yet-existing page on institutions.

I am trying to use this to see if it can help me extend the Semantic Web semantics to modal logic. The reason is that institutions have been used to show the coherence between the different RDF logics (RDFS, OWL, …), and so it seems they should be helpful for going beyond that.

Some papers on the Semantic Web and institutions are listed below. These are great because the Semantic Web is quite simple and useful, and I understand it well; they show in a practical way how to think about institutions, which would otherwise be much more difficult to get into. Also, the basics of abstract model theory are quite intuitive.

- Lucanu, D., Li, Y. F., & Dong, J. S. (2006). Semantic web languages–towards an institutional perspective. In Algebra, Meaning, and Computation (pp. 99-123). Springer, Berlin, Heidelberg.
- Bao, J., Tao, J., McGuinness, D. L., & Smart, P. (2010). Context representation for the semantic web.

The last one ties rdf to Contexts and to Institutions.

The RDF model is actually really simple btw. See the question and answer “What kind of Categorical object is an RDF Model?”

It is nearly self-evident from using it that RDF already contains modal logic (see my short example on the semweb mailing list), especially as in the RDF 1.0 XML syntax one can have relations to RDF/XML literals, whose interpretations are of course sets of models; in RDF 1.1 this is made clearer with the notion of DataSets, which are sets of graphs. But they have not given a semantics for it… And self-evidence does not make for a proof. (By the way, RDF/XML is really the ugliest syntax in existence. It is much better to consider N3, which is Tim Berners-Lee’s neat notation for doing logic on the web.)

- Berners-Lee, T., Connolly, D., Kagal, L., Scharf, Y., & Hendler, J. (2008). N3logic: A logical framework for the world wide web. Theory and Practice of Logic Programming, 8(3), 249-269.
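A toy reading of the dataset idea, assuming (hypothetically, since as noted no official semantics is given) that each named graph in an RDF 1.1 dataset is treated as a possible world:

```python
# An RDF 1.1 dataset as a set of named graphs; each graph is a set of
# triples and plays the role of a possible world in a modal reading.
dataset = {
    "g1": {("alice", "knows", "bob")},
    "g2": {("alice", "knows", "bob"), ("bob", "knows", "carol")},
}

def possibly(triple):
    """'Diamond': the triple holds in some graph of the dataset."""
    return any(triple in graph for graph in dataset.values())

def necessarily(triple):
    """'Box': the triple holds in every graph of the dataset."""
    return all(triple in graph for graph in dataset.values())
```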

Btw, the discussion of modal logic in RDF is tied up with the notion of context, which may just be another way of thinking about modal logic (I am working to see if there is a difference).

- Guha, R. V. (1991). Contexts: a formalization and some applications (Vol. 101). Stanford, CA: Stanford University.
- Hayes, P. (1997, November). Contexts in context. In Context in knowledge representation and natural language, AAAI Fall Symposium.
- Bizer, C., Carroll, J. J., Hayes, P., & Stickler, P. (2005). Named Graphs, Provenance and Trust. In Proceedings of the 14th international conference on World Wide Web.
- Hayes, P. (2007). Context Mereology. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning (pp. 59-64). (This is, I thought, a really neat paper.)
- Bao, J., Tao, J., McGuinness, D. L., & Smart, P. (2010). Context representation for the semantic web.
- Klarman, S. (2013). Reasoning with contexts in description logics.

So, because there was little on the wiki on abstract model theory, I was wondering whether it was not thought of as good category theory, or whether there just had not been time to complete that page. And for contexts, I was wondering if this is the right place to look. In the book “Institution-independent Model Theory”, R. Diaconescu has a chapter on Kripke frames, but I think we actually need neighborhood semantics: not relations between one world and another, but between one world and a set of worlds, so that one can represent inconsistent sets of ideas (of which the web really is a big example).
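For contrast with Kripke frames, a minimal neighborhood-semantics sketch (my own toy example): each world is assigned a family of *sets* of worlds, and $\Box\varphi$ holds at $w$ iff the truth set of $\varphi$ is one of $w$'s neighborhoods:

```python
# Neighborhood frame: a relation between worlds and sets of worlds,
# rather than between single worlds.
neighborhoods = {
    "w1": [frozenset({"w1", "w2"})],
    "w2": [frozenset({"w2"}), frozenset({"w1", "w2", "w3"})],
    "w3": [],                     # no neighborhoods at all at w3
}

def box(truth_set, world):
    """Box(phi) holds at a world iff phi's truth set is a neighborhood."""
    return frozenset(truth_set) in neighborhoods[world]
```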

added to *Noether theorem* a brief paragraph on the *symplectic/Hamiltonian Noether theorem*