The smooth variant is mentioned only for lifting paths.

I tried posting a message containing the Unicode symbol for $\lambda$ (https://www.compart.com/en/unicode/U+1D706), and I got an error. I had expected the nForum to support full Unicode; is that not the case?

As explained in the well-order article, any well-order (i.e. extensional, transitive well-founded relation) is a linear order in classical mathematics. The proof given is rather complicated. Happily, within constructive mathematics, there’s a stronger result with a simple proof, as I’ll explain.

Given an irreflexive, transitive relation $\prec$ on a set $X$, say that elements $a, b \in X$ are *trichotomously related* when either $a \prec b$ or $a=b$ or $b \prec a$. (Clearly no more than one of these can be true.) Say that $\prec$ is *trichotomous* when any $a,b \in X$ are trichotomously related.

Theorem: a transitive well-founded relation $\prec$ on a set $X$ is trichotomous iff it is both extensional and decidable.

Proof: ($\Rightarrow$) is trivial. For ($\Leftarrow$), we claim that, for any $a,b \in X$, if $a$ is trichotomously related to every $y \prec b$, and every $x \prec a$ is trichotomously related to $b$, then $a$ and $b$ are trichotomously related. It follows by well-founded induction that any $a,b \in X$ are trichotomously related.

The claim is proved as follows. Decidability of the propositions $a \prec b$ and $b \prec a$ gives either $a \prec b$ or $b \prec a$ or both $a \nprec b$ and $b \nprec a$. In the last case, $x \prec a$ implies $x \prec b$, since $x$ is trichotomously related with $b$ but $x \neq b$ and $b \nprec x$, and likewise $y \prec b$ implies $y \prec a$, so extensionality gives $a=b$. Thus $a$ and $b$ are trichotomously related.
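In symbols, the claim proved by well-founded induction can be displayed as follows (my transcription of the paragraph above, writing $T(a,b)$ for “$a$ and $b$ are trichotomously related”):

```latex
\Bigl( \forall y \prec b.\; T(a, y) \Bigr)
\;\wedge\;
\Bigl( \forall x \prec a.\; T(x, b) \Bigr)
\;\Longrightarrow\;
T(a, b),
\qquad\text{hence}\qquad
\forall a, b \in X.\; T(a, b).
```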

Martín Escardó has pointed out to me that this proof lies within intensional Martin-Löf dependent type theory.

The nForum and the nLab have always been very open to all contributors, named and anonymous, to the point of not even requiring users to create an account before editing the nLab or posting on the nForum. However, recently we have found that on the nForum, because all posts from users without an account are attributed to “Guest”, it can be very confusing to read a discussion thread involving multiple people posting as Guest, with no way to tell which of the Guests are the same as which other Guests.

For this reason, the Steering Committee has decided to disable Guest posting on the nForum, effective immediately. Creating an account is easy and quick (or it should be – please contact us if you have any trouble), and anyone who wishes to remain anonymous can choose a pseudonym as their account name. The only purpose of requiring account creation is to make it possible to distinguish all participants in a conversation.

(For now at least, it still remains possible to edit the nLab without creating an account.)

Hi, all.

I am a bit confused by the proofs in “Comparing material and structural set theories” (https://arxiv.org/pdf/1808.05204.pdf).

My question is about Proposition 7.5 (collection of contexts) and Proposition 7.6.

7.5 proves collection of contexts from collection of functions and collection of equalities. The proof says:

Simply apply collection of sets and functions repeatedly, using the fact that regular epimorphisms compose.

To make sure I understand it correctly, let me expand an example. Say the context is $\{X, Y, f:X\to Y\}$ over $U$: for each $u:1\to U$, there exist sets $X, Y$ and a function $f:X\to Y$ such that $\phi(u,X,Y,f)$. To unwind this layer by layer, we can write $\exists X\, (\exists Y \;\exists f:X\to Y.\; \phi(u,X,Y,f))$. Set $\psi(u,X):= \exists Y \;\exists f:X\to Y.\; \phi(u,X,Y,f)$; then by collection of sets, for each $u:1\to U$ there exists a set $X$ such that $\psi(u,X)$, so we get a regular epi $p:V\to U$ and a $p_A:A\to V$ such that for each $v:1\to V$, $\psi(pv, v^*A)$. For the next layer, we use the fact that for each $v:1\to V$ we have $\exists Y.(\exists f.\;\phi(pv,v^*A,Y,f:v^*A \to Y))$ and set $\varphi(v,Y) := \exists f.\;\phi(pv,v^*A,Y,f:v^*A \to Y)$; then by collection of sets we have a regular epi $q:W\to V$ and an $f_B:B\to W$ such that for all $w:1\to W$ we have $\exists f.\; \phi(pqw,\, w^*q^*A,\, w^*B,\, f:w^*q^*A \to w^*B)$. Also, the composite $p\circ q$ is a regular epi $W\to U$. The next step is similar, using collection of functions instead.
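For reference, the collection-of-sets rule being applied at each step above has the following shape (my transcription of how this post uses it, not a verbatim quote of the paper):

```latex
\frac{\forall u: 1 \to U.\;\; \exists X.\;\; \psi(u, X)}
     {\exists\, (p: V \twoheadrightarrow U)\;\; (A \to V).\;\; \forall v: 1 \to V.\;\; \psi(p v,\; v^{*} A)}
```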

I think, since replacement of sets and of functions are also proved in the paper, the proof of replacement of contexts could be obtained the same way and be as short as “simply apply replacement of sets and functions repeatedly”. But the paper is not written like this. Did I miss something critical? I think the stack condition is only used for forming new sets, and the prestack condition is sufficient to form the functions once the sets are already constructed. We only need to invoke the gluing facts in proving replacement of sets and of functions as lemmas; we do not need to invoke them in the proof of replacement of contexts.

I am searching for a proof of the fact(s) claimed in this post: https://nforum.ncatlab.org/discussion/560/codomain-stacks/

Is there a proof of the condition for the self-indexing to be a stack? I would like a reference. If there is none, particular cases or proofs with similar ideas would also be appreciated.

Thanks!

Hi, all.

I am trying to understand the proof of Proposition 7.6 in Shulman’s paper https://arxiv.org/pdf/1808.05204.pdf . I am interested in understanding it in order to apply it within SEAR (or ETCS; I say SEAR here because it comes with a collection axiom) to obtain interesting constructions. One point that confuses me is that I am not yet familiar with the notion of stacks. As the claim is that the proof of 7.6 can be carried out within SEAR for each particular case, I think it would be ideal if I could see how the proof of 7.6 expands on particular examples into a proof within SEAR. (Note: I do not think I am confused about how to apply the principle of replacement; I am confused about how its proof works for these particular examples. That is, starting from the context we obtain from collection, how can we derive the existence of the context we want by manipulation within these examples, without directly appealing to the replacement schema?)

For the simplest example I can think of, suppose we want to build the well-ordered set $N$ without explicitly defining the order “$\lt$”. I think we may obtain an isomorphic copy of the well-ordered set $N$ using replacement.

Let $U$ be $N$ and $u$ a natural number $n$ in $N$. We can prove that for each $n$ there exists a set $A$ and a relation $R$ on $A$ such that $R$ is a well-order on $A$. That is, the context $\Theta$ is $[A,R:A\looparrowright A]$. Consider the formula $\phi$ such that $\phi(N,n,A,R:A\looparrowright A)$ is true iff $A$ has cardinality $n$ and $R$ is a well-order on $A$; then there is a context over a cover $p:V \to N$ which descends to a context over $N$. The resulting context consists of a set $\tilde{A}$ and a relation on $\tilde{A}$, where $\tilde{A}$ has a map to $N$, call it $q$. I think $q^{-1}(n)$ will then be the well-ordered set containing $n$ elements, and the whole of $\tilde{A}$ is the disjoint union of $A_0,\cdots,A_n,\cdots$, where $A_n$ is the well-ordered set containing $n$ elements. Then we can collect the maximal elements of each such set and obtain a copy of $N$ with the order $\lt$.

The example above is due to myself and I am not sure I have got it correct, but the following example from Mike Shulman must be correct. It is a bit complicated, so let me expand the story in detail below.

For an ordinal $\alpha$, there is a notion of the $\alpha$-th Beth number, denoted $\beth_\alpha$. This **number** is itself a **set**, so you may think of it as associating a set to each ordinal.

If $\alpha \lt \omega$, meaning that $\alpha$ is a finite ordinal, that is, a natural number, then the $\alpha$-th Beth number is the $\alpha$-fold power set of $\mathbb N$: $\beth_\alpha = P^\alpha(\mathbb N)$. That is, $\beth_0 = \mathbb N$, $\beth_1 = P\mathbb N$, $\beth_2 = P P \mathbb N$ and $\beth_3 = P P P \mathbb N$, etc.

Now consider the notion of the $\omega$-th Beth number: $\beth_\omega$ is defined to be the minimal set such that for every natural number $n$, the set $\beth_n$ injects into it. We are going to construct $\beth_\omega$, and its construction relies on the schema of replacement.
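One standard rendering of this definition, consistent with the finite stages above (my transcription):

```latex
\beth_\omega \;=\; \operatorname{colim}\Bigl( \mathbb N \hookrightarrow P\mathbb N \hookrightarrow P P \mathbb N \hookrightarrow \cdots \Bigr),
```

so that every $\beth_n = P^n(\mathbb N)$ injects into it, and it is minimal with this property.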

Regarding how to apply the replacement schema (credit to Mike Shulman):

The way to use replacement of contexts here is something like the following. For simplicity consider the natural numbers N. For x in N, consider the context consisting of a set X, a function p : X -> N, a relation R from X to X, and a function z : N -> X. Depending on x and on this context, consider the statement that the image of p is {y | y <= x}, that z is an isomorphism onto the fiber p^{-1}(0), and that R is the union of “membership” relations witnessing that p^{-1}(y+1) is the powerset of p^{-1}(y) for all y<x. One can prove by induction on x that two contexts satisfying this statement are uniquely isomorphic. Therefore, by replacement of contexts, there is a universal such context over N, which in particular has a function p : X -> N x N. Pulling this back along the diagonal map of N, we obtain an N-indexed family of sets that is the sequence of iterated powersets starting with N, and thus its total space is beth_omega.
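Rendered in the notation this post uses below, the statement in the quote reads (my transcription):

```latex
Upow\bigl(x;\; X,\ p: X \to \mathbb N,\ R: X \looparrowright X,\ z: \mathbb N \to X\bigr)
\;:\equiv\;
\operatorname{im}(p) = \{\, y \mid y \le x \,\}
\;\wedge\; z \text{ is an isomorphism onto } p^{-1}(0)
\;\wedge\; \forall y \lt x.\ R \text{ witnesses } p^{-1}(y{+}1) \cong P\bigl(p^{-1}(y)\bigr).
```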

Recall the statement of replacement of contexts (from Shulman’s paper):

for any formula $\phi(U, u :\in U, \Theta)$ and any $U$, if for every $u :\in U$ there is an extension $\Theta$ of $(U, u)$ such that $\phi(U, u, \Theta)$, and moreover this extension $\Theta$ is unique up to a unique isomorphism of contexts $\Phi_\Theta$, then there is an extension $\Theta_0$ of $(U^* U, \Delta_U)$ in $Set/U$ such that $\phi(U, u, u^* \Theta_0)$ for all $u :\in U$.

Here $\phi$ is the “statement” in “consider the statement that the image of p…”. This statement has the following parameters: 1. the fixed natural number $x$, as in “For x in N, consider…”; 2. the function $p:X\to N$; 3. the relation $R:X\looparrowright X$, which is “the union of ‘membership’ relations witnessing that p^{-1}(y+1) is the powerset of p^{-1}(y) for all y<x” (unions of relations are available because a relation from $X$ to $Y$ can be regarded as an element of the power set ${\cal P}(X\times Y)$); 4. the function $z:N\to X$, which injects $N$ into $X$.

Intuitively, this predicate says that $X$ is a set fibred over $N$ via the projection map $p:X\to N$, such that the fibre over $0$ is $N$, and for each number $m\lt n$ the fibre over the successor $m+1$ is the power set of the fibre over $m$. (*) Therefore, $\beth_n$ is the fibre over $n$.

Call the predicate $Upow(n,\, p:X\to N,\, R:X\looparrowright X,\, z:N\to X)$; we apply replacement of contexts to $Upow$. Here $U$ is $N$, $u\in U$ is $n\in N$, and $\Theta$ is the list $[X,\, p:X\to N,\, R:X\looparrowright X,\, z:N \to X]$. As suggested by:

One can prove by induction on x that two contexts satisfying this statement are uniquely isomorphic.

By induction on $n\in N$, we prove that there exists a context $[X,\, p:X\to N,\, R:X\looparrowright X,\, z:N \to X]$, unique up to unique isomorphism, such that $Upow(n,\, p:X\to N,\, R:X\looparrowright X,\, z:N\to X)$. Therefore we can apply replacement. It gives a context $\Theta_0:=[X_0(\to N),\, p_0:X_0\to N\times N,\, R_0:X_0\looparrowright X_0,\, z_0:N\times N\to X_0]$ where the maps $p_0, z_0$ and the relation $R_0$ live in the slice category. In particular, the fibre of the map $p_0:X_0\to N\times N$ over the point $(n,n)$ is the greatest fibre, i.e. the fibre over $n$ as in (*) above, of a context satisfying $Upow(n,\, p:X\to N,\, R:X\looparrowright X,\, z:N\to X)$. Therefore, pulling back along the diagonal map just means picking out all the sets $\beth_1,\ldots,\beth_n,\beth_{n+1},\ldots$ (infinitely many). All these sets are contained as subsets of $X_0$, so we can take their union.

Could someone please explain the proof of 7.6 in Shulman’s paper in terms of some examples such as the two above, to illustrate how the proof works? The explanation does not have to be elaborate. I am most confused by the step which uses “Set is a stack for its regular topology”.

I was recently looking for references on the horizontal categorification of the notion of a monad, maybe called something like a 'monadoid'. I was surprised that I couldn't find anything on the nLab or elsewhere about this, given that monads are so well-documented and it is clear what the right definition of a monadoid should be. (Monads are internal monoids in some endofunctor category End(C), equivalently they're End(C)-enriched categories with one object, and then you can immediately generalize this and define monadoids to be End(C)-enriched categories.) I've quickly put together a document listing the right definitions for monadoids and some related concepts like their algebras (algebroids?) and the counterpart of the Kleisli category. I've also written down some simple examples and how they might be interpreted as computational side effects with a twist. So my question is twofold:

1) Has anyone seen anything like this before? Is there already a reference somewhere on nlab for what I've been calling monadoids?

2) If not, would nlab be a good place to upload these definitions/make an nlab entry for them?

I would be a first-time contributor here on the nLab, so I am hesitant to just make a page without first checking that the concept is relevant and has not already been documented!

Thanks!

I’m looking for advice on what to call classes of morphisms $M$ whose orthogonality and injectivity classes coincide: $M$ has this property if every $M$-injective object $X$ is also $M$-orthogonal. Or has somebody perhaps already given a name to this property somewhere?

The concept is relevant because

a) the small object argument for sets of morphisms with this property computes the reflection into the orthogonal subcategory, and

b) every class of morphisms can be extended to a class with this property and the same orthogonality class (add $B \amalg_A B \rightarrow B$ for each map $A \rightarrow B$ in the original class).
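For the record, the reason the codiagonals in (b) force uniqueness (the standard argument, in my wording): if $X$ is injective with respect to $\nabla: B \amalg_A B \to B$, then for any two extensions $g_1, g_2: B \to X$ of a map $f: A \to X$ along $A \to B$,

```latex
[g_1, g_2]\colon B \amalg_A B \longrightarrow X
\quad\text{extends along}\quad
\nabla\colon B \amalg_A B \longrightarrow B
\quad\text{to some}\quad g\colon B \longrightarrow X,
```

and composing $g$ with the two coprojections gives $g_1 = g = g_2$, so $X$ is $M$-orthogonal.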

Thanks!

I am dealing with McLarty’s paper on axiomatising a category of categories here

I am confused by his description of Theorem 27. His description starts on page 1251, and my confusion is about the paragraph immediately before the statement of Theorem 27 on page 1252. I am thrown by the following sentences:

Now consider any internal small category in which every “endomorphism” is an “identity arrow”, and suppose it has a skeleton. The skeleton is of the sort described in the last paragraph, so let $B$ be the corresponding actual category. Then an actual category corresponding to the pattern appears (in a number of ways) as a full subcategory of $B \times (Cl^{Ob})$.

I have the following concrete questions:

What does “skeleton” refer to, and why does it have a corresponding category? From the construction in the paragraph before, I guess the skeleton is something whose corresponding actual category is the equalizer, but what is that thing?

Regarding:

Then an actual category corresponding to the pattern appears (in a number of ways) as a full subcategory of $B \times (Cl^{Ob})$.

What is meant by “the pattern appears”?

- In “then $Ar$, $Ob$ corresponds to an actual category”, is the witness of the existence $B$ itself, or some full subcategory of $B\times Cl^{Ob}$? If the latter, is there any description of such a full subcategory?

Sorry if this turns out to be my own confusion, but I am really lost now. I hope there is a rearranged, clearer version of this proof. Thank you for any hint or comment which may help me understand it.

I’m getting a SEC_ERROR_EXPIRED_CERTIFICATE error when I try to access the n-Forum (on multiple devices and browsers). Looks like the certificate needs to be renewed!

Of course it might be that no one sees this for the same reason.

I read the surreal number page on the nLab, and it is written there:

It is a routine matter to rewrite the definitions in a more structural fashion, say within ETCS. In outline, a game is a rooted tree with no infinite branches and equipped with a labeling of each edge by a symbol $L$ or $R$.

and

In this set-up, the condition of no infinite branches plays the role of a foundation axiom, and inductive arguments tend to devolve upon the use of the principle of dependent choice.

But it is not the case that every surreal number is a tree of finite depth, and many interesting results come after depth $\omega$. Therefore, it seems to me that the tree approach cannot formulate all the surreal numbers (unless we use something like a “transfinite tree”, whose definition in SEAR is not obvious to me either), but only the surreals which are created on a day $n\in\omega$. (Thanks for correcting me if I am already wrong here.)

I think there remains hope to capture “all of the surreals”, using the approach here

But again, as there is no universally approved notion of ordinal in structural set theory, we can only formulate ordinals with some base $A$, in the sense that an $A$-ordinal is an equivalence class of well-orders on $A$. I think this gives hope to formulate $A$-based surreals as functions from $A$-ordinals.

Is there any suggested approach to defining the whole collection of surreals in a structural set theory like SEAR or ETCS?

Another question: if we can capture the surreals in SEAR perfectly, then we should be able to prove that each ordinal can be represented as a surreal. Since ordinals cannot be captured perfectly, “each ordinal can be represented as a surreal” will need an alternative formulation for surreals in SEAR. What would that statement be? Does it sound plausible to use “ordinals with some base” instead?

Hi all, I am dealing with CCAF by McLarty, which can be found here.

I am confused by the definition of $Cl$ on page 1248, the construction is:

Coequalize $n \circ_P m$ with $m \circ 0$, then do the same with $m$ and $n$ reversed. Call this coequalizer $Cl$.

What is **this coequalizer $Cl$**? There are two coequalizers here; does **this coequalizer** actually mean the pushout of the coequalizer of $n \circ_P m$ with $m \circ 0$ and the coequalizer of $m \circ_P n$ with $n \circ 0$?

I guess the author wants to start with the category with exactly two objects and a pair of arrows between them which are not necessarily inverses of each other, call them $m$ and $n$; then any arrow of such a category will be a combination of $m$ and $n$, and taking the coequalizer quotients out the relations $m\circ n = id$ and $n\circ m = id$. But then the category will just be the one containing precisely two objects and a pair of isomorphisms between them, so there would be no point in mentioning “every arrow parallel to $m$” in the proof.

Could someone please explain what the quoted sentence means here, or what this category looks like? Or give some hint for reading this proof? Since I am very unsure about his construction, I had a hard time guessing what he means after this sentence.

Hi, all.

I am confused by the following sentence from the document here

On page 1249, the paper says:

By Theorem 3, $2^{op}\cong 2$

Theorem 3 says “any generator with exactly three endomorphisms is isomorphic to $2$”, therefore, to show $2^{op}\cong 2$, there are two things to check: 1. $2^{op}$ is a generator, 2. there are exactly three functors $2^{op} \to 2^{op}$.

I agree that the axioms $(A^{op})^{op} = A$ and $(F^{op})^{op} = F$ make sure that there is a one-to-one correspondence from functors $A \to B$ to functors $A^{op} \to B^{op}$. My confusion is about the first point: why is $2^{op}$ a generator? That is, given functors $f,g:A\to B$ such that $f \neq g$, how can we come up with a functor $a:2^{op} \to A$ such that $f\circ a \neq g \circ a$?

The author does not say anything like “we have an isomorphism $2\to 2^{op}$”, and the paper does not assume any “extensionality of categories” (i.e., we are not able to say that two categories are equal iff they have an equal collection of objects and an equal collection of arrows). Therefore, the assumption $(A^{op})^{op} = A$ does not really say anything about the objects and arrows of $A^{op}$, since the operation is applied to $A$, not to its objects (which are **defined** to be functors $1\to A$) or its arrows (which are **defined** to be functors $2\to A$). Also note that this sentence is stated before assuming $1 = 1^{op}$ and $2 = 2^{op}$; instead, the workflow seems to be: the axioms suffice to prove $2\cong 2^{op}$, and since assuming $2 = 2^{op}$ does not break consistency, we can add this assumption.

Or is the author being sloppy here with this “minor detail”? If so, I think I am in trouble. Any hint on how to fix this point?

Thanks to any attempt to help!

Hi, all. I am reading through McLarty’s axiomatization of the category of categories, and here is a theorem that I cannot understand. It might be me being silly or having forgotten some basics, but it has annoyed me for several hours.

The paper “Axiomatizing category of categories” is here

On page 1245, Theorem 5 states:

Let $f$ and $g$ be composable arrows of any category $A$. If $g$ is an identity arrow then $g\circ_A f = f$

And the proof says:

Theorem 4 plus chasing through the pushout for $g \circ_A f$.

I think that to be able to use Theorem 4, we need the unique arrow induced by the pair $f,g$ to factor through the one induced by $id_2$ and $0 \circ !_2$. If such a factorization does not exist, I do not see any way of applying Theorem 4.

I think there is some condition under which the pushout arrow induced by $f \circ a$ and $g \circ b$ factors through the pushout arrow induced by $a$ and $b$, and the case described in Theorem 5 of the paper satisfies that condition. If there is such a condition, what is it? (I agree with the intuition that if two pairs of things have the same shape, then their pushouts should share a “same fragment”; my trouble is figuring out how this formally happens.)

Thanks for any hint and attempt to help!

Recently, there was a discussion on the page about Serre subcategories in the Stacks Project. I usually follow both the nLab and the Stacks Project for these things, but it seems that the definitions of Serre subcategories differ between the two. I just want to clarify whether both definitions are correct or there is an error in one of them. Am I missing something? The discussion is about the use of “if” or “iff” conditions under the arrows involving $M,M',M''$ (or $A,B,C$) on the pages Serre subcategory and https://stacks.math.columbia.edu/tag/02MO (there is even more discussion going on in https://stacks.math.columbia.edu/tag/02MN).

Hi, all.

I am reading the SEAR nlab page here: https://ncatlab.org/nlab/show/SEAR

Regarding Theorem 2.8, it says:

Theorem 2.8. For any sets A and B, $A\times B$ is a product of A and B in the category Set, and a coproduct in the category Rel.

and the proof says: just as in naive set theory.

But the coproduct in Rel in naive set theory is the disjoint union! I am confused about how $A\times B$ can serve as the coproduct in Rel. I know that the coproduct/product in Rel is the disjoint union, but how can the cartesian product coincide with the disjoint union? Given relations $f:A\to X$ and $g:B\to X$, how can we define a relation $A \times B \to X$ so that it is a coproduct?

I assume the inclusion relations $A\to A \times B$ and $B\to A\times B$ are: $(a,(a_1,b_1))$ holds iff $a = a_1$, and $(b,(a_1,b_1))$ holds iff $b = b_1$. I may be making a mistake here as well, but I still wonder what else they could be.

Thank you for any explanation! (and sorry if it turns out to be a stupid question…)

In computer science, a Monad is always an Applicative Functor. Applicative is regarded as an intermediate structure between functors and monads.

And Applicative is the programming equivalent of lax monoidal functors with tensorial strength in category theory. So a Monad in programming is always a lax monoidal functor with tensorial strength.

Exactly how is this possible? What I know about Monads in programming is that they are equivalent to monads on a “type category” which has “data types” as objects and “pure functions” as morphisms. Since data types can always be seen as sets of values, the type category is like a subcategory of Set; actually an arbitrary subcategory, because languages can support whatever types they want.

I learned that every monad on Set is a strong monad, which has a tensorial strength by definition. I tried to prove that they eventually form a monoidal functor, but I failed. I could build two different candidate coherence maps from the given tensorial strength and costrength, but couldn’t prove that they satisfy the commutation condition for coherence maps.

The only thing I could find was that strong monads are monoidal monads (and thus monoidal functors) if they are commutative. However, monads are not commutative in general, even on Set. There are many monads on Set which are not commutative, including the free monoid monad. (I think it is equivalent to List in programming.)
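The two candidate coherence maps mentioned above can be sketched concretely for the list (free monoid) monad, modelled in plain Python (all names here are mine, not from any library). For lists the two maps return the same elements in different orders, which is exactly the failure of commutativity:

```python
# The list monad in plain Python: unit and bind.
def unit(x):
    """Monad unit: wrap a value in a singleton list."""
    return [x]

def bind(m, k):
    """Monad bind: map k over m and flatten."""
    return [y for x in m for y in k(x)]

def strength(a, mb):
    """Tensorial strength: (a, m b) -> m (a, b)."""
    return bind(mb, lambda b: unit((a, b)))

def costrength(ma, b):
    """Costrength: (m a, b) -> m (a, b)."""
    return bind(ma, lambda a: unit((a, b)))

# The two candidate coherence maps m a x m b -> m (a x b):
def phi_left(ma, mb):
    """Sequence the left factor first, then apply strength."""
    return bind(ma, lambda a: strength(a, mb))

def phi_right(ma, mb):
    """Sequence the right factor first, then apply costrength."""
    return bind(mb, lambda b: costrength(ma, b))

print(phi_left([1, 2], ["x", "y"]))   # [(1, 'x'), (1, 'y'), (2, 'x'), (2, 'y')]
print(phi_right([1, 2], ["x", "y"]))  # [(1, 'x'), (2, 'x'), (1, 'y'), (2, 'y')]
```

Both maps produce the same pairs, but in different orders; for a commutative monad they would agree on the nose, and this disagreement is why a strong monad only gives a *choice* of lax monoidal structure rather than a canonical one.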

Just proving that every monad on Set is a monoidal functor may not be that hard. I rather want to know the more fundamental, or generalizable, reason behind it (if one exists).

If I’m not wrong, it seems that every monad on Set is indeed a monoidal functor even when it is not commutative. What makes this happen? What makes every monad, or strong monad, on a certain category always a monoidal functor, even when it is not commutative?

I am not a math major and still new to this area. I just got curious about some fundamental things behind tools I have used. I apologize if I said something stupid here.

I am researching topos theory and realizability, and I was wondering if anyone has seen, or has a reference to, a topos/tripos formalization in Coq. What I am looking for specifically is the tripos-to-topos construction in Coq.

I can see that it is possible to take as objects of such a category all the possible board positions, and as morphisms a kind of "subset" relationship between the different board positions. For example, if a specific board position a can be arrived at after a specific board position b, then define a morphism b -> a. (This way we satisfy the associativity required for a category, and we just get a semi-lattice.) However, I am really curious whether there is another way of looking at the game categorically.

(New to the forum here, hello everyone! I hope this is the right place to talk about such things. If not, then sorry for the "spam"!)

Let $p:X\rightarrow Y$ be a covering and $f:X \rightarrow Z$ be a map. What is the correct definition of the space of factorizations of $f$ through $Y$ up to homotopy? I’d like a definition not as a space of maps $Y \rightarrow Z$ that make the diagram homotopy commutative, but rather something like a space of homotopies from $f$ to a map that factors on the nose. It’s probably super elementary, but for some reason it gets me confused.

Basically this is the topological picture generalizing the following: given an inclusion of groups $H \subset G$ and a morphism $H \rightarrow K$, classify the lifts $G\rightarrow K$ along this map (either on the nose, or up to conjugation, depending on whether we’re taking (un)pointed spaces and maps).

Now suppose that the normal closure of $H$ in $G$ is the whole of $G$. Say, to simplify, there is another subgroup $H'$ of $G$ which is conjugate to $H$ and such that $H,H'$ generate $G$. Now assume I also have a morphism $H' \rightarrow K$ (modulo details, this is typically the sort of thing I’d get from the topological picture, e.g. if I have a morphism into $K$ out of the fundamental groupoid of $X$ with respect to the fiber over a chosen basepoint in $Y$, say if I know all those points are sent to the same point in $Z$). Now the question of whether these morphisms extend to a morphism from $G$ becomes a condition, not extra structure (i.e. we have a surjective map from the free product $H*H'$ to $G$ and a map from $H*H'$ to $K$, so now the question is whether it factors through a quotient, rather than extends along an inclusion). In particular, if it does factor through, it does so in a unique way.

What is the correct way to model this topologically, or groupoidally? In what setting can I use the fact that the normal closure of $H$ is $G$ to get some sort of unicity of factorization of some map? I guess what I’m really asking is whether there is a name for a functor $F:C\rightarrow D$ such that $End(d)$ is generated by the images of the $End(c)$ for all $c$ such that $F(c)=d$, or something like that…

In Nikolaus and Schweigert's paper "Equivariance in higher geometry", they define $\tau$-prestacks in Definition 2.12, page 10, via a "fully faithful functor of bicategories". They explain in the definition, without giving any reference, that a functor of bicategories is called fully faithful if all the functors on Hom categories are equivalences of categories.

I think this definition of fully faithful bifunctor is reasonable. I saw the definition of a fully faithful 2-functor here on the nLab: https://ncatlab.org/nlab/show/full+sub-2-category, which is defined in the same way as in Nikolaus and Schweigert's paper. However, on this page there is no reference either.

So my question is the one stated at the beginning: is there a reference for this definition?

We know FinSet is a topos, but it is not a model of ETCS since it does not have a natural number object.

Is there any smart way of defining products/coproducts/exponentials on the subcategory of **Sets** whose objects are all the sets of natural numbers (this includes $\mathbb N$ itself, so it is like adjoining an NNO to the category of finite sets), so that it becomes a model of ETCS?

If it is not possible, I simply want a smaller model of ETCS which is not the category **Sets**.

If such a model does not exist, is there any proof relevant to the fact that there is no subtopos of **Sets** which is a model of ETCS?

This question comes to me because I am interested in organising ETCS in a formal language. I found this material, called FOLDS (https://www.math.mcgill.ca/makkai/folds/foldsinpdf/FOLDS.pdf), where it is claimed that the topos axioms can be stated in FOLDS in the style of the axioms on page 19. However, when we have both an initial object 0 and a terminal object 1, the sort of arrows from 1 to 0 is empty, and it does not seem to me that FOLDS allows empty sorts.

On page 20 of the above link, it is written:

When X is a sort, and x:X, that is, x=〈2,X,a〉 for some a, x is called a variable of sort X

And later it is written that we can use the natural numbers to serve as the “a”. Therefore, once we have a sort, we can form infinitely many variables.

If we require every sort to be non-empty, then we have a soundness issue here: it can easily be proved that there does not exist any arrow 0 -> 1, for otherwise we would have $0\cong 1$.

So is this my misunderstanding? How do we treat empty sorts in FOLDS?

I scribbled some thoughts on the Yoneda lemma here. Feedback welcome.
