Although Nuprl’s logic (henceforth simply “Nuprl”; the proof assistant itself is not relevant) is not based on category theory, you can still do the usual trick of considering it as a category by looking at its closed types and unary functions. I’ve been thinking a little about Nuprl’s structure as a category. It’s a variant of extensional type theory, and includes dependent function types, unit, nullary and binary coproducts, integers, and quotients. So I figure that makes it an LCCC with a NNO and finite limits and colimits. An interesting difference from many other extensional, set-level, constructive foundations is that it’s not exact.
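The "usual trick" alluded to here can be spelled out. A sketch, in my own notation (not Nuprl's):

```latex
% Objects:    closed types  A, B, ...
% Morphisms:  a map A -> B is a closed term  f : A -> B,
%             taken up to the theory's (extensional) equality:
%
%   \mathrm{Hom}(A, B) \;:=\; \{\, f \mid f : A \to B \,\} \,/\, (f = g : A \to B)
%
% Identity and composition are the identity and composite functions.
% Dependent function types then make this locally cartesian closed, and
% the integer (or natural number) type provides an NNO.
```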
So when reasoning internally, it matters whether you take propositions to be arbitrary subsingletons, or regular subsingletons. The Nuprl preference definitely seems to be essentially to regard the regular subsingletons as the propositions. Regular images are always available, and they’re well behaved from a proof assistant implementation standpoint. (Not-necessarily-regular images are always available too, but seem rarely used.)
I’m trying to get a sense of how (un)common it is to reason using just the regular subsingletons from a categorical perspective. And whether it’s considered a good/bad/interesting idea.
More info: Nuprl is based on a meaning explanation (but not that one), and is a category of PERs. But the way universes are added to the computation system makes them behave like plain sets within a category of assemblies. (They are uniform. Their subobjects are also uniform.)
I’ve mainly encountered this in the context of quasitoposes. However, the resulting lack of PUC makes the resulting internal logic rather impractical and unfamiliar to most mathematicians, even constructive ones; and so the internal logic of quasitoposes is used much less than that of toposes. I once tried asking Bob Harper how NuPRL deals with the lack of PUC, but I think I didn’t end up with an answer that really helped me.
Thanks for telling me you think this is uncommon. And thanks for the link to your MathOverflow question.
It seemed odd to me to say that lack of PUC makes the internal logic impractical and unfamiliar. PUC is just telling you about functions, not the bare logic. But then I remembered that in category theory, “internal logic” means something very specific, and is all based on morphisms, which are the functions internally.
Maybe it helps to think of the regular subsingletons as an internal reflection of the external logic? I’m pretty sure that’s the case with assemblies and PERs. If not, my original question was mistaken. Perhaps lack of PUC is a good reason not to think of arbitrary subsingletons as propositions.
I don’t know the real reason Nuprl people didn’t/don’t add PUC. It didn’t have it from the start, because it was based on Martin-Löf’s meaning explanation. Also, programmers are not in the habit of using unique choice. They couldn’t be, since most programming languages can’t even express the principle.
I personally wouldn’t want PUC added to Nuprl unless it’s done in a careful way to retain the current regular subobjects, and their implementation. Like how assemblies are a full subcategory of the realizability topos.
An answer to your MO question reminded me that Bishop’s foundation had an intensional layer and an extensional layer. This seems to come up a lot. nLab talks about presets vs. sets. With HoTT we consider two-level systems. I think Nuprl is most appropriate as an intensional layer. If this surprises you, it’s probably because “extensional type theory” was that bad a name.
I’m confused. I followed links to Andrej’s blog where he says PUC is true for “numbered sets” (which I think is equivalent to PERs over number realizability). Thinking about it, this seems right, if you consistently use props-as-subsingletons, but not if you use props-as-regular-subsingletons. In a later comment, he says “the usual interpretation of first-order logic in a category validates unique choice”.
It sounds like the version of PUC that's actually sometimes missing is the "epic monos are isos" one, right? Is that the one you're talking about?
Oh I see. On the quasitopos page, it says the internal logic is usually taken to be the strong subobjects, which is the reason function comprehension fails. I guess that explains it.
Right, if propositions are strong (i.e. regular) subobjects, then PUC fails, whereas if they are arbitrary subobjects it holds. I think something of the same sort is true in NuPRL; you can choose what kind of subobjects you want to consider “propositions”, and thereby get or not get PUC.
IMHO the best way to deal with these things is to take arbitrary subobjects as the propositions and then regard the regular subobjects as the $\sharp$-modal ones for an appropriate modality $\sharp$. That way the fundamental logic satisfies PUC, but you can also express statements about the logic of regular subobjects if you want to.
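Concretely, one standard way to present such a modality on propositions (my notation; a sketch, not a claim about any particular system):

```latex
% \sharp is an idempotent monadic modality on propositions:
%
%   \eta_P : P \to \sharp P                  (unit)
%   \sharp \sharp P \;\simeq\; \sharp P      (idempotence)
%
% A proposition P is \sharp-modal when \eta_P is an equivalence, and the
% intended correspondence is
%
%   \{\text{regular subobjects of } X\} \;\simeq\; \{\sharp\text{-modal subobjects of } X\}.
%
% The ambient logic of arbitrary subobjects then satisfies PUC, while
% statements about regular subobjects are recovered as statements about
% \sharp-modal propositions.
```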
I don’t know the real reason Nuprl people didn’t/don’t add PUC…
I personally wouldn’t want PUC added to Nuprl unless it’s done in a careful way…
What I meant here was making Nuprl into a pretopos.
I believe Jon Sterling once worked on the categorical structure of NuPrl. Quasitoposes also show up in the type theory of Coq; see Arnaud Spiwack’s thesis.
I really like Arnaud’s argument against adopting PUC. It starts on page 22 (or 28) of his thesis at:
Functions which are both injective and surjective are called bijections. Hence we can state the principle of unique choice as follows: unique choice: bijections have an inverse.
I’m not sure it’s OK to copy too much of the text, but you can download the PDF and find what I’m talking about.
The gist, as I understand it, is that you want computations to be efficient, but proofs shouldn’t have to be (that is, in terms of their realizers), so a principle conflating them should not be used.
I wonder what realizability topos fans would say to that. Shoot, I wonder what Martin-Löf would say to that. But I completely agree.
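For reference, the principle under discussion can also be stated as "every functional relation determines a function". A hedged sketch in Lean 4, whose proof-irrelevant `Prop` plays a role analogous to regular subobjects here, so that extracting the function requires the choice axiom:

```lean
-- Unique choice: a functional relation yields a function.
-- With R valued in Lean's proof-irrelevant Prop, producing f : A → B
-- from h is not possible without Classical.choice -- an illustration
-- of PUC failing for "regular-subobject-like" propositions.
theorem unique_choice {A B : Type} (R : A → B → Prop)
    (h : ∀ x, ∃! y, R x y) : ∃ f : A → B, ∀ x, R x (f x) :=
  ⟨fun x => Classical.choose (h x),
   fun x => (Classical.choose_spec (h x)).1⟩
```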
Here is a link that works.
I think that that might be a reasonable argument if one were trying to develop an “effective mathematics” in which all definable functions were efficient. (Although another totally reasonable approach to such a thing would be to require proofs to also be efficient, thereby maintaining PUC. He claims that requiring proofs to be efficient would prevent automation, but that wouldn’t be the case if the automation were correct for a formal system in which all proofs were efficient.) But while developments like soft linear logic hold out some hope that such an effective mathematics might exist, MLTT is not it, since it is trivial therein to define functions with extremely high complexity. So I don’t see how that is an argument against PUC for MLTT.
The most I can see that argument saying is that when trying to manually control the complexity of the functions you define, it’s handy to have an “abstraction barrier” behind which you can put proofs of arbitrarily high complexity without worrying that they might pollute your functions. As far as I can see, this barrier doesn’t actually mean anything formally (since the ambient theory is strong enough to allow you to define unfeasible functions), it’s just a “tag” with a user-defined significance. Which is totally reasonable, but it makes more sense to me to include “both worlds”, where the naturally defined propositions inside MLTT satisfy PUC and the tag is a modality. This approach has the additional benefit that a (higher) modality can also be applied to types that are not propositions, allowing you to mark not just proofs but also functions as places where you’re not going to worry about complexity.
Efficiency in programming is a multi-faceted thing. In the kinds of languages I've seen where everything definable is “efficient”, efficiency is taken to mean membership in some well-behaved complexity class, like polynomial time. This is too coarse for algorithm design, which tends to use big-O-style asymptotic analysis. But in practice even big-O is just a ballpark estimate of efficiency: it's definitely not OK to have a program that's 100 times slower than it should be, although that would not register in big-O.
So I don’t believe there is any language where everything definable is as efficient as it needs to be, for practical programming. Also, algorithm design traditionally relies on Turing completeness, so ironically, I suspect that intrinsically efficient languages don’t have enough efficient programs. (I’m almost certain they don’t, in theory. I mean in practice.) I don’t think Arnaud meant efficiency in some precise, theoretical sense like that; he meant it the way a programmer does. So your second paragraph is the one I think is on topic.
So we’re talking about the difference between the functions, and the things that are functional relations according to regular subobject logic. Since arbitrary subobject logic evidently satisfies PUC, this is also the difference between regular subobject “functional relations” and arbitrary subobject “functional relations”. I’ll call the former just “functional relations”, and the latter “functional programs”. So the PUC missing from Nuprl, and Arnaud’s setup, is for regular subobjects, and would collapse the distinction between functional relations and programs.
There is definitely a formal difference between them, in general. If you assume excluded middle restricted to regular subobjects, you get uncomputable functional relations, but no uncomputable functional programs. Maybe that’s not what you had in mind, but it’s completely reasonable to use this sort of system to reason classically about programs. (A classical “program logic”.) I understood his argument as promoting, even in the intuitionistic case, that you manually manage efficiency of functional programs and not functional relations.
I don’t know what you mean by “the naturally defined propositions inside MLTT”, and including “both worlds”.
The naturally defined propositions are the subobjects, in HoTT they are sometimes called h-propositions. By both worlds I meant that you have a logic that can talk about both subobjects and regular subobjects and use either or both as appropriate or even mix them, rather than forcing yourself to choose one of them as “the logic” and discard the other. This is naturally achieved by adding a “regular-subobject reflection” modality to the logic of arbitrary subobjects, and in many cases such a modality also extends to a higher modality on all types (not just propositions).
I’m not sure we’re on the same page. In the kind of system we’re talking about, you can already do all that. It’s a type system. Types (families) are generally not subsingletons (subobjects). But you can quotient a type down to a subsingleton. That’s propositional truncation. In these systems, you can also take a “subset” (equalizer!) of unit to get a regular subsingleton. This seems like the “regular-subobject reflection” you want. In Nuprl, subset is a primitive. In Arnaud’s setup, it uses Coq’s Prop-based subset. These are two different ways to enforce computational irrelevance of the proofs. Both kinds of truncation are idempotent monadic modalities.
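In symbols, the two ways of squashing a type down to a proposition being contrasted here (my notation):

```latex
% Two squashings of a type A (notation mine):
%
%   \|A\|                   propositional truncation: the quotient of A by
%                           the total relation -- an arbitrary subsingleton
%   \{\, \_ : 1 \mid A \,\} a subset/equalizer of the unit type -- a regular
%                           subsingleton, with the proof of A made
%                           computationally irrelevant
%
% Both are idempotent monadic modalities. Since the subset type is a
% subsingleton, the evident map  A \to \{\,\_ : 1 \mid A\,\}  factors
% through \|A\|, reflecting arbitrary subsingletons into regular ones.
```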
Yes – my point is just that we should treat it like a modality, rather than declaring that only the modal propositions are to be used in “logic” and that therefore PUC “fails”, when in fact PUC is true for the non-modal fragment of the logic.
Hi all, the lack of unique choice and the lack of effective quotients in Nuprl are a huge source of pain for people who wanted to use it. I don’t think anyone really ever engaged with the unique choice issue, but the lack of effectivity for quotients was so devastating that Nuprl users generally try to avoid quotienting anything ever (somewhat ironic, considering that Nuprl was for many years one of the few implementations of type theory that boasted quotient types). Zooming out, my understanding (I rely on Mike to correct it!) is that this kind of situation is typical of working in either categories of PERs or categories of assemblies, and that the realizability construction corrects it — for instance, by redefining the notion of a function in terms of its graph.
To type theorists like myself who had grown up in the tradition of Nuprl-style realizability, one of the big selling points of HoTT/UF in contrast to what we were used to was to obtain a constructive and computational account of unique choice and effective quotients, which didn’t involve getting utterly owned by manually working inside an exact completion. We haven’t looked back…
Hi Jon,
It seems like there’s a single cause in PERs and assemblies for both of: quotients generally aren’t effective (not exact), and not all monos are regular (not balanced). It’s that equality proofs are computationally irrelevant. The “fix” in general is exact completion, as you seem to know.
Classically, realizability topoi are equivalent to exact completions of categories of (partitioned or arbitrary) assemblies. The impression I’ve gotten is that without choice, the traditional construction of a realizability topos may not be exact. So I don’t think that style of realizability construction (realizing truth rather than objects, and encoding functions as relations) is the actual essence of the solution.
As Mike confirmed in #5, the arbitrary-subobjects internal logic already satisfies unique choice, so its failure in the regular-subobjects logic (in assemblies-like situations) is just a symptom of not being balanced. What’s really missing is exactness, not unique choice.
It seems like there’s a cultural divide on whether the use of computationally-relevant proofs is actually a good idea. It’s definitely interesting, and was a key motivation for MLTT. But like Arnaud was saying, for programs, efficiency matters, and for proofs, it oughtn’t. At least not for the same sense of “efficiency”. I get the impression this is a programmers vs. constructive mathematicians disagreement. Do you reason about programs, or reason using programs? It seems like Arnaud and I are “programmers”, and you and Mike are “mathematicians”.
If your equivalence relations are computationally irrelevant, Nuprl’s quotients are already effective. So if you keep your programs and proofs straight, Nuprl has no serious problems that I know of. Computational relevance of proofs is reflected in the difference between regular subobjects, which are irrelevant, and others, which can be relevant. With exact completion, you sweep this real and important difference under the rug. That’s fine, as a high-level approach. But I appreciate this low-level aspect of Nuprl. Nuprl encourages you to care how your computations are implemented. Programming via constructive reasoning is a fascinating paradigm, but it’s too avant-garde for me. And anyway, that’s not what Nuprl’s good for.
So to you type theorists who’ve moved to HoTT and aren’t looking back, good for you; you’re probably getting closer to what you want. For other type theorists, it’s heading in the wrong direction. Nuprl is still king for combining dependent types and program logics.
BTW, I am also a programmer ;-) I have a couple followup questions and comments:
I have also heard about the difficulty with choice in the ex-reg completion, but I don’t know the specifics. Does anyone know if there is an alternative version which can be made constructive?
Indeed, if your equivalence relation is itself computationally irrelevant, it is ok—the issue is that in practice, this is not usually what happens. A lot of these difficulties are chronicled in a paper from Nogin a long time ago, and he came up with some practical ways to mitigate them, but I don’t think the issue is resolved by just “keeping things straight”.
I am quite a bit less pessimistic about combining dependent types and programs than to say that Nuprl is still the king — any king, for instance, whose utility relies on the odious “pointwise functionality” construct ought to be toppled. I am in very strong agreement about the importance of (strict) proof irrelevance (as opposed to just “homotopy-irrelevance”) but I note that this is different from Nuprl’s computational irrelevance.
Because we still care about programming, but want a system that is both syntactically and semantically more well-behaved, we have been working to port ideas from HoTT to lower dimensions in a way that is compatible with the (perennially hypothetical) use-cases of Nuprl, if not exactly the spirit of Nuprl. Our first contribution in that area is the XTT type theory, a version of cubical type theory with judgmental UIP: Cubical Syntax for Reflection-Free Extensional Equality. We still have to investigate things like propositions and quotient types, but I think we can do something that is quite well-behaved and efficient to program with / extract.
I think the issue with PERs/assemblies/NuPRL is best expressed by saying that you can’t have both unique choice and effective quotients with the same logic. If you use arbitrary subobjects, you get PUC but not exactness, while if you use regular subobjects you get effective quotients but not PUC.
I feel in this discussion a bit of the sense of the dichotomy between nice objects and nice categories. Years and years of experience in category theory and related fields of mathematics teach us that nice categories are usually the way to go. But I’ve given up hope of ever convincing Matt that category theory has anything useful to do for him. (-:O
Mike, thanks for that remark about both unique choice and effective quotients simultaneously, it really helps and clarifies! :-)
And btw, I hope this is not too harsh a thing for me to say, but I got convinced about the importance of (1) having good categorical tools, and (2) working in nice categories, just about the moment that I wanted to start to use type theory to actually do something, instead of just theorizing about doing that and stroking my beard – and I am saying this both as a programmer and as a “mathematician” (maybe this label should be reserved for others, since I feel more like a dilettante).

I note that serious attempts to apply computational type theory to programming have been ironically stillborn in contrast to the massive body of work applying type theory to programming in the context of Idris/Agda/Coq (in spite of the obvious shortcomings of these systems!) — it is one thing to be “meant for programming” and another to be “used for programming”. It is all well and good to study type theory because it is intrinsically beautiful, and maybe doing that does not lead one into contradiction with the ideological tenets of the Nuprl doctrine, but the type theoretic tradition draws its strength not from its intrinsic beauty but from its (often unpredictable) applications — from the fact that it is useful to a diverse spread of scientists.
Mike, I like categories with nice morphisms. ;o)
Jon, I wonder: does the lack of effective quotients also bother you when you write ML? I had imagined one does not even think that way, when thinking like a programmer. By the way, I was mainly trying to emphasize the distinction between programming and constructive math, not to label people. I just pointed out the side of this disagreement people fell on. I know you program. Probably more than me, lately.
Going back to #18 and 19, can you give any details or references for this issue with choice in exact completions / realizability toposes? I’m not bringing anything to mind right now.
Re #19:
Re 1 (and #23): I don’t know much about it. But my impression was that the problem is specifically with getting a subobject classifier. Why do all subsingletons have small codes? But maybe I’m confused, and there’s no problem.
Re 2: Do you remember the name of the paper?
Re 3: I can’t follow the grammar in the beginning. It looks like you don’t like Nuprl because of some peculiarity of its semantics? By “pointwise functionality”, do you mean that thing in the meaning of the hypothetical judgment forms? I also found that weird. I came up with something different.
I’ll take a look at this XTT. Should I take it as an example of what you mean by “(strict) proof irrelevance”? I’m skeptical there are three different kinds of irrelevance visible when working up to isomorphism. (“homotopy”, “strict”, “computational”)
Should I take it as an example of what you mean by “(strict) proof irrelevance”?
Another thing I’d like to know, which maybe wasn’t clear, is whether XTT is intended to be balanced. The original topic of this thread was how category theorists deal with reasoning in non-balanced, non-exact categories that are otherwise pretoposes.
Nuprl and Coq have a special class of propositions, which I referred to as computationally irrelevant. This was true of OTT as well. But what I’m focusing on is the implementation-independent fact that (once you have extensionality and quotients) these systems are non-balanced.
I’m skeptical there are three different kinds of irrelevance visible when working up to isomorphism. (“homotopy”, “strict”, “computational”)
Never mind, you didn’t claim that. Sorry.
Strict proof irrelevance has been added to both Coq and Agda.
Another thing I’d like to know, which maybe wasn’t clear, is whether XTT is intended to be balanced. The original topic of this thread was how category theorists deal with reasoning in non-balanced, non-exact categories that are otherwise pretoposes.
We haven’t checked this yet, but I think there is hope — if it ends up working, it will be for roughly the same reasons that sets in HoTT are balanced. But let me not talk more about it, since in order to even state this theorem (much less prove it), there are things that need to be added to XTT.
Regarding the discussion about the classicality of the fact that the effective topos is the ex/lex completion of the partitioned assemblies: AC is used in the proof that every partitioned assembly is projective, see Van Oosten’s book Prop 3.2.7. Basically a projective object of the effective topos is precisely a partitioned assembly X whose underlying set |X| is projective, and one needs AC to get rid of the latter condition.
There is a constructive version of this fact, however, which is in Hofstra’s Relative Completions. Van Oosten also writes briefly about this in section 3.2.4 of his book. The idea appears to be to complete PAss relative to Set instead of globally.
Re #26: From the cited paper: “Proof-irrelevance, the principle that any two proofs of the same proposition are equal…”
I don’t like that terminology. Why not call it “proof uniqueness”? Irrelevance means, or ought to have meant, something else. (Just a gripe. No technical complaints.)
Re #28: Thanks! That’s not nearly as bad as I thought.
Ah, ok, so the problem isn’t with the construction of realizability toposes via (say) triposes, it’s that they may not be ex/lex completions, because ex/lex completions have a lot of projective objects. That makes perfect sense; ex/lex completions are pretty special. I assume that they can still be obtained as an ex/reg completion of a suitable subcategory, however.
I thought maybe I should share more of my thoughts about exactness/balancedness, and why I think it’s not ideal for the strict/non-fibrant level of a proof assistant.
On the HoTT list, not so long ago, I shared my opinion that “subsumptive equality” is a very good thing, since it gives you equations that you can rewrite with without adding a coercion operation to your term. Definitional equality is subsumptive, but you can’t reason about it, so this avoids coercions only in a few lucky cases. I like type systems with a subsumptive equality type. Equality reflection is one way to get that. Probably the best way, since it seems like the alternatives (subsumptive equality type but not reflective equality) necessarily lack function extensionality.
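For concreteness, the equality reflection rule being referred to, in standard ETT notation:

```latex
% Equality reflection (extensional type theory):
%
%     \Gamma \vdash p : \mathrm{Id}_A(a, b)
%    ----------------------------------------
%     \Gamma \vdash a \equiv b : A
%
% The propositional equality is reflected into the judgmental one, so a
% proved equation can be used for rewriting without leaving a coercion
% in the term.
```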
Note that avoiding coercions is not about coherence. Coercive identity types with UIP have no coherence issues, they’re just a pain in the butt. Avoiding coercions is about convenience. But this is formal proofs we’re talking about; you need all the convenience you can get. In principle, avoiding coercions should also improve proof checking performance, since the terms will be smaller without coercions. (Or with fewer coercions, if you also have a coercive equality.)
What happens if we take the subsumptive equality type to mean equality of morphisms? It seems like there are three possibilities:
Well, so it seems like (1) is silly: you obviously just picked the wrong term language. (2) is not so bad, honestly. The HOL proof assistants don’t have canonicity, and are still used for program extraction. But then you have to wonder what’s the best fragment with canonicity. In other words, first find the best system with canonicity, then ask if it’s worth it to break canonicity for some reason (like a quotient effectivity axiom). (3) is Nuprl, basically.
So then it seems it’s not a good idea, mathematically, to consider subsumptive equality to be equality of morphisms. I’m completely OK with that! But I want a subsumptive equality type to be there anyway, and you can still analyze the more-intensional category where morphisms are defined to be terms up to subsumptive equality.
I started by saying that the strict layer shouldn’t be balanced. That’s because I want a subsumptive equality type, and I want to be in class (3) above. So if there’s also going to be a mathematical equality type, like to provide exactness, it looks like you need a two-level type system. Even if the mathematical equality still satisfies UIP!
Way back in #3, I insinuated that two-level type theory is basically Bishop’s distinction between an intensional layer (types and operations) and an extensional layer (sets and functions). In the not-so-unlikely event that Nuprl is deemed too intensional because it’s not exact, then please do consider it for the intensional layer. What extensionality it does have should be understood as technology to make the subsumptive equality richer than definitional equality. Mathematical equality can be richer still.
The real problem with NuPRL is not so much its non-exactness but the fact that it lacks general categorical semantics, being inextricably tied to a particular computational model.
If there was a variant of Nuprl that was a conservative extension of intrinsic extensional type theory, would that solve the problem?
Maybe. I’m a little dubious that such a thing could be called “a variant of NuPRL” however.
Actually though probably saying “the real problem” was too strong; I think Jon in #21 is describing a purely syntactic problem with NuPRL, independently of its lack of models.
If you take variants of Nuprl and ETT without universes, it’s not clear to me that Nuprl isn’t already a conservative extension. Do you know some incompatibility besides unusual types and funky type equality? (Which do not seem to affect conservativity for small types.)
Regarding #21, Nuprl is a proof assistant based on programming ideas. It’s not a conventional programming language. It has nothing that seriously competes with Coq/Agda/Idris’ program extraction feature. My interest in Nuprl is based on my belief that in the long run, its approach can get us to a more convenient and practical proof assistant than anything remotely like Coq/Agda.
I don’t know much about NuPRL, but my understanding is that it is based on a particular underlying operational semantics, consisting of PERs on some particular reduction system, and that everything that is true about that semantics is provable in NuPRL. In particular, this means that facts like strong Church’s thesis and Brouwer’s continuity principle are true in NuPRL, which they certainly are not in ETT.
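For reference, one common formulation of the two principles mentioned (standard notation, with $\varphi_e$ the $e$-th partial computable function; this is a sketch of the general statements, not a claim about Nuprl's formal rules):

```latex
% Strong Church's thesis:
%   \forall f : \mathbb{N} \to \mathbb{N}.\ \exists e.\ \forall n.\ \varphi_e(n) = f(n)
%
% Brouwer's continuity principle (one form): every
% F : (\mathbb{N} \to \mathbb{N}) \to \mathbb{N} is continuous, i.e.
%   \forall \alpha.\ \exists k.\ \forall \beta.\
%     (\beta =_k \alpha \;\Rightarrow\; F\beta = F\alpha)
% where  \beta =_k \alpha  means the sequences agree on the first k values.
```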
No. I wish I could point you to a nice small set of official rules, but I don’t think there is such a thing. The formal system is fairly similar to other dependent type systems, but with some definitely unusual aspects, not all of which are equally important.
You must be thinking of the syntactic PER semantics, which I think satisfies Church’s thesis and continuity. The formal system almost certainly doesn’t prove those. At least not without deliberately adding them, which they might’ve done. I think Jon knows more about that. And it’s definitely not the case that all the true semantic judgments are formally derivable, due to Gödel’s incompleteness.
The operational semantics is the reduction system. The PER semantics is a layer on top of that, which culminates in definitions of the semantic judgments. Aspects of this semantics are engineered to justify the desired formal rules, which correspond to selected facts about the semantic judgments.
OK, terminology:
Following (I think) Robert Harper (in lectures) “computational type theory” refers to a style of semantic type system along the lines of realizability, syntactic logical relations, PER semantics, and meaning explanations. These systems are not formal systems, but they have types, objects, and judgments, so can be thought of as type systems. Often, the types and objects are syntax. (So the objects are terms.) Often, the judgments are not effectively axiomatizable.
“Nuprl-style” type systems are formal dependent refinement type systems with equality reflection and few/no type annotations, whose closed terms have values according to some computable operational semantics.
I’m emphasizing this partly to avoid confusion, since the way to justify a Nuprl-style system is a computational type system, but they’re not the same. But also, I’m primarily interested in Nuprl-style formal systems, whereas I get the impression Harper is primarily interested in computational type theory.
Well, that is different from what other people have told me about “NuPRL”, so I guess not everyone uses words the same way. Whatever it is that you want to talk about, if you can provide or point to a precise mathematical definition of it, we can continue the conversation; otherwise it seems we don’t have anything to talk about.
The formal system I’m really interested in is the one I’m working on. I will try putting up those draft rules, soonish. In the mean time, if you want, you can consider the rules in the Nuprl 5 manual, section A.3. They depend on the term language and maybe (I forget) evaluation semantics described earlier in appendix A, but not on the judgment semantics. They are an old version of the Nuprl rules. Multiple features have since been added.
Thanks for the link. Section A.2.2 defines the “precise meaning” of the judgments in terms of untyped reduction, which afaik is problematic for a categorical semantics. How is this related to the inference rules in section A.3? I generally think of inference rules as what defines (by induction) the “meaning” of the judgments.
Right, A.2.2 is a somewhat precise summary of the PER semantics. It’s pretty precise by programmer standards; probably not by mathematician standards. I’m saying to ignore A.2.2 and read A.3 as an inductive definition. The intent though is that A.3 is sound with respect to A.2.2.
The draft formal rules of my system are up at prototype CLF.
Can you save me some time by pointing out exactly how the inductive definition of A.3 differs from, say, MLTT+UIP, or whatever intrinsically typed system it is easiest to compare it to?
In your prototype CLF, to start with I have no idea what “$Comp$” would mean in categorical semantics.
Maybe we should continue this discussion at the discussion thread for prototype CLF, since it’s not about regular subsingletons anymore.