The pages apartness relation and antisubalgebra disagree about the definition of an antiideal: do we assume ¬(0∈A) or ∀p∈A,p#0? Presumably there is a similar question for antisubgroups, etc. In particular, the general universal-algebraic definition at antisubalgebra would give ¬(0∈A) as the definition (since 0 is a constant and ⊥ is a nullary disjunction), contradicting the explicit definition of antiideal later on the same page.
Does this have something to do with whether #-openness is assumed explicitly or not? The page apartness relation claims that, at least for antiideals, openness is automatic as long as the ring operations are strongly extensional. But antisubalgebra assumes openness explicitly, in addition to strong extensionality of the algebraic operations.
Finally, do we ever really need the apartness to be tight?
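For concreteness, here is how I would state the notions in question in Lean; the encoding and the names (Apartness, Tight) are just mine, not taken from either nLab page.

```lean
-- An apartness relation and the tightness condition, as I understand them.
structure Apartness (X : Type) where
  ap      : X → X → Prop                 -- written x # y in this thread
  irrefl  : ∀ x, ¬ ap x x
  symm    : ∀ {x y}, ap x y → ap y x
  compare : ∀ {x y} (z : X), ap x y → ap x z ∨ ap z y

-- Tightness: elements that are not apart are equal.
def Apartness.Tight {X : Type} (A : Apartness X) : Prop :=
  ∀ x y, ¬ A.ap x y → x = y
```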
Okay, it’s easy to prove that if A is #-open, then anything not in it is apart from everything in it, i.e. its logical complement and #-complement agree. Thus, assuming #-openness ensures that ¬(0∈A) implies the stronger ∀p∈A,p#0, and similarly for all constants. Conversely, for theories containing a group operation with identity constant e (such as groups and rings), if we assume ∀p∈A,p#e, then A is necessarily #-open: if x∈A, then since x=(xy⁻¹)y, the anti-closure of A under multiplication gives y∈A or xy⁻¹∈A, and in the latter case xy⁻¹#e=yy⁻¹, so x#y by strong extensionality of multiplication (the other disjunct, y⁻¹#y⁻¹, being impossible). So in the general case, we need to assume #-openness explicitly, but in some familiar cases we can get away without it by strengthening the nullary anti-closure condition.
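Here is a minimal Lean sketch of the first half of this (the names and the encoding of #-openness as a hypothesis are mine): if A is #-open, then the weak nullary condition ¬(e∈A) already yields the strong one ∀p∈A,p#e. The converse for groups is the x=(xy⁻¹)y argument above, which I haven’t formalized here.

```lean
-- Sketch: #-openness upgrades the weak nullary condition to the strong one.
theorem strong_of_weak {X : Type} (ap : X → X → Prop) (A : X → Prop) (e : X)
    (hw : ¬ A e)                              -- weak condition: ¬(e ∈ A)
    (ho : ∀ x, A x → ∀ y, A y ∨ ap x y) :     -- #-openness of A
    ∀ p, A p → ap p e := by                   -- strong condition: ∀ p ∈ A, p # e
  intro p hp
  cases ho p hp e with
  | inl h => exact absurd h hw                -- e ∈ A contradicts ¬(e ∈ A)
  | inr h => exact h                          -- p # e directly
```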
I edited the two pages to clarify this point. However I have another question: antisubalgebra claims that A can be recovered as the ≠-complement of its complementary subalgebra, but I don’t see why.
Yes, #2 is correct. If you read things like Richman et al.’s book, they make their antisubalgebras #-open in this way, by imposing the positive condition p#0 rather than merely a negative one. But in the general case, #-openness is a separate condition.
Finally, do we ever really need the apartness to be tight?
You need it to be tight in order to say things like ‘it’s easy to prove that if A is #-open, then anything not in it is apart from everything in it, i.e. its logical complement and #-complement agree’ (comment #2). That A is open only proves half of that; that # is tight proves the other half.
More generally, if I have an algebra equipped with an apartness relation # relative to which the operations are all strongly extensional, then I'm really really tempted to replace it with a quotient algebra on which # becomes tight.
@#3: You're right, that statement is wrong. In a theory where every arity is at least 1 (to keep it simple), if P is any proposition, then {p∈X|P} is an antisubalgebra; its complementary subalgebra is {p∈X|¬P}; and the ≠-complement of this is {p∈X|¬¬P}, which constructively need not coincide with {p∈X|P}.
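To spell out the last step in Lean (the formulation is mine; membership of y in {p∈X|¬P} is just ¬P, so membership of x in the ≠-complement is ∀y,¬P→x≠y):

```lean
-- Sketch: for an irreflexive inequality, the ≠-complement of {p ∈ X | ¬P}
-- is exactly {p ∈ X | ¬¬P}, which constructively need not be {p ∈ X | P}.
theorem neq_compl_of_neg {X : Type} (neq : X → X → Prop)
    (irrefl : ∀ x, ¬ neq x x) (P : Prop) (x : X) :
    (∀ y, ¬ P → neq x y) ↔ ¬ ¬ P := by
  constructor
  · intro h hnp
    exact irrefl x (h x hnp)   -- taking y := x forces x ≠ x, a contradiction
  · intro hnnp y hnp
    exact absurd hnp hnnp      -- ¬P and ¬¬P together give anything
```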
You need it to be tight in order to say things like ‘it’s easy to prove that if A is #-open, then anything not in it is apart from everything in it, i.e. its logical complement and #-complement agree’ (comment #2). That A is open only proves half of that; that # is tight proves the other half.
Hmm. If A is #-open and x∉A and y∈A, then #-openness applied to y gives x∈A∨y#x, hence x#y by symmetry, since x∉A. In the other direction, if A is #-open and ∀y∈A,x#y, then in particular ∀y∈A,¬(x=y) by irreflexivity of #, hence x∉A. Where did I use tightness?
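For what it’s worth, here is that argument in Lean (encoding and names mine); it uses only irreflexivity, symmetry, and #-openness, with no tightness anywhere.

```lean
-- Sketch: with A #-open, the logical complement and the #-complement agree.
theorem compl_eq_apCompl {X : Type} (ap : X → X → Prop) (A : X → Prop)
    (irrefl : ∀ x, ¬ ap x x)
    (symm   : ∀ {x y}, ap x y → ap y x)
    (ho     : ∀ x, A x → ∀ y, A y ∨ ap x y)   -- #-openness of A
    (x : X) :
    ¬ A x ↔ ∀ y, A y → ap x y := by
  constructor
  · intro hnx y hy
    cases ho y hy x with
    | inl hx => exact absurd hx hnx           -- x ∈ A contradicts x ∉ A
    | inr h  => exact symm h                  -- y # x gives x # y
  · intro h hx
    exact irrefl x (h x hx)                   -- x # x is impossible
```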
if I have an algebra equipped with an apartness relation # relative to which the operations are all strongly extensional, then I'm really really tempted to replace it with a quotient algebra on which # becomes tight.
Topologically, I guess this means passing to the T0 quotient. Which is of course not unreasonable (in particular, it’s part of sobrification), but as part of a general theory there’s often no need to assume T0.
Hmm, I guess you need the apartness to be tight in order for the complement of the antikernel to be the kernel. I’m not sure whether anyone thinks about topological algebra for non-T0 spaces or what the right notion of “kernel” would be in such a case, but in any case that makes tightness seem more reasonable to me.
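Concretely (the formulation below is mine), the logical complement of the antikernel {x | f(x)#e} is {x | ¬(f(x)#e)}, and tightness is exactly what identifies that with the kernel {x | f(x)=e}:

```lean
-- Sketch: for a tight apartness on the codomain, ¬(f x # e) ↔ f x = e,
-- i.e. the logical complement of the antikernel is the kernel.
theorem compl_antikernel {G H : Type} (ap : H → H → Prop) (f : G → H) (e : H)
    (irrefl : ∀ x, ¬ ap x x)
    (tight  : ∀ {x y}, ¬ ap x y → x = y)      -- tightness of #
    (x : G) :
    ¬ ap (f x) e ↔ f x = e := by
  constructor
  · intro h
    exact tight h                             -- tightness is used only here
  · intro h hap
    rw [h] at hap
    exact irrefl e hap                        -- otherwise e # e
```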
Would it work, impredicatively, to define the anti-image of a morphism as the antisubalgebra generated by the complement or ≠-complement of the image? In general, how is a subalgebra related to the antisubalgebra generated by its complement?
Here’s an interesting fact I just noticed: for an inequality group, the symmetry and comparison axioms are automatic. That is, if G has an irreflexive relation # and a group structure that is strongly extensional with respect to #, then # is automatically an apartness relation.
To prove comparison, suppose g#h and k∈G. Writing g=eg and h=(hg⁻¹)g, strong extensionality of multiplication gives either g#g (impossible) or e#hg⁻¹. Then, writing e=ee and hg⁻¹=(hk⁻¹)(kg⁻¹), the same argument gives either e#hk⁻¹ or e#kg⁻¹; in the former case k#h (writing e=kk⁻¹), while in the latter case g#k (writing e=gg⁻¹).
To prove symmetry, if g#h, then e#hg⁻¹ as above; writing e=hh⁻¹, strong extensionality gives either h#h (impossible) or h⁻¹#g⁻¹, and in the latter case h#g by strong extensionality of inversion.
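As a sanity check, here is the first step of the comparison argument in Lean. This is only a sketch: rather than developing inequality groups properly, I supply the group identities the step uses (g=eg and h=(hg⁻¹)g) as explicit hypotheses, with w standing in for hg⁻¹.

```lean
-- Sketch of the step "g # h gives e # hg⁻¹": w plays the role of hg⁻¹.
theorem comparison_step {G : Type} (ap : G → G → Prop) (mul : G → G → G) (e : G)
    (irrefl : ∀ x, ¬ ap x x)
    (sext : ∀ {a b c d}, ap (mul a b) (mul c d) → ap a c ∨ ap b d)
    (g h w : G)
    (hg : mul e g = g)                        -- g = eg
    (hw : mul w g = h)                        -- h = (hg⁻¹)g
    (hgh : ap g h) : ap e w := by
  have hmul : ap (mul e g) (mul w g) := by rw [hg, hw]; exact hgh
  cases sext hmul with
  | inl h' => exact h'                        -- e # hg⁻¹, as desired
  | inr h' => exact absurd h' (irrefl g)      -- g # g is impossible
```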
I just got a copy of the Mines-Richman-Ruitenburg book A Course in Constructive Algebra, and was surprised to find no antisubalgebras in it at all! I haven’t read it page by page, but if they’re there at all then they’re well-hidden. The closest thing I was able to find was a remark in passing that “In rings with a positive notion of inequality… it is natural to define an ideal P to be prime if whenever a,b∉P then ab∉P” (where a∉P means a≠b for all b∈P, i.e. a is in the inequality complement of P); but the definition of “prime ideal” that they actually use is “if ab∈P then a∈P or b∈P”.
So I updated the page antisubalgebra and added a reference to Troelstra + van Dalen, who do discuss antisubalgebras to a certain extent at least.
Have removed all !redirect-commands for variants of “anti-ideal”, making them instead point to a little separate entry anti-ideal to be created in a moment…
added publication details for
and made the intended link to this item work