Does anyone know of any connections that have been made between the notions of cybernetics, autopoietic systems, or even George Spencer-Brown's Laws of Form, and homotopy theory or category theory?
I suppose right now such connections might be somewhat tenuous, as the former is pretty philosophical and the latter is pretty formal, but it seems that there are perhaps connections to be made. Especially, with the notion of the distinctions in space that Spencer-Brown talks about, ideas of homology and homotopy occur to me. Also, there are possible connections with David Spivak's ontology logs and various ideas about categorical-database-type things. I'm also wondering if there are any connections between the aforementioned more philosophical topics and the internal logic of topoi, or even topological semantics.
Any references or ideas on this would be really appreciated, it’s just kind of a shot in the dark. Perhaps the question is not well formed.
Best, Jon Beardsley
I suppose you've already seen some of Louis Kauffman's ruminations on Laws of Form in his On Knots?
Does anyone know of any connections that have been made between the notions of cybernetics, autopoietic systems, or even George Spencer-Brown's Laws of Form, and homotopy theory or category theory?
Why "or even"? LoF is just Boolean algebra in new notation, no? That is easily related to category theory; see for instance at internal logic. On the other hand, "cybernetics" and "autopoietic systems" are about complex systems akin to biological systems. This is way, way beyond what category theory/mathematics usually describes in any substantial detail.
Well, by “or even” I meant that I guess it’s more likely that there is a connection, but it wouldn’t be as interesting. However, I’m specifically thinking of the LoF ideas of differentiation in terms of homology, though again that might be sort of a trivial connection. He also introduces this notion of oscillation (p. 59) that causes me to ruminate on homology. Wondering if his “marks” on forms can be thought of as homology classes, or homotopy classes of maps, somehow embedding LoF inside of homotopy theory, though that might be sort of a silly idea. Additionally, I think Spencer-Brown would not appreciate your statement about LoF being just Boolean algebra in new notation, specifically with respect to the idea of “imaginary” truth values (i.e. the value of the statement “This statement is false” can be consistently labeled as “imaginary” to some effect, which I have not completely understood yet).
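(To make the "imaginary value" idea slightly more concrete: the equation in question is a re-entrant form, which in ordinary truth-value notation I would tentatively transcribe as

$x = \neg x$

which has no solution in $\{\top, \bot\}$; Spencer-Brown's move, as I read him, is to let the value oscillate in time, $\top, \bot, \top, \bot, \ldots$, and to call such an oscillating value "imaginary".)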
And regarding your statement about cybernetics, Niklas Luhmann specifically uses the ideas of LoF to describe at least one autopoietic system. I'm still in the process of studying this stuff, but I'm wondering whether these notions could at the very least be updated to correspond to the more fashionable language of categories. That's just a basic consideration though. There is the idea of a self-creating system, and what occurs to me is the notion of a localization of a category to a category which is in fact equivalent to its overcategory, making the localization an equivalence. Could this have anything to do with some kind of internal groupoid or something? At the very least we could certainly attempt to think of "operations" as arrows in a category, though this is a rough idea, and perhaps even associativity would fail here.
Thanks again for an engaging discussion! :)
Additionally, I think Spencer-Brown would not appreciate your statement about LoF being just Boolean algebra in new notation
But that's easily checked. The Wikipedia page spells it out. LoF is not interesting for its mathematics, which is trivial, but because of its style of speaking about mathematics.
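(Concretely, under the usual dictionary (blank = $\bot$, mark = $\top$, juxtaposition = disjunction, enclosure by the mark = negation), the two initial equations of the primary arithmetic become familiar Boolean identities; a sketch, not Spencer Brown's own notation:

$\top \vee \top = \top$   (the law of calling: two marks written side by side equal one mark)

$\neg\neg\bot = \bot$   (the law of crossing: a mark enclosed in a mark equals the blank).)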
Okay fair enough, except you still ignore the notion of second-degree equations, which is kind of what I’m finding interesting. But I guess you would argue that it has all been taken care of with Church’s work on so-called restricted recursive arithmetic (as mentioned in the Wiki), which I had not heard of.
Let me turn this around to a positive statement:
LoF is cool. But type theory contains it and is way cooler. And way more interesting ;-)
Let’s just state some basics on type theory in the cool fashion of LoF:
let's make the cool move of denoting the empty type, often denoted ∅, by no symbol at all.

then negation of a type X (that is, X → ∅) is denoted simply

X →

That's at least as cool as the cross written over X

from LoF, isn't it :-)

in this notation (assuming Boolean logic), the unit type is simply

→

(namely ∅ → ∅, if we did write out the empty type ∅), which competes for coolness with LoF's bare cross

and we have

→ → ≃

(namely (∅ → ∅) → ∅ ≃ ∅, with the right hand side again written by no symbol at all).
And there we go, the Laws of Type™
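(And in case one wants to check that last equivalence mechanically: here is a minimal sketch, in bare Coq, of the two maps exhibiting the logical equivalence between (∅ → ∅) → ∅ and ∅, using nothing but the inductive definition of the empty type; the names are made up.)

Inductive empty : Type := .

(* "crossing twice lands in the blank": feed the identity map to f *)
Definition cross_twice : ((empty -> empty) -> empty) -> empty :=
  fun f => f (fun e => e).

(* the converse direction is just ex falso quodlibet *)
Definition cross_back : empty -> ((empty -> empty) -> empty) :=
  fun e => match e with end.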
But the real fun is that, cool as this already is, this is but a tiny-tiny fragment of what mathematics is founded on. And the rest is even cooler, still.
Namely, this is just (Boolean) (-1)-type theory. The Laws of (-1)Type™. Next comes the Laws of 0Type™ which is where traditionally most of mathematics happens. Around here, we are fond of the full story: the Laws of ∞Type™.
Haha, indeed. I'm a little bit familiar with type theory. And I agree, almost all of LoF is not particularly mathematically interesting, though I still think it might be interesting to think about autopoietic systems from the point of view of category theory. But maybe this is nothing more interesting than what essentially boils down to set theory and logic, although the study of homotopy type theory occurs to me, specifically the work of Steve Awodey, though I know next to nothing about that stuff, so I won't try to talk about it.
I will also check out the Laws of ∞Type, because I had not heard of that.
thanks, Jon
Let me expand, in a more serious tone.
Here is what I think Spencer Brown was really after. (Judging just from my personal gut impression from reading the book. I may be entirely wrong.) It's something that I sympathize with. When I was young, I spent hours and days (too many) thinking about something similar. At some point I gave it up – luckily – and started learning and then doing real science. By some cosmic chance, these days, decades after, it seems as if I am coming full circle back to those old days… up to some non-trivial holonomy.
What I mean is this: at times one may feel a deep miracle in the fact that we write symbols to paper and then these symbols carry meaning and information about reality.
John Archibald Wheeler, famous theoretical physicist and deep thinker about physics, once expressed roughly this awe (I think) for the fact that there is, for instance, an equation of gravity that describes the evolution of the cosmos (to some extent, at least) with the words
Write your equations on the tiles of the floor. When you're done, wave a wand over the equations and tell them to fly.
(I think one can see him say this in some video titled “Creation of the Universe”. I try to dig out the precise reference.)
That feeling of magic (“wave a wand”) when speaking about how formulas relate to reality is, I think, what drove Spencer Brown. It connects to what is probably a deep root in the human psyche: the notion of casting a spell, of saying words and creating reality.
In an age of science, we are used to disregarding spells as a figment of our ancestors' imagination. But at the same time, we have realized much of what they were, roughly, dreaming of: we can write symbols on paper and predict the future from them – to some extent, say when we are computing the arrival of an asteroid or the time when the sun will burn out, or the time when our galaxy will collide with its neighbour. That's already amazing.
But that’s just the physics aspect of it. From there it gets even more miraculous: by the unreasonable effectiveness of mathematics we find that many of these spells all follow from a handful of master spells. Einstein’s equations and those of Yang-Mills theory. Apart from all the constants of nature in it, the standard model of particle physics is a simple mathematical formula that fits on two lines. Given all the things that follow from it, this is a rather mighty spell.
And then it gets even more miraculous still: as I have just written about with Mike in Quantum gauge field theory in Cohesive homotopy type theory (schreiber), at least some central general structure of that mighty spell is rooted in just the bare Laws of ∞Type.
I can be very sober about this and just continue doing research. But if I allow myself, this raises in me a feeling that feels as if it could make one start writing a book of the form Spencer Brown once did. A book whose style of prose tries to closely resemble the remarkable content it conveys: if only we bring the Laws of Type out of nowhere, then a whole world appears. Our world. To a large extent, at least. As on p. v of LoF, where it says
The theme of this book is that a universe comes into being when a space is severed or taken apart.
I imagine that this was what Spencer Brown was after. A kind of creation myth of the world from algebra of symbols. I am not sure if this is a reasonable thing to do. But I think I do understand the inner conditions that could make one try to do this.
And in this respect, I find Spencer Brown’s book very interesting. If only he had arrived at a more powerful symbolism…
That Wheeler quote continues apparently as:
Write your equations on the tiles of the floor. When you're done, wave a wand over the equations and tell them to fly. Not one will sprout wings and fly. The universe flies, it has a life to it that no equation has.
Not that it’s particularly deep, but since we are talking about it at all:
I see that some Chris Holt observes the natural interpretation of Spencer Brown's "empty symbol" as the empty type in type theory, as in comment #7 above, here in a comment on sci.logic.
∅ → ∅ is the unit type even in constructive logic. (-:
Oh, sure. Thanks for catching that. All the better.
I am again in procrastination mood, so allow me to come back to the above game. But also, I need to finally learn to speak Coq, not just read it. You can regard the following as a basic question about Coq. Or as a comment on Laws of Form in type theory :-).
So I start with the latter: observe that it's kind of cute that the formal definition of the empty type as an inductive type is pretty much verbatim what Spencer Brown is suggesting: the absence of a symbol
Inductive empty : Type := .
:-)
Just for distraction purposes I opened my Coq editor and tried to see if I can prove that empty -> empty is equivalent to the unit type. But I get stuck already with such a kindergarten Coq problem. Here is the code that I type
Require Import Homotopy.
Inductive empty : Type := .
Definition tzimtzum : (empty -> empty).
intro H.
exact H.
Defined.
Lemma contractible : is_contr (empty -> empty).
Proof.
exists tzimtzum.
intro y.
induction y.
After this I am puzzled: I expected the last line to finish off the proof (though that’s using intuition more than actual reasoning about what the type theory engine does in the background). But instead after that last line Coq claims that the remaining subgoal is to prove
empty
which of course I cannot.
What’s wrong with my proof?
That's slightly weird; I would expect the tactic "induction" to give you an error message, since y does not belong to an inductive type and so you can't do induction over it. Evidently Coq is guessing that you're trying to do something fancy and outsmarting itself.
Since your remaining goal is an equality of two functions, it’s almost certain that what you need to use is function extensionality — there’s almost no other way to prove in Coq that two functions are equal. Try “apply funext.” and go from there.
If you replace empty with some arbitrary other type A, then you get the error I expect: "Not an inductive product". But if you replace it with a type that is inductive, like unit or nat, you get the error "y is used in conclusion". I have no idea what Coq is trying to do. The manual is not helpful; in the description of the tactic "induction" it says "The type of the argument term must be an inductive constant", which is manifestly not true here.
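For what it's worth, here is how that route goes through in vanilla Coq, using the standard library's functional_extensionality in place of the Homotopy library's funext (so the lemma name will differ in your setup); together with the point tzimtzum itself this is essentially the content of is_contr (empty -> empty):

Require Import Coq.Logic.FunctionalExtensionality.

Inductive empty : Type := .

Definition tzimtzum : empty -> empty := fun e => e.

(* any map empty -> empty equals tzimtzum: reduce to a pointwise equality,
   then eliminate the impossible argument *)
Lemma tzimtzum_unique : forall y : empty -> empty, tzimtzum = y.
Proof.
  intro y.
  apply functional_extensionality.
  intro x.
  destruct x.
Qed.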
Wow, thanks Mike, for the detailed reply.
(I am only looking at this reply now, since I must force myself to take care of some other tasks. :-)
I’ll try again a little later. Thanks again.
Coming back to this over a year later… haha. For what it's worth, I'm still not satisfied with all of this. I keep thinking about paradoxes these days, and the fact that paradoxes are not just dead ends or something, but rather alternating systems. Paradoxes seem to encode the notion of movement or alternation in a way that no other static symbol or formulation can. That is, a paradox is slippery in the sense that it changes meaning upon being perceived. This is somehow almost its definition. We never experience a paradox as being both false and true simultaneously. We experience it as true, then false, then true again, then false again (in the sense that implication is a path parameterized by time as we experience it). As such, this to me feels something like an element in the fundamental group of… reality… haha! I mean, it's a loop that can't be closed, it doesn't resolve. It's like one of those little alternating guys in John Conway's Game of Life (or that weird little thing in lambda calculus that just gets longer every time you β-reduce it, or whatever it's called).
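(For the record, the lambda-calculus gadget I'm half-remembering is presumably one of these: the first loops forever at the same size, the second grows with every step of β-reduction.

$(\lambda x.\, x x)(\lambda x.\, x x) \;\to_\beta\; (\lambda x.\, x x)(\lambda x.\, x x)$

$(\lambda x.\, x x x)(\lambda x.\, x x x) \;\to_\beta\; (\lambda x.\, x x x)(\lambda x.\, x x x)(\lambda x.\, x x x)$ )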
Okay, after rereading what I’ve written, I’ve come to the conclusion that I need to go to bed.
My guess is that the complex-systems approach and emergence are the ways to go. But there is something strange with the math: currently complex systems theory handles only physical, continuous data, with differential equations, e.g. how some set of oscillators behaves in phase space, and the usual real numbers and functions are at work there. But symbolic systems involve words, combinatorial structures (growing graphs, built up by graph operations), and sentences, and there should be mathematics for how such words and combinatorial structures behave and evolve dynamically: some kind of Hamiltonian on words or graphs. My guess is that there is no real line here; there are lattices and similar branching systems of ordered elements. So we should reformulate the whole of analysis and geometry, from real lines and real spaces to lattices and modules over lattices. I am not aware of such efforts. Only then can we consider symbolic (crisp or approximate) and graph systems as interacting and evolving complex systems in the symbolic domain and observe/describe emergence in them. We should go beyond the real numbers. Thanks for any reference or suggestions in this direction.
This is a good reference: https://www.researchgate.net/publication/298868058_Mathematical_models_of_computational_and_combinatorial_structures_-_Invited_address - maybe the notion of a species of combinatorial structures is the way to go.
10 years back in comment #9 I mentioned this incident and promised to provide a reference:
John Archibald Wheeler, famous theoretical physicist and deep thinker about physics, once expressed roughly this awe (I think) for the fact that there is, for instance, an equation of gravity that describes the evolution of the cosmos (to some extent, at least) with the words
Write your equations on the tiles of the floor. When you're done, wave a wand over the equations and tell them to fly.
I don't have a reference yet, but I am reminded of it by Jonathan Oppenheim tweeting (here) about John Preskill re-enacting, at the "ItFromQBit 2023" banquet, whatever it was that Wheeler was doing around that quote.
Back then in comment #9 I added that
(I think one can see him say this in some video titled “Creation of the Universe”. I try to dig out the precise reference.)
but I forget why I said this [edit: it is recounted that way here] and I don't have the reference. There are tons of video recordings of Wheeler on YT (here), but none with a title of that kind.
Oh, now I found it. It’s well referenced by Blake Stacey here.
Am adding this now to our entry John Wheeler in a section Quotes.