In the context of a discussion of what might be possible to prove ’synthetically’ in modal HoTT, I mentioned the fundamental theorem of calculus, Proposition 3.18 of Higher prequantum geometry. I see Urs also stated this as ’Theorem 6.8 ($\infty$-Stokes’ theorem)’, problem 2 in the talk that inspired Felix Wellen to take on problem 1.
Mike Shulman went looking at U. Bunke, T. Nikolaus, M. Völkl, Differential cohomology as sheaves of spectra, Journal of Homotopy and Related Structures, October 2014, arxiv:1311.3188 to see if it contained what he could recognise as the fundamental theorem and reports:
The 60-page paper uses notation that’s fairly impenetrable to me (perhaps it is clearer to people already familiar with differential cohomology), but from a cursory glance it seems that it (1) is about sheaves of chain complexes, not tangent $\infty$-toposes, (2) uses sheaves on the category of manifolds with corners, not super formal smooth Cartesian spaces, (3) recovers only integration of closed differential forms, (4) recovers only the image of such integration in cohomology, with the connection to integration made visible by a particular point-set-level choice that’s not visible at the homotopy level, and (5) provides no synthetic way to define differential forms, only an abstract construction that happens to agree with integration when specialized to a particular example built out of classically defined differential forms. HPG appears to make stronger claims, but given what I can currently understand, it seems like a stretch to say that the fundamental theorem of calculus arises from modalities. But I would love it if someone could explain why I’m wrong!
So of course there are more details at Stokes’ theorem.
Hmm, what do we have connecting fundamental theorem of calculus and Stokes? I see at the former:
The FTC is usually split into two parts. The first part (sometimes called the second part) states that every indefinite integral is an antiderivative. The second part (sometimes called the first part) states that every antiderivative is an indefinite integral. Somewhere in here (usually in the first part) we also want to state that such antiderivatives and indefinite integrals actually exist. Their uniqueness (such as it is) may also be included.
The second part is a low-dimensional version of the Stokes theorem.
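Spelled out, this just means that the FTC formula is the $n = 0$ instance of the Stokes formula (a sketch, with the usual orientation conventions):

$\int_{[a,b]} \mathbf{d}f = f(b) - f(a) \,, \qquad \text{and generally} \qquad \int_M \mathbf{d}\omega = \int_{\partial M} \omega \,,$

where the first equation is the second part of the FTC, regarded as the Stokes theorem for the 0-form $f$ on the 1-manifold $M = [a,b]$ with oriented boundary $\partial M = \{b\} - \{a\}$.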
it seems that it (1) is about sheaves of chain complexes, not tangent $\infty$-toposes,
No, it is about sheaves of spectra, as in the title. (It’s not THAT hard to read :-). Chain complexes only appear as models (via the stable Dold-Kan correspondence) for the $H\mathbb{R}$-module spectra that encode the differential refinement of plain spectra.
I have written a possibly more readable digest at differential cohomology hexagon.
(2) uses sheaves on the category of manifolds with corners, not super formal smooth Cartesian spaces,
The corners in the site are just a convenience in order to have an explicit expression for the shape modality in terms of actual smooth singular simplicial complexes. This is not needed for the statement and proof of the “fundamental theorem of calculus”. (I think; I am working from memory now, should check.) The formal and super part does not play a role here; this happens in just plain cohesion, it does not need super differential cohesion. But there is a faithful embedding, so it doesn’t matter.
(3) recovers only integration of closed differential forms,
That’s true, but this happens to be what is needed for the fundamental theorem of calculus.
(4) recovers only the image of such integration in cohomology, with the connection to integration made visible by a particular point-set-level choice that’s not visible at the homotopy level,
No, for the ordinary case the integration lands in forms modulo exact forms. It goes in the reverse direction to the top morphism in the ordinary differential cohomology hexagon
$\array{ && \Omega^{n}(X)/im(\mathbf{d}) && \stackrel{\mathbf{d}}{\longrightarrow} && \Omega^{n+1}_{cl}(X) \\ & \nearrow && \searrow^{\mathrlap{a}} && \nearrow && \searrow \\ H^{n}(X, \mathbb{R}) && && H^0(X,\mathbf{B}^n U(1)_{conn}) && && H^{n+1}(X,\mathbb{R}) \\ & \searrow && \nearrow && \searrow && \nearrow \\ && H^{n}(X,U(1)) && \underset{}{\longrightarrow} && H^{n+1}(X,\mathbb{Z}) }$

(5) provides no synthetic way to define differential forms,
It gives a synthetic definition of forms modulo exact forms (namely the homotopy fiber of the shape modality unit) and of closed forms (the homotopy cofiber of the flat modality counit).
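In symbols, with $ʃ$ the shape modality and $\flat$ the flat modality as used later in this thread, these synthetic definitions read (for $E$ a stable object):

$\overline{ʃ}\, E \;\coloneqq\; \mathrm{fib}(E \to ʃ E) \,, \qquad \overline{\flat}\, E \;\coloneqq\; \mathrm{cofib}(\flat E \to E) \,,$

so that $\overline{ʃ}\, E$ plays the role of forms modulo exact forms and $\overline{\flat}\, E$ that of closed forms.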
Urs, it’s great that you don’t find it that hard to read. But I personally find it difficult to keep track of the meaning of all the $A$’s and $S$’s and $Z$’s and $U$’s, especially when Roman $S$ and script $S$ mean different things. I’m afraid differential cohomology hexagon is also not readable to me. Has anyone written anything aimed at an audience that doesn’t already know everything about classical differential cohomology?
Re: (1), everything that I was able to find in the paper is written in terms of sheaves with values in a stable $\infty$-category $C$, and in the section on differential forms (4.1) they explicitly take $C$ to be the localization of the category of chain complexes. Could you please give a pointer to where they talk about $H\mathbb{Z}$-module spectra, and where they relate this to tangent $\infty$-toposes?
One thing that particularly confuses me is the frequent presence of these “truncation” operations on chain complexes, which I do not know how to make sense of at the $\infty$-category level or for spectra. Taking the $n$-cycles of a chain complex is not a homotopy-invariant operation.
Re: (2), that’s nice to hear, but I would like to see a proof.
Re: (4), this was my interpretation of their Remark 4.3:
There is a contractible space of choices for such contractions. It is at this point that we fix a particular choice related to the integration of forms. If we would make a different choice here, then we would get another formula for the integration map, possibly not related to integration of forms. The action of the integration map in cohomology does not depend on the choices.
Re: (5), could you give a pointer to where this is proved?
(Also, why don’t you hang out at the Cafe any more?)
If people had the time, it would be a good opportunity to improve exposition on this subject. The hexagon seems such an integral part of differential cohomology that exposition there should overlap hugely with the plain differential cohomology. The latter has a couple of ’need to brush up later’ notes.
In an ideal world we’d give Mike a week off work to read through Bunke’s lecture notes and see him make sense of Urs’s modal HoTT formulation.
Hi, just to say that I am busy today, I’ll try to reply this evening.
Re: (1), everything that I was able to find in the paper is written in terms of sheaves with values in a stable $\infty$-category $C$, and in the section on differential forms (4.1) they explicitly take $C$ to be the localization of the category of chain complexes. Could you please give a pointer to where they talk about $H\mathbb{Z}$-module spectra, and where they relate this to tangent $\infty$-toposes?
Sheaves of spectra appear from section 4.4 on, the first paragraph there recalls the stable Dold-Kan correspondence via $H \mathbb{Z}$-module spectra.
(The logic of the exposition is this: Sections 4.1 to 4.3 start with ordinary differential cohomology, hence no spectra are mentioned yet. Section 4.4 shows how the Hopkins-Singer construction of lifts of spectra to differential generalized cohomology theories is a special case of sheaves of spectra. Section 5 does Chern-Weil theory with coefficients in ordinary differential cohomology, and section 6 finally gives several lifts of the topological K-theory spectrum to a sheaf of spectra, through the shape modality.)
One thing that particularly confuses me is the frequent presence of these “truncation” operations on chain complexes, which I do not know how to make sense of at the $\infty$-category level or for spectra. Taking the $n$-cycles of a chain complex is not a homotopy-invariant operation.
Yes, and it is crucial that it is not. These truncations are meant to produce new homotopy types from old ones, and it is in this choice of truncation that the extra structure on differential cohomology, compared to its underlying non-differential cohomology, is encoded.
Best to look at this for the example of ordinary differential cohomology:
Start with the connective spectrum $\mathbf{B}^n \mathbb{Z}$, i.e. with $\Sigma^n H \mathbb{Z}$, classifying ordinary integral cohomology. Consider its real-ification map $\mathbf{B}^n \mathbb{Z} \longrightarrow \mathbf{B}^n \flat \mathbb{R}$. Now think of this morphism over the smooth site. Then choose any non-constant sheaf (of spectra) $\mathbf{A}$ that comes equipped with a morphism $\mathbf{A} \longrightarrow \mathbf{B}^n \flat \mathbb{R}$. Given this, form the homotopy fiber product
$\array{ \mathbf{Q} &\longrightarrow& \mathbf{A} \\ \downarrow &(pb)& \downarrow \\ \mathbf{B}^n \mathbb{Z} &\longrightarrow&\mathbf{B}^n \flat \mathbb{R} } \,.$

Every such $\mathbf{Q}$ may be thought of as a differential refinement of $\mathbf{B}^n \mathbb{Z}$, in that a cocycle with coefficients in $\mathbf{Q}$ is an ordinary integral cocycle together with a lift, up to homotopy, of its image in real cohomology to the smooth coefficients $\mathbf{A}$.
(More precisely, Bunke-Nikolaus give conditions on the choice of $\mathbf{A}$ such that $\mathbf{Q}\to \mathbf{B}^n \mathbb{Z}$ is equivalently the unit of the shape modality on $\mathbf{Q}$. If this is the case, then $\mathbf{Q}$ is a differential refinement of integral cohomology in the stronger sense of: lift through the shape modality.)
The archetypical choices for $\mathbf{A}$ are as follows, and this will illustrate the truncation business: by the Poincaré lemma, there is an equivalence of sheaves of spectra
$\mathbf{B}^n \flat\mathbb{R} \;\simeq\; \mathbf{B}^n [ \mathbf{\Omega}^0 \stackrel{d_{dR}}{\to} \mathbf{\Omega}^1 \stackrel{d_{dR}}{\to} \mathbf{\Omega}^2 \stackrel{d_{dR}}{\to} \cdots ] \,,$

where on the right we have the de Rham complex, shifted such that the 0-forms sit in degree $n$, and regarded as a sheaf of spectra under Dold-Kan.
Now the de Rham complex comes with a canonical filtration by truncations, where we chop it off from the left. (If we are over the complex analytic site and take $\mathbf{\Omega}^p$ to be the sheaf of holomorphic forms, then this is famous as the Hodge filtration.) So we can choose $\mathbf{A}$ to be one of these.
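In the conventions of the previous formula, the $p$-th stage of this filtration is (a sketch, up to the evident shift) the chopped-off complex

$\mathbf{B}^{n-p} [ \mathbf{\Omega}^p \stackrel{d_{dR}}{\to} \mathbf{\Omega}^{p+1} \stackrel{d_{dR}}{\to} \cdots ] \longrightarrow \mathbf{B}^n \flat \mathbb{R} \,,$

with the $p$-forms still sitting in degree $n - p$, and with the morphism induced by the evident inclusion of complexes.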
If we chop off $n$ degrees, then (again by the Poincaré lemma) the result is equivalent to just the sheaf $\mathbf{\Omega}^n_{cl}$ of closed forms (or rather its corresponding sheaf of spectra). In this case the above homotopy pullback is
$\array{ \mathbf{B}^n U(1)_{conn} &\longrightarrow& \mathbf{\Omega}^n_{cl} \\ \downarrow &(pb)& \downarrow \\ \mathbf{B}^n \mathbb{Z} &\longrightarrow&\mathbf{B}^n \flat \mathbb{R} } \,.$

So $\mathbf{B}^n U(1)_{conn}$ is the sheaf of spectra such that a cocycle with coefficients in it is
an integral cohomology class $\chi$ in degree $n$
a closed differential $n$-form $\omega$
an equivalence of the images of the two in real cohomology.
This is what is called a cocycle in ordinary differential cohomology. For $n = 2$, such cocycles are equivalent to $U(1)$-principal bundles with connection.
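For instance for $n = 2$, with respect to a good open cover $\{U_i\}$ of $X$, such a cocycle is given by the familiar local data (up to conventions on factors of $i$ and $2\pi$): smooth functions $g_{i j} \colon U_i \cap U_j \to U(1)$ and 1-forms $A_i \in \Omega^1(U_i)$ such that

$A_j - A_i = d \log g_{i j} \;\text{on}\; U_i \cap U_j \,, \qquad g_{i j} g_{j k} g_{k i} = 1 \;\text{on}\; U_i \cap U_j \cap U_k \,,$

which is precisely the transition-function and local-connection data of a $U(1)$-principal bundle with connection; the local curvature 2-forms $d A_i$ glue to the globally defined closed curvature form.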
A very detailed exposition of this is at Deligne complex. (These are lecture notes that I wrote for an audience with no background in differential cohomology.)
Re: (2), that’s nice to hear, but I would like to see a proof.
Please say which statement you want to see proof of. What I meant to point out was that your worry in (2) seemed to not address the point in question.
The point of the proof of theorem 3.2 in Bunke-Nikolaus-Völkl is that it uses nothing but the existence of a shape modality (in fact just any idempotent $\infty$-monad) together with an object $Int$ equipped with two points $0,1 \colon \ast \to Int$ such that 0 is homotopic to 1 in the shape of $Int$.
So this works in spectrum objects in the $\infty$-topos over smooth manifolds, over smooth manifolds with corners, over formal smooth manifolds, over super formal smooth manifolds, etc. It works in (the tangent $\infty$-topos of) every cohesive $\infty$-topos.
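Schematically (and hedging on degrees and notation, which differ between the references), the statement one obtains this way has the shape: for $\omega$ a form datum on $Int \times X$, there is an integration map $\int_{Int}$ such that

$\int_{Int} \mathbf{d}\omega \;=\; \omega|_{1} - \omega|_{0} \qquad \text{in}\; \overline{ʃ}\mathbf{E}(X) \,,$

with both sides regarded modulo exact forms; this is the abstract FTC/Stokes formula, using only the two points $0,1 \colon \ast \to Int$ and the fact that they become identified under shape.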
Re: (4), this was my interpretation of their Remark 4.3:
There is a contractible space of choices for such contractions. It is at this point that we fix a particular choice related to the integration of forms. If we would make a different choice here, then we would get another formula for the integration map, possibly not related to integration of forms. The action of the integration map in cohomology does not depend on the choices.
Here they say “cohomology” for the cohomology of the sheaves of spectra $\mathcal{Z}(Diff^m(\mathbb{R}[0]))$ (cofiber of the flat modality) and $\mathcal{A}(Diff^m(\mathbb{R}[0]))$ (fiber of the shape modality). The cohomology groups of these sheaves of spectra are concentrated in degree 0 on the groups of closed differential forms and of differential forms up to exact forms, respectively. They are saying that while their construction gives a particular morphism of sheaves of spectra, on cohomology groups of sheaves of spectra it is uniquely defined. It is that morphism on the cohomology groups of sheaves of spectra that is the morphism from the sheaf of closed forms on the interval to the sheaf of forms modulo exact forms. What is not happening is that these forms are passed to their images in de Rham cohomology. We really get integration of actual forms as a map of cohomology groups of sheaves of spectra.
Re: (5), could you give a pointer to where this is proved?
The differential hexagon for ordinary differential cohomology is worked out in much explicit detail in the entry Deligne complex.
Thank you very much for your patience! I’m making progress; the entry Deligne complex is helpful as are your explanations.
What I would call a synthetic definition of differential forms is one that doesn’t require us to “put them in” anywhere. So $\mathbf{B}^n \flat\mathbb{R}$ is a good synthetic definition that gives the de Rham complex (this much is very cool!). But can the “filtration by truncations” be defined synthetically too?
I would guess that anything that relies on the Poincaré lemma would need a synthetic proof of that result to itself have a synthetic construction.
But can the “filtration by truncations” be defined synthetically too?
Def 1.30 of Science of Logic?
Oh perhaps you mean for the canonical filtration.
Yes, of course the notion of filtration is easy to define, but in #8 Urs said “the de Rham complex comes with a canonical filtration by truncations”.
Sorry for the slow replies. I have currently two visitors and some other urgent tasks. But I do very much want to keep this discussion going.
So $\mathbf{B}^n \flat\mathbb{R}$ is a good synthetic definition that gives the de Rham complex (this much is very cool!). But can the “filtration by truncations” be defined synthetically too?
So the perspective I have in mind is that for $E$ any spectrum object in any cohesive $\infty$-topos, then the cofiber $\overline{\flat}E$ of $\flat E \to E$ behaves like closed $E$-valued differential forms, while the fiber $\overline{ʃ} E$ of $E \to ʃ E$ behaves like $E$-valued forms modulo exact $E$-valued forms.
This is a synthetic definition of differential forms in that these $E$-valued differential forms satisfy a bunch of properties that we want differential forms to satisfy. The “fundamental theorem of calculus” we are discussing is one, the rest of the differential hexagon are six more such properties (for instance the one which says that closed $E$-valued differential forms arise as the obstruction to flatness of $E$-cocycles).
So, as it goes with synthetic definitions, this is a very general concept, of which the ordinary differential forms are but a single special case; a special case that arises for a particular choice of $E$, namely $E =$ Deligne complex (under Dold-Kan).
If we want a set of synthetic axioms that narrows in on this particular choice of $E$ (as maybe you do) then we’ll need to think about adding more axioms on top of just cohesion.
but in #8 Urs said “the de Rham complex comes with a canonical filtration by truncations”.
And it does. Every chain complex comes with a canonical filtration and a canonical co-filtration, doesn’t it.
The “problem” here (if it is one) is that the de Rham complex itself is not abstractly characterized. In using the equivalence of $\mathbf{B}^n \flat \mathbb{R}$ with the shifted de Rham complex, we are making a choice, and use that to transport the filtration on the de Rham complex to a filtration on $\mathbf{B}^n \flat\mathbb{R}$.
This is a bit tangential to the general point of the “synthetic fundamental theory of calculus”, in the above sense. This goes in the direction of understanding which abstract properties might single out the Deligne complex among all other choices of $E$.
I am not claiming that there is a neat synthetic axiomatization of the Deligne complex (though maybe there is and it has not occured to me). Also, it goes a bit against the grain of synthetic axioms to have them be so narrow that only a classical concept satisfies them. But of course it would still be interesting to think about.
Every chain complex comes with a canonical filtration and a canonical co-filtration, doesn’t it.
Well, every object of the 1-category of chain complexes comes with such a filtration. But that filtration is not invariant under quasi-isomorphism, so once we regard our chain complexes as spectrum objects in a cohesive $\infty$-topos they no longer have a canonical filtration. Or am I wrong?
Early morning thought, is there any way to use the $\sharp_k(-)$, the $k$-image of the unit of $\sharp$, to produce a filtration?
Hmm, I guess filtering the site, namely Euclidean spaces and smooth maps, by dimension would help detect $n$-forms… This is, I suppose, linked to what David C proposed.
For other sites with more exotic gradings this might turn up interesting objects.
Every chain complex comes with a canonical filtration and a canonical co-filtration, doesn’t it.
Well, every object of the 1-category of chain complexes comes with such a filtration. But that filtration is not invariant under quasi-isomorphism, so once we regard our chain complexes as spectrum objects in a cohesive $\infty$-topos they no longer have a canonical filtration. Or am I wrong?
You are right, and this is exactly what is going on and what is meant to be going on. The choice of filtration is extra structure that is meant to produce new homotopy types from old ones.
This is a crucial point of differential refinement: for given real-ification of a spectrum, we want to choose a chain complex that maps to it but is not equivalent to it. It is in this non-equivalence of the choice of “cycle data”, as it is called, that the information about the differential refinement of the spectrum is encoded.
This is best understood in the archetypical example of ordinary differential cohomology already mentioned above. We start with shifted integral Eilenberg-MacLane spectrum which I’ll write $\mathbf{B}^n \mathbb{Z}$, form its real-ification $\mathbf{B}^n \flat \mathbb{R}$, and then ask for sheaves of spectra that map into this without being equivalent to it.
The choice of equivalence $\flat \mathbb{R} \simeq \mathbf{\Omega}^{\bullet}$, which picks a particular 1-categorical (yes) model for $\flat \mathbb{R}$, namely the de Rham complex, induces a sequence of such choices of “cycle data”, namely the truncations appearing in the canonical (Hodge) filtration of the chain complex.
In the ordinary case we choose the lowest stage in this filtration and get as cycle data the morphism of sheaves of spectra
$\mathbf{\Omega}^n_{cl} \longrightarrow \mathbf{B}^n \flat \mathbb{R}$

from the sheaf of closed $n$-forms. It is not a problem that this morphism is not a weak equivalence; instead this is the whole point of it, because if we were to choose a weak equivalence here then differential cohomology would collapse to non-differential cohomology!
And it is important that in other situations we may choose yet other non-equivalence morphisms here. For instance in the presence of an infinitesimal shape modality $\Im$ it is interesting to pull back $\mathbf{B}^n \flat \mathbb{R}$ to the slice over $\Im \Sigma$, for some $\Sigma$. It turns out that for this pulled-back object there is now another chain complex that constitutes a 1-categorical model, namely the Euler-Lagrange complex. Using its truncations to define differential coefficients yields different differential cohomology theories, namely “Euler-Lagrange gerbes” whose connections are Lagrangian densities, and whose curvatures are the Euler-Lagrange variational derivatives of these.
This way one and the same ordinary homotopy type, such as $\mathbf{B}^n \mathbb{Z}$, gets a variety of different differential refinements by choosing various truncations of various 1-categorical chain complex models for its real-ification.
The other point to highlight is that for the discussion of “synthetic” axiomatization of differential forms, none of this matters. The discussion of truncations of chain complexes only matters for convenient discussion of the construction of differential cohomology theories in models. For the synthetic theory however the perspective is the opposite: there we take any stable cohesive homotopy type $\mathbf{E}$ and then discover what its “cycle data” is, by applying modalities to it.
there we take any stable cohesive homotopy type $\mathbf{E}$ and then discover what its “cycle data” is, by applying modalities to it.
This is what I’m trying to find out: how do you “discover” this? It still sounds as though in the case of $\mathbf{B}^n\flat \mathbb{R}$ the cycle data is not “discovered” at the synthetic level, but imposed by the choice of a 1-categorical model or a filtration.
I am following this discussion only formally (i.e. I don’t know the details of anything that is involved), but perhaps this may allow me to offer a summary that may lead to some clarity.
1) Urs claims that he can state and prove Theorem 3.2 in the paper of Nikolaus et al in an arbitrary cohesive $(\infty,1)$-topos, in a purely synthetic way. For the statement at least, some details are at Stokes theorem, as David Corfield pointed out. I can certainly believe that the proof of this theorem can be given synthetically.
2) In 4.1 of the paper of Nikolaus et al, it is shown that the proof of Theorem 3.2 can be written out more explicitly in some special case. In the course of this explicit construction, they are making a choice out of a contractible space of such choices. With one particular such choice, the explicit description recovers the classical fundamental theorem of calculus. Different choices give different explicit descriptions; it is only in cohomology that they agree.
So in short, it seems to me that Mike’s original suggestion is correct: whilst this may be itself very remarkable, the synthetic story only recovers the fundamental theorem of calculus on the level of cohomology. The only way to get the classical story is to rigidify the purely synthetic construction by making some choices that will not be able to be carried out on the $(\infty,1)$-categorical level, i.e. will not be able to be carried out in a purely synthetic way.
And this should not be at all surprising. There is no way that a purely $(\infty,1)$-categorical, or even 1-categorical for that matter, construction will be able to recover something fundamentally set theoretic such as the fundamental theorem of calculus. Cohomology is the best one could possibly hope for.
There is no way that a purely $(\infty,1)$-categorical, or even 1-categorical for that matter, construction will be able to recover something fundamentally set theoretic such as the fundamental theorem of calculus.
I don’t see any reason to believe that a priori. Plenty of “set theoretic” things turn out to have nice categorical descriptions.
For instance if one could get some categorical construction that relied on the fact the real numbers are a complete ordered field, then any actual numbers that could be calculated using it (for instance an integral) would be unique, if I’m thinking correctly.
Certainly what I wrote is only a feeling, not an absolute fact. It is certainly conceivable to me that one could prove the fundamental theorem of calculus in a category theoretic setting, in which category theoretic ideas could lend insight; and indeed I believe that one could give a proof in a category theoretic foundations.
What I was getting at was that one cannot get something for nothing: one cannot hope to just take a few adjunctions on a category and magically produce the fundamental theorem of calculus! There is some deep (to appreciate this, we should remember to place the theorem in its historical context, and distinguish familiarity from triviality) set-theoretic content in the standard proofs of this theorem; one can generalise and reformulate the theorem, but one cannot hope to magically suddenly produce a proof of one these generalisations which by-passes all the depth of the original proof. A truly new proof in a category theoretic setting would require deep original ideas, which no doubt would be of profound wider interest. I do not see such ideas (referring specifically to the fundamental theorem of calculus here) in Urs’ setting of cohesive homotopy theory.
Moreover, I stand by my statement in the case of $(\infty,1)$-category theory as it stands today. I do not think that there is any way that quasi-category theory, or any other homotopical approach to $(\infty,1)$-category theory, could possibly hope to say something about the fundamental theorem of calculus: one will need some kind of rigidification, and that will have to come from outside the quasi-categorical world. Given an algebraic $(\infty,1)$-category theory, it might be possible, but a proof which required this setting, without being able to be given in a 1-categorical setting, would be of such originality that again it would be likely to have very deep and profound consequences beyond just the fundamental theorem of calculus.
I believe that one could give a proof in a category theoretic foundations.
Sure, this has been known since Lawvere came up with ETCS.
one cannot hope to just take a few adjunctions on a category and magically produce…
So what of the proof of
The infinitesimal disk bundle on any ∞-group $V$ is trivialized by left translation,
as Felix Wellen has constructed in Coq from Urs’s proof?
Or is that less ’deep’?
Perhaps relative to any system there is a notion of depth to a derivation, and perhaps there may even be a kind of invariance to a result, where there is a minimum to what can be achieved in terms of the sum of the simplicity of the system and the simplicity of the derivation. But whether one should expect to have come anywhere close to this minimum by having formulated in a system like set theory would then be far from obvious. I.e., can’t we imagine two seemingly equally complicated derivations in set theory, one of which can be much more simply formulated elsewhere, while the other not?
Sure, this has been known since Lawvere came up with ETCS.
I was referring to a foundations in which categories are primitive objects (as opposed to sets being the primitive objects, as is the case for ETCS).
This is what I’m trying to find out: how do you “discover” this?
Given any spectrum object $\mathbf{E}$ in a cohesive $\infty$-topos, thought of as a differential cohomology theory, then the cycle data that we are talking about is the homotopy cofiber $\overline{\flat} \mathbf{E}$ of the counit of its flat modality.
If we are in the $\infty$-topos over smooth manifolds and $\mathbf{E} = \mathbf{B}^p U(1)_{conn}$ is the sheaf of spectra given under Dold-Kan by the Deligne complex in some degree, then this comes out as being
$\overline{\flat}\mathbf{B}^p U(1)_{conn} \simeq \mathbf{\Omega}^{p+1}_{cl} \,,$

the sheaf of closed $(p+1)$-forms. Moreover, working out the entire right half of the exact hexagon for $\mathbf{E}$ induced by cohesion
$\array{ && \overline{\flat}\mathbf{E} \\ & \nearrow && \searrow \\ \mathbf{E} && (pb) && ʃ \overline{\flat} \mathbf{E} \\ & \searrow && \nearrow \\ && ʃ \mathbf{E} }$

shows, for $\mathbf{E} = \mathbf{B}^p U(1)_{conn}$, that this is the pullback square
$\array{ && \mathbf{\Omega}^{p+1}_{cl} \\ & \nearrow && \searrow \\ \mathbf{B}^p U(1)_{conn} && (pb) && \mathbf{B}^{p+1} \flat \mathbb{R} \\ & \searrow && \nearrow \\ && \mathbf{B}^{p+1}\mathbb{Z} }$

that, analytically, may be used to construct $\mathbf{B}^p U(1)_{conn}$.
Generally, there are many cohesive spectra $\mathbf{E}$ (differential cohomology theories) that lift a given plain spectrum $ʃ \mathbf{E}$ (generalized but otherwise plain cohomology theories) through the unit of the shape modality. If one starts with a plain spectrum and asks for a differential refinement, then one way to do this is to choose a chain complex model for its real-ification, truncate that chain complex and use that truncation as $\overline{\flat} \mathbf{E}$. Then the resulting fiber product constructs the differential refinement $\mathbf{E}$.
This is what people used to do, starting with Hopkins-Singer.
But the synthetic formulation of forms and FTC proceeds from the opposite perspective. Choose any cohesive spectrum $\mathbf{E}$. Then by just applying the modalities of cohesion, one discovers the closed forms with coefficients in $\mathbf{E}$, the forms modulo exact forms with coefficients in $\mathbf{E}$, the de Rham differential for forms with $\mathbf{E}$-coefficients and much more, and one finds an integration map which satisfies FTC.
If we do this in the model over smooth manifolds, and if the chosen $\mathbf{E}$ happens to be of the form $\mathbf{E} = \mathbf{B}^p U(1)_{conn}$, then this synthetic formalization of differential forms reduces to the standard one. If we are in another cohesive $\infty$-topos, or choose $\mathbf{E}$ to be another cohesive spectrum (say differential tmf or differential bordism) then we get something much richer, which however, by the synthetic theory, will still behave in key ways as ordinary differential forms do.
it is only in cohomology that they agree. […] the synthetic story only recovers the fundamental theorem of calculus on the level of cohomology.
No, the theorem works at the level of differential forms (modulo exact forms).
I have pointed that out in #9: the integration morphism they construct lands in the cohomology with coefficients in the spectrum that is the homotopy fiber $\overline{ʃ} \mathbf{E}$ of the unit of the shape modality of the given cohesive spectrum $\mathbf{E}$. For the relevant case of $\mathbf{E}$ being ordinary differential cohomology, the cohomology groups of this homotopy fiber spectrum are the groups of differential forms (modulo exact forms). So “in cohomology” means “in $\overline{ʃ} \mathbf{E}$-cohomology”, and these cohomology groups are concentrated in degree 0 on the group of differential forms (modulo exact forms).
(All this is explained at differential cohomology hexagon.)
To highlight this last point again: every cohesive spectrum $\mathbf{E}$ sits in an exact hexagon of the following form
$\array{ && \overline{ʃ} \mathbf{E} && \stackrel{\mathbf{d}}{\longrightarrow} && \overline{\flat} \mathbf{E} \\ & \nearrow & & \searrow & & \nearrow && \searrow \\ \overline{ʃ} \flat \mathbf{E} && && \mathbf{E} && && ʃ \overline{\flat} \mathbf{E} \\ & \searrow & & \nearrow & & \searrow && \nearrow \\ && \flat \mathbf{E} && \longrightarrow && ʃ \mathbf{E} } \,.$

This diagram alone says, in particular, that
$\overline{\flat} \mathbf{E}$ behaves like closed $\mathbf{E}$-valued curvature forms, because the diagram exhibits this as the obstruction classes to $\mathbf{E}$-bundles-with-connection being flat (in the image of $\flat \mathbf{E}\to \mathbf{E}$)
$\overline{ʃ} \mathbf{E}$ behaves like co-exact forms, because $\overline{ʃ}\mathbf{E} \longrightarrow \mathbf{E}$ behaves like regarding such objects as connections on trivial $\mathbf{E}$-bundles.
That’s what “synthetic” means: we don’t declare that $\overline{ʃ}\mathbf{E}$ and $\overline{\flat} \mathbf{E}$ are built like differential forms in the textbooks, instead we guarantee that these behave like differential forms in several key respects. For instance we guarantee that if you take a differential form and regard it as a connection on a trivial bundle, then the curvature of that connection is the de Rham differential of the original form. (That’s what the commutativity of the top triangle in the hexagon says).
It’s the duck principle: if it walks like a duck and quacks like a duck, then we may regard it as a duck.
Now to make contact between this synthetic formalization of differential forms and the traditional analytic construction: we find a model of the synthetic axioms for which the general abstract duck is incarnated as the concrete duck that we want to see.
This turns out to be the case if we choose the cohesive $\infty$-topos to be that over smooth manifolds, and choose $\mathbf{E}$ to be the image under Dold-Kan of a shift of the Deligne complex (the shift corresponds to which degree the differential forms will have).
For that model and choice $\mathbf{E} = \mathbf{B}^n U(1)_{conn}$, then one finds that the above general abstract hexagon reduces to the familiar hexagon of ordinary differential cohomology, which looks like so:
$\array{ && \Omega^{n}(X)/im(d_{dR}) && \stackrel{d_{dR}}{\longrightarrow} && \Omega^{n+1}_{cl}(X) \\ & \nearrow && \searrow^{\mathrlap{a}} && \nearrow && \searrow \\ H^{n}(X, \mathbb{R}) && && H^0(X,\mathbf{B}^n U(1)_{conn}) && && H^{n+1}(X,\mathbb{R}) \\ & \searrow && \nearrow && \searrow && \nearrow \\ && H^{n}(X,U(1)) && \underset{}{\longrightarrow} && H^{n+1}(X,\mathbb{Z}) }$

This is displayed now in cohomology: the original hexagon of spectra, regarded as a hexagon of generalized cohomology theories, is evaluated on the manifold $X$, and what is displayed are the resulting cohomology groups. This is the point that led to confusion above. Notice that in the top row the sets (additive groups) of differential forms appear as the cohomology groups of the cohomology theories $\overline{ʃ} \mathbf{E}$ and $\overline{\flat}\mathbf{E}$, respectively.
The abstractly constructed integration map lands in the top left item of the hexagon, hence it lands in the group of co-exact differential forms $\Omega^n(X)/im(d_{dR})$.
The point in question is different from the one you are addressing, Urs. I had already understood from #9 that this is what you had in mind. The point is that to describe the integration map as the classical one, one has to step outside cohesive homotopy theory and make a choice. This choice cannot be accounted for synthetically.
[Edited from here, to try to make things clearer.]
To put it another way, it is not at all obvious, because of this stepping outside of cohesive homotopy theory, that there is a model of the synthetic theory for which we recover a proof of the fundamental theorem of calculus without depending upon it (i.e. without going round in circles). The question in these terms is: is there a model of the synthetic theory which, for some choice of object, recovers exactly the fundamental theorem of calculus for the real numbers, without using this theorem in the model?
But this particular spectrum object $\mathbf{B}^p U(1)_{conn}$ is constructed only externally, by using differential forms. That’s what I’m saying — you don’t get differential forms out unless you already put them in.
Not the standard differential forms, right, they are only one model for the synthetic axioms.
This is exactly as it should be for a synthetic construction. Similarly the Kock-Lawvere axioms don’t produce the standard smooth tangent bundles of manifolds, unless one passes to a suitable model for them and chooses an ordinary smooth base manifold $X$ in this model. Nevertheless, the axioms guarantee that every object of the form $[\mathbb{D}^1,X]$ does behave in crucial ways as the tangent bundle of any $X$ is supposed to behave. And all theorems true for the abstract $[\mathbb{D}^1, X]$ will hold true also for the standard smooth tangent bundle. Hence whenever we prove an abstract theorem about $[\mathbb{D}^1, X]$, using just the Kock-Lawvere axioms, then we have given a “synthetic proof” of the corresponding statement about the traditional smooth tangent bundle.
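For reference, the Kock–Lawvere axiom alluded to here says (in one standard formulation, with $R$ the line object and $\mathbb{D}^1 = \{ d \in R \mid d^2 = 0 \}$ the first-order infinitesimal interval) that every map out of $\mathbb{D}^1$ is uniquely affine:

```latex
\[
  \text{(KL)} \qquad
  \forall\, f \colon \mathbb{D}^1 \to R .\;
  \exists!\, b \in R .\;
  \forall\, d \in \mathbb{D}^1 .\quad
  f(d) \;=\; f(0) + b \cdot d \,,
\]
```

so that the unique coefficient $b$ plays the role of the derivative of $f$ at $0$, and the internal hom $[\mathbb{D}^1, X]$ plays the role of the tangent bundle of $X$.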
This is not a defect, this is much of the point of synthetic theory. If the synthetic theory would give only exactly the traditional thing, that should worry us.
is there a model of the synthetic theory which, for some choice of object, recovers exactly the fundamental theorem of calculus for the real numbers, without using this theorem in the model?
Yes, the abstract properties of cohesive spectra imply that the integration map exists and satisfies
$FTC \;:\; \int_{I} \circ \mathbf{d} = (-)|_1 - (-)|_0$

What one needs to check is that in the standard smooth model the abstractly defined integration map comes out as the standard integration of differential forms. Once this is established, it follows that $FTC$ applies to the standard integration of differential forms.
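If I am reading the identity correctly, then in the standard smooth model, where $\mathbf{d}$ comes out as the de Rham differential and $\int_I$ as fiberwise integration over the interval $I = [0,1]$, it specializes for a smooth function $f$ on $I$ to the classical statement:

```latex
\[
  \int_I \mathbf{d} f
  \;=\; \int_0^1 f'(t)\, \mathrm{d}t
  \;=\; f(1) - f(0)
  \;=\; f|_1 - f|_0 \,.
\]
```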
Similarly, if you pass to any other cohesive $\infty$-topos, then there is guaranteed to be an integration map and it is guaranteed to satisfy the above identification FTC.
This is not supposed to be unfamiliar, this is how all abstract theory works. Once you check that your context is a model of some general abstract axioms, then all the implications of the general axioms immediately become theorems in your model, which otherwise you might have had to laboriously check explicitly.
This is not “from nothing” magic; instead it’s the “from the minimum” principle: you still do need to prove something explicitly in the model, namely that it satisfies those general abstract axioms. But what this buys you is that doing so is sufficient to imply all the consequences of the abstract axioms.
Similarly the Kock-Lawvere axioms don’t produce the standard smooth tangent bundles of manifolds, unless one passes to a suitable model for them and chooses an ordinary smooth base manifold $X$ in this model.
I don’t think that analogy holds. The Kock-Lawvere axioms (about a particular “line object”) give us a canonical way to construct, from any object $X$, a “tangent object”, which in the case of a smooth manifold $X$ reproduces the tangent bundle. Here there is no canonicity; even once given a spectrum object like $\mathbf{B}^n \flat \mathbb{R}$, there is no canonical way to produce its cycles until we make a further choice.
This is not supposed to be unfamiliar, this is how all abstract theory works. Once you check that your context is a model of some general abstract axioms, then all the implications of the general axioms immediately become theorems in your model, which otherwise you might have had to laboriously check explicitly.
I believe that Mike and I both understand perfectly well how a synthetic formalism works :-).
What one needs to check is that in the standard smooth model the abstractly defined integration map comes out as the standard integration of differential forms. Once this is established, it follows that FTC applies to the standard integration of differential forms.
This does not answer the question, which was: can this be done (i.e. the standard model be constructed and this check made) without relying on the fundamental theorem of calculus in the model, or in constructing the model? I am highly skeptical.
I am not arguing at all that cohesive homotopy theory has nothing going for it. Everything that I have written is specific to the classical fundamental theorem of calculus; I cannot, of course, prove that cohesive homotopy theory is trying to establish this theorem by magic, but that is my strong feeling on the matter. From an aesthetic point of view, I don’t see much wrong in taking the fundamental theorem of calculus as an axiom, as in synthetic differential geometry.
Let me try to put in stark terms why my question (and also Mike’s, ultimately, as I see it) is relevant: it is not a particularly impressive feature of a synthetic theory that it can prove a tautology! I mean, if we need the fundamental theorem of calculus in the model, then we are using the fundamental theorem of calculus to prove the fundamental theorem of calculus. We could just as well use the synthetic theory to tautologically prove any theorem in the model! That is to say: there is then no longer any content in the statement that we have a synthetic fundamental theorem of calculus.
This does not answer the question, which was: can this be done (i.e. the standard model be constructed and this check made) without relying on the fundamental theorem of calculus in the model, or in constructing the model?
That’s not my question. I’m perfectly happy if classical FTC is used in constructing the model and verifying that the synthetic construction reproduces the classical result. I just want the synthetic construction to be purely synthetic, using only the operations available in cohesive HoTT, rather than relying on a choice that builds in the desired answer.
Here’s an example of the sort of thing I want. In type theory (or any foundations) one can define the Dedekind real numbers. It turns out that when this construction is interpreted in various “toposes of spaces” (such as Johnstone’s topological topos, or the topos of “continuous sets”) the result is the Dedekind real numbers equipped with their classical Euclidean topology. Obviously the classical Euclidean topology can be already used in constructing the model — continuous sets are sheaves on the site of cartesian spaces $\mathbb{R}^n$ and continuous functions relative to that topology. The point is that the construction of real numbers in type theory is entirely canonical and doesn’t “build in” the topology in any way, it just turns out to reproduce the desired result in the model. That’s the sort of thing I would want in order to say we have a synthetic theory of differential forms and integration.
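As a toy illustration of that canonicity (purely my own sketch, with a crude finite witness search standing in for the existential quantifier in the cut condition), one can model a Dedekind real by its lower cut and define addition cut-wise, without ever mentioning a topology:

```python
from fractions import Fraction

class DedekindReal:
    """A Dedekind real, represented (classically, as a sketch) by its
    lower cut: the predicate selecting the rationals strictly below it."""

    def __init__(self, lower):
        self.lower = lower  # lower : Fraction -> bool

    def __add__(self, other):
        # q < x + y  iff  q = a + b  with  a < x  and  b < y.
        # The existential over a is approximated by a finite grid here.
        def lower(q):
            return any(self.lower(a) and other.lower(q - a)
                       for a in candidates())
        return DedekindReal(lower)

def candidates():
    # crude finite witness search; enough for the small demo below
    return (Fraction(n, 48) for n in range(-480, 481))

def rational(r):
    """Embed a rational number as a Dedekind real."""
    r = Fraction(r)
    return DedekindReal(lambda q: q < r)

x, y = rational(Fraction(1, 2)), rational(Fraction(1, 3))
s = x + y  # should behave like 5/6
print(s.lower(Fraction(3, 4)))  # True:  3/4 < 1/2 + 1/3
print(s.lower(Fraction(5, 6)))  # False: 5/6 is not below itself
```

The point, mirroring the example above, is that nothing in this definition mentions the Euclidean topology; in the toposes cited, the analogous internal construction nevertheless turns out to reproduce it.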
Let’s see if I’m getting this. We’re all agreed that entirely canonical is the construction of the hexagon for a cohesive spectrum $\mathbf{E}$? So canonically there are $\mathbf{E}$-valued forms associated to it, and an entirely synthetic result that when interpreted in a specific case we would call ’FTC’. We might then want to call that synthetic result $\mathbf{E}$-FTC, in a similar way to how we might say $\mathbf{E}$-valued form.
But one might respond that this $\mathbf{E}$-FTC is ill-named. It’s only when the $\mathbf{E}$ has special features that we’d like to see the result as FTC-like. And the information to say when $\mathbf{E}$ has that quality can’t be provided synthetically. [Maybe at this point we even object to $\mathbf{E}$-valued forms for similar reasons.] E.g., it’s only when $\mathbf{E}$ looks something like $\mathbf{B}^p U(1)_{conn}$ in the $\infty$-topos over smooth manifolds that we should speak of an FTC. And then if the qualities picking out spectra like $\mathbf{B}^p U(1)_{conn}$ can’t be given synthetically, we have no right to speak of a synthetic FTC.
If this is roughly right about the issue at stake, is that not something that could also have been raised about Felix Wellen’s Coq proof of Urs’s synthetic proofs concerning bundles (#26)? In the discussion afterwards, it was pointed out that very little structure was used, and so the result could have a very wide range of interpretation. Should one then say that the result isn’t really about trivialising disk bundles, and that’s just one interpretation?
So when abstracting from a situation are you allowed to call it by the same name?
That’s not my question.
Thanks for trying to clarify! From your explanation and your example, I still think that our points of view are essentially the same, just expressed in different ways. In particular, I entirely agree that the fundamental theorem of calculus in the model should ’turn out’ to be the classical one; this is exactly what I have been getting at. I don’t really see that the point of view as I expressed it is very different, but if you wish to split hairs, that’s OK :-). I’ll not go into this further, though; I am content with the statement that I made as roughly representing my point of view. If you like, you can qualify ’being used in the model’ to ’being used in the model in a way which is essential to being able to identify the theorem in the model as being the classical one’. Just using the fundamental theorem of calculus in the course of constructing a model, and then directly interpreting a synthetic theorem in this model such that it recovers the classical fundamental theorem of calculus, would be fine by me.
and an entirely synthetic result that when interpreted in a specific case we would call ’FTC’.
What is crucial here is what one means by ’interpreted’. If the interpretation were direct (just plug some synthetically constructed/proven things into the model, and recover the classical fundamental theorem of calculus), there would be no problem at all. The issue, at least for me, is that the interpretation Urs has been describing does not seem to be direct in this sense at all. One has to step outside the synthetic theory to obtain it, using things that are at least closely related to, if not actually the same as, the things one wants to obtain in the end. And, for me, for the reasons that I tried to explain, this invalidates calling a synthetic result a synthetic version of a result in the model.
In other words, for me at least, the issue is not the one that you describe, David. The issue is whether or not there is any case in which we actually recover the classical result in an ’acceptable’ way from the point of view of what it means to have a synthetic theory of something.
I think that possibly what bothers me is not mainly calling this result “FTC”, but calling the things it applies to “differential forms with coefficients in $E$”. The latter implies that there is an abstract notion of “differential form” that’s defined in the synthetic theory, and that we obtain the classical notion of “differential form” by choosing a standard sort of “coefficients” $E$ like $\mathbb{R}$. In particular, there is a classical notion of “differential form with coefficients in $V$” for a vector space $V$, and the phrase “differential forms with coefficients in $E$” makes it seem like $E$ should be playing a role similar to $V$. This would be fine if the $E$ reproducing the standard theory were obtained in some relatively canonical way from $\mathbb{R}$, such as by applying modalities, delooping, and taking fibers or cofibers.
But actually, in order to get out the standard theory of differential forms, we have to choose an $E$ that already “knows” what classical differential forms are. So the synthetic construction is not “adding the differential-form-ness” to a classical object like $\mathbb{R}$, instead it’s starting with a classical object that might already consist of differential forms and doing something else to them. So it doesn’t seem correct to call $E$ the “coefficients” or to put the phrase “differential forms” outside of $E$.
Objection to “FTC” is a corollary of this. The familiar FTC is a theorem about differential forms. If the synthetic theorem is not a theorem about something synthetic called “differential forms”, then it feels wrong to call it “FTC”.
I think that possibly what bothers me is not mainly calling this result “FTC”, but calling the things it applies to “differential forms with coefficients in $E$”.
Ok good, that was my query in square brackets in #39.
To render this into a simpler case, is the concern that it’s as though you’re working on $\mathbb{R}$ and its additive structure, then someone provides an axiomisation of groups, and declares that the composition is a form of addition, and so that they have devised a synthetic theory of addition? And even if they then add in a commutativity axiom, you’re still left feeling that’s not quite addition?
[Hmm, I seem to be straying into the discussion on the other thread.]
I am in agreement with what Mike wrote in #42. What I was attempting to describe was why, on a slightly more general level, I don’t view the result as a synthetic fundamental theorem of calculus. What Mike wrote is a more particular manifestation of this in this case. I should perhaps emphasise that the synthetic result may well be perfectly interesting in its own right; what I am objecting to is only the claim that this result recovers by a direct interpretation the classical fundamental theorem of calculus in a model, i.e., we do not, in my view, have a magical new proof of the fundamental theorem of calculus.
Regarding your analogy in #43, David: for me at least, I still don’t think this is quite right. It’s not all that easy to provide an accurate analogy, but I’ll try. For me, the situation is analogous to asserting that the Quillen equivalence between the Bousfield localisation of the model structure on simplicial sets to 1-types and the Bousfield localisation of the Serre model structure on topological spaces to 1-types gives a simple proof of the homotopy hypothesis for 1-types, since there is a Quillen equivalence between groupoids and this model structure of 1-types on simplicial sets. The point is that this assertion is completely misleading, because all of the content from the point of view of the homotopy hypothesis is in the latter Quillen equivalence, not in the one between 1-types as simplicial sets and 1-types as topological spaces. This is not to say that the latter Quillen equivalence is trivial, not at all; it is just that it is not fundamentally what the homotopy hypothesis is about.
I’m not going to touch the homotopy hypothesis example (“what the homotopy hypothesis is really about” is a rabbit hole I don’t want to go down in this thread), but I would say it’s more as though I was working on $\mathbb{R}$ and its additive structure and then someone axiomatizes groups and declares that they have devised a synthetic theory of real numbers.
This is actually a really interesting philosophico-linguistic point. When does a generalization deserve to be called by the same name as the thing it generalizes? Suppose a theorem or construction with hypotheses X and conclusion A is generalized to one with hypotheses Y and conclusion B, in the sense that X is a special case of Y and in that special case B reduces to A. In this situation I think it’s not unreasonable to give the theorem the same name (though in any particular case one should worry about whether it’s “close enough” to merit that). But in general, I don’t think that such a generalization justifies giving Y itself the same name as X.
For instance, when X = the real numbers, we can take A = addition, and then Y = groups and B = the group operation. So it’s reasonable to call the group operation a sort of addition (especially if our groups are commutative), but it’s not reasonable to call the elements of a group “real numbers”. Or when X = the category of sets and A = the quotient by an equivalence relation, and Y = an arbitrary category and B = the coequalizer of an internal equivalence relation; it’s reasonable to call such a coequalizer a “quotient”, but it’s not reasonable to call the objects of an arbitrary category “sets”.
The example at hand is something like X = the Deligne complex of differential forms, A = closed p-forms, whereas Y = a cohesive spectrum object and B = the cofiber of $\flat Y \to Y$. So we can call B the “cycles of Y”, but we shouldn’t call Y “differential forms”. (And we shouldn’t call B “differential forms” either, since the construction $X\mapsto A$ is not called “differential forms”, the differential forms already being present in X.)
“what the homotopy hypothesis is really about” is a rabbit hole I don’t want to go down in this thread
Very wise :-). I just couldn’t think of anything else off the top of my head.
it’s more as though I was working on $\mathbb{R}$ and its additive structure and then someone axiomatizes groups and declares that they have devised a synthetic theory of real numbers.
Yes, for me, that’s a good analogy. Perhaps it takes it a bit far, but it’s certainly along the right lines. I also liked your elaboration. Maybe I should have not entered this thread at all, and just left you to it :-).
PS - The only thing that the elaboration that Mike gave maybe does not completely capture is that, in the setting he described, one could still prove a theorem for groups and specialise it to the real numbers. It would then be reasonable to say that the result for real numbers has a synthetic proof. For me, at least, this is not the case for the fundamental theorem of calculus. However, the analogy itself that Mike gave, that I quoted, is accurate in this respect, I feel: just as the construction of the real numbers is independent of an axiomatisation of groups, the classical fundamental theorem of calculus is, it seems to me, independent, in the same sense, of Theorem 3.2 in the paper of Nikolaus et al. What 4.1 in the paper of Nikolaus et al does is show that the classical fundamental theorem of calculus has a new interpretation; it does not provide a new, stand-alone proof of it.
Similarly the Kock-Lawvere axioms don’t produce the standard smooth tangent bundles of manifolds, unless one passes to a suitable model for them and chooses an ordinary smooth base manifold $X$ in this model.
I don’t think that analogy holds. The Kock-Lawvere axioms (about a particular “line object”) give us a canonical way to construct, from any object $X$, a “tangent object”, which in the case of a smooth manifold $X$ reproduces the tangent bundle. Here there is no canonicity; even once given a spectrum object like $\mathbf{B}^n \flat \mathbb{R}$, there is no canonical way to produce its cycles until we make a further choice.
Given any cohesive spectrum $\mathbf{E}$, the corresponding “cycles” are $\overline{\flat}\mathbf{E}$. And just as in the KL-case, there is a choice for $\mathbf{E}$ such that this reproduces the familiar object.
You seem to keep thinking of $\mathbf{B}^n \flat \mathbb{R}$ as something more fundamental, and of $\mathbf{B}^n U(1)_{conn}$ as something that is only derived from it. But both are smooth spectra, and there is no reason to regard the latter as defective where the first is not. Choosing the latter for $\mathbf{E}$ is analogous to choosing a smooth manifold for $X$.
it’s reasonable to call the group operation a sort of addition (especially if our groups are commutative), but it’s not reasonable to call the elements of a group “real numbers”.
The name one gives is the more reasonable the more core properties of what that name designates one has abstract proofs for. In the case at hand we have objects to which we may apply a differential and an integration map that are related by an FTC, we have that the objects arise as connections and as curvatures, such that the relation between the two is again that differential, and we have a Chern character map which is represented by these objects. If you ask anyone what kind of object it is that admits these operations with these relations, the evident answer is “differential forms”.
the question, which was: can this be done (i.e. the standard model be constructed and this check made) without relying on the fundamental theorem of calculus in the model, or in constructing the model? I am highly skeptical.
It should be evident that you don’t need the fundamental theorem of calculus to construct what we called the “standard model” above, namely the category of simplicial sheaves on smooth manifolds. If you are sceptical that from just the cohesion of this model one proves the FTC, maybe you could point to the first step in the proofs of the BNV article that you find problematic.
In the case at hand we have objects to which we may apply a differential and an integration map that are related by an FTC, we have that the objects arise as connections and as curvatures, such that the relation between the two is again that differential, and we have a Chern character map which is represented by these objects. If you ask anyone what kind of object it is that admits these operations with these relations, the evident answer is “differential forms”.
This is surely the kind of evidence to settle particular instances of the “really interesting philosophico-linguistic point” Mike alludes to above. You can also interpret the construction in the most distant setting and see if you’re still prepared to call it by the same name.
So here, are there cases of $\overline{\flat}\mathbf{E}$ in global equivariant homotopy theory which seem to stretch this naming? Or in the arithmetic case, differential cohesion and idelic structure?
Of course, in the latter case we have people prepared to speak of “arithmetic jet spaces” and even Arithmetic Differential Equations of Painlevé VI Type.
If you are sceptical that from just the cohesion of this model one proves the FTC, maybe you could point to the first step in the proofs of the BNV article that you find problematic.
Yes, I am indeed very skeptical :-). It is far from my expertise, but I believe that the Poincaré lemma is certainly being used when the identification with the de Rham complex is made. The proof of the Poincaré lemma relies on the fundamental theorem of calculus, as far as I know.
It should be evident that you don’t need the fundamental theorem of calculus to construct what we called the “standard model” above, namely the category of simplicial sheaves on smooth manifolds.
Yes, I was trying to explain in general why one should be suspicious when one is doing more than directly interpreting a synthetic result. It is indisputably the case that more than a direct intepretation is taking place here.
If you forgive me saying so, David, I think that you are jumping the gun rather. When one claims to have a new proof of the fundamental theorem of calculus, one should be very careful about justifying that. It is not, for me at least, a simple question of nomenclature.
When one claims to have a new proof of the fundamental theorem of calculus
That’s a thought you brought up. What had been under discussion here before is the statement that
1) There is a theorem $FTC_{abstract}$ provable in abstract cohesive homotopy theory;
2) under the interpretation $Int_{Mfd}$ of that theory in the model given by the $\infty$-topos over smooth manifolds, this theorem comes out as the classical version $FTC_{classical}$, in that $Int_{Mfd}(FTC_{abstract}) \simeq FTC_{classical}$.
This is a theorem by BNV, only that they don’t put it in these words.
More later, but regarding the first point in #48, $\mathbf{B}^n \flat \mathbb{R}$ is more fundamental than $\mathbf{B}^n U(1)_{conn}$ because it can be defined synthetically. Both of them are smooth spectra, but there is a term I can write down in smooth-cohesive type theory whose interpretation in the model is $\mathbf{B}^n \flat \mathbb{R}$, whereas that is not the case (as far as I know) for $\mathbf{B}^n U(1)_{conn}$.
That’s a thought you brought up.
I don’t wish to get into a petty discussion, but I don’t think this is accurate. This certainly seemed to be what David was originally suggesting in the original n-café discussion here, and Mike’s reply seems to indicate that he interpreted it the same way as I did. Your own comments until now also did not suggest to me that you were denying the claim; in particular, I don’t know how else you intended the final sentence of #49 or the first sentence of #34 to be interpreted!
1) There is a theorem FTCabstract FTC_{abstract} provable in abstract cohesive homotopy theory;
I don’t think anyone is casting much doubt on this. There is no actual proof written down of this precise statement as far as I can see, only the proof in the paper of Nikolaus et al, but I can certainly believe that this proof can be translated to cohesive homotopy theory.
under the interpretation $Int_{Mfd}$ of that theory in the model given by the $\infty$-topos over smooth manifolds, this theorem comes out as the classical version $FTC_{classical}$, in that $Int_{Mfd}(FTC_{abstract}) \simeq FTC_{classical}$.
There are two problems with this. Firstly, ’comes out’ should be properly clarified. Indeed, I would regard it as inappropriate. The casual reader, just as David seems to have done, would be forgiven for interpreting that you meant what is usually meant when working with synthetic theories, namely that when interpreted in the model, the synthetic result recovers exactly the classical result. This is not the case for the fundamental theorem of calculus here: the theorem in the model does not, as far as I can see, recover the classical result at all, except in an entirely circular way (i.e. relying on the classical result).
Secondly, along the same lines, I don’t know what you mean by $\simeq$, but again, it is completely misleading in any sense I can think of. There is an implication $FTC_{classical} + Int_{Mfd}(FTC_{abstract}) \implies FTC_{classical}$, maybe with other things on the left hand side as well, but this is not very interesting!
Let me just repeat again that I have no vendetta here, nor am I at all negative to cohesive homotopy theory; I am sure one can do many wonderful things with it. But I see a certain tendency to hyperbole, and I think it is healthy to be realistic.
In the case at hand we have objects to which we may apply a differential and an integration map that are related by an FTC, we have that the objects arise as connections and as curvatures, such that the relation between the two is again that differential, and we have a Chern character map which is represented by these objects. If you ask anyone what kind of object it is that admits these operations with these relations, the evident answer is “differential forms”.
One might answer that you have begged the question by naming all of these other objects with words like “connection”, “curvature”, “differential”, “integration”, and “Chern character”. But in any case I don’t think this overcomes my argument in #45 that distinguishes between the naming of objects/hypotheses and the naming of theorems/constructions/conclusions.
It should be evident that you don’t need the fundamental theorem of calculus to construct what we called the “standard model” above, namely the category of simplicial sheaves on smooth manifolds.
One only needs to show that $Cart$ and the super version form $\infty$-cohesive sites, right? The meat is where one shows that there is a morphism involving $\mathbf{B}^p U(1)_{conn}$, I think. It would be interesting to know if the Reverse Mathematics people know whether the Poincaré Lemma implies the traditional FTC (though from what I’ve read, they are happier discussing 19th-century analysis and point-set theory than anything of particular use in contemporary geometry).
@ Mike
If one can construct $\mathbb{R}$, can one construct $U(1)$ and its deloopings? Or is it more the case that the former is really $\flat \mathbb{R}$ constructed using Cauchy sequences, which is insufficient to get the smooth $U(1)$ (or even the continuous $U(1)$)?
(I have little issue with Urs deciding to call the result the FTC, since others have appropriated names of old theorems for vast and sometimes tenuous generalisations before. But it’s good this is discussed here first rather than up against possibly stronger opposition out in the general sphere of mathematicians. The framing of course matters.)
One can certainly construct $U(1)$. The continuous version can be constructed from the Dedekind $\mathbb{R}$, while the smooth version can be constructed from the “smooth reals” (which I’m reasonably happy to assume as an axiomatic given for synthetic smoothness – SDG does that too). But this $U(1)_{conn}$ thing is not, so far as I know, constructed from $U(1)$ in any synthetic way; it’s just called that because in the model it’s related to $U(1)$ in some reasonable way.
Mike, re #56, how far should we allow the meaning of ’magnitude’ to stretch? You’re happy with the ’magnitude of an enriched category’ and then magnitude homology? Perhaps this is less troubling since magnitude never had a well-entrenched meaning.
… distinguishes between the naming of objects/hypotheses and the naming of theorems/constructions/conclusions.
That way of distinguishing the two sides would surely surprise people, who might wonder whether there is an essential difference in kind between a hypothesis and a theorem. Isn’t one just a proved version of the other? But I guess in the brave new world where proving = constructing (I can’t now remember in which Café posts you discuss this), there’s the naming of a type, and then the construction of an element of that type.
Could we not gain a little clarity on this naming issue by speaking in type-theoretic terms? I guess we have a type $CohesiveSpectrum$, and you’re concerned that we don’t have the means to construct an element acting like $U(1)_{conn}$.
We could even in principle dispense with $U(1)$ and consider an arbitrary group object $G$, then ask what it is about $B^p G_{conn}$ (for the range of $p$ that makes sense) that is special, other than being some object that can be given in the ’standard model’ of differential cohesion.
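To make the ’naming of a type, then constructing an element’ picture concrete, here is a minimal Lean 4 sketch (the names `AddComm` and `addComm_proof` are hypothetical, chosen only for illustration): under propositions-as-types, stating a theorem names a type, and proving it constructs a term of that type.

```lean
-- Stating a theorem names a type...
def AddComm : Prop := ∀ m n : Nat, m + n = n + m

-- ...and proving it constructs a term (an element) of that type.
theorem addComm_proof : AddComm := fun m n => Nat.add_comm m n
```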
The real questions are:
- What is $B G_{conn}$ in concrete formulas?
- Why are these formulas what they are? What is the general abstract concept of an ∞-connection? What are its defining abstract properties?
A comprehensive answer to the second question is provided by the general abstract concepts discussed in section 4. (dcct, pp. 119–120)
It seems more like 5.2.13 where we get down to defining $B G_{conn}$ as a pullback. What is that mysterious $C$ introduced there? We are sent back to Definition 5.2.70. How do we construct that $\Omega_{cl}(-,A)$?
@David C #59, by “hypothesis” I don’t mean “conjecture”, but rather the “hypotheses of a theorem”, i.e. the “if” part of an if-then. In type-theoretic terms, maybe one way to make the point is that I feel more comfortable using the same name for a generalization of a function than I do for the name of a type. (To be honest, I’m still trying to pin down exactly what the point is, so I may not be entirely consistent, but at the moment that seems like a reasonable way to say it.)
In a concrete programming language, overloading the name of a function is often not problematic, because you can use the types of the arguments and the expected return value to infer which function must be meant. But if you have two types both called “Int”, that is a recipe for confusion.
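As an illustration of that point (a minimal sketch, not drawn from the thread), Python’s `functools.singledispatch` resolves an overloaded function name from the argument’s type, whereas nothing analogous could disambiguate two distinct types sharing one name:

```python
from functools import singledispatch

# Overloading a *function* name: the argument's type picks the implementation.
@singledispatch
def describe(x):
    return "unknown"

@describe.register
def _(x: int):
    return "an integer"

@describe.register
def _(x: str):
    return "a string"

print(describe(3))     # an integer
print(describe("hi"))  # a string
# By contrast, two distinct types both named "Int" would leave a use site
# with no argument to dispatch on -- nothing can tell them apart.
```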
Mike, given that a function is an element of a function type, and given that any element of a type is a function (e.g., even a global one from $\mathbf{1}$ as $A^\mathbf{1} = A$), isn’t the distinction really between terms and types?
Yes…and every type is also a term, namely a term belonging to the universe. So maybe I’m talking rubbish. Perhaps the point is just that I can’t see any term/function that could classically be called “differential forms” of which this construction is a generalization.
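The identification $A^{\mathbf{1}} = A$ used above, i.e. that an element of $A$ is the same thing as a global function $\mathbf{1} \to A$, can be sketched in Lean 4 (with `Unit` standing in for $\mathbf{1}$; the helper names are ad hoc):

```lean
-- An element of A determines a function Unit → A, and conversely.
def toFun {A : Type} (a : A) : Unit → A := fun _ => a
def ofFun {A : Type} (f : Unit → A) : A := f ()

-- The two directions are mutually inverse, exhibiting A^1 ≃ A.
theorem ofFun_toFun {A : Type} (a : A) : ofFun (toFun a) = a := rfl
theorem toFun_ofFun {A : Type} (f : Unit → A) : toFun (ofFun f) = f :=
  funext fun _ => rfl
```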
Regarding Mike’s (#42)
in order to get out the standard theory of differential forms, we have to choose an $\mathbf{E}$ that already “knows” what classical differential forms are,
while having a little dig around nForum entries, I came upon this comment, which perhaps points to a means of reducing the lack of synthetic-ness of the choice of filtration:
what is certainly true is that as we progress from cohesion – where non-closed differential forms only appear via a choice of Hodge filtration of the de Rham coefficient objects – to solid cohesion, then it is possible to require that the previously chosen Hodge filtrations are compatible with the concept of differential forms as seen by the supergeometry, hence it is possible to sharpen the axioms and leave less to human choice, more to the gods. However, making the connection formal seems not to be particularly elegant, as it involves going to function algebras etc. That’s why I am hesitant of phrasing it as an additional axiom.
That sounds intriguing; I would like to understand it better.
@David C, here’s another potentially relevant quote, from Gaitsgory, paragraph 0.1.4 (emphasis added):
0.1.4. The small point of difference is how we derive the numerical identity (0.3) from the formula for $H^*(Bun_G)$. In [Main Text] this is done by considering the filtration by powers of the maximal ideal on the algebra of cochains $C^*(Bun_G)$. Since we live in the world of higher algebra, such a filtration is not something naive, but rather a certain canonical construction in homotopy theory.
where “[Main Text]” is Gaitsgory-Lurie.
Whenever this thread pops up, it seems to me that people are looking for something, but I don’t know what that is!
To me this whole matter is very simple.
1) There is a synthetic result (probably) which Urs wishes to call the fundamental theorem of calculus. He can of course call it whatever he likes.
2) There was an implicit claim, or so it appeared, that this synthetic theorem has as a corollary the classical fundamental theorem of calculus. This claim is completely false.
That’s it. The synthetic result as it now stands is a million miles away from giving a new proof of the fundamental theorem of calculus; it is not just a matter of sorting out a few technicalities. I stick to my earlier-expressed view that one is deluding oneself if one imagines that a few axioms on an $(\infty,1)$-category are going to give a radically simple proof of the fundamental theorem of calculus which does not have anything akin to the hard details of all existing proofs.
My comment was to address the “(probably)” in (1).