I have turned the above answer into a further item in the string theory FAQ, *Why not consider perturbative p-brane scattering for any p?*

Is your question why one does not consider S-matrices built out of sums of correlators of $p+1$-dimensional field theories on diagrams of dimension $p+1 \gt 1+1$? Hence why not use $p$-branes for $p \gt 1$ to define a perturbation theory?

The reason is that it is hard, and seems to be impossible, to make sense of this. There are two problems:

a) for $p \gt 1$ the standard worldvolume action functionals, which are really worldvolume gravity coupled to “matter” (the sigma-model fields), are not renormalizable;

b) for $p \gt 1$ the moduli spaces of $p+1$-manifolds are not controllable.

So for $p \gt 1$ one a) does not know how to define the “Feynman amplitudes” and b) even if one did, one would not know against what to integrate them.

Each of these two problems by itself makes a $p$-brane perturbation theory for $p \gt 1$ hard to come by.

That is, by the way, the reason for the term “M-theory”. First there had been the observation that the super 1-brane in 10d target spacetimes is accompanied by a 2-brane in 11d target spacetimes. This suggested the evident idea that there ought to be a perturbation theory for 2-branes – called membranes – hence that there ought to be *membrane theory* in direct analogy with *string theory*. But the above two problems make a direct such analogy unlikely. Since there might nevertheless be a less obvious, more sophisticated kind of analogy, Witten started saying “M-theory” as an abbreviation, so as not to commit himself to what exactly might really be going on, leaving open for the future whether “M” is for “membrane” or for something else:

As it has been proposed that the eleven-dimensional theory is a supermembrane theory but there are some reasons to doubt that interpretation, we will non-committally call it the M-theory, leaving to the future the relation of M to membranes. (Hořava-Witten 95)

Perhaps a silly question, but I know that one thing that is liked about string worldsheets is that they do not have the deficiencies, vis-à-vis singularities, that the underlying graphs of Feynman diagrams have. But why can’t we consider foams as well as Riemann surfaces? Sure, the theory works nicely with only the latter, but so do non-interacting particles. Or did I just reinvent branes?

Yes, that’s what’s interesting about this explicit deduction of the whole story: one sees which piece of the physics picture really comes from which bit of mathematics. We see here that the worldline exists quite independently of the target spacetime.

Again this is a phenomenon that is more familiar (to some people) in the higher-dimensional case of strings. Here the worldline is replaced by the worldsheet, and it is a famous fact that there are situations where it is impossible to interpret the propagators as exhibiting propagation through anything like a continuous space. The famous examples are the Gepner models.

In this stringy version of the Feynman diagram story, there is a priori no spacetime defined. What in the entry appears as sums $\sum_{x = 1}^k$ over points in spacetime is all absorbed into the worldsheet correlators. All one has in the end is a sum over Riemann surfaces (the higher-dimensional version of Feynman graphs), and each contribution in the sum looks like the probability amplitude for some objects zipping around in some “space” in some way; but that “space” is a priori not defined, and we see it only indirectly via those probability amplitudes.

It seems we start off there just with a “discrete space of $k \in \mathbb{N}$ points”. As yet, there is no further structure on these points?

But then how do I think of worldlines of different lengths between two points? Does this not suggest paths in a space in which the $k$ points sit?

Ah, don’t worry, that was an annoying typo of mine. I am glad that you spotted it.

One might imagine that integrating over $\tau$ means that we have a pair of points, $x, y$, such that the worldline length between them can vary.

Yes, the particle may travel from $x$ to $y$ taking any “time” $\tau$. The integral over $\tau$ accounts for the superposition of all contributions of particles taking any positive time to get from $x$ to $y$. Here you ought to think of $\tau$ not as physical time, but as worldline parameter length. It may be helpful to keep this in mind when generalizing from point particles to strings: then $\tau$ becomes complex, parameterizing the length and diameter of a torus (a string worldsheet), and the integral over $\tau$ is now over the moduli space of complex elliptic curves.

Presumably this finite version is building up to some fuller account of all points in a region?

The appearance and role of the diagrams are already fully visible in the finite version. Passing to infinite-dimensional “path integrals” only adds the complication of how to interpret (by renormalization) the contribution of each diagram to the total sum. For the time being I am too lazy to write more on that.

Oh I see, thanks. There’s something about physics that makes me more dim than usual. Maybe it’s the knowledge that the hope that physical interpretation will be helpful is often misplaced.

Take

Notice that by “Schwinger parameterization” $A^{-1}_{x,y} = \int_0^\infty (\exp(- \tau A))_{x,y} \, d\tau$, and we may think of $(\exp(-\tau A))_{x,y}$ as the propagator for a scalar along its worldline of length $\tau$ from $x$ to $y$ (“worldline formalism”).
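As an aside, this identity (with the exponential read as a matrix exponential) is easy to check numerically. A minimal sketch, with a randomly chosen positive-definite $A$ (all variable names here are my own), approximating the $\tau$-integral by a trapezoid sum and comparing with the direct matrix inverse:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random symmetric positive definite "kinetic operator" A on k points;
# positivity of the spectrum makes the Schwinger integral converge.
k = 4
B = rng.normal(size=(k, k))
A = B @ B.T + k * np.eye(k)

# Diagonalize once, so that exp(-tau A) = V diag(exp(-tau lam)) V^T.
lam, V = np.linalg.eigh(A)

# Trapezoid approximation of \int_0^\infty exp(-tau lam) dtau = 1/lam,
# one integral per eigenvalue, truncated where the integrand is negligible.
taus = np.linspace(0.0, 50.0, 200001)
dtau = taus[1] - taus[0]
vals = np.exp(-np.outer(taus, lam))                      # shape (n_tau, k)
weights = (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1])) * dtau

A_inv_schwinger = V @ np.diag(weights) @ V.T
assert np.allclose(A_inv_schwinger, np.linalg.inv(A), atol=1e-6)
```

The worldline picture is visible in the code: the inverse propagator is assembled as a superposition of contributions $\exp(-\tau A)$, one for each worldline length $\tau$.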

One might imagine that integrating over $\tau$ means that we have a pair of points, $x, y$, such that the worldline length between them can vary. But how then to think of such points? Especially as $x$ and $y$ might be the same point.

Presumably this finite version is building up to some fuller account of all points in a region?

Okay, there was the sum sign missing. I have added it now.

$\begin{aligned} \langle (\sum_{x} \phi_x^3)^2 \rangle & = \sum_{x_1,x_2} \langle \phi_{x_1} \phi_{x_1}\phi_{x_1} \phi_{x_2}\phi_{x_2} \phi_{x_2} \rangle \\ &= \text{prefactor} \; \sum_{x_1,x_2 = 1}^k A^{-1}_{x_1 x_1} A^{-1}_{x_1 x_2} A^{-1}_{x_2 x_2} + \text{prefactor} \; \sum_{x_1, x_2 = 1}^k A^{-1}_{x_1 x_2} A^{-1}_{x_1 x_2} A^{-1}_{x_1 x_2} \end{aligned}$

So it’s been specified that there are just two points $x_1$ and $x_2$?

And won’t there be a contribution from $\langle \phi^6_{x_1} \rangle$, or is this zero?
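Incidentally, the prefactors left implicit in the displayed formula can be checked by brute force: Isserlis’ (Wick’s) theorem writes the Gaussian 6-point function as a sum over the 15 ways of pairing the six field insertions, and collecting the pairings for $\langle \phi_{x_1}^3 \phi_{x_2}^3 \rangle$ should give, in the simplest normalization, prefactors $9$ and $6$. A sketch (the function names are mine):

```python
import numpy as np

def pairings(points):
    """All ways of partitioning a list of (an even number of) points into pairs."""
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for i in range(len(rest)):
        pair = (first, rest[i])
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [pair] + tail

def gaussian_moment(points, G):
    """<phi_{p1} ... phi_{pn}> for a free (Gaussian) field with propagator G,
    computed via Wick's theorem as a sum over pairings."""
    return sum(np.prod([G[a, b] for a, b in p]) for p in pairings(points))

rng = np.random.default_rng(1)
B = rng.normal(size=(2, 2))
A = B @ B.T + 2 * np.eye(2)     # invertible "kinetic operator" on k = 2 points
G = np.linalg.inv(A)            # the propagator A^{-1}

x1, x2 = 0, 1
lhs = gaussian_moment([x1, x1, x1, x2, x2, x2], G)
rhs = 9 * G[x1, x1] * G[x1, x2] * G[x2, x2] + 6 * G[x1, x2] ** 3
assert len(list(pairings([0] * 6))) == 15   # 5!! pairings of six insertions
assert np.isclose(lhs, rhs)
```

The two terms of the sum correspond to the two diagram topologies: the $9$ counts the pairings with one tadpole at each vertex, the $6$ counts the pairings where all three propagators run between the two vertices.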

I don’t think so. See this paragraph. In any case, this ought to be discussed in a thread on proof nets, not in a thread on Feynman diagrams.

Did all that was good in the discussion above end up appearing on the pages of the $n$Lab?

I have briefly spelled out how Feynman diagrams arise, in the section *For finitely many degrees of freedom*.

Okay, I get it. Thanks.

I would have thought a string diagram is not syntax but (categorical) semantics, because a string diagram is by definition labelled in objects and morphisms of a given category. The choice of category is already semantics. To me a string diagram is just a graphical way of speaking of morphisms in a monoidal category.

One might argue that anything which is described as a “way of speaking” should be considered syntax. (-:

More precisely, I would say that a *syntax* (whether graphical, type-theoretic, $n$-dimensional, or whatever) is a way of presenting a free structure of some sort, and the unique maps *out of* that free structure into some other given structure $C$ (determined by interpreting the generators in $C$) are what provide a *semantics* of that syntax in $C$. This is true even when the syntax is the “internal logic” of a given structure $C$, with axioms (= generators) corresponding to all the elements of $C$. So a string diagram itself, even if it is labeled by objects and morphisms in a fixed category $C$, is still syntax — the semantics comes when you “compose up” that diagram and interpret it as a single morphism of $C$.

Urs, you seemed to be objecting to how I was using the word “semantics”, so I gave you a long comment which explained why I used that word. So I don’t understand why you think I’m arguing. I’m not. I am however trying to explain why the sentence that was there sounds strange to my ears, and I mean to be helpful in doing so.

The precise sense in which I view string diagrams as “graphical syntax” is that (let’s work just with progressive string diagrams for monoidal categories, to illustrate the idea) the monoidal category of deformation classes of string diagrams labeled in a tensor scheme is the *free* (strict) monoidal category generated by the tensor scheme. In other words, there is a “forgetful” functor $U$ from (strict) monoidal categories to tensor schemes, taking a monoidal category $M$ to the tensor scheme whose objects are those of $M$, and whose poly-arrows $\Gamma = A_1, \ldots, A_m \to \Delta = B_1, \ldots, B_n$ are triples $(\Gamma, \Delta, f: A_1 \otimes \ldots \otimes A_m \to B_1 \otimes \ldots \otimes B_n)$ where $f$ is a morphism of $M$. So letting $F$ be the left adjoint (which produces the monoidal category of labeled string diagrams), the thing you just referred to is the construction $F(U(M))$, and there is a counit $F(U(M)) \to M$ which evaluates a string diagram labeled in $M$ as a morphism of $M$. Following Lawvere, and in agreement with how you’d like to view proof nets, I am willing to view the constituents of the free structure as ’syntactic’ by nature.
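To make the counit $F(U(M)) \to M$ concrete, here is a toy sketch (entirely my own encoding, not anything from the discussion above) in which formal string diagrams over a tensor scheme are “composed up” and evaluated in the monoidal category of matrices, with the tensor interpreted as the Kronecker product:

```python
import numpy as np

class Diagram:
    """A formal string diagram over a tensor scheme: leaves are named
    generators; internal nodes are sequential composition or parallel
    tensor. This is a morphism of the free (strict) monoidal category."""
    def __init__(self, op, *args):
        self.op, self.args = op, args   # op in {"gen", "compose", "tensor"}

    def then(self, other):              # plug this diagram's outputs into other's inputs
        return Diagram("compose", self, other)

    def beside(self, other):            # place two diagrams side by side
        return Diagram("tensor", self, other)

def gen(name):
    return Diagram("gen", name)

def evaluate(d, interp):
    """The counit F(U(M)) -> M for M = matrices: interpret each generator
    as a matrix and compose the whole diagram into a single morphism."""
    if d.op == "gen":
        return interp[d.args[0]]
    left, right = (evaluate(a, interp) for a in d.args)
    if d.op == "compose":
        return right @ left             # "f then g" evaluates to g . f
    return np.kron(left, right)         # tensor = Kronecker product

# Example: evaluate (f tensor g) then h, for hypothetical generators f, g, h.
interp = {
    "f": np.array([[1.0], [2.0]]),      # f : 1 -> 2
    "g": np.array([[3.0]]),             # g : 1 -> 1
    "h": np.array([[1.0, 0.0]]),        # h : 2 -> 1
}
diagram = gen("f").beside(gen("g")).then(gen("h"))
result = evaluate(diagram, interp)      # = h @ (f tensor g)
assert np.allclose(result, [[3.0]])
```

The point being illustrated is just that the `Diagram` is pure syntax until `evaluate` interprets it: swapping in a different `interp` gives a different semantics for the same diagram.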

Another option is to consider the terminal tensor scheme $1$ and to construct the monoidal category $F(1)$ of (deformation classes of) unlabeled string diagrams, and then view a general $F(T)$ (where $T$ is a tensor scheme or computad) as a kind of wreath or club construction $F(1) \rtimes T^\bullet$, which is analogous to how one can construct other types of free categorical structures.

But then am I not entitled to “according full syntactic status” to them, qua your very thesis?

Yes, certainly. You could say, with respect to my thesis, that the morphisms of the free smc category $V[T]$ are identified with normalized “unit-extended” (cut-free) proof nets with variables in $T$, where “normalization” refers to a rewriting scheme on unit-extended proof nets called “directed rewiring”.

The point that I’m trying to make, more or less, is that string diagrams and proof nets are both graphical syntax of one form or another, and both graphical semantics, if you like, of one form or another; they are “about equally syntactic and equally semantic” with respect to one another, to put it humorously. We hopefully can begin to understand each other on this point. I would *not* make a kind of statement that contrasts proof nets as being more on the syntactic side with string diagrams as being more on the semantic side, or that we particularly need to emphasize a step whereby one interprets proof nets in terms of a “categorical semantics of string diagrams”. If anything, I would say that string diagrams are “more cleanly syntactic” than proof nets are, because they cleanly represent the arrows of appropriate free structures, whereas Girardian proof nets are not completely satisfactory in this respect (at least not without caveats).

Am I making slightly better sense now?

Thanks for all the information, Todd.

On one point I am not sure what to do: while all this discussion is to explain your use of “semantics” as opposed to my half-sentence, you also say that you don’t want to argue about it here ;-) So please feel free to ignore this one comment I might make if I were to react:

I would have thought a string diagram is not syntax but (categorical) semantics, because a string diagram is by definition labelled in objects and morphisms of a given category. The choice of category is already semantics. To me a string diagram is just a graphical way of speaking of morphisms in a monoidal category.

Above you first say that proof net semantics is in $Struct[T]$, but then you conclude that this is actually a bad idea which you fixed in your thesis. I gather there you replaced $S$ by something that is fully faithful after all? But then am I not entitled to “according full syntactic status” to them, *qua* your very thesis?

Possibly we are splitting hairs here, not sure. I would just like to understand, and to know how I have to speak in order not to raise the eyebrows of the logicians. (That of course turned out to be impossible on another matter already, which we discussed recently…)

(Continued) And this is really what my thesis was about: giving a more refined calculus of “proof nets” (more refined than the Girard proof nets or KM graphs) where coherence really could be decided in terms of these improved proof nets. It’s an interesting problem to think about. It’s not hard to come up with proposals for these improved proof nets, involving KM linkages between occurrences of the unit $I$, but the plot thickens when one realizes that not only do you have to account for the Kelly-Mac Lane triple dual example above, but you also have to account for the fact that we should get the same proof net after substituting $I$ for $x$! So as I say, it’s an interesting problem.

Yeah, sure, you could say proof nets are a kind of graphical syntax. And that’s a good way to think of them. But if you want to put it that way, I’d say that they are about as syntactic as string diagrams, which is also a graphical syntax. (So again I’m not quite sure why one would write that string diagrams are a “categorical semantics” of proof nets.)

Let me give a quick tutorial, if you don’t mind, to explain better what I had in mind when I wrote “graphical semantics”. Before we (and by “we”, you can consider that I actually mean Girard) talk of “proof nets”, we talk of “proof structures”. (Not a great term, but that’s what “we” call them.) A *proof structure* of type $A \to B$, where $A, B$ are formulas built from a set of propositional variables $T$ and a constant $I$ (for the monoidal unit) by applying the operations $\otimes, \multimap$, is a graph consisting of

a KM-graph which pairs off variables (each pairing being an edge between a negatively signed variable and a positively signed variable, where $A$ is deemed negative and $B$ positive, and subformulas acquire signs by declaring that the children $S, T$ of $S \otimes T$ acquire its parity, while for $S \multimap T$ the child $T$ has the same parity and the child $S$ the opposite parity), and

the construction trees of the formulas $A$, $B$.

So in other words a proof structure of this type is a graph whose vertices are subformulas of $A$ and $B$ and whose edges occur either as KM pairings between variable subformulas or as edges in the two construction trees. This is what I was explaining in #3. (You’ll see that I’m actually working here in the context of smc categories, but it works essentially the same way for $\ast$-autonomous categories.)

So, for example, if $x, y, z$ are distinct variables, there’s a unique proof structure of the form $x \multimap (x \otimes y) \to y \otimes (z \multimap z)$. There are, I guess, $6 = 3!$ proof structures of type $x \multimap (x \otimes x) \to x \otimes (x \multimap x)$. Note by the way that this pairing off of variables in KM graphs echoes the idea of “conservation of resources” in linear deductions. For example, there’s no KM graph of the form $x \to x \otimes x$ that we’d get via the contraction rule or diagonal map, and similarly there’s no KM graph of the form $x \to I$ which would come from the weakening rule or projection map.
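These counts can be reproduced mechanically: assign parities to the variable occurrences by the rules above, then count the ways of pairing each variable’s negative occurrences with its positive ones. A small sketch (the encoding of formulas as nested tuples is my own):

```python
from math import factorial
from collections import Counter

# A formula is a variable name (str), or ("tensor", S, T), or ("lolli", S, T).
def occurrences(formula, sign):
    """Signed variable occurrences: children of S (x) T inherit its parity,
    while in S -o T the child T inherits it and S gets the opposite parity."""
    if isinstance(formula, str):
        return [(formula, sign)]
    op, S, T = formula
    return occurrences(S, -sign if op == "lolli" else sign) + occurrences(T, sign)

def count_proof_structures(source, target):
    """Number of KM graphs of type source -> target: each variable's negative
    occurrences must be paired bijectively with its positive occurrences."""
    occs = occurrences(source, -1) + occurrences(target, +1)
    neg = Counter(v for v, s in occs if s < 0)
    pos = Counter(v for v, s in occs if s > 0)
    if neg != pos:
        return 0    # "conservation of resources" fails: no pairing exists
    total = 1
    for v in neg:
        total *= factorial(neg[v])
    return total

x, y, z = "x", "y", "z"
# x -o (x (x) y)  ->  y (x) (z -o z): a unique proof structure
assert count_proof_structures(("lolli", x, ("tensor", x, y)),
                              ("tensor", y, ("lolli", z, z))) == 1
# x -o (x (x) x)  ->  x (x) (x -o x): 3! = 6 proof structures
assert count_proof_structures(("lolli", x, ("tensor", x, x)),
                              ("tensor", x, ("lolli", x, x))) == 6
# x -> x (x) x: none, reflecting the absence of a contraction/diagonal map
assert count_proof_structures(x, ("tensor", x, x)) == 0
```

This only counts proof *structures*, of course; deciding which of them are proof *nets* (valid ones, i.e. in the essential image of the graph semantics) is exactly the harder correctness problem discussed below.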

The fact of the matter is that formulas and proof structures between them form an smc category, $Struct[T]$, where again $T$ is a set of propositional variables. Proof structures are composed in the manner I’d tried to explain in #3, and this composition (of what I’m calling KM graphs) was discussed long, long ago by Eilenberg and Kelly, probably around the same time as their big La Jolla paper on the notion of closed category. I’ll leave to the imagination how $\otimes$ and $\multimap$ work on this category, but as functors, tensoring and internal homming is by juxtaposing proof structures, pretty much the way it works for string diagrams, and I’ll leave to the imagination how one describes the evaluation morphism $(A \multimap B) \otimes A \to B$ and coevaluation morphism $B \to A \multimap (B \otimes A)$ as proof structures.

There is an inclusion functor $i: T \to Struct[T]$ from the discrete category $T$, which takes $t$ to itself (as a propositional formula). Letting $V[T]$ denote the free smc category on $T$, by freeness $i$ extends to an smc functor $S: V[T] \to Struct[T]$.

This smc functor $S$ is what I am calling the *graph semantics*. (Actually, it’s one of several things one could mean by “graph semantics” – maybe we interpret sequent proofs instead of arrows in $V[T]$, etc.) By definition, a *proof net* is a proof structure of the form $S(f)$. In other words, a proof net is a “correct” or “valid” proof structure, if we use the language of Girard. Or, to put it differently, the smc category of proof nets is the essential image of $S$.

An example of an invalid proof structure would be the unique one of type $x \multimap (x \otimes y) \to y \otimes (z \multimap z)$ that I wrote above. There is no morphism definable in the language of smc categories that produces this proof structure.

Anyway, while in spirit you could say that proof structures or proof nets do form a kind of graphical syntax, in the set-up above they are described as a possible (categorical) semantics. It’s perhaps admittedly silly, ever since Lawvere’s thesis, to be dogmatic and draw an absolutely strict dividing line between “categorical syntax” (free models) and “categorical semantics”, and it’s hardly something I want to argue about here, but at least this explains or justifies why I had written “semantics” earlier.

You would be entirely justified in according “full syntactic status” to the smc category of nets $Net[T]$ – if the full essentially surjective functor $S: V[T] \to Net[T]$ were faithful as well. But it’s not faithful. There is a famous example observed by Kelly and Mac Lane, involving “double dual embeddings” $\delta_A: A \to (A \multimap I) \multimap I$, which shows that

$\delta_{x \multimap I} \circ (\delta_x \multimap I): ((x \multimap I) \multimap I) \multimap I \to ((x \multimap I) \multimap I) \multimap I$ and the identity map on $((x \multimap I) \multimap I) \multimap I$ are distinct in $V[T]$ but have the same graph semantics. This is a fun problem to think about: morally, yes, proof nets are a kind of graphical syntax – they just happen not to be fully up to the job of carrying *all* the relevant syntax, or at least not in this form. (Continued, since the software thinks my message is too long.)

Proof nets are a “graphical semantics” of (certain fragments of) linear logic, in other words are a graphical way of interpreting deductions in linear sequent calculus (or terms in linear type theory, or what have you). They are not something one has to interpret a second time (i.e., provide a semantics for) in terms of another layer of graphical calculus (string diagrams). They already are a graphical calculus.

Is “graphical” already “semantics”? How about graphical syntax? Syntax need not be 1-dimensional, need it?

I was thinking: proof nets talk about linear logic before and independently of choosing any model, hence any semantics, for the linear logic. The moment we choose such-and-such a monoidal category as the semantics of our linear logic, every proof net we may have considered before gives rise to a string diagram in that monoidal category.

Isn’t that a reasonable perspective?

Well, I’m gratified to hear you say so, Mike and Toby, because I didn’t think I at all succeeded in saying it properly. :-)

FWIW (and having been prompted by Urs), I am currently in the process of writing up a shortened account of my thesis – which is how I learned most of what I understand of linear logic. While I am currently hammering it out on my non-public web at the nLab, it is meant to be eventually integrated into the nLab proper. Right now the idea is just to write up enough so that the main results are understandable and plausible-sounding, without getting into the nitty-gritty details of all the proofs. (But those might in time be made available on the nLab as well.)

Yeah, now I think that I might understand what they are! I need to go back and look at Girard's book again.

FWIW, #3 is already one of the best explanations of proof nets I’ve ever heard.

It’s still not how I’d put it, but let me try to put matters in a way that might bring us closer (see below).

Proof nets are a “graphical semantics” of (certain fragments of) linear logic, in other words are a graphical way of interpreting deductions in linear sequent calculus (or terms in linear type theory, or what have you). They are not something one has to interpret a second time (i.e., provide a semantics for) in terms of another layer of graphical calculus (string diagrams). They already are a graphical calculus.

So I myself would express the Idea slightly differently. Maybe like this:

Proof nets, in the original conception due to J.-Y. Girard, provide a graphical representation for deductions in linear logic, particularly multiplicative linear logic or MLL. They are akin to (in fact, an elaborated form of) the extranaturality graphs developed by Eilenberg, Kelly, and Mac Lane for tracking compositions of extranatural transformations and for stating coherence theorems. As such, they can be viewed as a baseline for variants of string diagrams, especially those variants that deal particularly with (symmetric) closed monoidal categories and star-autonomous categories.

“Baseline” here means that proof nets, at least in the form given by Girard, are graphical structures used particularly to visualize symmetric monoidal closed categories that are freely generated from computads or tensor schemes of the simplest kind: directed graphs, usually just discrete directed graphs. This baseline concept of proof nets can be widened and developed further to take into account the structure of smc categories freely generated from more complicated tensor schemes (so that the calculus of proof nets, thus generalized, would be very closely akin to those variants of string diagram calculus that would apply particularly to smc or $\ast$-autonomous categories, although with a somewhat more proof-theoretic emphasis).

A more elaborate notion of proof net may be used to deal with full linear logic, including the exponential modalities used to mediate between the multiplicative and additive connectives. These may be conveniently packaged as (MLL) proof nets with boxes (see Melliès), where the contents of a box are arguments of an exponential that can be moved in and out according to certain rules.

Of course, this way of putting it opens up further opportunities for creating grey links with question marks. ;-)

I might even copy and paste the above, if it seems satisfactory.

No problem, Todd. Let’s just sort it out:

So at *Feynman diagram* I have changed the sentence

Moreover, one may think of string diagrams in monoidal categories as being the categorical semantics for proof nets (see there for more) in linear logic.

to

Moreover, one may think of string diagrams in monoidal categories as providing categorical semantics for proof nets (see there for more) in multiplicative linear logic.

Maybe that’s better? Otherwise, please modify it.

On the other hand, putting my physicist’s hat on and looking at Blute & Panangaden: they actually do something much more ambitious and physically interesting than the basic statement “A Feynman diagram in gauge theory is a string diagram in a representation category”. Instead, they actually look at Lagrangians and really derive their Feynman rules, it seems. That’s rather more substantial.

So it seems we easily agree that much more should be said here, eventually.
