Hi,
I have a beginner’s question. In $Vect_{R}$, the linear map $f: R \rightarrow E$ ($E$ a vector space over $R$) defined by $f(1)=u$ lets us view the vector $\lambda u$, obtained by scalar multiplication, as the image of $\lambda$: $f(\lambda)=\lambda u$. Is there a similar way to define the addition $u+v$ of two vectors $u$ and $v$?
Thank you for your help.
Marc
About the only thing that comes to my mind is, in not quite beginner’s language, that the codiagonal map $\nabla: E \times E \to E$ is in fact vector addition. In other words, this is one of those situations in which an operation in the algebraic theory at hand turns out to be an algebra homomorphism. This type of thing is explored in various directions within the nLab, for example in the article commutative algebraic theory and in the article Eckmann-Hilton argument.
Thank you. My question is related to this paper https://arxiv.org/pdf/1409.6067.pdf by David Spivak, in which he asserts: “… there is a similarly elegant way to understand addition of vectors in terms of morphisms.” (page 14). So I thought that an answer could be as easy to understand as the one in this paper. But reading the links you kindly provided, I’m not so sure now …
You should ignore the links then. The point is that if we represent vectors by morphisms $\mathbb{R} \to E$, then we can represent $u + v$ (in terms of morphisms $u: \mathbb{R} \to E, v: \mathbb{R} \to E$) by the composite morphism
$\mathbb{R} \stackrel{\Delta}{\to} \mathbb{R} \times \mathbb{R} \stackrel{u \times v}{\to} E \times E \stackrel{\nabla}{\to} E$, where $\Delta$ is the diagonal map.
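The composite above can be traced concretely. Here is a minimal Python sketch, with $E = \mathbb{R}^2$ and illustrative helper names (`l`, `diag`, `codiag` are my own, not from Spivak’s paper), representing each vector $u$ by the linear map $l_u: \mathbb{R} \to \mathbb{R}^2$, $\lambda \mapsto \lambda u$:

```python
def l(u):
    """The linear map R -> R^2 determined by 1 |-> u."""
    return lambda lam: (lam * u[0], lam * u[1])

def diag(lam):
    """Diagonal map Delta : R -> R x R."""
    return (lam, lam)

def codiag(pair):
    """Codiagonal map nabla : R^2 x R^2 -> R^2, i.e. vector addition."""
    x, y = pair
    return (x[0] + y[0], x[1] + y[1])

u, v = (1.0, 2.0), (3.0, 5.0)
l_u, l_v = l(u), l(v)

def composite(lam):
    """R --Delta--> R x R --l_u x l_v--> R^2 x R^2 --nabla--> R^2."""
    a, b = diag(lam)
    return codiag((l_u(a), l_v(b)))

print(composite(1.0))  # the sum u + v: (4.0, 7.0)
```

Evaluating the composite at $1$ recovers $u + v$, exactly as the string of morphisms predicts.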
If I understand correctly, $\Delta(1)=(1,1)$; then applying $(l_u,l_v)$ (in Spivak’s terminology), you get $(l_u(1),l_v(1))=(u,v)$; and then $\nabla (u,v)=u+v$, all of these being morphisms in the category $Vect_R$. Thank you so much.
I have a question related to the one you already answered. In a vector space $E$, is there a way to define subspaces via stability by morphisms? Thanks again for your expertise.
Re #5: yes, exactly.
Re #6: I’m guessing that by stability by morphisms, you mean something category-theoretic that plays the role of “being closed under addition and scalar multiplication”.
It turns out to be even easier than that: we say that any monomorphism $i: V \to E$ in $Vect$ defines a subspace, with it being understood that two monomorphisms $i: V \to E, j: W \to E$ define the same subspace if there is an isomorphism $\phi: V \to W$ such that $j = i\phi$ (i.e., the obvious triangle commutes).
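To see the equivalence of monomorphisms concretely, here is a small sketch (the matrix helpers `matvec` and `compose` are illustrative, not a library API): two monos $i, j: \mathbb{R}^2 \to \mathbb{R}^3$, given by matrices, define the same subspace when $j = i\phi$ for an isomorphism $\phi$ of $\mathbb{R}^2$.

```python
def matvec(A, x):
    """Apply matrix A (a list of rows) to vector x."""
    return tuple(sum(a * b for a, b in zip(row, x)) for row in A)

def compose(A, B):
    """Matrix product A . B, computed column by column."""
    cols = [matvec(A, col) for col in zip(*B)]
    return [list(row) for row in zip(*cols)]

i = [[1, 0], [0, 1], [0, 0]]   # embeds R^2 as the xy-plane in R^3
phi = [[0, 1], [1, 0]]         # swap the basis vectors: an isomorphism of R^2
j = compose(i, phi)            # j = i . phi: a different mono, the same subspace

print(j)  # [[0, 1], [1, 0], [0, 0]]
```

Both $i$ and $j$ are injective with image the $xy$-plane; the commuting triangle $j = i\phi$ is exactly what identifies them as the same subobject.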
The same type of definition applies to any category to define a subobject, and hence applies across all of mathematics. One might have to tweak the definition a little in some cases to match the usual notion of “subthing”; for example in topology, topological subspaces may be defined as isomorphism classes of regular monomorphisms. But at least in algebraic situations, the notion of subalgebra (such as vector subspace) agrees with the straightforward notion of subobject as described in the previous paragraph.
Of course at some point one should match up this simple definition of algebraic subobject (in terms of monomorphisms) with the more set-theoretic description in terms of being closed under the relevant algebraic operations. If $V \subseteq E$ is a subset closed under the relevant operations, then $V$ becomes a vector space and the inclusion map $i: V \subseteq E$ taking $v \in V$ to $v \in E$ is a homomorphism and indeed a monomorphism; essentially the same argument applies to any algebraic situation. In the other direction, if $f: V \to E$ is any monomorphism, then closure under operations is expressed by a commutative diagram like
$\array{ V \times V & \stackrel{+_V}{\to} & V \\ \mathllap{f \times f} \downarrow & & \downarrow \mathrlap{f} \\ E \times E & \underset{+_E}{\to} & E }$ This works especially smoothly in a case like $Vect$ where the algebraic operations like $+$ are already homomorphisms, so that the diagram lives in $Vect$. It works only slightly less smoothly for other algebraic categories like $Grp$ (the category of groups) where the algebraic operations like group multiplication $m$ are not homomorphisms. In this case the diagram lives in $Set$, and one understands the diagram as referring to underlying sets $U(V)$ of algebras $V$, and underlying functions $U(f)$ of homomorphisms $f: V \to E$. There the official appropriate diagram might look more like
$\array{ U(V) \times U(V) & \stackrel{m_V}{\to} & U(V) \\ \mathllap{U(f) \times U(f)} \downarrow & & \downarrow \mathrlap{U(f)} \\ U(E) \times U(E) & \underset{m_E}{\to} & U(E) }$ where the general piece of wisdom is that $n$-ary algebraic operations induce natural transformations $U^n \to U$ (like $m_{\bullet}: U^2 \to U$), from $n$-fold powers of the underlying set functor $U$ to $U$, and such diagrams are just naturality squares. (This piece of wisdom is a kind of starting point for Lawvere’s way of doing universal algebra: in every case there is a natural bijection between $n$-ary operations of the algebraic theory and natural transformations $U^n \to U$, and you can more or less define operations to be such natural transformations.)
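The commuting square for closure under addition can be checked on a concrete example. Here is a quick sketch (the names `f`, `add_V`, `add_E` are illustrative) for the inclusion $f$ of the subspace $V = \{(t, t)\}$ into $E = \mathbb{R}^2$, verifying $f \circ +_V = +_E \circ (f \times f)$ on sample elements:

```python
def f(t):
    """Inclusion V -> E, sending the coordinate t of V to (t, t) in E = R^2."""
    return (t, t)

def add_V(s, t):
    """Addition in V, in its one-dimensional coordinate description."""
    return s + t

def add_E(x, y):
    """Addition in E = R^2."""
    return (x[0] + y[0], x[1] + y[1])

s, t = 2.0, 5.0
# Both paths around the square agree: down-then-across equals across-then-down.
assert f(add_V(s, t)) == add_E(f(s), f(t))
print(f(add_V(s, t)))  # (7.0, 7.0)
```

Checking the square on sample points is of course weaker than the diagram commuting for all inputs, but for linear maps agreement on a spanning set already settles it.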