wanted to record Borel’s theorem on Taylor series expansion, so created stubs for power series and Taylor series
I have revamped power series, treating also the far more general (and very natural) case of many variables with coefficients in semirings.
About series: what is the remainder of a series? I can not find a sufficiently sophisticated source treating the question in reasonable generality.
I view a series as a pair of a sequence and its sequence of partial sums. So a series does not need to converge. If we define the $n$-th remainder, or the remainder at the $n$-th place (is this the full phrase? does the remainder start at the $n$-th place or the $(n+1)$-st?), as the difference between the sum of the series and its $n$-th partial sum, that assumes convergence. Is this standard to assume if we use the term remainder? But I would rather take a remainder to be the series starting at the $n$-th (or $(n+1)$-st?) place. But then, should we renumber the series or not?
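For concreteness, here is one common convention (a sketch; the indexing is exactly what is in question). With terms $a_k$ and partial sums $s_n$,
$$ s_n = \sum_{k=0}^{n} a_k, \qquad r_n = s - s_n = \sum_{k=n+1}^{\infty} a_k \quad \text{when the sum } s = \lim_n s_n \text{ exists}, $$
while without assuming convergence the remainder viewed as a series in its own right would just be the tail $a_{n+1} + a_{n+2} + \cdots$.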
I would take a series to be, formally, the same thing as a sequence, the sequence of terms. One could just as easily use the sequence of partial sums, so long as the values lie in a group; but if one took values in an arbitrary monoid, I would want the sequence of terms. In any case, defining it to be a pair consisting of two sequences, each derivable from the other, seems rather strange.
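To spell out the equivalence being assumed here (a sketch, with terms $a_n$ and values written additively):
$$ s_n = \sum_{k=0}^{n} a_k, \qquad a_0 = s_0, \quad a_n = s_n - s_{n-1} \ (n \geq 1), $$
so over a group either sequence determines the other, while over a mere monoid only the passage from terms to partial sums is available.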
The real problem is that a series is none of these; it’s its own thing, and we are coming up with ways of formalising it. Still, I would pick one formalisation and stick with it, not make a pair consisting of two.
Still, I would pick one formalisation and stick with it, not make a pair consisting of two.
But defining it as a pair is entirely standard in analysis circles.
Having just a sequence of partial sums would have inconsistencies with terminology. For example, the $n$-th member of a series is not the $n$-th partial sum, but the $n$-th member of the original sequence. It would also make difficulties when talking about theorems like: the convergence of such and such series does not depend on the reordering (of the original sequence).
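For instance, the rearrangement statement alluded to is naturally phrased in terms of the sequence of terms (a standard formulation, stated here for real or complex terms):
$$ \text{if } \sum_{n} |a_n| < \infty, \text{ then } \sum_{n} a_{\sigma(n)} = \sum_{n} a_n \ \text{ for every bijection } \sigma\colon \mathbb{N} \to \mathbb{N}, $$
whereas the rearranged series has a quite different sequence of partial sums.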
But defining it as a pair is entirely standard in analysis circles.
How did that get started? Can you give references?
Having just a sequence of partial sums would have inconsistencies with terminology.
So would having just the sequence of terms (which I would prefer), I suppose. But there’s no inconsistency as long as you remember that you’re talking about a series, which is a concept with its own definitions; there's a reason why we use a different word. This is no harder than if you formally define a series as the pair of sequences. Even in that case, you have to explain that a term of the series is a member of one of the sequences while a limit of the series is a limit of the other sequence.
When you formally define a series as a pair of sequences (the sequence of terms and the sequence of partial sums) satisfying certain properties that guarantee that either can be recovered from the other, that seems to me like defining a topological structure to be a pair of families of sets (the family of open sets and the family of closed sets) satisfying certain properties that (among other things) guarantee that either can be recovered from the other. I have seen people use either of these families as a definition of topology, but never both at once.
I think the reason is that one thinks informally of a sum in a way that one really stares at the terms as they are in the sequence, and states things like “absolute convergence of a series does not depend on the reordering”, clearly meaning the ordering of the sequence of terms (which would be very difficult to envision in terms of partial sums), while, assuming convergence (or working with formal sums), all operations like multiplying series, inverting series etc. look at the series from the point of view of the sequence of partial sums. So the intuition is really that we STARE at the $a_n$-s and WRITE them in the notation, while we operate with the partial sums, and all the definitions of convergence apply to the partial sums. So it is a new way of packaging a sum, which lumps the terms together into partial sums while still seeing the “terms” as a first sequence.
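As an illustration of multiplying series (the Cauchy product, stated here as a standard example, not from the entry):
$$ \Big(\sum_{n \geq 0} a_n\Big)\Big(\sum_{n \geq 0} b_n\Big) = \sum_{n \geq 0} c_n, \qquad c_n = \sum_{k=0}^{n} a_k b_{n-k}, $$
where the terms $c_n$ are written directly from the $a_k$ and $b_k$, while the equality of values is a statement about limits of partial sums (valid, for example, when both series converge absolutely).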
Of course, I agree that one can not logically fully defend the choice of the definition, and I am aware of the arguments you propose. (By the way, think of a series running from $-\infty$ to $\infty$; unless we look at the principal value, we really need to have both limits simultaneously, so in fact we look at it as a sum of two separate series, the positive (nonnegative) part and the negative part. In the Laurent case they have names like the principal part of the Laurent series, which for finite points in the complex plane is the series of negative-power terms, but for the point at infinity it is the series of positive-power terms! This is important, for example when defining an essential singularity, but Wikipedia is naive here and not aware of that exception.) Of course, Laurent power series make sense beyond the case of working over the complex numbers, say in nonarchimedean analysis (as well as for certain distributions), and also as formal Laurent power series (with both constituent series having infinitely many terms).
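A standard illustration of the two-sided splitting (sketched for the complex case):
$$ \sum_{n=-\infty}^{\infty} a_n (z - z_0)^n = \underbrace{\sum_{n=1}^{\infty} a_{-n} (z - z_0)^{-n}}_{\text{principal part at } z_0} + \sum_{n=0}^{\infty} a_n (z - z_0)^n, $$
with the two halves required to converge separately; at the point at infinity it is the positive-power terms that play the role of the principal part.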
For a classical reference to the pair definition of a series, see Dieudonné.
We could include it in the entry. (By the way, Dieudonné does discuss the remainder, which will solve my question.)
This kind of brackets, as in $R[[x_1, \ldots, x_n]]$, usually means that the variables inside the brackets commute among themselves and commute with everything in the coefficient ring, even if the coefficient ring is noncommutative. If $x$ is just one variable then it is the same anyway. If we want to have several variables which mutually do not commute, but commute with the coefficients in the commutative or noncommutative coefficient ring, we write $R\langle\langle x_1, \ldots, x_n\rangle\rangle$. Thus we have both $R[[x_1, \ldots, x_n]]$ and $R\langle\langle x_1, \ldots, x_n\rangle\rangle$, but the meaning differs.
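As a sketch of the distinction in two variables: in $R[[x,y]]$ one has $x y = y x$ and $r x = x r$ for all $r \in R$, while in $R\langle\langle x,y\rangle\rangle$ one still has $r x = x r$ but $x y \neq y x$ in general.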
Zoran #9 writes in part:
If $x$ is just one variable then it is the same anyway.
Well, $R[[x]]$ and $R\langle\langle x\rangle\rangle$ are the same, but there is still a difference if the variable doesn't commute with the coefficients. (For people who are most familiar with polynomials as polynomial functions, this may be the most natural interpretation!)
Toby, both notations prescribe that the new variables do commute with the “coefficients”, I think.
Now, if you want the coefficients to not commute with the variables in the maximal possible way, that means you take the free product. So you could write it as a free product of $R$ with the free algebra on the variables.
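To make the “maximal noncommutativity” concrete (a sketch in one variable, in the polynomial rather than power series setting): a general element of the free product $R * \mathbb{Z}[x]$ is a sum of alternating words such as $r_0\, x^{k_1}\, r_1\, x^{k_2}\, r_2 \cdots$ with $r_i \in R$ and $k_i \geq 1$, with no relation of the form $r x = x r$ imposed.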
If the commuting is on the other hand more complicated, defined using an automorphism or endomorphism, one puts this into the notation by listing the automorphism in the notation. For example, Ore extensions of rings have the notation $R[x; \sigma]$ (the power series analogues are more rare, I can not confirm the notation in that case), where $\sigma$ is the endomorphism of $R$ involved in the definition of the Ore extension (“skew polynomial ring”). This also generalizes to several variables, say by iterated Ore extensions.
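For the record, the defining relation in such a skew polynomial ring is the standard one
$$ x\, r = \sigma(r)\, x \qquad (r \in R), $$
or $x\, r = \sigma(r)\, x + \delta(r)$ in the general Ore extension $R[x; \sigma, \delta]$ with a $\sigma$-derivation $\delta$.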
I incline towards what Zoran wrote in the first sentence of #12. The way I think of it, for any set $X$, we use $R[X]$ to denote the free $R$-module on the free commutative monoid on $X$, and $R\langle X\rangle$ to denote the free $R$-module on the free monoid on $X$. The latter may be represented as $R \otimes \mathbb{Z}\langle X\rangle$. [Note that for $R = \mathbb{Z}$, there is no choice in the matter: variables (@Colin #10: also called indeterminates) must commute with (integer) coefficients.] Then, using the symmetry of the symmetric monoidal product $\otimes$, the multiplication on the tensor product is given by the evident composite
$$ (R \otimes \mathbb{Z}\langle X\rangle) \otimes (R \otimes \mathbb{Z}\langle X\rangle) \cong (R \otimes R) \otimes (\mathbb{Z}\langle X\rangle \otimes \mathbb{Z}\langle X\rangle) \longrightarrow R \otimes \mathbb{Z}\langle X\rangle, $$
where the symmetry of $\otimes$ is used to implicitly commute the variables past the coefficients.
I added more to the Properties section of power series on multiplicative and functional inversion of power series.
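For instance, multiplicative inversion goes by the usual recursion (a sketch, stated over a commutative ring and not quoted from the entry): for $f = \sum_{n \geq 0} a_n x^n$ with $a_0$ invertible, the inverse $g = \sum_{n \geq 0} b_n x^n$ is determined by
$$ b_0 = a_0^{-1}, \qquad b_n = -a_0^{-1} \sum_{k=1}^{n} a_k\, b_{n-k} \quad (n \geq 1), $$
obtained by equating coefficients in $f g = 1$.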