Some of David R.’s recent doings got me interested, and I wound up writing binary linear code. (Possibly I should apologize because one of the examples given was the binary Golay code, which I think he was planning on saying more about at binary Golay code.)
I took an educated guess at what is meant by an isomorphism of linear codes. The source I was using, the book by Frenkel, Lepowsky, and Meurman, says it’s the “obvious notion” without elaborating further, and I have to say it wasn’t immediately obvious to me. Please let me know if you think I guessed wrong.
I ought to have taken greater advantage of learning this material as a graduate student, since Lepowsky was there. Better late than never though.
Well, I was pretty much going to give a definition along the lines of what you gave: that it can be built from the sum of three smaller codes. I just didn’t have the nice source material. :-) Thanks for getting it together.
Hi Todd,
Thanks for the entry!
I took an educated guess what is meant by an isomorphism of linear codes.
In the standard literature on codes (e.g., MacWilliams and Sloane, Conway and Sloane, Stichtenoth), there is a notion of equivalence of codes: Two codes in $\mathbb{F}_q^n$ are equivalent if they differ only by a permutation of the coordinates and coordinate-wise multiplication by non-zero scalars from $\mathbb{F}_q$. In other words, two codes are equivalent if one can be obtained from the other by multiplying all codewords by a monomial matrix.
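To make the monomial-matrix picture concrete, here is a minimal Python sketch (my own illustration, not part of the thread): it applies a coordinate permutation combined with coordinate-wise multiplication by non-zero scalars to a toy $[4,2]$ code over $\mathbb{F}_3$ and checks that the weight distribution is unchanged. The generator matrix, the permutation, and the scalars are arbitrary choices made for the example.

```python
# A minimal sketch: a "monomial" transformation -- a coordinate permutation
# combined with coordinate-wise multiplication by non-zero scalars -- applied
# to every codeword of a small linear code over GF(q) preserves the weight
# distribution.
from collections import Counter
from itertools import product

q = 3                                    # work over GF(3) so scaling is non-trivial
G = [(1, 0, 1, 2),                       # generator matrix of a toy [4, 2] code over GF(3)
     (0, 1, 2, 2)]

def codewords(G, q):
    """All q^k codewords spanned by the rows of G (arithmetic mod q)."""
    k, n = len(G), len(G[0])
    for coeffs in product(range(q), repeat=k):
        yield tuple(sum(c * G[i][j] for i, c in enumerate(coeffs)) % q
                    for j in range(n))

def monomial_image(word, perm, scalars, q):
    """Permute coordinates by `perm` and multiply coordinate i by scalars[i]."""
    return tuple((scalars[i] * word[perm[i]]) % q for i in range(len(word)))

def weight_distribution(words):
    """Number of codewords of each Hamming weight."""
    return Counter(sum(1 for x in w if x != 0) for w in words)

C = list(codewords(G, q))
perm, scalars = (2, 0, 3, 1), (1, 2, 2, 1)          # an arbitrary monomial transformation
C_equiv = [monomial_image(w, perm, scalars, q) for w in C]

assert weight_distribution(C) == weight_distribution(C_equiv)
print(weight_distribution(C))
```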
This of course can serve as the definition of a morphism in a category of $\mathbb{F}_q$-linear codes, which is then a groupoid (but I’ve never seen anyone actually talking about equivalence as an isomorphism in a category of linear codes). I’ve also heard of another definition for the morphisms in a category of linear codes (I think it is non-expanding linear maps with respect to the Hamming distance, see Assmus).
I think that some of the entry could perhaps be stated in a more elementary way. For example, the dual code of a linear code $C$ should (in my opinion) be defined as the set of all vectors $y$ with $x \cdot y = 0$ for all $x \in C$, where $x \cdot y = \sum_i x_i y_i$. Also, the Hamming code is defined for all lengths of the form $2^m - 1$ (not just for a length of $7$), as the code with the parity-check matrix whose columns are all non-zero binary vectors of length $m$. From this it is straightforward to see that the dimension is $2^m - 1 - m$ and the minimum distance is 3. If you agree, I can add this to the entry.
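As a quick sanity check of this parity-check description (a sketch of my own, not taken from the entry, using brute force which is fine for small $m$), the following Python snippet builds the $m \times (2^m - 1)$ matrix whose columns are all non-zero binary vectors of length $m$, and verifies that the resulting code has dimension $2^m - 1 - m$ and minimum distance 3.

```python
# Build the binary Hamming code of length n = 2^m - 1 from the parity-check
# matrix whose columns are all non-zero binary vectors of length m, then check
# by brute force that its dimension is 2^m - 1 - m and its minimum distance is 3.
from itertools import product

def hamming_parity_check(m):
    """Columns are the binary expansions (LSB first) of 1, ..., 2^m - 1."""
    cols = [[(j >> i) & 1 for i in range(m)] for j in range(1, 2 ** m)]
    return [[col[i] for col in cols] for i in range(m)]     # m x (2^m - 1) matrix

def codewords(H):
    """All vectors x with H x = 0 over GF(2) (brute force; fine for small m)."""
    m, n = len(H), len(H[0])
    return [x for x in product((0, 1), repeat=n)
            if all(sum(H[i][j] * x[j] for j in range(n)) % 2 == 0 for i in range(m))]

for m in (2, 3, 4):
    n = 2 ** m - 1
    C = codewords(hamming_parity_check(m))
    dim = len(C).bit_length() - 1                            # |C| = 2^dim
    min_dist = min(sum(w) for w in C if any(w))              # = minimum weight, since C is linear
    print(f"m={m}: length {n}, dimension {dim} (= {n - m}), minimum distance {min_dist}")
```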
Regarding the Golay code, although I still haven’t read carefully what you wrote, I think it is what is called the “Turyn construction” in Theorem 18.12 in MacWilliams and Sloane. There is also another construction (which I have to recall), the Miracle Octad Generator (MOG).
Yaron
Thanks, Yaron!
I have a few questions/reactions for you. The source I was using definitely spoke of isomorphisms of codes, although I think you could say the authors do not belong to the coding community per se (so terminology could differ between communities). They are more associated with Lie algebra theory, conformal field theory, and representation-theoretic approaches to finite simple groups.
Equivalence via permutation of coordinates is essentially the same thing as what is in the article (which involves pulling back along finite bijections between vector space bases). Equivalence via multiplying by a tuple of nonzero scalars is something I didn’t know about (although that doesn’t make any difference for the binary case, since there is only one nonzero scalar in $\mathbb{F}_2$). Is there a conceptual explanation for this notion of equivalence?
I don’t think it makes much difference if one uses your preferred definition of the dual code or the one I wrote down. Either way, the bilinear pairing is given by the Kronecker pairing for a chosen basis; the article’s way of doing it (for binary linear codes) offers a tiny advantage of not involving any choice of ordering basis elements. I’ll think about it, but either way I don’t consider it a big deal.
I am just learning about codes, including the fact that “Hamming” is more general (is there a similar abstract uniqueness characterization?). Last night I was reading about the MOG, which also seems very cool. Yes, feel free to add to the entry; our immediate purpose for embarking on any of this was perhaps limited to getting to descriptions of the Golay code of length 24, Mathieu groups, the Leech lattice, the Monster group, and related things, but of course more information is welcome. There’s also a gray link pointing to an unwritten article on error-correcting codes, which I was thinking of getting to, but please feel free to work on that too, if you’d like.
Is there a conceptual explanation for this notion of equivalence?
I think the reason is that equivalent codes (including the multiplications by the non-zero scalars) have the same dimension and the same minimum distance, and in fact, the same weight distribution (number of codewords of weight $w$ for each $w$), and therefore have the same performance for certain practical types of errors. For example, if the number of errors is never above some $t$, then we only care about the minimum distance (which should be at least $2t+1$) and about the dimension (which tells us how much information is carried by each codeword).
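To illustrate the $2t+1$ point in a self-contained way (an illustrative sketch of mine, not anything quoted from the thread), the snippet below takes the binary length-5 repetition code, whose minimum distance is $5$, and checks that nearest-codeword decoding corrects every error pattern of weight at most $t = 2$.

```python
# A code with minimum distance d >= 2t + 1 corrects any pattern of at most t
# errors under nearest-codeword decoding.  Example: the binary length-5
# repetition code, d = 5, so t = 2.
from itertools import combinations

C = [(0,) * 5, (1,) * 5]                 # the [5, 1] repetition code, minimum distance 5
t = 2                                    # t = (d - 1) // 2

def hamming_dist(x, y):
    return sum(a != b for a, b in zip(x, y))

def nearest_codeword(y):
    return min(C, key=lambda c: hamming_dist(c, y))

for c in C:                                                   # every codeword ...
    for k in range(t + 1):                                    # ... and every error pattern
        for positions in combinations(range(5), k):           # of weight at most t
            y = list(c)
            for i in positions:
                y[i] ^= 1                                     # flip the erroneous bits
            assert nearest_codeword(tuple(y)) == c            # decoding recovers c
print("all error patterns of weight <= t were corrected")
```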
Come to think of it, there is also the notion of “permutation equivalent” which avoids the multiplication by scalars even in the non-binary case.
…offers a tiny advantage of not involving any choice of ordering basis elements
I agree, but can it work for non-binary codes?
which I was thinking of getting to, but please feel free to work on that too, if you’d like.
I think that the entry will be much better if you write it. Thanks for the offer!
Bearing the discussion above with Yaron in mind, I made edits to binary linear code, and created monomial matrix.
Thanks!
A correction to #3: After re-checking, it seems that there is no single notion of equivalence for non-binary codes: some sources allow multiplication by a monomial matrix (Conway-Sloane, Tsfasman-Vladuts-Nogin), while others allow only permutations (including MacWilliams-Sloane, as opposed to what I said above). Either way, equivalent codes have the same weight distribution, and it seems that people just choose the definition that is more convenient for their purposes. (For example, Stichtenoth allows only coordinate-wise multiplications by non-zero scalars (again, as opposed to what I said in #3 – sorry), probably because this way he can say that Goppa’s geometric codes are equivalent iff they come from equivalent divisors).
It seems that most sources on coding theory allow only permutations – exactly as in the nLab entry – so perhaps it is possible to change (in binary linear code)
Arguably though, a more appropriate notion, at least for coding theorists…
To
Arguably though, a more appropriate notion, at least for some coding theorists…
Todd, I also wanted to ask: what did you mean by “is there a similar abstract uniqueness characterization?” in #4? Are you referring to the fact that the Hamming code is the unique type II self-dual code? Because if this is the case, then note that longer Hamming codes are no longer self-dual.
Thanks for the amendment to #3!
Sure, longer codes can’t be self-dual; I just meant: is there some known nice abstract characterization? But it’s really not at all important to me; the question was somewhat idle.
Going through a bunch of old entries here, trying to brush up a little. This one wasn’t even cross-linked with linear code.
added a brief remark (here) relating to quantum stabilizer codes