Created numeral.
I'm not sure what's standard, but in my idiolect, ‘numeral’ means any term for a natural number, not only the canonical ones. I added some random things about non-canonical terms for natural numbers, although I didn't change the definition of ‘numeral’ at all.
I’m not sure what’s standard, but in my idiolect, ‘numeral’ means any term for a natural number, not only the canonical ones.
That’s true in mine as well, so that in the usual model there are multiple terms for the same number. But in the usual model, all terms are as Mike had written.
So in your idiolects, “2+2” is a numeral?
Whether ‘2+2’ is a numeral technically depends on whether ‘+’ and ‘2’ are in the language; and then they might well be defined so that it evaluates to ‘4’, and then one might identify these numerals. The really interesting additional numerals are the nonstandard ones, like the one in (the explicit formulation of) .
That’s really weird. I’m pretty sure I’ve only ever heard “numeral” used the way I defined it. For instance, that’s how it is used here, and I would expect Bob Harper to know standard terminology.
If the meaning of ‘numeral’ in logic is to connect to its meaning in ordinary mathematical language, then ‘13’ ought to be a numeral, even though this only beta-reduces to the corresponding term built out of ‘0’ and ‘s’.
But I'm not even claiming that I've seen ‘numeral’ used thus in logic, only that this is how I've always interpreted it.
Reduced base-b notations of integers are canonical; so are successor iterations. They’re different canons is all.
More on that: base-b notations naturally generalize to ordinal notations shorter than ε₀; successors run out of steam comparatively soon.
What Toby wrote in #7 reflects my (lay) understanding as well. But if Robert Harper is throwing his authority behind the current usage, then that should supersede any objection from me (although the second sentence of my #3 was actually intended as support).
I don’t think I’ve ever seen a formal system in which ’13’ was a valid term of type nat. Not that one couldn’t write down such a system, of course, but in my experience base-10 representations are always regarded as metatheoretic abbreviations for the corresponding term written in the actual formal language under consideration. Sometimes people underline them to emphasize that. What system are you thinking of when you say that ’13’ beta-reduces to the corresponding term built out of 0 and s?
I don’t think I’ve ever seen a formal system in which ’13’ was a valid term of type nat.
Sorry, I think I expressed myself really badly (even if you, Mike, were mainly responding to Toby). I would agree with you here, but would underscore the word formal. At an informal or pre-formal level, however, people would call ’13’ an Arabic numeral for a certain number, and there are other numeration systems. As part of ordinary parlance, ’numeral’ thus refers to a notation or term that denotes a number, or at least that’s how I understand the word as used at the lay level. I think we can go on to distinguish such historical or ordinary-language usage from the usage in logic or formal systems. Am I making sense?
@Todd: yes, I was mainly responding to #7, but thanks for the clarification! I think I agree with you, although I would emphasize that “numeral” refers informally to notations or terms that denote a number directly (for some meaning of “directly”). I think one can argue that “2+2” “denotes” 4 in some sense, but I would not call it a numeral.
I’ve edited the page numeral to discuss both the informal and formal meanings; see what you think.
What system are you thinking of when you say that ’13’ beta-reduces to the corresponding term built out of 0 and s?
I don't know any such system that people are actually interested in, but it's easy to write down one with this property. The language would consist of constants 0 through 9, the unary operation s, the binary operations + and ×, and another binary operation written as concatenation (with precedence ahead of + and × and associating on the left). Put in your axioms for 0 and s (which don't affect beta-reduction anyway) and the usual recursive definitions for + and × (which do affect beta-reduction). Note that 1 through 9 also have standard definitions in terms of 0 and s. So it is left to define the operation written as concatenation; it is defined thus:

x y = s(9) × x + y
Now observe:
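To make the observation concrete, here is a toy Python model of this system (illustrative code of my own, not any proof assistant): digits are abbreviations for iterated successors, concatenation is defined as x y = s(9) × x + y, and a decimal string beta-reduces to a canonical term built from 0 and s.

```python
# Terms are nested tuples: ('0',) is the constant 0, ('s', t) applies s to t.

ZERO = ('0',)

def s(t):
    """Successor: one more application of s."""
    return ('s', t)

def digit(d):
    """Standard definition of the constants 0..9 in terms of 0 and s."""
    t = ZERO
    for _ in range(d):
        t = s(t)
    return t

def add(x, y):
    """Usual recursive definition: x + 0 = x, x + s(y) = s(x + y)."""
    return x if y == ZERO else s(add(x, y[1]))

def mul(x, y):
    """Usual recursive definition: x * 0 = 0, x * s(y) = x * y + x."""
    return ZERO if y == ZERO else add(mul(x, y[1]), x)

def concat(x, y):
    """The concatenation operation, defined as s(9) * x + y."""
    return add(mul(s(digit(9)), x), y)

def numeral(string):
    """Left-associated reading of a decimal string, e.g. '13' = (1)(3)."""
    t = digit(int(string[0]))
    for ch in string[1:]:
        t = concat(t, digit(int(ch)))
    return t

def count_s(t):
    """How many times s is applied in a canonical term."""
    n = 0
    while t != ZERO:
        t, n = t[1], n + 1
    return n

# '13' reduces to thirteen applications of s to 0:
print(count_s(numeral('13')))  # 13
```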
As jcmckeown's post suggests, there's also a formal system in which the term built out of 0 and s beta-reduces to ‘13’. This has the same language as above, but now the operation written as concatenation is primitive while s is defined by s(x) = x + 1. Also, 0 through 9 are primitive, and + is defined directly in terms of the primitive operations rather than (which would be circular) in terms of s. (A similar remark applies to ×, although this doesn't appear in the beta-reduction of this term.) The full definition of + is rather complicated, and there are many noncanonical terms that don't beta-reduce to any canonical ones (though they are provably equal to canonical ones), so this system is not as nice as the previous one.
Another system in which the term built out of 0 and s beta-reduces to ‘13’ would have unary operators 0 through 9, written as postpositions, and an invisible constant. The recursive definition of + is much simpler now, such that I almost don't mind writing it down (but watch out for the hard-to-spot invisible constants):

x + = x
+ y = y
x a + y b = (x + y) c if a + b = c < 10
x a + y b = (x + y + 1) c if a + b = 10 + c
And now I can essentially show the reduction:
This still has the poor feature that there are unreducible noncanonical terms: 0, 00, 000, etc, all of which are only provably equal (once you put in the appropriate axioms) to the invisible constant, as well as 01, 002, etc.
This second system is almost one that people actually use, although the one that people actually use is a slightly more complicated one, in which the invisible constant only appears when one of the unary operations 0 through 9 is directly applied to it, and even 0 is never directly applied to it when one of these unary operations is applied to the result. If the recursive definitions of + and × are slightly modified (to versions provably equivalent to mine), then we have a theorem that an expression with this restriction beta-reduces to another expression with this condition, now ending in fully canonical forms. My eight-year-old [1] is learning this formal system now [2] in school, so it's not too complicated, even though I don't want to write it all out.
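For illustration, here is a sketch of the schoolbook addition algorithm on this last kind of numeral (Python code of my own; the empty string plays the role of the invisible constant, with digits appended on the right).

```python
def school_add(x: str, y: str) -> str:
    """Add two decimal-digit strings right to left, carrying as needed."""
    result, carry = [], 0
    i, j = len(x) - 1, len(y) - 1
    while i >= 0 or j >= 0 or carry:
        total = carry
        if i >= 0:
            total += int(x[i])
            i -= 1
        if j >= 0:
            total += int(y[j])
            j -= 1
        carry, d = divmod(total, 10)
        result.append(str(d))
    # Reversing restores the usual left-to-right order; an empty result
    # is the invisible constant itself.
    return ''.join(reversed(result))

print(school_add('58', '67'))  # 125
```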
[1] For those who knew me eight years ago and wonder why they never heard of this before: she is adopted, and actually even that process is not completed yet.
[2] Well, they're taking a break to study division before completing the definition of multiplication in this system, but she does know the full definition of addition, in that she can beta-reduce correctly even though she could not write down the definition formally.
I have thought about making a kind of computer-algebra system that does calculations in the way that people are taught in schools. (This is the sort of thing that I think about when trying to teach people how to do them.) Since we need to make contact with the notation used by humans, the numerical part would use a system extending that in my comment #14. (Not the systems from #15, which are uglier; so I would just have sped-up algorithms proved to give results equivalent to the slow definitions.)
I’ve edited the page numeral to discuss both the informal and formal meanings; see what you think.
Looks fine!
Looks fine!
Ditto!
For those who knew me eight years ago and wonder why they never heard of this before: she is adopted, and actually even that process is not completed yet.
Well, well, well! I had no idea.
You have to follow me on Facebook to know that kind of thing, I guess.
Okay, I’m glad we’re all happy with the entry. (-:
I am reminded that the Coq standard library also includes a definition of natural numbers using binary notation rather than unary. Obviously the same ideas would work for base 10 rather than base 2. Is that similar to your system?
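For reference, Coq's binary representation (the `positive` type underlying `N`) builds a numeral from its bits: `xH` is 1, `xO p` is 2p, and `xI p` is 2p + 1. A rough Python transcription (the encoding is Coq's; the function names are mine):

```python
def to_positive(n):
    """Binary numeral for n >= 1: nested ('xO'/'xI', _) pairs ending in 'xH'."""
    if n == 1:
        return 'xH'
    half, bit = divmod(n, 2)
    return ('xI' if bit else 'xO', to_positive(half))

def value(p):
    """Read a binary numeral back as an ordinary integer."""
    if p == 'xH':
        return 1
    tag, rest = p
    return 2 * value(rest) + (1 if tag == 'xI' else 0)

print(to_positive(13))         # ('xI', ('xO', ('xI', 'xH')))
print(value(to_positive(13)))  # 13
```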
Is that similar to your system?
Yes.
I’m not sure whether there was any disagreement or confusion left over at the end of this discussion, but I just ran across the following quotation from Per Martin-Löf in “An intuitionistic theory of types: predicative part”, 1973:
A closed normal term with type symbol N, which obviously must have the form s(s(⋯s(0)⋯)), is called a numeral.
So I thought I would share it.
In a discussion of formalization (or lack thereof) of ultrafinitism by David Roberts on G+ here, I am wondering whether the concept of numerals is not a good way to capture what is at issue there.
Given any installation of Coq, the system will confirm within a tiny amount of time that the term
10^10^10^10^10^10^10^10 : Nat
is well typed. On the other hand, bringing that term to canonical numeral form will fail to terminate on all available hardware, due to running out of (space and/or time) resources. I suppose.
So in this sense the term in question here “exists” only in one of two possible senses. It exists as a term, but, within the given installation, not as a term of canonical form. This might be just the formalization of what is at stake in ultrafinitism. (?)
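An illustration in Python rather than Coq (all names invented): checking that the tower term is well formed only walks its tiny syntax tree, while producing the canonical numeral would mean materializing every digit, which exhausts any physical machine long before it finishes.

```python
def tower(height):
    """Syntax tree for 10^10^...^10, right-associated, `height` tens."""
    t = ('lit', 10)
    for _ in range(height - 1):
        t = ('pow', ('lit', 10), t)
    return t

def well_formed(t):
    """A cheap structural check, standing in for type checking."""
    if t[0] == 'lit':
        return isinstance(t[1], int)
    return t[0] == 'pow' and well_formed(t[1]) and well_formed(t[2])

print(well_formed(tower(8)))  # True, and instantly

# By contrast, even the *length* of the canonical form is hopeless:
# 10^(10^10) already has 10^10 + 1 decimal digits (about 10 GB of digits),
# and each further level of the tower makes that unspeakably worse.
```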
@Urs - I completely agree, and this agrees with what I see in Zeilberger’s work. One can say something like 10^10^10^10^10^10^10^10 : Nat, but evaluating this will never happen. We are perfectly happy to say this is equal to some canonical numeral, but the statement is purely formal, like saying ’given a measurable cardinal…’ or ’given a proper class of Woodin cardinals…’
Okay, good.
So that’s the sought-after formalization of ultrafinitism then: non-computability of sufficiently large canonical forms in type theory implemented on a machine with finite resources.
Seems to be simple enough.(!?)
But ultrafinitists (well, some of them, perhaps not Zeilberger) want to be able to compute freely with natural numbers without having to think that they may suddenly hit a wall. If you're on a machine with finite resources, then at some point you run out of resources, so that some number exists but its successor does not. But ultrafinitism should include an axiom that every natural number has a successor. It just should not include the theorem that 10^10^10^10^10^10^10^10 exists.
What makes formalization of ultrafinitism difficult is that non-ultrafinitists will claim that there is a proof that 10^10^10^10^10^10^10^10 exists using only the axiom that every natural number has a successor. But ultrafinitists don't accept that that proof exists! So in a way, what's really difficult is not a formalization of ultrafinitism as such but a formalization that is comprehensible to a non-ultrafinitist.
@Toby, in reality one doesn’t really ’hit a wall’, just a temporary pause until more resources are available. As one needs more space/power, it gets more difficult to get them. Hypothetically I could convince Google to let me use their total computing power, or the owners of that big data centre in Utah. More realistically I could purchase cloud time ’as needed’. There’s no hard cutoff until we reach total use of planetary resources. And even then…
The important thing as you point out is that proofs with many many symbols are not accepted, not just large numbers.
True; on one particular machine there may be a hard cutoff (and even that depends on what else that machine is doing), but not in computing as a whole.
I actually like ultrafinitism as a potential way to formalize (if we can formalize ultrafinitism!) what is practically computable. Often people will say that computation problems in P (polynomial time) are practical, but this is only relative practicality in an infinite limit. If you have a polynomial with a large degree or even a large leading coefficient, then this is immediately impractical; and with any non-constant polynomial, the problem becomes eventually impractical. But if the degree, leading coefficient, and input to the polynomial are restricted to numbers that exist ultrafinitistically, then (perhaps!) we can conclude that the problem is practical.
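For a sense of scale, a back-of-envelope comparison (numbers chosen purely for illustration): an n² algorithm is comfortable at n = 10⁶, while an n¹⁰⁰ algorithm, although also "polynomial time", is hopeless already at n = 2.

```python
steps_quadratic = (10**6) ** 2   # n^2 at n = 10^6: about 10^12 steps
steps_degree100 = 2 ** 100       # n^100 at n = 2: about 1.3 * 10^30 steps

# The "polynomial" high-degree count dwarfs the quadratic one by
# more than fifteen orders of magnitude:
print(steps_degree100 > 10**15 * steps_quadratic)  # True
```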
I actually like ultrafinitism as a potential way to formalize […] what is practically computable.
But is there much to be formalized here? It seems fairly trivial: Practically computable is everything that is computable within a given bound of resources, the bound depending on the actual practice. There seems little further to be said in general.
(Maybe ultrafinitism is to mathematics as the interpretation of quantum mechanics is to physics.)
Maybe ultrafinitism is the internal language of some topos in which the “natural numbers” exist less and less as one goes out to infinity?
OK, it's trivial in practice, but we still want to understand it in theory!
I mean, there's a reason that people care whether something is in P. But we realize that this says at once too much and too little: something can be practical but not in P (because we're only computing it for a small n where it can be done), and something can be in P but not practical (because n, the leading coefficient, or the exponent is large). The former problem I will let live; any problem may be practical for small n, and we're usually happy when it is anyway. But the latter problem I would like to fix, by demanding that n, the coefficients, and the exponent must exist.
Actually, in practice, it's easy to find real-life problems in P such that the problem is impractical at some n that clearly exists, with the coefficients and the exponent also clearly existing. But there is still some sense that people find that problems in P continue to be practical for a while. So abandon n and look at the coefficients and the exponent.
I think that Edward Nelson's concept of countable, addable, and multiplicable numbers is relevant here. Most ultrafinitists speak simply of feasible numbers (if they don't just speak of numbers and refuse to qualify it at all), but Nelson has a hierarchy of more and more restricted concepts of number. (Since the countable numbers are the largest class and ‘countable’ has other meanings, I prefer to use the word ‘feasible’ for them. Then a number is addable iff its sum with any feasible number is feasible, and a number is multiplicable iff its product with any addable number is addable.) Nelson does not consider exponentiable numbers, since they don't behave as well, but they can be defined.
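In symbols (my own transcription of the definitions just given, with an unspecified base predicate Feas for feasibility):

```latex
% Nelson-style hierarchy, transcribed from the prose above;
% Feas is the unspecified base notion of feasibility.
\mathrm{Add}(a) \;:\Leftrightarrow\; \forall f\,\bigl(\mathrm{Feas}(f) \Rightarrow \mathrm{Feas}(a + f)\bigr)
\qquad
\mathrm{Mult}(m) \;:\Leftrightarrow\; \forall a\,\bigl(\mathrm{Add}(a) \Rightarrow \mathrm{Add}(m \cdot a)\bigr)
```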
Then (these are now my definitions, not Nelson's, although he may also have thought of this) a polynomial is feasible if its degree is feasible and so are all of its coefficients; it's addable if its degree is feasible and its coefficients are all addable; it's multiplicable if its degree is addable and its coefficients are all multiplicable; and it's composable if its degree is exponentiable and its coefficients are all multiplicable. Or something like that.
We then get more and more restrictive classes of polynomials, defining more and more practically computable problems, with stronger and stronger closure properties.
Yes, I wanted to mention Nelson’s hierarchy of feasible numbers, but I’d gone on long enough.
@Mike, see http://mathoverflow.net/q/71386 Although perhaps a realisability style pretopos is better than a topos, since exponentials are suspect.
Thanks, Toby.
Then I suppose that the specification of the resource bounds is essentially in the choice of what is called “feasible”. Right?
Toby,
Mike mentions (at this blog post by Andrej Bauer) this paper by Aaronson: NP-complete Problems and Physical Reality. Sorta relevant.
I have expanded the first paragraph of the Idea-section, linking it to denotational semantics.
and added pointer to: