    • CommentRowNumber1.
    • CommentAuthorMike Shulman
    • CommentTimeNov 16th 2012

    Created numeral.

    • CommentRowNumber2.
    • CommentAuthorTobyBartels
    • CommentTimeNov 16th 2012

    I'm not sure what's standard, but in my idiolect, ‘numeral’ means any term for a natural number, not only the canonical ones. I added some random things about non-canonical terms for natural numbers, although I didn't change the definition of ‘numeral’ at all.

    • CommentRowNumber3.
    • CommentAuthorTodd_Trimble
    • CommentTimeNov 16th 2012

    I’m not sure what’s standard, but in my idiolect, ‘numeral’ means any term for a natural number, not only the canonical ones.

    That’s true in mine as well, so that in the usual model $(\mathbb{N}, +, \cdot, 0, 1)$ there are multiple terms for the same number. But in the usual model $(\mathbb{N}, s, 0)$, all terms are as Mike had written.

    • CommentRowNumber4.
    • CommentAuthorMike Shulman
    • CommentTimeNov 17th 2012

    So in your idiolects, “2+2” is a numeral?

    • CommentRowNumber5.
    • CommentAuthorTobyBartels
    • CommentTimeNov 17th 2012

    Whether $2 + 2$ is a numeral technically depends on whether $2$ and $+$ are in the language; and then they might well be defined so that it evaluates to $s(s(s(s(0))))$, and then one might identify these numerals. The really interesting additional numerals are the nonstandard ones, like the one in (the explicit formulation of) $PA + \neg\mathrm{Con}(PA)$.

    • CommentRowNumber6.
    • CommentAuthorMike Shulman
    • CommentTimeNov 17th 2012

    That’s really weird. I’m pretty sure I’ve only ever heard “numeral” used the way I defined it. For instance, that’s how it is used here, and I would expect Bob Harper to know standard terminology.

    • CommentRowNumber7.
    • CommentAuthorTobyBartels
    • CommentTimeNov 17th 2012

    If the meaning of ‘numeral’ in logic is to connect to its meaning in ordinary mathematical language, then ‘$13$’ ought to be a numeral, even though this only beta-reduces to ‘$s(s(s(s(s(s(s(s(s(s(s(s(s(0)))))))))))))$’.

    But I'm not even claiming that I've seen ‘numeral’ used thus in logic, only that this is how I've always interpreted it.

    • CommentRowNumber8.
    • CommentAuthorjcmckeown
    • CommentTimeNov 17th 2012

    Reduced base-$n$ notations of integers are canonical; so are successor iterations. They’re different canons is all.

    • CommentRowNumber9.
    • CommentAuthorjcmckeown
    • CommentTimeNov 17th 2012

    More on that: base-$n$ notations naturally generalize to ordinal notations shorter than $\epsilon_0$; successors run out of steam comparatively soon.

    • CommentRowNumber10.
    • CommentAuthorTodd_Trimble
    • CommentTimeNov 17th 2012

    What Toby wrote in #7 reflects my (lay) understanding as well. But if Robert Harper is throwing his authority behind the current usage, then that should supersede any objection from me (although the second sentence of my #3 was actually intended as support).

    • CommentRowNumber11.
    • CommentAuthorMike Shulman
    • CommentTimeNov 18th 2012

    I don’t think I’ve ever seen a formal system in which ’13’ was a valid term of type nat. Not that one couldn’t write down such a system, of course, but in my experience base-10 representations are always regarded as metatheoretic abbreviations for the corresponding term written in the actual formal language under consideration. Sometimes people underline them to emphasize that. What system are you thinking of when you say that ’13’ beta-reduces to the corresponding term built out of 0 and s?
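
    To illustrate the point about abbreviations, here is a minimal sketch (in Haskell, not the syntax of any particular proof assistant; the names are my own): the canonical terms are the iterated successors, and a base-10 literal is expanded into such a term *before* it is treated as belonging to the object language at all.

        -- Unary natural numbers: the canonical closed terms are exactly
        -- Z, S Z, S (S Z), ...
        data Nat = Z | S Nat deriving Show

        -- A base-10 literal read as a metatheoretic abbreviation: it is
        -- expanded into the corresponding canonical term up front.
        fromLiteral :: Integer -> Nat
        fromLiteral 0 = Z
        fromLiteral n
          | n > 0     = S (fromLiteral (n - 1))
          | otherwise = error "not a natural number"

        -- fromLiteral 13 == S (S (S (S (S (S (S (S (S (S (S (S (S Z))))))))))))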

    • CommentRowNumber12.
    • CommentAuthorTodd_Trimble
    • CommentTimeNov 19th 2012

    I don’t think I’ve ever seen a formal system in which ’13’ was a valid term of type nat.

    Sorry, I think I expressed myself really badly (even if you, Mike, were mainly responding to Toby). I would agree with you here, but would underscore the word formal. At an informal or pre-formal level, however, people would call ’13’ an Arabic numeral for a certain number, and there are other numeration systems. As part of ordinary parlance, ’numeral’ thus refers to a notation or term that denotes a number, or at least that’s how I understand the word as used at the lay level. I think we can go on to distinguish such historical or ordinary-language usage from the usage in logic or formal systems. Am I making sense?

    • CommentRowNumber13.
    • CommentAuthorMike Shulman
    • CommentTimeNov 19th 2012

    @Todd: yes, I was mainly responding to #7, but thanks for the clarification! I think I agree with you, although I would emphasize that “numeral” refers informally to notations or terms that denote a number directly (for some meaning of “directly”). I think one can argue that $2+2$ “denotes” $4$ in some sense, but I would not call it a numeral.

    I’ve edited the page numeral to discuss both the informal and formal meanings; see what you think.

    • CommentRowNumber14.
    • CommentAuthorTobyBartels
    • CommentTimeNov 19th 2012
    • (edited Nov 19th 2012)

    What system are you thinking of when you say that ’13’ beta-reduces to the corresponding term built out of 0 and s?

    I don't know any such system that people are actually interested in, but it's easy to write down one with this property. The language would consist of constants $0$ through $9$, the unary operation $s$, the binary operations $+$ and $\cdot$, and another binary operation written as concatenation (with precedence ahead of $+$ and $\cdot$ and associating on the left). Put in your axioms for $0$ and $s$ (which don't affect beta-reduction anyway) and the usual recursive definitions for $+$ and $\cdot$ (which do affect beta-reduction). Note that $1$ through $9$ also have standard definitions in terms of $0$ and $s$. So it is left to define the operation written as concatenation; it is defined thus:

    $$ {a}{b} \coloneqq s(9) \cdot a + b . $$

    Now observe:

    $$\array{
    13 & \rightsquigarrow s(9) \cdot 1 + 3 \\
    & \rightsquigarrow s(s(8)) \cdot s(0) + s(2) \\
    & \rightsquigarrow s(s(s(s(7))) \cdot 0 + s(s(s(7))) + s(1)) \\
    & \rightsquigarrow s(s(s(0 + s(s(s(6)))) + s(0))) \\
    & \rightsquigarrow s(s(s(s(s(0 + s(s(s(5))))) + 0))) \\
    & \rightsquigarrow s(s(s(s(s(s(0 + s(s(s(4))))))))) \\
    & \rightsquigarrow s(s(s(s(s(s(s(0 + s(s(s(3)))))))))) \\
    & \rightsquigarrow s(s(s(s(s(s(s(s(0 + s(s(s(2))))))))))) \\
    & \rightsquigarrow s(s(s(s(s(s(s(s(s(0 + s(s(s(1)))))))))))) \\
    & \rightsquigarrow s(s(s(s(s(s(s(s(s(s(0 + s(s(s(0))))))))))))) \\
    & \rightsquigarrow s(s(s(s(s(s(s(s(s(s(s(0 + s(s(0))))))))))))) \\
    & \rightsquigarrow s(s(s(s(s(s(s(s(s(s(s(s(0 + s(0))))))))))))) \\
    & \rightsquigarrow s(s(s(s(s(s(s(s(s(s(s(s(s(0 + 0))))))))))))) \\
    & \rightsquigarrow s(s(s(s(s(s(s(s(s(s(s(s(s(0))))))))))))) .
    }$$
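
    If it helps to see this system running, here is a small Haskell sketch of it (the type and function names are my own choice; evaluation collapses the whole beta-reduction into a single pass rather than exhibiting each step):

        -- Terms of the language in #14: digit constants, s, +, ·, and
        -- juxtaposition, with juxtaposition defined by  a b := s(9)·a + b.
        data Term = Digit Int        -- the constants 0..9
                  | Succ Term        -- s
                  | Add Term Term    -- +
                  | Mul Term Term    -- ·
                  | Cat Term Term    -- juxtaposition (left-associative)

        -- canonical forms: iterated successors of zero
        data Nat = Z | S Nat deriving Show

        plus, times :: Nat -> Nat -> Nat
        plus Z     n = n
        plus (S m) n = S (plus m n)
        times Z     _ = Z
        times (S m) n = plus (times m n) n

        eval :: Term -> Nat
        eval (Digit 0) = Z
        eval (Digit d) = S (eval (Digit (d - 1)))   -- 1..9 defined from 0 and s
        eval (Succ t)  = S (eval t)
        eval (Add a b) = plus (eval a) (eval b)
        eval (Mul a b) = times (eval a) (eval b)
        eval (Cat a b) = plus (times (S (eval (Digit 9))) (eval a)) (eval b)

        -- eval (Cat (Digit 1) (Digit 3)) is thirteen applications of S to Z,
        -- matching the reduction of 13 shown above.
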
    • CommentRowNumber15.
    • CommentAuthorTobyBartels
    • CommentTimeNov 19th 2012

    As jcmckeown's post suggests, there's also a formal system in which $s(s(s(s(s(s(s(s(s(s(s(s(s(0)))))))))))))$ beta-reduces to $13$. This has the same language as above, but now the operation written as concatenation is primitive, while $s$ is defined by $s(a) \coloneqq a + 1$. Also, $1$ through $9$ are primitive, and $+$ is defined directly in terms of the primitive operations rather than (which would be circular) in terms of $s$. (A similar remark applies to $\cdot$, although this doesn't appear in the beta-reduction of $s(s(s(s(s(s(s(s(s(s(s(s(s(0)))))))))))))$.) The full definition of $+$ is rather complicated, and there are many noncanonical terms that don't beta-reduce to any canonical ones (such as $2(36)$, which is only provably equal to $56$), so this system is not as nice as the previous one.

    Another system in which $s(s(s(s(s(s(s(s(s(s(s(s(s(0)))))))))))))$ beta-reduces to $13$ would have unary operators $0$ through $9$, written as postpositions, and an invisible constant. The recursive definition of $+$ is much simpler now, such that I almost don't mind writing it down (but watch out for the hard-to-spot invisible constants):

    $$\array{
    {} + {} \coloneqq {} ; \\
    {} + b0 \coloneqq b0 ; \\
    \vdots \\
    {} + b9 \coloneqq b9 ; \\
    a0 + {} \coloneqq a0 ; \\
    a0 + b0 \coloneqq (a + b)0 ; \\
    a0 + b1 \coloneqq (a + b)1 ; \\
    \vdots \\
    a0 + b9 \coloneqq (a + b)9 ; \\
    a1 + {} \coloneqq a1 ; \\
    a1 + b0 \coloneqq (a + b)1 ; \\
    a1 + b1 \coloneqq (a + b)2 ; \\
    \vdots \\
    a1 + b8 \coloneqq (a + b)9 ; \\
    a1 + b9 \coloneqq (a + b + 1)0 ; \\
    \vdots \\
    a9 + b9 \coloneqq (a + b + 1)8 .
    }$$

    And now I can essentially show the reduction:

    $$\array{
    s(s(s(s(s(s(s(s(s(s(s(s(s(0))))))))))))) & \rightsquigarrow s(s(s(s(s(s(s(s(s(s(s(s(0)))))))))))) + 1 \\
    & \rightsquigarrow s(s(s(s(s(s(s(s(s(s(s(0))))))))))) + 1 + 1 \\
    & \cdots \\
    & 0 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 \\
    & \rightsquigarrow ({} + {})1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 \\
    & \rightsquigarrow 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 \\
    & \rightsquigarrow ({} + {})2 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 \\
    & \rightsquigarrow 2 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 \\
    & \cdots \\
    & \rightsquigarrow 9 + 1 + 1 + 1 + 1 \\
    & \rightsquigarrow ({} + 1)0 + 1 + 1 + 1 \\
    & \rightsquigarrow 11 + 1 + 1 \\
    & \rightsquigarrow 12 + 1 \\
    & \rightsquigarrow 13 .
    }$$

    This still has the poor feature that there are irreducible noncanonical terms: $0$, $00$, $000$, etc., all of which are only provably equal (once you put in the appropriate axioms) to the invisible constant, as well as $01$, $005$, $000123$, etc.

    This second system is almost one that people actually use, although the one that people actually use is a slightly more complicated one, in which the invisible constant only appears when one of the unary operations $0$ through $9$ is directly applied to it, and even $0$ is never directly applied to it when one of these unary operations is applied to the result. If the recursive definitions of $+$ and $\cdot$ are slightly modified (to versions provably equivalent to mine), then we have a theorem that an expression with this restriction beta-reduces to another expression with this condition, now ending in fully canonical forms. My eight-year-old [1] is learning this formal system now [2] in school, so it's not too complicated, even though I don't want to write it all out.


    1. For those who knew me eight years ago and wonder why they never heard of this before: she is adopted, and actually even that process is not completed yet. 

    2. Well, they're taking a break to study division before completing the definition of multiplication in this system, but she does know the full definition of addition, in that she can beta-reduce correctly even though she could not write down the definition formally. 
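
    For comparison with the earlier sketch, here is a Haskell sketch of the digit-postposition system of #15 (again with names of my own choosing): a numeral is modelled as a list of decimal digits applied to the invisible constant, which is the empty list, and $+$ carries digit by digit.

        -- The invisible constant is []; applying the postfix digit d is
        -- appending d.  So the numeral 13 is [1,3].
        type Numeral = [Int]

        -- digit-by-digit addition with carry, in the spirit of (though
        -- coarser-grained than) the clause-by-clause definition of + in #15
        add :: Numeral -> Numeral -> Numeral
        add a b = go (reverse a) (reverse b) 0
          where
            go [] [] 0 = []
            go [] [] c = [c]
            go xs ys c =
              let (x, xs') = pop xs
                  (y, ys') = pop ys
                  t        = x + y + c
              in go xs' ys' (t `div` 10) ++ [t `mod` 10]
            pop []       = (0, [])
            pop (d : ds) = (d, ds)

        -- the successor s is the defined operation (+ 1)
        suc :: Numeral -> Numeral
        suc n = add n [1]

        -- thirteen successors of the invisible constant:
        --   foldr (const suc) [] [1..13]  ==  [1,3]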

    • CommentRowNumber16.
    • CommentAuthorTobyBartels
    • CommentTimeNov 19th 2012

    I have thought about making a kind of computer-algebra system that does calculations in the way that people are taught in schools. (This is the sort of thing that I think about when trying to teach people how to do them.) Since we need to make contact with the notation used by humans, the numerical part would use a system extending that in my comment #14. (Not the systems from #15, which are uglier; so I would just have sped-up algorithms proved to give results equivalent to the slow definitions.)

    • CommentRowNumber17.
    • CommentAuthorTobyBartels
    • CommentTimeNov 19th 2012

    I’ve edited the page numeral to discuss both the informal and formal meanings; see what you think.

    Looks fine!

    • CommentRowNumber18.
    • CommentAuthorTodd_Trimble
    • CommentTimeNov 19th 2012

    Looks fine!

    Ditto!

    • CommentRowNumber19.
    • CommentAuthorTodd_Trimble
    • CommentTimeNov 19th 2012

    For those who knew me eight years ago and wonder why they never heard of this before: she is adopted, and actually even that process is not completed yet.

    Well, well, well! I had no idea.

    • CommentRowNumber20.
    • CommentAuthorTobyBartels
    • CommentTimeNov 20th 2012

    You have to follow me on Facebook to know that kind of thing, I guess.

    • CommentRowNumber21.
    • CommentAuthorMike Shulman
    • CommentTimeNov 20th 2012

    Okay, I’m glad we’re all happy with the entry. (-:

    I am reminded that the Coq standard library also includes a definition of natural numbers using binary notation rather than unary. Obviously the same ideas would work for base 10 rather than base 2. Is that similar to your system?
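
    For readers who haven't seen it, the Coq definition Mike mentions is essentially a binary version of the digit systems above. Here is its shape transcribed into Haskell (the constructor names here are mine; Coq's are xH, xO, xI for positive and N0, Npos for N):

        -- binary positive numbers, least significant constructor outermost
        data Pos = XH          -- the numeral 1
                 | XO Pos      -- 2·p   (append bit 0)
                 | XI Pos      -- 2·p+1 (append bit 1)
          deriving Show

        data N = N0 | NPos Pos deriving Show   -- zero or a positive number

        -- successor, computed with carries just like the base-10 systems above
        succPos :: Pos -> Pos
        succPos XH     = XO XH
        succPos (XO p) = XI p
        succPos (XI p) = XO (succPos p)

        -- thirteen = XI (XO (XI XH)), i.e. 1101 in binary
        -- (the outermost constructor is the least significant bit)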

    • CommentRowNumber22.
    • CommentAuthorTobyBartels
    • CommentTimeNov 20th 2012

    Is that similar to your system?

    Yes.

    • CommentRowNumber23.
    • CommentAuthorMike Shulman
    • CommentTimeDec 12th 2012

    I’m not sure whether there was any disagreement or confusion left over at the end of this discussion, but I just ran across the following quotation from Per Martin-Löf in “An intuitionistic theory of types: predicative part”, 1973:

    A closed normal term with type symbol $N$, which obviously must have the form $s(s(\ldots s(0)\ldots))$, is called a numeral.

    So I thought I would share it.

    • CommentRowNumber24.
    • CommentAuthorUrs
    • CommentTimeOct 14th 2014
    • (edited Oct 14th 2014)

    In a discussion of the formalization (or lack thereof) of ultrafinitism by David Roberts on G+ here, I am wondering whether the concept of numerals is not a good way to capture what is at issue.

    Given any installation of Coq, the system will confirm within a tiny amount of time that the term

      10^10^10^10^10^10^10^10 : Nat
    

    is well typed. On the other hand, bringing that term to canonical numeral form will fail to terminate on all available hardware, due to running out of (space and/or time) resources. I suppose.

    So in this sense the term in question here “exists” only in one of two possible senses. It exists as a term, but, within the given installation, not as a term of canonical form. This might be just the formalization of what is at stake in ultrafinitism. (?)
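
    To make the two senses of “exists” concrete, here is a purely illustrative sketch (in Haskell rather than Coq): the expression for the big number is tiny and cheap to build, but forcing it, let alone expanding it to canonical successor form, is hopeless.

        data Nat = Z | S Nat

        ten :: Integer
        ten = 10

        -- building this lazy value costs essentially nothing: it is a small
        -- unevaluated expression, analogous to the well-typed term above
        big :: Integer
        big = ten ^ ten ^ ten ^ ten ^ ten ^ ten ^ ten ^ ten   -- right-associated

        -- expanding a number to canonical (unary successor) form
        toCanonical :: Integer -> Nat
        toCanonical 0 = Z
        toCanonical n = S (toCanonical (n - 1))

        -- even printing `big`, never mind evaluating `toCanonical big`,
        -- would exhaust any physically available space and time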

    • CommentRowNumber25.
    • CommentAuthorDavidRoberts
    • CommentTimeOct 14th 2014
    • (edited Oct 14th 2014)

    @Urs - I completely agree, and this agrees with what I see in Zeilberger’s work. One can say something like

    $$ 2^{googol} + 2^{googol^{googol}} $$

    but evaluating this will never happen. We are perfectly happy to say this is equal to $2^{googol^{googol}} + 2^{googol}$, but the statement is purely formal, like saying ’given a measurable cardinal…’ or ’given a proper class of Woodin cardinals…’

    • CommentRowNumber26.
    • CommentAuthorUrs
    • CommentTimeOct 14th 2014

    Okay, good.

    So that’s the sought-after formalization of ultrafinitism then: non-computability of sufficiently large canonical forms in type theory implemented on a machine with finite resources.

    Seems to be simple enough.(!?)

    • CommentRowNumber27.
    • CommentAuthorTobyBartels
    • CommentTimeOct 14th 2014

    But ultrafinitists (well, some of them, perhaps not Zeilberger) want to be able to compute freely with natural numbers without having to think that they may suddenly hit a wall. If you're on a machine with finite resources, then at some point you run out of resources, so that $x$ exists but $x + 1$ does not. But ultrafinitism should include an axiom that every natural number has a successor. It just should not include the theorem that $10^{10^{10}}$ exists.

    What makes formalization of ultrafinitism difficult is that non-ultrafinitists will claim that there is a proof that $10^{10^{10}}$ exists using only the axiom that every natural number has a successor. But ultrafinitists don't accept that that proof exists! So in a way, what's really difficult is not a formalization of ultrafinitism as such but a formalization that is comprehensible to a non-ultrafinitist.

    • CommentRowNumber28.
    • CommentAuthorDavidRoberts
    • CommentTimeOct 14th 2014

    @Toby, in reality one doesn’t really ’hit a wall’, just a temporary pause until more resources are available. As one needs more space/power, it gets more difficult to get them. Hypothetically I could convince Google to let me use their total computing power, or the owners of that big data centre in Utah. More realistically I could purchase cloud time ’as needed’. There is no hard cutoff until we reach total use of planetary resources. And even then…

    The important thing as you point out is that proofs with many many symbols are not accepted, not just large numbers.

    • CommentRowNumber29.
    • CommentAuthorTobyBartels
    • CommentTimeOct 14th 2014

    True; on one particular machine there may be a hard cutoff (and even that depends on what else that machine is doing), but not in computing as a whole.

    I actually like ultrafinitism as a potential way to formalize (if we can formalize ultrafinitism!) what is practically computable. Often people will say that computation problems in $P$ (polynomial-time) are practical, but this is only relative practicality in an infinite limit. If you have a polynomial with a large degree or even a large leading coefficient, then this is immediately impractical; and with any non-constant polynomial, the problem becomes eventually impractical. But if the degree, leading coefficient, and input to the polynomial are restricted to numbers that exist ultrafinitistically, then (perhaps!) we can conclude that the problem is practical.

    • CommentRowNumber30.
    • CommentAuthorUrs
    • CommentTimeOct 14th 2014
    • (edited Oct 14th 2014)

    I actually like ultrafinitism as a potential way to formalize […] what is practically computable.

    But is there much to be formalized here? It seems fairly trivial: Practically computable is everything that is computable within a given bound of resources, the bound depending on the actual practice. There seems little further to be said in general.

    (Maybe ultrafinitism is to mathematics as the interpretation of quantum mechanics is to physics.)

    • CommentRowNumber31.
    • CommentAuthorMike Shulman
    • CommentTimeOct 14th 2014

    Maybe ultrafinitism is the internal language of some topos in which the “natural numbers” exist less and less as one goes out to infinity?

    • CommentRowNumber32.
    • CommentAuthorTobyBartels
    • CommentTimeOct 14th 2014

    OK, it's trivial in practice, but we still want to understand it in theory!

    I mean, there's a reason that people care whether something is in $P$. But we realize that this says at once too much and too little: something can be practical but not in $P$ (because we're only computing it for a small $n$ where it can be done), and something can be in $P$ but not practical (because $n$, the leading coefficient, or the exponent is large). The former problem I will let live; any problem may be practical for small $n$, and we're usually happy when it is anyway. But the latter problem I would like to fix, by demanding that $n$, the coefficients, and the exponent must exist.

    Actually, in practice, it's easy to find real-life problems in $P$ such that the problem is impractical at some $n$ that clearly exists, with the coefficients and the exponent also clearly existing. But there is still some sense that people find that problems in $P$ continue to be practical for a while. So abandon $n$ and look at the coefficients and the exponent.

    I think that Edward Nelson's concept of countable, addable, and multiplicable numbers is relevant here. Most ultrafinitists speak simply of feasible numbers (if they don't just speak of numbers and refuse to qualify it at all), but Nelson has a hierarchy of more and more restricted concepts of number. (Since the countable numbers are the largest class and ‘countable’ has other meanings, I prefer to use the word ‘feasible’ for them. Then a number is addable iff its sum with any feasible number is feasible, and a number is multiplicable iff its product with any addable number is addable.) Nelson does not consider exponentiable numbers, since they don't behave as well, but they can be defined.
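
    (For reference, the two defining clauses just given, written out symbolically; this is only a restatement of the sentence above, with “feasible” taken as the primitive notion:)

    $$\begin{aligned}
    a \ \text{addable} \quad &\iff\quad \forall b \; (b \ \text{feasible} \;\Rightarrow\; a + b \ \text{feasible}),\\
    a \ \text{multiplicable} \quad &\iff\quad \forall b \; (b \ \text{addable} \;\Rightarrow\; a \cdot b \ \text{addable}).
    \end{aligned}$$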

    Then (these are now my definitions, not Nelson's, although he may also have thought of this) a polynomial is feasible if its degree is feasible and so are all of its coefficients; it's addable if its degree is feasible and its coefficients are all addable; it's multiplicable if its degree is addable and its coefficients are all multiplicable; and it's composable if its degree is exponentiable and its coefficients are all multiplicable. Or something like that.

    We then get more and more restrictive classes of polynomials, defining more and more practically computable problems, with stronger and stronger closure properties.

    • CommentRowNumber33.
    • CommentAuthorDavidRoberts
    • CommentTimeOct 14th 2014

    Yes, I wanted to mention Nelson’s hierarchy of feasible numbers, but I’d gone on long enough.

    @Mike, see http://mathoverflow.net/q/71386. Although perhaps a realisability-style pretopos is better than a topos, since exponentials are suspect.

    • CommentRowNumber34.
    • CommentAuthorUrs
    • CommentTimeOct 14th 2014

    Thanks, Toby.

    Then I suppose that the specification of the resource bounds is essentially in the choice of what is called “feasible”. Right?

    • CommentRowNumber35.
    • CommentAuthorDavidRoberts
    • CommentTimeOct 16th 2014

    Toby,

    Mike mentions (at this blog post by Andrej Bauer) this paper by Aaronson: NP-complete Problems and Physical Reality. Sorta relevant.

    • CommentRowNumber36.
    • CommentAuthorUrs
    • CommentTimeDec 29th 2022

    I have expanded the first paragraph of the Idea-section, linking it to denotational semantics.

    diff, v10, current

    • CommentRowNumber37.
    • CommentAuthorUrs
    • CommentTimeDec 29th 2022

    and added pointer to:

    diff, v10, current