Talk:Proof: Difference between revisions

 
:::::: (I fixed my phrasing there.) 1. You are talking about smaller limits here, and 2. you are introducing new terminology (e.g. "DList") without bothering to define it, and making less general statements. In other words, these are meaningless objections -- the point is that there is a limit. This means, in the context of this task, that there will be even numbers which cannot be added associatively because they cannot be added at all. This leaves us with two options: either (a) the type system is incomplete and does not represent this failure mode, or (b) the type system does represent this failure mode. Traditionally, static type systems use (a) and run-time typing is needed for (b). One common failure mode (e.g. two's complement, fixed width) violates the Peano axioms for approximately half of the representable combinations of values. Another common failure mode (inductive representation or variable width) poses an additional constraint: the time needed to complete the operation increases as the values increase in size. This second failure mode can be made so burdensome that it disguises the first, but that's not always a good thing. My problem here is that this task currently seems to be asking for a type system with no failure modes. Of course we know that any actual implementation of the Peano axioms must eventually fail to implement some cases, but -- because of earlier objections here -- I am afraid that if I present an implementation that has either of these failure modes, people will object to it. So I am asking that this task requirement be made explicit. --[[User:Rdm|Rdm]] 10:08, 14 May 2012 (UTC)
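:::::: As an aside, the first failure mode mentioned above is easy to demonstrate concretely. With fixed-width machine arithmetic the Peano axiom that a successor is never zero fails at the width limit; a minimal C sketch (relying only on the fact that unsigned wraparound is well-defined in C):
<pre>
#include <limits.h>
#include <stdio.h>

int main(void) {
    /* Fixed-width failure mode: unsigned arithmetic wraps modulo 2^N,
       so the "successor" of the largest representable value is 0,
       violating the Peano axiom succ(n) != 0. */
    unsigned int n = UINT_MAX;
    printf("%u\n", n + 1u);   /* prints 0 */
    return 0;
}
</pre>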
 
::::::: This will be my final attempt to clarify this to you (please grab a book or article on type theory if you want to know how mathematicians and computer scientists define a type): the only implication of the fact that an actual implementation of a given logic/type theory runs on a machine with a finite amount of memory is that the set of results the compiler can give is extended from "I was able to verify the proof as correct" and "I was able to verify the proof as incorrect" with a third: "I'm unable to verify the proof as either correct or incorrect because I ran out of memory". The third option must not occur here. The compiler should say "I was able to verify the proof as correct", assuring us that the property even + even = even holds for all of the countably infinitely many pairs of natural numbers.
::::::: Similarly, if the theorem asked us to prove that reversing a linked list twice gives us back the original linked list, we should be able to prove this. The proof should hold independently of whether the machine has a 32-bit, 64-bit or even unbounded word size. Having a finite amount of memory will not prevent a computer from working with infinities; it will only bound the maximum complexity of the proof, exactly analogous to the situation where a mathematician has only a finite amount of paper to work on (and I've yet to meet a mathematician with an infinite supply of paper).
::::::: I'm strongly getting the impression that you've never managed to work out a proof by induction on paper, however, which would make this beyond your ability to comprehend. In that case, either do the appropriate background research or simply accept this on authority. —''[[User:Ruud Koot|Ruud]]'' 12:04, 14 May 2012 (UTC)
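::::::: For the record, here is roughly what such a once-and-for-all proof looks like. A sketch in Lean 4 (the predicate name <code>Even</code> and the theorem name are my own choices, not prescribed by the task); the checker verifies it without ever evaluating a particular sum:
<pre>
-- An evenness predicate on the Peano naturals, by the usual two rules.
inductive Even : Nat → Prop where
  | zero    : Even 0
  | add_two : ∀ n, Even n → Even (n + 2)

-- even + even = even, established for every pair of naturals at once.
theorem even_add {m n : Nat} (hm : Even m) (hn : Even n) : Even (m + n) := by
  induction hm with
  | zero           => rw [Nat.zero_add]; exact hn
  | add_two k _ ih => rw [Nat.add_right_comm]; exact Even.add_two _ ih
</pre>
::::::: The linked-list claim has the same shape: Lean's core library proves <code>l.reverse.reverse = l</code> (as <code>List.reverse_reverse</code>) by exactly such a structural induction.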
 
::: See also this link: http://okmij.org/ftp/Computation/uncountable-sets.html.
 
::: My current inclination is to believe that you do not get [http://www.cs.nott.ac.uk/~txa/publ/ydtm.pdf the point] of dependently-typed programming languages / proof systems. — [[User:Ht|Ht]] 23:00, 13 May 2012 (UTC)
 