 
:::::: (I fixed my phrasing there.) 1. You are talking about smaller limits here, and 2. you are introducing new terminology (e.g. "DList") without bothering to define it, and making less general statements. In other words, these are meaningless objections -- the point is that there is a limit. This means, in the context of this task, that there will be even numbers which cannot be added associatively because they cannot be added at all. This leaves us with two options: either (a) the type system is incomplete and does not represent this failure mode, or (b) the type system does represent this failure mode. Traditionally, static type systems use (a), and run-time typing is needed for (b). One common failure mode (e.g. two's complement, fixed width) violates the Peano axioms for approximately half of the representable combinations of values. Another common failure mode (inductive representation, or variable width) also poses an additional constraint: the time needed to complete the operation increases as the values increase in size. This second failure mode can be made so burdensome that it disguises the first failure mode, but that is not always a good thing. My problem here is that this task currently seems to be asking for a type system with no failure modes. Of course we know that any actual implementation of the Peano axioms must eventually fail to implement some cases, but -- because of earlier objections here -- I am afraid that if I present an implementation that has either of these failure modes, people will object to it. So I am asking that this task requirement be made explicit. --[[User:Rdm|Rdm]] 10:08, 14 May 2012 (UTC)
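:::::: To make the first failure mode concrete, here is a minimal Agda sketch (the type `N4` and the function `sucN` are invented for illustration and stand in for any fixed-width representation): a four-valued "natural" whose successor wraps around, so the Peano axiom that zero is not the successor of any number is violated:

<lang agda>module Wrap where

-- Propositional equality, defined locally so the sketch is
-- self-contained.
data _≡_ {A : Set} (x : A) : A → Set where
  refl : x ≡ x

-- A four-valued, fixed-width "natural number".
data N4 : Set where
  n0 n1 n2 n3 : N4

-- Successor with wraparound, as in fixed-width machine arithmetic.
sucN : N4 → N4
sucN n0 = n1
sucN n1 = n2
sucN n2 = n3
sucN n3 = n0

-- The Peano axiom "zero is not the successor of any number"
-- fails here: sucN n3 reduces to n0, so this type-checks by refl.
overflow : sucN n3 ≡ n0
overflow = refl</lang>

:::::: Many additions over such a type wrap around in the same way, which is the violation described above.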
 
::::::: "the point is that there is a limit" - of course (this was my minor comment on minor comment for the minor point), but a type always has it cardinality (i.e. the cardinality of the corresponding set, for example, aleph-null for natural numbers or finite strings), this is unbeatable, it does not matter how many terms can be constructed physically. "This means, in the context of this task" - this means nothing in the context of this task. Again: just as you can prove `even + even = even` (for ''all'' natural numbers) by using a finite amount of paper and in finite number of steps, computer (proof system) can do this using a finite amount of memory in a finite time. Using the same axioms and deriving rules! In the Agda version there is no numbers which are directly tested for the statements of the task (well, only the first natural number is verified directly), the computation is done in a general form over terms and types. Imagine the following, you have the formulas (terms), you have a finite set of axioms and inference rules, there is a way to apply these rules to formulas to obtain new formulas, starting with the axioms. So using only terms of finite size, and applying a finite number of rules you can deduce the truth of the `even + even = even` statements in a suitable metatheory. Proof systems do just that, they do not directly test the statements for some finite set of test data. The proof itself is a compile-time type checking (in the case of languages with dependent types). Not all the math, but some formal mathematical systems is definitely systems that can be implemented: their symbolic data (the "math" itself) can be implemented, the reasoning in such systems can be implemented and so on. We can trust in proofs that helds in such systems, as well as we trust in "ordinary" proofs.
 
::: See also this link: http://okmij.org/ftp/Computation/uncountable-sets.html.